Author | Commit | Message | Date
---------------- | ---------- | ------- | -----------
Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago
Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago
0cc4m | dcb2ed4826 | OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653) | 2 years ago
Georgi Gerganov | 7552ac5863 | ggml : sync cgraph import / export API | 2 years ago
Georgi Gerganov | 5d1830b99d | ggml : fix bug in ggml_alibi | 2 years ago
apcameron | a6704643b6 | ggml : add support for the RISCV architecture (#1616) | 2 years ago
Georgi Gerganov | 93618031c7 | ggml : add ggml_tensor_overhead() | 2 years ago
Georgi Gerganov | bdbda1b17a | ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name()) | 2 years ago
0cc4m | 2e6cd4b025 | OpenCL Token Generation Acceleration (#1459) | 2 years ago
Georgi Gerganov | 265db9834e | ggml : output 3d sizes in ggml_graph_dump_dot() | 2 years ago
Georgi Gerganov | fab49c685e | ggml : update WASM SIMD | 2 years ago
Georgi Gerganov | 3de84b2606 | ggml : add ggml_clamp() (#1539) | 2 years ago
Johannes Gäßler | affc76edfd | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasting for ggml_mul (#1483) | 2 years ago
Maxime | 503db28849 | llama : fix name shadowing and C4146 (#1526) | 2 years ago
Georgi Gerganov | 4fd3e29297 | ggml : fix scalar implementation of Q4_1 dot | 2 years ago
Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago
Ilya Kurdyukov | 42627421ec | ~7% faster Q5_1 AVX2 code (#1477) | 2 years ago
xaedes | 79b2d5b69d | ggml : alternative fix for race condition bug in non-inplace ggml_compute_forward_diag_mask_f32 (#1454) | 2 years ago
Georgi Gerganov | 13c351ad72 | ggml : various fixes (#1450) | 2 years ago
katsu560 | 60f8c361ca | ggml : add AVX support based on AVX2 code (#1430) | 2 years ago
Georgi Gerganov | 66841fdb0e | ggml : multi-thread mul and diag_mask ops (#1428) | 2 years ago
Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago
xaedes | f954edda93 | ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) | 2 years ago
Georgi Gerganov | f048af0230 | ggml : sync alibi fix from ggml repo | 2 years ago
3ooabkhxtn | ac0cd259d5 | Adding SSE instructions to ggml_vec_dot_q4_0_q8_0 (#1413) | 2 years ago
Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago
Sami Farin | 9f8dbc4787 | use pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler (#1314) | 2 years ago
swittk | 1b0fd45465 | ggml : Allow usage of CLBlast alongside Accelerate.framework (#1336) | 2 years ago
Ron Jailall | 20fbf2a2a0 | ggml : change immintrin.h to intrin.h for compatibility (#1307) | 2 years ago
Georgi Gerganov | 799fdc1b5d | ggml : vectorize Q8_0 quantization | 2 years ago