Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago
Georgi Gerganov | 13c351ad72 | ggml : various fixes (#1450) | 2 years ago
Georgi Gerganov | 601a033475 | ggml : add GGML_QNT_VERSION to track quantization format changes | 2 years ago
Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago
xaedes | f954edda93 | ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) | 2 years ago
Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago
slaren | 2d099e5193 | ggml : add names to tensors (#1268) | 2 years ago
slaren | 58b367c2d7 | cuBLAS : refactor and optimize f16 mat mul performance (#1259) | 2 years ago
Georgi Gerganov | 6bc4400e67 | ggml : add Q5 WASM SIMD + GGML_FTYPE | 2 years ago
Georgi Gerganov | 0b5a935099 | ggml : fix visibility and unused warnings | 2 years ago
Stephan Walter | 36d19a603b | Remove Q4_3 which is no better than Q5 (#1218) | 2 years ago
Georgi Gerganov | 55390bcaf2 | ggml : sync ggml (ggml_alibi) | 2 years ago
0cc4m | 7296c961d9 | ggml : add CLBlast support (#1164) | 2 years ago
Georgi Gerganov | 574406dc7e | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago
Georgi Gerganov | 7a32fcb3b2 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago
Georgi Gerganov | 8a0f8673ba | ggml : export symbols (#1155) | 2 years ago
Georgi Gerganov | 12b5900dbc | ggml : sync ggml (add GPT-NeoX RoPE implementation) | 2 years ago
Kawrakow | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago
Georgi Gerganov | e0305ead3a | ggml : add Q4_3 quantization (#1082) | 2 years ago
slaren | 8944a13296 | Add NVIDIA cuBLAS support (#1044) | 2 years ago
Georgi Gerganov | 77a73403ca | ggml : add new Q4_2 quantization (ARM only) (#1046) | 2 years ago
slaren | 315a95a4d3 | Add LoRA support (#820) | 2 years ago
Ivan Komarov | f266259ad9 | Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933) | 2 years ago
Georgi Gerganov | e95b6554b4 | ggml : add Q8_0 quantization for intermediate results (#951) | 2 years ago
Pavol Rusnak | c56b715269 | Expose type name from ggml (#970) | 2 years ago
Kerfuffle | c9a59b70a5 | ggml : add unary and binary map operations (#874) | 2 years ago
Georgi Gerganov | a3a2a0eda8 | ggml : add GGML_DEFAULT_N_THREADS | 2 years ago
Stephan Walter | 3e6e70d8e8 | Add enum llama_ftype, sync ggml_type to model files (#709) | 2 years ago
Georgi Gerganov | c3ac702e5e | ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst | 2 years ago
comex | f963b63afa | Rewrite loading code to try to satisfy everyone: | 2 years ago