| Commit | Author | Message | Date |
|---|---|---|---|
| ec9cdb6752 | Georgi Gerganov | ggml : do not print perf ops that have not been used at all | 2 years ago |
| e4422e299c | Georgi Gerganov | ggml : better PERF prints + support "LLAMA_PERF=1 make" | 2 years ago |
| 53c8434398 | Stephan Walter | Improve AVX2 for vec_dot_q4_3_q8_0 (#1138) | 2 years ago |
| c9e2c26f41 | Yishuo Wang | A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119) | 2 years ago |
| 0e018fe008 | Georgi Gerganov | ggml : fix Q4_3 cuBLAS | 2 years ago |
| c50b628810 | Stephan Walter | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | 2 years ago |
| 872c365a91 | Georgi Gerganov | ggml : fix AVX build + update to new Q8_0 format | 2 years ago |
| 955ef9a5d5 | Georgi Gerganov | ggml : alternative Q4_3 implementation using modified Q8_0 (#1109) | 2 years ago |
| c5aa5e5777 | Stephan Walter | ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099) | 2 years ago |
| 50cb666b8a | slaren | Improve cuBLAS performance by using a memory pool (#1094) | 2 years ago |
| 1bfc153e2f | Kawrakow | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083) | 2 years ago |
| 12b5900dbc | Georgi Gerganov | ggml : sync ggml (add GPT-NeoX RoPE implementation) | 2 years ago |
| 9ff334f3c9 | Georgi Gerganov | ggml : fix bug in ggml_compute_forward_dup_f32() | 2 years ago |
| 8a1756abdf | Georgi Gerganov | ggml : do not break cuBLAS build (Q4_3 is not yet implemented) | 2 years ago |
| 66aab46079 | Georgi Gerganov | ggml : fix Q4_3 quantization | 2 years ago |
| 38de86a711 | Kawrakow | llama : multi-threaded quantization (#1075) | 2 years ago |
| e0305ead3a | Georgi Gerganov | ggml : add Q4_3 quantization (#1082) | 2 years ago |
| c8c2c52482 | Stephan Walter | AVX2 optimization for vec_dot_q4_2_q8_0 (#1068) | 2 years ago |
| 02d6988121 | slaren | Improve cuBLAS performance by dequantizing on the GPU (#1065) | 2 years ago |
| f7d05095b4 | Kawrakow | Q4_2 quantization with rmse-optimized scale and quants (#1062) | 2 years ago |
| 884e7d7a2b | Georgi Gerganov | ggml : use 8-bit precision for Q4_1 intermediate results (#1047) | 2 years ago |
| f3d4edf504 | Stephan Walter | ggml : Q4 cleanup - remove 4-bit dot product code (#1061) | 2 years ago |
| 8944a13296 | slaren | Add NVIDIA cuBLAS support (#1044) | 2 years ago |
| 6667401238 | slaren | Multi-threaded ggml_cpy (#1035) | 2 years ago |
| 77a73403ca | Georgi Gerganov | ggml : add new Q4_2 quantization (ARM only) (#1046) | 2 years ago |
| 50a8a2af97 | Georgi Gerganov | ggml : scratch that - vmlaq_n_f32 is always better | 2 years ago |
| dcdd65e296 | Georgi Gerganov | ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators | 2 years ago |
| 315a95a4d3 | slaren | Add LoRA support (#820) | 2 years ago |
| 69b740289f | Georgi Gerganov | ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c | 2 years ago |
| f266259ad9 | Ivan Komarov | Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933) | 2 years ago |