Yann Follet | 04aaae1d79 | add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211) | 2 years ago
Stephan Walter | 0b2da20538 | ggml : slightly faster AVX2 implementation for Q5 (#1197) | 2 years ago
Georgi Gerganov | 574406dc7e | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago
Georgi Gerganov | 7a32fcb3b2 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago
unbounded | dd0eabc049 | ggml : use full range for Q4_0 and Q4_2 quantization (#729) | 2 years ago
xaedes | 54bb60e268 | ggml : fix bug in ggml_compute_forward_sum_f32 (#1162) | 2 years ago
Stephan Walter | 2ec83428de | Fix build for gcc 8 and test in CI (#1154) | 2 years ago
Georgi Gerganov | ec9cdb6752 | ggml : do not print perf ops that have not been used at all | 2 years ago
Georgi Gerganov | e4422e299c | ggml : better PERF prints + support "LLAMA_PERF=1 make" | 2 years ago
Stephan Walter | 53c8434398 | Improve AVX2 for vec_dot_q4_3_q8_0 (#1138) | 2 years ago
Yishuo Wang | c9e2c26f41 | A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX512 (#1119) | 2 years ago
Georgi Gerganov | 0e018fe008 | ggml : fix Q4_3 cuBLAS | 2 years ago
Stephan Walter | c50b628810 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | 2 years ago
Georgi Gerganov | 872c365a91 | ggml : fix AVX build + update to new Q8_0 format | 2 years ago
Georgi Gerganov | 955ef9a5d5 | ggml : alternative Q4_3 implementation using modified Q8_0 (#1109) | 2 years ago
Stephan Walter | c5aa5e5777 | ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099) | 2 years ago
slaren | 50cb666b8a | Improve cuBLAS performance by using a memory pool (#1094) | 2 years ago
Kawrakow | 1bfc153e2f | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083) | 2 years ago
Georgi Gerganov | 12b5900dbc | ggml : sync ggml (add GPT-NeoX RoPE implementation) | 2 years ago
Georgi Gerganov | 9ff334f3c9 | ggml : fix bug in ggml_compute_forward_dup_f32() | 2 years ago
Georgi Gerganov | 8a1756abdf | ggml : do not break cuBLAS build (Q4_3 is not yet implemented) | 2 years ago
Georgi Gerganov | 66aab46079 | ggml : fix Q4_3 quantization | 2 years ago
Kawrakow | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago
Georgi Gerganov | e0305ead3a | ggml : add Q4_3 quantization (#1082) | 2 years ago
Stephan Walter | c8c2c52482 | AVX2 optimization for vec_dot_q4_2_q8_0 (#1068) | 2 years ago
slaren | 02d6988121 | Improve cuBLAS performance by dequantizing on the GPU (#1065) | 2 years ago
Kawrakow | f7d05095b4 | Q4_2 quantization with rmse-optimized scale and quants (#1062) | 2 years ago
Georgi Gerganov | 884e7d7a2b | ggml : use 8-bit precision for Q4_1 intermediate results (#1047) | 2 years ago
Stephan Walter | f3d4edf504 | ggml : Q4 cleanup - remove 4-bit dot product code (#1061) | 2 years ago
slaren | 8944a13296 | Add NVIDIA cuBLAS support (#1044) | 2 years ago