e9a9cb0c54 | Clint Herron | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2 years ago
b6e7f9b09e | xaedes | llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105) | 2 years ago
50cb666b8a | slaren | Improve cuBLAS performance by using a memory pool (#1094) | 2 years ago
25d7abbd1f | apaz | llama : fixed rlimit error message (#888) | 2 years ago
018f2279f5 | 源文雨 | cmake : link threads publicly to ggml (#1042) | 2 years ago
9411288271 | Alex Klinkhamer | main : evaluate tokens in batches after swapping context (#1014) | 2 years ago
8687c1f258 | xaedes | llama : remember and restore kv cache data pointers (#1104) | 2 years ago
1bfc153e2f | Kawrakow | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083) | 2 years ago
3d59769c3b | slaren | Show perplexity ETA in hours and minutes (#1096) | 2 years ago
d40fded93e | Georgi Gerganov | llama : fix comment for "output.weight" tensor | 2 years ago
2510c1831f | Stephan Walter | Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088) | 2 years ago
12b5900dbc | Georgi Gerganov | ggml : sync ggml (add GPT-NeoX RoPE implementation) | 2 years ago
9ff334f3c9 | Georgi Gerganov | ggml : fix bug in ggml_compute_forward_dup_f32() | 2 years ago
2005469ea1 | slaren | Add Q4_3 support to cuBLAS (#1086) | 2 years ago
8a1756abdf | Georgi Gerganov | ggml : do not break cuBLAS build (Q4_3 is not yet implemented) | 2 years ago
66aab46079 | Georgi Gerganov | ggml : fix Q4_3 quantization | 2 years ago
38de86a711 | Kawrakow | llama : multi-threaded quantization (#1075) | 2 years ago
e0305ead3a | Georgi Gerganov | ggml : add Q4_3 quantization (#1082) | 2 years ago
6a9661ea5a | Ivan Komarov | ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074) | 2 years ago
5addcb120c | 源文雨 | fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080) | 2 years ago
c8c2c52482 | Stephan Walter | AVX2 optimization for vec_dot_q4_2_q8_0 (#1068) | 2 years ago
02d6988121 | slaren | Improve cuBLAS performance by dequantizing on the GPU (#1065) | 2 years ago
834695fe3a | CRD716 | Minor: Readme fixed grammar, spelling, and misc updates (#1071) | 2 years ago
f7d05095b4 | Kawrakow | Q4_2 quantization with rmse-optimized scale and quants (#1062) | 2 years ago
884e7d7a2b | Georgi Gerganov | ggml : use 8-bit precision for Q4_1 intermediate results (#1047) | 2 years ago
7cd5c4a3e9 | Georgi Gerganov | readme : add warning about Q4_2 and Q4_3 | 2 years ago
f3d4edf504 | Stephan Walter | ggml : Q4 cleanup - remove 4-bit dot product code (#1061) | 2 years ago
8944a13296 | slaren | Add NVIDIA cuBLAS support (#1044) | 2 years ago
6667401238 | slaren | Multi-threaded ggml_cpy (#1035) | 2 years ago
77a73403ca | Georgi Gerganov | ggml : add new Q4_2 quantization (ARM only) (#1046) | 2 years ago