84ca9c2ecf | Georgi Gerganov | examples : fix save-load-state + rename llama-util.h | 2 years ago
334637e43e | Georgi Gerganov | common : change default parameters to pre-#1126 (#1223) | 2 years ago
dd7eff57d8 | Ivan Stepanov | llama : new sampling algorithms (#1126) | 2 years ago
7fc50c051a | slaren | cuBLAS: use host pinned memory and dequantize while copying (#1207) | 2 years ago
b1ee8f59b4 | Henri Vasserman | cuBLAS: non-contiguous tensor support (#1215) | 2 years ago
36d19a603b | Stephan Walter | Remove Q4_3 which is no better than Q5 (#1218) | 2 years ago
7f15c5c477 | Georgi Gerganov | readme : update hot topics | 2 years ago
55390bcaf2 | Georgi Gerganov | ggml : sync ggml (ggml_alibi) | 2 years ago
5fba3c016b | CRD716 | examples : add Jeopardy example (#1168) | 2 years ago
1481a9cf25 | Evan Jones | llama : add session file format and saved sessions in main (#1169) | 2 years ago
11d902364b | Georgi Gerganov | ggml : add helper debug printf in soft_max | 2 years ago
7296c961d9 | 0cc4m | ggml : add CLBlast support (#1164) | 2 years ago
78ec543733 | Folko-Ven | Correcting link to w64devkit (#1214) | 2 years ago
92a6e13a31 | Johannes Gäßler | Add Manjaro CUDA include and lib dirs to Makefile (#1212) | 2 years ago
04aaae1d79 | Yann Follet | add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211) | 2 years ago
0b2da20538 | Stephan Walter | ggml : slightly faster AVX2 implementation for Q5 (#1197) | 2 years ago
f9be42add0 | Georgi Gerganov | readme : add quantization info | 2 years ago
574406dc7e | Georgi Gerganov | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago
87a6f846d3 | Ásgeir Bjarni Ingvarsson | Allow setting the rng seed after initialization. (#1184) | 2 years ago
ea3ad7eb60 | DaniAndTheWeb | Updating build instructions to include BLAS support (#1183) | 2 years ago
859fee6dfb | Pavol Rusnak | quantize : use `map` to assign quantization type from `string` (#1191) | 2 years ago
4afcc37869 | Stephan Walter | Update SHA256SUMS after quantization change (#1181) | 2 years ago
667c501334 | ostix360 | py : cast lora_alpha to int in convert-lora-to-ggml (#1170) | 2 years ago
bb98e77be7 | Pavol Rusnak | nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981) | 2 years ago
7a32fcb3b2 | Georgi Gerganov | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago
dd0eabc049 | unbounded | ggml : use full range for Q4_0 and Q4_2 quantization (#729) | 2 years ago
54bb60e268 | xaedes | ggml : fix bug in ggml_compute_forward_sum_f32 (#1162) | 2 years ago
8a0f8673ba | Georgi Gerganov | ggml : export symbols (#1155) | 2 years ago
0c5692345d | xaedes | examples : add save_load_state example (#1150) | 2 years ago
957c8ae21d | Georgi Gerganov | llama : increase scratch buffer size for 65B (ref #1152) | 2 years ago