| Author | Commit | Message | Date |
|---|---|---|---|
| klosax | b5fe67f8c6 | Perplexity: Compute scores correlated to HellaSwag (#2312) | 2 years ago |
| Guillaume "Vermeille" Sanchez | ab0e26bdfb | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | 2 years ago |
| Georgi Gerganov | d01bccde9f | ci : integrate with ggml-org/ci (#2250) | 2 years ago |
| Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago |
| Bach Le | c9c74b4e3f | llama : add classifier-free guidance (#2135) | 2 years ago |
| WangHaoranRobin | d7d2e6a0f0 | server: add option to output probabilities for completion (#1962) | 2 years ago |
| Howard Su | b8c8dda75f | Use unsigned for random seed (#2006) | 2 years ago |
| zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago |
| Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| Kerfuffle | fa84c4b3e8 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789) | 2 years ago |
| Willy Tarreau | 35a84916fb | main: add the possibility to open the prompt cache read-only (#1640) | 2 years ago |
| Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago |
| Vladimir Zorin | 337aea1139 | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2 years ago |
| Georgi Gerganov | 4b7e245adf | minor : fix compile warnings | 2 years ago |
| Stephan Walter | dc271c52ed | Remove unused n_parts parameter (#1509) | 2 years ago |
| András Salamon | 9560655409 | define default model path once, sync path with readme (#1366) | 2 years ago |
| Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago |
| Evan Jones | cf348a60e0 | main : add option to save full output to session (#1338) | 2 years ago |
| DannyDaemonic | 41654efea8 | Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040) | 2 years ago |
| 44670 | 2edbdb0f99 | main : add --in-suffix option (#1318) | 2 years ago |
| Ron Evans | 67c77799e0 | examples : add llama_init_from_gpt_params() common function (#1290) | 2 years ago |
| jon-chuang | a5d30b1f53 | common : better default number of threads (#934) | 2 years ago |
| Georgi Gerganov | 334637e43e | common : change default parameters to pre-#1126 (#1223) | 2 years ago |
| Ivan Stepanov | dd7eff57d8 | llama : new sampling algorithms (#1126) | 2 years ago |
| Evan Jones | 1481a9cf25 | llama : add session file format and saved sessions in main (#1169) | 2 years ago |
| mgroeber9110 | 9b0a4d4214 | examples/main README improvements and some light refactoring (#1131) | 2 years ago |
| eiery | 10f19c1121 | llama : have n_batch default to 512 (#1091) | 2 years ago |
| slaren | 315a95a4d3 | Add LoRA support (#820) | 2 years ago |