Author | Commit | Message | Date
Xiao-Yong Jin | 0c06204fb3 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304) | 2 years ago
slaren | 41c674161f | make rms_norm_eps a parameter (#2374) | 2 years ago
Evan Jones | 84e09a7d8b | llama : add grammar-based sampling (#1773) | 2 years ago
wzy | 57921ca6db | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347) | 2 years ago
Georgi Gerganov | e76d630df1 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | 2 years ago
maddes8cht | 1d0824b247 | llama : print help to stdout (#2338) | 2 years ago
Georgi Gerganov | b47b8a9cfe | llama : optimize memory buffers (#2325) | 2 years ago
klosax | b5fe67f8c6 | Perplexity: Compute scores correlated to HellaSwag (#2312) | 2 years ago
Guillaume "Vermeille" Sanchez | ab0e26bdfb | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | 2 years ago
Georgi Gerganov | ae178ab46b | llama : make tensor_split ptr instead of array (#2272) | 2 years ago
Georgi Gerganov | d01bccde9f | ci : integrate with ggml-org/ci (#2250) | 2 years ago
Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago
Howard Su | 32c5411631 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | 2 years ago
Bach Le | c9c74b4e3f | llama : add classifier-free guidance (#2135) | 2 years ago
Howard Su | 2347463201 | Support using mmap when applying LoRA (#2095) | 2 years ago
Nigel Bosch | db4047ad5c | main : escape prompt prefix/suffix (#2151) | 2 years ago
Howard Su | b8c8dda75f | Use unsigned for random seed (#2006) | 2 years ago
Johannes Gäßler | 7f9753fa12 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago
Johannes Gäßler | 2c9380dd2f | Only one CUDA stream per device for async compute (#1898) | 2 years ago
Borislav Stanimirov | 9cbf50c041 | build : fix and ignore MSVC warnings (#1889) | 2 years ago
Johannes Gäßler | 6b8312e797 | Better error when using both LoRA + GPU layers (#1861) | 2 years ago
Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago
Kerfuffle | fa84c4b3e8 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789) | 2 years ago
Willy Tarreau | 35a84916fb | main : add the possibility to open the prompt cache read-only (#1640) | 2 years ago
Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago
Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago
Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago
Vladimir Zorin | 337aea1139 | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2 years ago