| Author | Commit | Message | Date |
|---|---|---|---|
| Georgi Gerganov | b47b8a9cfe | llama : optimize memory buffers (#2325) | 2 years ago |
| klosax | b5fe67f8c6 | Perplexity: Compute scores correlated to HellaSwag (#2312) | 2 years ago |
| Guillaume "Vermeille" Sanchez | ab0e26bdfb | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | 2 years ago |
| Georgi Gerganov | ae178ab46b | llama : make tensor_split ptr instead of array (#2272) | 2 years ago |
| Georgi Gerganov | d01bccde9f | ci : integrate with ggml-org/ci (#2250) | 2 years ago |
| Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago |
| Howard Su | 32c5411631 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | 2 years ago |
| Bach Le | c9c74b4e3f | llama : add classifier-free guidance (#2135) | 2 years ago |
| Howard Su | 2347463201 | Support using mmap when applying LoRA (#2095) | 2 years ago |
| Nigel Bosch | db4047ad5c | main : escape prompt prefix/suffix (#2151) | 2 years ago |
| Howard Su | b8c8dda75f | Use unsigned for random seed (#2006) | 2 years ago |
| Johannes Gäßler | 7f9753fa12 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | 2 years ago |
| zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago |
| Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| Johannes Gäßler | 2c9380dd2f | Only one CUDA stream per device for async compute (#1898) | 2 years ago |
| Borislav Stanimirov | 9cbf50c041 | build : fix and ignore MSVC warnings (#1889) | 2 years ago |
| Johannes Gäßler | 6b8312e797 | Better error when using both LoRA + GPU layers (#1861) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| Kerfuffle | fa84c4b3e8 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789) | 2 years ago |
| Willy Tarreau | 35a84916fb | main: add the possibility to open the prompt cache read-only (#1640) | 2 years ago |
| Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago |
| Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago |
| Vladimir Zorin | 337aea1139 | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2 years ago |
| DannyDaemonic | d2c59b8ba4 | Fix for mingw (#1462) | 2 years ago |
| Jason McCartney | 7694b52b9a | main : make reverse prompt option act as a stop token in non-interactive mode (#1032) | 2 years ago |
| Georgi Gerganov | 4b7e245adf | minor : fix compile warnings | 2 years ago |
| Stephan Walter | dc271c52ed | Remove unused n_parts parameter (#1509) | 2 years ago |
| zrm | 63d20469b8 | fix get_num_physical_cores() (#1436) | 2 years ago |
| Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago |