| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago |
| anon998 | c2a08f87b8 | fix server sampling: top k sampler first (#1977) | 2 years ago |
| Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| Henri Vasserman | 20568fe60f | [Fix] Reenable server embedding endpoint (#1937) | 2 years ago |
| Randall Fitzgerald | 794db3e7b9 | Server Example Refactor and Improvements (#1570) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago |
| Vladimir Zorin | 337aea1139 | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2 years ago |
| Kerfuffle | 0df7d63e5b | Include server in releases + other build system cleanups (#1610) | 2 years ago |
| Steward Garcia | 7e4ea5beff | examples : add server example with REST API (#1443) | 2 years ago |