Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
Tobias Lütke | 31cfbb1013 | Expose generation timings from server & update completions.js (#2116) | 2 years ago
jwj7140 | f257fd2550 | Add an API example using server.cpp similar to OAI. (#2009) | 2 years ago
Tobias Lütke | 7ee76e45af | Simple webchat for server (#1998) | 2 years ago
Henri Vasserman | 1cf14ccef1 | fix server crashes (#2076) | 2 years ago
WangHaoranRobin | d7d2e6a0f0 | server: add option to output probabilities for completion (#1962) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
anon998 | c2a08f87b8 | fix server sampling: top k sampler first (#1977) | 2 years ago
Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago
Henri Vasserman | 20568fe60f | [Fix] Reenable server embedding endpoint (#1937) | 2 years ago
Randall Fitzgerald | 794db3e7b9 | Server Example Refactor and Improvements (#1570) | 2 years ago
Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago
Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago
Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago
Vladimir Zorin | 337aea1139 | examples : add --alias option to gpt_params to set use friendly model name (#1614) | 2 years ago
Kerfuffle | 0df7d63e5b | Include server in releases + other build system cleanups (#1610) | 2 years ago
Steward Garcia | 7e4ea5beff | examples : add server example with REST API (#1443) | 2 years ago