Xiao-Yong Jin | b8ad1b66b2 | server : allow json array in prompt or content for direct token input (#2306) | 2 years ago
Johannes Gäßler | c63bb1d16a | CUDA: use mul_mat_q kernels by default (#2683) | 2 years ago
Jhen-Jie Hong | 226255b44e | server : fallback to default if client param is null (#2688) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Jhen-Jie Hong | 3ebb00935f | server : add missing /json-schema-to-grammar.mjs (#2616) | 2 years ago
Cheng Shao | d75561df20 | server : add --numa support (#2524) | 2 years ago
Equim | 53dc399472 | server: fixed wrong variable name in timing json (#2579) | 2 years ago
Martin Krasser | 1638757767 | Fix grammar-based sampling issue in server (#2566) | 2 years ago
Martin Krasser | f5bfea0580 | Allow passing grammar to completion endpoint (#2532) | 2 years ago
Stephen Nichols | 5f631c2679 | Fixing race condition in server and partial stream handling in frontend. (#2391) | 2 years ago
Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago
slaren | d5512b782b | server: add rms_norm_eps parameter (#2380) | 2 years ago
IgnacioFDM | 4f06592cc6 | Add gqa parameter support to the server (#2351) | 2 years ago
Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago
Howard Su | 32c5411631 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | 2 years ago
Howard Su | 2347463201 | Support using mmap when applying LoRA (#2095) | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
Tobias Lütke | 31cfbb1013 | Expose generation timings from server & update completions.js (#2116) | 2 years ago
jwj7140 | f257fd2550 | Add an API example using server.cpp similar to OAI. (#2009) | 2 years ago
Tobias Lütke | 7ee76e45af | Simple webchat for server (#1998) | 2 years ago
Henri Vasserman | 1cf14ccef1 | fix server crashes (#2076) | 2 years ago
WangHaoranRobin | d7d2e6a0f0 | server: add option to output probabilities for completion (#1962) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
anon998 | c2a08f87b8 | fix server sampling: top k sampler first (#1977) | 2 years ago
Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago
Henri Vasserman | 20568fe60f | [Fix] Reenable server embedding endpoint (#1937) | 2 years ago
Randall Fitzgerald | 794db3e7b9 | Server Example Refactor and Improvements (#1570) | 2 years ago
Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago
Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago
Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago