Cebtenzzre | a5661d7e71 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | 2 years ago
Cebtenzzre | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago
Cebtenzzre | 00d62adb79 | fix some warnings from gcc and clang-tidy (#3038) | 2 years ago
Cebtenzzre | de2fe892af | examples : replace fprintf to stdout with printf (#3017) | 2 years ago
Jhen-Jie Hong | 571083f508 | server : avoid aniprompt in probabilities of final response (#2849) | 2 years ago
Cebtenzzre | ef15649972 | build : fix most gcc and clang warnings (#2861) | 2 years ago
Johannes Gäßler | 6b73ef1201 | YAML result logging + preset script (#2657) | 2 years ago
Georgi Gerganov | edd4c14817 | llama : more tokenizer fixes (#2810) | 2 years ago
Bruce MacDonald | c1ac54b77a | server : add `/detokenize` endpoint (#2802) | 2 years ago
Matt Pulver | c82742ac9c | llama : add llama_beam_search() (#2267) | 2 years ago
Jhen-Jie Hong | 29674ab4e8 | server : display token probabilities in the UI (#2489) | 2 years ago
Xiao-Yong Jin | b8ad1b66b2 | server : allow json array in prompt or content for direct token input (#2306) | 2 years ago
Johannes Gäßler | c63bb1d16a | CUDA: use mul_mat_q kernels by default (#2683) | 2 years ago
Jhen-Jie Hong | 226255b44e | server : fallback to default if client param is null (#2688) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Jhen-Jie Hong | 3ebb00935f | server : add missing /json-schema-to-grammar.mjs (#2616) | 2 years ago
Cheng Shao | d75561df20 | server : add --numa support (#2524) | 2 years ago
Equim | 53dc399472 | server: fixed wrong variable name in timing json (#2579) | 2 years ago
Martin Krasser | 1638757767 | Fix grammar-based sampling issue in server (#2566) | 2 years ago
Martin Krasser | f5bfea0580 | Allow passing grammar to completion endpoint (#2532) | 2 years ago
Stephen Nichols | 5f631c2679 | Fixing race condition in server and partial stream handling in frontend. (#2391) | 2 years ago
Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago
slaren | d5512b782b | server: add rms_norm_eps parameter (#2380) | 2 years ago
IgnacioFDM | 4f06592cc6 | Add gqa parameter support to the server (#2351) | 2 years ago
Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago
Howard Su | 32c5411631 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | 2 years ago
Howard Su | 2347463201 | Support using mmap when applying LoRA (#2095) | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
Tobias Lütke | 31cfbb1013 | Expose generation timings from server & update completions.js (#2116) | 2 years ago
jwj7140 | f257fd2550 | Add an API example using server.cpp similar to OAI. (#2009) | 2 years ago