Author | Commit | Message | Date
Ziad Ben Hadj-Alouane | 1d144112c0 | server : add --log-disable to disable logging to file (#4260) | 2 years ago
Ziad Ben Hadj-Alouane | f43f09366d | server : add single-client multi-prompt support (#4232) | 2 years ago
Georgi Gerganov | af19d35734 | server : OAI API compatibility (#4198) | 2 years ago
Haohui Mai | 55978ce09b | Fix incorrect format strings and uninitialized variables. (#4133) | 2 years ago
SoftwareRenderer | 936c79b227 | server : relay error messages (#4131) | 2 years ago
Kerfuffle | 91f6499393 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2 years ago
Alexey Parfenov | d96ca7ded7 | server : fix crash when prompt exceeds context size (#3996) | 2 years ago
Mihai | 57ad015dc3 | server : add min_p param (#3877) | 2 years ago
cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Adrian Hesketh | ca190bca8e | server : re-enable completion and embedded at the same time (#3876) | 2 years ago
Kerfuffle | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago
Georgi Gerganov | 34b2a5e1ee | server : do not release slot on image input (#3798) | 2 years ago
cebtenzzre | ad93962657 | server : add parameter -tb N, --threads-batch N (#3584) (#3768) | 2 years ago
Georgi Gerganov | 1717521cdb | server : do not block system prompt update (#3767) | 2 years ago
Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago
Georgi Gerganov | 438c2ca830 | server : parallel decoding and multimodal (#3677) | 2 years ago
Georgi Gerganov | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago
Georgi Gerganov | a0edf73bda | server : fix uninitialized sampling context (close #3685) | 2 years ago
Georgi Gerganov | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago
Georgi Gerganov | 57dd55e2c7 | server : fix kv cache management (#3588) | 2 years ago
Michael Coppola | a8bdd65525 | server : add parameter -tb N, --threads-batch N (#3584) | 2 years ago
Kerfuffle | 70c29da118 | common : fix mirostat state when using multiple sequences (#3543) | 2 years ago
vvhg1 | 11ea5c7d96 | infill : fix tokenization (#3508) | 2 years ago
Jhen-Jie Hong | 97af49fa39 | server : reuse llama_sample_token common util (#3494) | 2 years ago
Kenvix ⭐ | 45eba9369f | build : use std::make_tuple() for compatibility with older GCC versions (#3488) | 2 years ago
Jhen-Jie Hong | e8b8d32e86 | server : fix incorrect num_tokens_predicted (#3480) | 2 years ago
Georgi Gerganov | ac2219fef3 | llama : fix session saving/loading (#3400) | 2 years ago
vvhg1 | c97f01c362 | infill : add new example + extend server API (#3296) | 2 years ago
slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago