Siwen Yu | 9fb13f9584 | common : add `--version` option to show build info in CLI (#4433) | 2 years ago
Georgi Gerganov | bcc0eb4591 | llama : per-layer KV cache + quantum K cache (#4309) | 2 years ago
Kerfuffle | 5aa365d88f | llama : allow overriding GGUF metadata when loading model (#4092) | 2 years ago
MaggotHATE | 52c8bc3cf3 | sampling : custom samplers order (#4285) | 2 years ago
Georgi Gerganov | 6b0a7420d0 | llama : KV cache view API + better KV cache management (#4170) | 2 years ago
Seb C | 881800d1f0 | main : Add ChatML functionality to main example (#4046) | 2 years ago
kchro3 | 262005ad9d | common : comma should be semicolon (#4137) | 2 years ago
Jannis Schönleber | 9e87ef60e1 | common : improve yaml log escaping (#4080) | 2 years ago
Kerfuffle | 91f6499393 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2 years ago
slaren | 2833a6f63c | ggml-cuda : fix f16 mul mat (#3961) | 2 years ago
Kerfuffle | d9ccce2e33 | Allow common process_escapes to handle \x sequences (#3928) | 2 years ago
Georgi Gerganov | 8f961abdc4 | speculative : change default p_accept to 0.5 + CLI args (#3919) | 2 years ago
cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Georgi Gerganov | ff8f9a88da | common : minor (#3715) | 2 years ago
bandoti | 0e40806c1c | common : allow caller to handle help/argument exceptions (#3715) | 2 years ago
kalomaze | 238657db23 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2 years ago
Kerfuffle | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago
Georgi Gerganov | ee1a0ec9cb | llama : add option for greedy sampling with probs (#3813) | 2 years ago
Henk Poley | 177461104b | common : print that one line of the syntax help *also* to standard output (#3823) | 2 years ago
Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago
vvhg1 | d3956aea53 | main : escape prompt for cfg_negative_prompt and consecutive inputs in main with interactive (#3623) | 2 years ago
Georgi Gerganov | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago
Georgi Gerganov | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago
staviq | 1a159553f9 | tokenizer : special token handling (#3538) | 2 years ago
M. Yusuf Sarıgöz | 370359e5ba | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2 years ago
Kerfuffle | 70c29da118 | common : fix mirostat state when using multiple sequences (#3543) | 2 years ago
Kerfuffle | a16e89cec8 | Fix trying to strip newline from empty prompt and cfg prompt file content (#3534) | 2 years ago
pudepiedj | a8777ad84e | parallel : add option to load external prompt file (#3416) | 2 years ago
Jhen-Jie Hong | 97af49fa39 | server : reuse llama_sample_token common util (#3494) | 2 years ago