slaren | 2833a6f63c | ggml-cuda : fix f16 mul mat (#3961) | 2 years ago
Kerfuffle | d9ccce2e33 | Allow common process_escapes to handle \x sequences (#3928) | 2 years ago
Georgi Gerganov | 8f961abdc4 | speculative : change default p_accept to 0.5 + CLI args (#3919) | 2 years ago
cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Georgi Gerganov | ff8f9a88da | common : minor (#3715) | 2 years ago
bandoti | 0e40806c1c | common : allow caller to handle help/argument exceptions (#3715) | 2 years ago
kalomaze | 238657db23 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2 years ago
Kerfuffle | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago
Georgi Gerganov | ee1a0ec9cb | llama : add option for greedy sampling with probs (#3813) | 2 years ago
Henk Poley | 177461104b | common : print that one line of the syntax help *also* to standard output (#3823) | 2 years ago
Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago
vvhg1 | d3956aea53 | main : escape prompt for cfg_negative_prompt and consecutive inputs in main with interactive (#3623) | 2 years ago
Georgi Gerganov | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago
Georgi Gerganov | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago
staviq | 1a159553f9 | tokenizer : special token handling (#3538) | 2 years ago
M. Yusuf Sarıgöz | 370359e5ba | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2 years ago
Kerfuffle | 70c29da118 | common : fix mirostat state when using multiple sequences (#3543) | 2 years ago
Kerfuffle | a16e89cec8 | Fix trying to strip newline from empty prompt and cfg prompt file content (#3534) | 2 years ago
pudepiedj | a8777ad84e | parallel : add option to load external prompt file (#3416) | 2 years ago
Jhen-Jie Hong | 97af49fa39 | server : reuse llama_sample_token common util (#3494) | 2 years ago
Kenvix ⭐ | 45eba9369f | build : use std::make_tuple() for compatibility with older GCC versions (#3488) | 2 years ago
staviq | acec9eaaa9 | common : process escape sequences in reverse prompts (#3461) | 2 years ago
goerch | ff5a3f0c09 | Work on the BPE tokenizer (#3252) | 2 years ago
vvhg1 | c97f01c362 | infill : add new example + extend server API (#3296) | 2 years ago
Cebtenzzre | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago
slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago
xaedes | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
Cebtenzzre | a5661d7e71 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | 2 years ago