6391817cd1  llama : document logits_all deprecation (#4418)  (crasm, 2 years ago)
bcc0eb4591  llama : per-layer KV cache + quantum K cache (#4309)  (Georgi Gerganov, 2 years ago)
5aa365d88f  llama : allow overriding GGUF metadata when loading model (#4092)  (Kerfuffle, 2 years ago)
3014b5415d  Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)  (crasm, 2 years ago)
6b0a7420d0  llama : KV cache view API + better KV cache management (#4170)  (Georgi Gerganov, 2 years ago)
e85bb1a8e7  llama : add functions to get the model's metadata (#4013)  (slaren, 2 years ago)
91f6499393  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle, 2 years ago)
05816027d6  common : YAYF (yet another YARN fix) (#3925)  (Georgi Gerganov, 2 years ago)
898aeca90a  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre, 2 years ago)
238657db23  samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841)  (kalomaze, 2 years ago)
6e08281e58  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle, 2 years ago)
d69d777c02  ggml : quantization refactoring (#3833)  (Georgi Gerganov, 2 years ago)
ee1a0ec9cb  llama : add option for greedy sampling with probs (#3813)  (Georgi Gerganov, 2 years ago)
2f9ec7e271  cuda : improve text-generation and batched decoding performance (#3776)  (Georgi Gerganov, 2 years ago)
5be6c803fa  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn, 2 years ago)
d1031cf49c  sampling : refactor init to use llama_sampling_params (#3696)  (Georgi Gerganov, 2 years ago)
0e89203b51  speculative : add tree-based sampling example (#3624)  (Georgi Gerganov, 2 years ago)
1a159553f9  tokenizer : special token handling (#3538)  (staviq, 2 years ago)
ac2219fef3  llama : fix session saving/loading (#3400)  (Georgi Gerganov, 2 years ago)
48be797ffb  llama : expose model's rope_freq_scale in the API (#3418)  (Alex Klinkhamer, 2 years ago)
c97f01c362  infill : add new example + extend server API (#3296)  (vvhg1, 2 years ago)
40e07a60f9  llama.cpp : add documentation about rope_freq_base and scale values (#3401)  (slaren, 2 years ago)
16bc66d947  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren, 2 years ago)
0e76a8992c  train : finetune LORA (#2632)  (xaedes, 2 years ago)
ec893798b7  llama : custom attention mask + parallel decoding + no context swaps (#3228)  (Georgi Gerganov, 2 years ago)
dc6897404e  metal : reusing llama.cpp logging (#3152)  (Rickard Hallerbäck, 2 years ago)
b08e75baea  Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)  (goerch, 2 years ago)
3aefaab9e5  check C++ code with -Wmissing-declarations (#3184)  (Cebtenzzre, 2 years ago)
e64f5b5578  examples : make n_ctx warning work again (#3066)  (Cebtenzzre, 2 years ago)
921772104b  speculative : add grammar support (#2991)  (Georgi Gerganov, 2 years ago)