| Author | Commit | Message | Date |
|---|---|---|---|
| bmwl | f486f6e1e5 | ggml : add numa options (#5377) | 1 year ago |
| Georgi Gerganov | 6b0a7420d0 | llama : KV cache view API + better KV cache management (#4170) | 2 years ago |
| Daniel Bevenius | 9d5949f04b | examples : fix typo in parallel example doc comment (#4181) | 2 years ago |
| cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago |
| Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| Georgi Gerganov | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago |
| Georgi Gerganov | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago |
| Kerfuffle | 70c29da118 | common : fix mirostat state when using multiple sequences (#3543) | 2 years ago |
| Georgi Gerganov | fcca0a7004 | refact : fix convert script + zero out KV cache to avoid nans (#3523) | 2 years ago |
| pudepiedj | a8777ad84e | parallel : add option to load external prompt file (#3416) | 2 years ago |
| Georgi Gerganov | ac2219fef3 | llama : fix session saving/loading (#3400) | 2 years ago |
| slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago |
| Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago |
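Several of the commits above reshaped the public C API, most notably 16bc66d947 (#3301), which split the single `llama_context_params` struct into separate model and context parameter structs. The following is a minimal sketch of how initialization looked after that split; the model path is a placeholder, and the signatures reflect the API of that era (for example, the NUMA flag on `llama_backend_init` was later removed by #5377 above), so current releases may differ.

```c
// Sketch of the post-#3301 initialization flow: model-level and context-level
// options are configured and passed through separate structs.
#include "llama.h"
#include <stdio.h>

int main(void) {
    // Pre-#5377 signature: takes a bool for NUMA; later changed to take no arguments.
    llama_backend_init(false);

    // Model parameters: options tied to the weights themselves.
    struct llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;

    // "model.gguf" is a placeholder path for illustration only.
    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Context parameters: options tied to a particular inference context,
    // such as the size of the KV cache.
    struct llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;

    struct llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    // ... tokenize and decode here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```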