Georgi Gerganov | c3ee46fab4 | batch : remove logits_all flag (#14141) | 7 months ago
Georgi Gerganov | 9596506965 | kv-cache : fix split_equal handling in unified implementation (#14130) | 7 months ago
compilade | a20b2b05bc | context : round n_tokens to next multiple of n_seqs when reserving (#14140) | 7 months ago
Georgi Gerganov | 745aa5319b | llama : deprecate llama_kv_self_ API (#14030) | 7 months ago
Georgi Gerganov | 487a5e0401 | context : fix SWA-related warning for multiple sequences (#14045) | 7 months ago
Sigbjørn Skjæret | d17a809ef0 | llama : support multiple classifier outputs and labels (#13940) | 7 months ago
Georgi Gerganov | 7f37b6cf1e | memory : migrate from llama_kv_cache to more generic llama_memory (#14006) | 7 months ago
Georgi Gerganov | 9e31bec4fd | context : fix pos_min initialization upon error decode (#14008) | 7 months ago
Georgi Gerganov | 3e63a58ef7 | kv-cache : refactor the update/defrag mechanism (#13988) | 7 months ago
Georgi Gerganov | 803f8baf4f | llama : deprecate explicit kv_self defrag/update calls (#13921) | 7 months ago
Georgi Gerganov | 3600cc2886 | llama : use n_swa + n_ubatch cells for SWA cache (#13833) | 7 months ago
Georgi Gerganov | 3f55f781f1 | llama : auto-batch preparation (#13845) | 7 months ago
Georgi Gerganov | 12d0188c0d | kv-cache : refactor + add llama_memory_state_i (#13746) | 7 months ago
Georgi Gerganov | 4f81b33e32 | llama : validate seq id batch input (#13809) | 7 months ago
Georgi Gerganov | 79c137f776 | examples : allow extracting embeddings from decoder contexts (#13797) | 7 months ago
Georgi Gerganov | de2ef53a4b | kv-cache : rework kv_cell (#13706) | 7 months ago
Georgi Gerganov | 797f2ac062 | kv-cache : simplify the interface (#13660) | 8 months ago
Georgi Gerganov | a4090d1174 | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 8 months ago
Georgi Gerganov | e298d2fbd0 | kv-cache : add SWA support (#13194) | 8 months ago
Sigbjørn Skjæret | f5170c1d7a | editorconfig : fix trailing whitespace from #13542 (#13546) | 8 months ago
Gilad S. | 017f10b5fa | fix: crash when calling `llama_state_get_size` on a context without a KV cache (#13542) | 8 months ago
Johannes Gäßler | 10d2af0eaa | llama/ggml: add LLM training support (#10544) | 8 months ago
Georgi Gerganov | 064cc596ac | context : fix state io for memory-less contexts (#13470) | 8 months ago
David Huang | 7f323a589f | Add `--no-op-offload` to improve `-ot` pp perf in MoE models like llama4 400B (#13386) | 8 months ago
Georgi Gerganov | 6562e5a4d6 | context : allow cache-less context for embeddings (#13108) | 8 months ago
Georgi Gerganov | 51fb96b1ff | context : remove logits_all flag (#13284) | 8 months ago
Georgi Gerganov | a75cb30dc9 | context : fix reorder logic (#13267) | 8 months ago
Georgi Gerganov | c642bc014c | kv-cache : separate recurrent vs non-recurrent impl (#12799) | 8 months ago
ddh0 | 16a457facd | fix typo: `n_ctx_pre_seq` -> `n_ctx_per_seq` (#13221) | 8 months ago
pockers21 | fb0471d175 | context : do not clear output buffer on reserve (#13152) | 8 months ago