Author | Commit | Message | Date
Tarek Dakhran | ad8d85bd94 | memory : add llama_memory_hybrid_iswa (#18601) | 2 weeks ago
Junwon Hwang | 60591f01d4 | model : add EXAONE MoE (#18543) | 3 weeks ago
Prabod | 5755e52d15 | model : Maincoder-1B support (#18534) | 1 month ago
momonga | 9c675c7140 | model : Plamo3 support (#17304) | 1 month ago
Xuan-Son Nguyen | 4cbafad4f0 | model : support MiMo-V2-Flash (#18328) | 1 month ago
Ryan Mangeno | dfc959b886 | model : Granite Embedding support (#15641) | 1 month ago
Rhys-T | 63908b631a | cmake : fix Mach-O current version number (#17877) | 2 months ago
philip-essential | 1d2a1ab73d | model : support Rnj-1 (#17811) | 2 months ago
Xuan-Son Nguyen | cd3c118908 | model : support Ministral3 (#17644) | 2 months ago
Piotr Wilkin (ilintar) | ff55414c42 | model : Qwen3 Next (#16095) | 2 months ago
william pan | 4902eebe33 | models : add support for RND1 Diffusion Language Model (#17433) | 2 months ago
Bartowski | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago
Mike Abbott | 4a5b8aff40 | cmake : add version to all shared object files (#17091) | 2 months ago
Li Pengzhan | 9f052478c2 | model : add openPangu-Embedded (#16941) | 3 months ago
Piotr Wilkin (ilintar) | bea04522ff | refactor : llama-model.cpp (#16252) | 3 months ago
Georgi Gerganov | 715a6db02c | kv-cache : drop the "unified" prefix (#15467) | 5 months ago
Gabe Goodhart | edc4a29eff | memory : Hybrid recurrent cache (#13979) | 7 months ago
Georgi Gerganov | 7f37b6cf1e | memory : migrate from llama_kv_cache to more generic llama_memory (#14006) | 8 months ago
Georgi Gerganov | 0fc16b42e8 | kv-cache : split implementation in separate sources (#13920) | 8 months ago
Georgi Gerganov | 34b7c0439e | cmake : add llama-cparams.cpp to build (#13832) | 8 months ago
Johannes Gäßler | 10d2af0eaa | llama/ggml : add LLM training support (#10544) | 9 months ago
Georgi Gerganov | 13b4548877 | cmake : do not include ./src as public for libllama (#13062) | 9 months ago
Plamen Minev | 381603a775 | ci : detach common from the library (#12827) | 10 months ago
Georgi Gerganov | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 11 months ago
Olivier Chafik | 6171c9d258 | Add Jinja template support (#11016) | 1 year ago
Georgi Gerganov | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago
Diego Devesa | cb13ef85a4 | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 1 year ago
Diego Devesa | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago
Georgi Gerganov | ab96610b1e | cmake : enable warnings in llama (#10474) | 1 year ago
Diego Devesa | ae8de6d50a | ggml : build backends as libraries (#10256) | 1 year ago