Author | Commit | Message | Date
Xuan-Son Nguyen | 3f96aeff39 | llama : one-off chat template fix for Mistral-Small-2503 (#13398) | 8 months ago
piDack | 2af6880178 | llama-chat : reset glmedge chat template (#13253) | 8 months ago
matteo | e0f572c846 | llama-chat : update GLM4 chat template (#13238) | 8 months ago
Xuan-Son Nguyen | 4e87962e34 | mtmd : fix glm-edge redundant token count (#13139) | 8 months ago
Xuan-Son Nguyen | e5d6c2554e | llama-chat : fix typo GML --> GLM (#13143) | 8 months ago
matteo | ced44be342 | llama-chat : fix wrong template in GLM4-0414 (#13140) | 8 months ago
Xuan-Son Nguyen | dc39a5e7a8 | mtmd : support SmolVLM (version 1 and 2) (#13050) | 9 months ago
Xuan-Son Nguyen | 84a9bf2fc2 | mtmd : merge llava, gemma3 and minicpmv CLI into single `llama-mtmd-cli` (#13012) | 9 months ago
Xuan-Son Nguyen | 1466621e73 | llama : Support llama 4 text-only (#12791) | 9 months ago
Sigbjørn Skjæret | 2c3f8b850a | llama : support BailingMoE (Ling) (#12634) | 9 months ago
Sergei Vorobyov | 7242dd9675 | llama-chat : Add Yandex instruct model template support (#12621) | 9 months ago
mgroeber9110 | 5bbe6a9fe9 | ggml : portability fixes for VS 2017 (#12150) | 10 months ago
piDack | 0cec062a63 | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 11 months ago
Xuan Son Nguyen | ec7f3ac9ab | llama : add support for Deepseek-R1-Qwen distill model (#11310) | 1 year ago
Xuan Son Nguyen | d9feae1c06 | llama-chat : add phi 4 template (#11148) | 1 year ago
fairydreaming | 9394bbd484 | llama : Add support for DeepSeek V3 (#11049) | 1 year ago
Georgi Gerganov | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago