duduta | 73460f6278 | ggml-cpu: templateify ggml_compute_forward_rope_f32 and _f16 (#16805) | 2 months ago
JJJYmmm | d261223d24 | model: add support for qwen3vl series (#16780) | 2 months ago
HimariO | ba1cb19cdd | llama : add Qwen2VL support + multimodal RoPE (#10361) | 1 year ago
Diego Devesa | 9f40989351 | ggml : move CPU backend to a separate file (#10144) | 1 year ago
Faisal Zaghloul | 42c76d1358 | Threadpool: take 2 (#8672) | 1 year ago
Clint Herron | 07a3fc0608 | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 1 year ago
Georgi Gerganov | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago