compilade | dad5c44398 | kv-cache : avoid modifying recurrent cells when setting inputs (#13834) | 7 months ago
Sigbjørn Skjæret | 3678b838bb | llama : support GEGLU for jina-bert-v2 (#14090) | 7 months ago
Georgi Gerganov | 201b31dc2e | graph : fix geglu (#14077) | 7 months ago
Đinh Trọng Huy | 91a8ee6a6f | add geglu activation function (#14074) | 7 months ago
Xuan-Son Nguyen | 3ac67535c8 | llama-graph : use ggml_repeat_4d (#13998) | 7 months ago
Georgi Gerganov | 0fc16b42e8 | kv-cache : split implementation in separate sources (#13920) | 7 months ago
Georgi Gerganov | 12d0188c0d | kv-cache : refactor + add llama_memory_state_i (#13746) | 7 months ago
Xuan-Son Nguyen | 763d06edb7 | llama : fix KV shift for qwen2vl (#13870) | 7 months ago
Đinh Trọng Huy | e0e3aa231d | llama : add support for BertForSequenceClassification reranker (#13858) | 7 months ago
0cc4m | 259469c4b5 | Move GLM4 f32 attention fix to the correct function (#13750) | 7 months ago
Georgi Gerganov | b44890df2e | model : disable SWA for Phi models (#13676) | 8 months ago
0cc4m | c9c64dee57 | Set GLM4 blk.*.attn_output.weight, kqv_out-* matmul to GGML_PREC_F32 to fix infinity values in output (#13639) | 8 months ago
Georgi Gerganov | e298d2fbd0 | kv-cache : add SWA support (#13194) | 8 months ago
Johannes Gäßler | 10d2af0eaa | llama/ggml: add LLM training support (#10544) | 8 months ago
Johannes Gäßler | 0cf6725e9f | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 8 months ago
Xuan-Son Nguyen | 2f54e348ad | llama : fix build_ffn without gate (#13336) | 8 months ago
Georgi Gerganov | c642bc014c | kv-cache : separate recurrent vs non-recurrent impl (#12799) | 8 months ago
Xuan-Son Nguyen | b6ce7430b7 | llama-graph : fix text position for mrope (#13159) | 8 months ago
AT | 5f5e39e1ba | model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466) | 8 months ago
Xuan-Son Nguyen | d2b2031e5f | llama : (mrope) allow using normal 1D position for text token (#13138) | 8 months ago
City | 558a764713 | Force FP32 compute in GLM4 FFN Down (#13101) | 8 months ago
Georgi Gerganov | 2f74c354c0 | graph : make FA compatible with MLA + add initial Metal kernels (#12953) | 9 months ago
Juk Armstrong | daa422881a | llama : DeepSeek V2/V3 MLA implementation (#12801) | 9 months ago
Georgi Gerganov | a19b5cef16 | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 9 months ago
Xuan-Son Nguyen | 1466621e73 | llama : Support llama 4 text-only (#12791) | 9 months ago
Xuan-Son Nguyen | af6ae1efb2 | llama : fix non-causal mask for gemma 3 (#12615) | 9 months ago
Georgi Gerganov | 75422e8bc4 | graph : normalize Q, K, V shapes + sync cross attention (#12449) | 10 months ago
fairydreaming | 8fcb563613 | Load all MoE experts during warmup (#11571) | 10 months ago
Georgi Gerganov | c522ce4143 | graph : simplify attn input build for unified KV cache (#12381) | 10 months ago
Georgi Gerganov | 081bee8c64 | hparams : add SWA rope parameters (#12374) | 10 months ago