Commit History

Author  SHA1  Message  Date
Xuan-Son Nguyen  b6ce7430b7  llama-graph : fix text position for mrope (#13159)  8 months ago
AT  5f5e39e1ba  model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466)  8 months ago
Xuan-Son Nguyen  d2b2031e5f  llama : (mrope) allow using normal 1D position for text token (#13138)  8 months ago
City  558a764713  Force FP32 compute in GLM4 FFN Down (#13101)  8 months ago
Georgi Gerganov  2f74c354c0  graph : make FA compatible with MLA + add initial Metal kernels (#12953)  9 months ago
Juk Armstrong  daa422881a  llama : DeepSeek V2/V3 MLA implementation (#12801)  9 months ago
Georgi Gerganov  a19b5cef16  llama : fix FA when KV cache is not used (i.e. embeddings) (#12825)  9 months ago
Xuan-Son Nguyen  1466621e73  llama : Support llama 4 text-only (#12791)  9 months ago
Xuan-Son Nguyen  af6ae1efb2  llama : fix non-causal mask for gemma 3 (#12615)  9 months ago
Georgi Gerganov  75422e8bc4  graph : normalize Q, K, V shapes + sync cross attention (#12449)  10 months ago
fairydreaming  8fcb563613  Load all MoE experts during warmup (#11571)  10 months ago
Georgi Gerganov  c522ce4143  graph : simplify attn input build for unified KV cache (#12381)  10 months ago
Georgi Gerganov  081bee8c64  hparams : add SWA rope parameters (#12374)  10 months ago
Georgi Gerganov  84d5475541  llama : fix Gemma3 SWA KV cache shift (#12373)  10 months ago
Georgi Gerganov  e0dbec0bc6  llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181)  10 months ago