Revision history

Author SHA1 Message Date
Sigbjørn Skjæret e98b3692be llama : set qwen3 model type sizes (#13175) 8 months ago
AT 5f5e39e1ba model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466) 8 months ago
Johannes Gäßler 69699be48a CUDA: fix q_nope_absorbed prec for DS 2 Lite f16 (#13137) 9 months ago
Georgi Gerganov 2f74c354c0 graph : make FA compatible with MLA + add initial Metal kernels (#12953) 9 months ago
Juk Armstrong daa422881a llama : DeepSeek V2/V3 MLA implementation (#12801) 9 months ago
Yuxuan Zhang 06bb53ad9b llama-model : add Glm4Model implementation for GLM-4-0414 (#12867) 9 months ago
Xuan-Son Nguyen 8b91d5355a llama : correct rms norm for llama 4 (#12882) 9 months ago
Bo Zheng d3bd7193ba llama : Support Qwen3 and Qwen3MoE (#12828) 9 months ago
Xuan-Son Nguyen 1466621e73 llama : Support llama 4 text-only (#12791) 9 months ago
Diego Devesa e0e912f49b llama : add option to override model tensor buffers (#11397) 9 months ago
Sigbjørn Skjæret 2c3f8b850a llama : support BailingMoE (Ling) (#12634) 9 months ago
Djip007 0bb2919335 llama : change cpu_buft_list order: ACCEL -> GPU host -> CPU extra -> CPU (#12632) 9 months ago
Sigbjørn Skjæret 3714c3ee1a llama : fix incorrect Qwen2Moe ffn_moe_out graph callback (#12631) 10 months ago
Si1w f125b8dccf llama : add PLM GGUF Conversion & Inference Support (#12457) 10 months ago
HighDoping 953c2a62cf model : restore support for T5Encoder (#12590) 10 months ago
Xuan-Son Nguyen fbdfefe74e llama : gemma3 : use output tensor if it exists in model weight (#12506) 10 months ago
Georgi Gerganov af04481e6b model : do not repack if a GPU device is present (#12498) 10 months ago
Sigbjørn Skjæret 960e726077 chore : cleanup llama_model_loader::TENSOR_ usage (#12492) 10 months ago
Sigbjørn Skjæret dbb3a4739e llama : make Qwen2MoE QKV bias optional (#12477) 10 months ago
Sigbjørn Skjæret 108e53c2f1 llama : add support for GPT2, Bloom and CodeShell tied word embeddings (#12456) 10 months ago
Georgi Gerganov 75422e8bc4 graph : normalize Q, K, V shapes + sync cross attention (#12449) 10 months ago
Xuan-Son Nguyen 99aa304fb9 llama : add support for EXAONE tied word embeddings (#12451) 10 months ago
Molly Sophia 7dfad387e3 llama: Add support for RWKV v7 architecture (#12412) 10 months ago
Sigbjørn Skjæret 8ba95dca20 llama : fix OLMo-2-0325-32B-Instruct K-norm size (#12400) 10 months ago
Georgi Gerganov c522ce4143 graph : simplify attn input build for unified KV cache (#12381) 10 months ago
Georgi Gerganov 081bee8c64 hparams : add SWA rope parameters (#12374) 10 months ago
Georgi Gerganov 84d5475541 llama : fix Gemma3 SWA KV cache shift (#12373) 10 months ago
Georgi Gerganov e0dbec0bc6 llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 10 months ago
Xuan-Son Nguyen 7841fc723e llama : Add Gemma 3 support (+ experimental vision capability) (#12343) 10 months ago
Xuan-Son Nguyen c43a3e7996 llama : add Phi-4-mini support (supersede #12099) (#12108) 10 months ago