Commit History

Author | SHA1 | Message | Date
Bartowski | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago
levkropp | 2fc392ce35 | convert : register UMT5Model architecture for T5 conversion (#17160) | 2 months ago
compilade | 802cef44bf | convert : parse safetensors directly (#15667) | 2 months ago
compilade | 1c07c0c68c | convert : handle compressed-tensors quant method (#17069) | 2 months ago
Li Pengzhan | 9f052478c2 | model : add openPangu-Embedded (#16941) | 2 months ago
Zhiyong Wang | 6b9a52422b | model: add Janus Pro for image understanding (#16906) | 3 months ago
Piotr Wilkin (ilintar) | 0de0a01576 | model : Minimax M2 (#16831) | 3 months ago
JJJYmmm | d261223d24 | model: add support for qwen3vl series (#16780) | 3 months ago
Tianyue-Zhao | bacddc049a | model: Add support for CogVLM model (#15002) | 3 months ago
Xuan-Son Nguyen | c55d53acec | model : add LightOnOCR-1B model (#16764) | 3 months ago
Sigbjørn Skjæret | 73a48c9790 | convert : enable expert group selection for all models with it (#16691) | 3 months ago
Galunid | 5d195f17bc | convert : handle mmproj filename/path properly (#16760) | 3 months ago
compilade | 5cca2542ac | convert : avoid dequantizing mxfp4 for GPT-OSS (#16756) | 3 months ago
compilade | f8f071fadd | convert : handle pre-quantized models (#14810) | 3 months ago
Julien Denize | dd62dcfab9 | convert : Make mistral-common dependency optional (#16738) | 3 months ago
Sigbjørn Skjæret | 84bf3c6778 | model : add BailingMoeV2 support (#16063) | 3 months ago
amirai21 | 477a66b035 | convert : correctly handle LLaMA tokenizer for Jamba (#16470) | 3 months ago
Saba Fallah | e08db42595 | model: EmbeddingGemma Adding Support for SentenceTransformers Dense Modules (#16367) | 3 months ago
Tarek Dakhran | aeaf8a36f0 | llama : support LiquidAI LFM2-MoE hybrid model (#16464) | 3 months ago
Gabe Goodhart | ca71fb9b36 | model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206) | 3 months ago
Piotr Wilkin (ilintar) | 34fcc5a4ac | model : Apertus model implementation (#15852) | 4 months ago
Shunta Saito | ded67b9444 | llama : parameter conversion and loading fixes for PLaMo2 variants (#16075) | 4 months ago
Sigbjørn Skjæret | 835b2b915c | model : add GroveMoE support (#15510) | 4 months ago
Douglas Hanley | b5bd037832 | llama : add support for qwen3 reranker (#15824) | 4 months ago
Gabe Goodhart | 1d0125bcf1 | feat: Add conversion support in GraniteHybrid for non-hybrid (all attn) (#16177) | 4 months ago
Xuan-Son Nguyen | 8f8f2274ee | convert : add Llama4ForCausalLM (#16042) | 4 months ago
Shane A | 85286f3548 | model : add OLMo3 support (#16015) | 4 months ago
Aman Gupta | 6d758839ff | Add LLaDA-7b-MoE diffusion model (#16003) | 4 months ago
Sigbjørn Skjæret | b8e09f08b9 | model : add grok-2 support (#15539) | 4 months ago
Jie Fu (傅杰) | 4f658855fa | llama : support T5 models with unequal number of encoder-decoder layers (#15909) | 4 months ago