Commit History

Author SHA1 Message Date
  Sigbjørn Skjæret 07b0e7a5ac convert : use self.block_count everywhere instead of reading hparams (#17359) 1 month ago
  Sigbjørn Skjæret 662192e1dc convert : remove unnecessary chat template patching (#17289) 2 months ago
  Sigbjørn Skjæret 9a8860cf5d convert : use all parts in safetensors index (#17286) 2 months ago
  Sigbjørn Skjæret 9d3ef4809f convert : set expert gating func in base class (#17279) 2 months ago
  Bartowski e1fcf8b09b model : add AfmoeForCausalLM support (#16477) 2 months ago
  levkropp 2fc392ce35 convert : register UMT5Model architecture for T5 conversion (#17160) 2 months ago
  compilade 802cef44bf convert : parse safetensors directly (#15667) 2 months ago
  compilade 1c07c0c68c convert : handle compressed-tensors quant method (#17069) 2 months ago
  Li Pengzhan 9f052478c2 model : add openPangu-Embedded (#16941) 2 months ago
  Zhiyong Wang 6b9a52422b model : add Janus Pro for image understanding (#16906) 2 months ago
  Piotr Wilkin (ilintar) 0de0a01576 model : Minimax M2 (#16831) 2 months ago
  JJJYmmm d261223d24 model : add support for qwen3vl series (#16780) 2 months ago
  Tianyue-Zhao bacddc049a model : add support for CogVLM model (#15002) 2 months ago
  Xuan-Son Nguyen c55d53acec model : add LightOnOCR-1B model (#16764) 2 months ago
  Sigbjørn Skjæret 73a48c9790 convert : enable expert group selection for all models with it (#16691) 2 months ago
  Galunid 5d195f17bc convert : handle mmproj filename/path properly (#16760) 2 months ago
  compilade 5cca2542ac convert : avoid dequantizing mxfp4 for GPT-OSS (#16756) 2 months ago
  compilade f8f071fadd convert : handle pre-quantized models (#14810) 2 months ago
  Julien Denize dd62dcfab9 convert : make mistral-common dependency optional (#16738) 2 months ago
  Sigbjørn Skjæret 84bf3c6778 model : add BailingMoeV2 support (#16063) 2 months ago
  amirai21 477a66b035 convert : correctly handle LLaMA tokenizer for Jamba (#16470) 3 months ago
  Saba Fallah e08db42595 model : EmbeddingGemma, add support for SentenceTransformers Dense modules (#16367) 3 months ago
  Tarek Dakhran aeaf8a36f0 llama : support LiquidAI LFM2-MoE hybrid model (#16464) 3 months ago
  Gabe Goodhart ca71fb9b36 model : Granite Docling + Idefics3 preprocessing (SmolVLM) (#16206) 3 months ago
  Piotr Wilkin (ilintar) 34fcc5a4ac model : Apertus model implementation (#15852) 3 months ago
  Shunta Saito ded67b9444 llama : parameter conversion and loading fixes for PLaMo2 variants (#16075) 3 months ago
  Sigbjørn Skjæret 835b2b915c model : add GroveMoE support (#15510) 3 months ago
  Douglas Hanley b5bd037832 llama : add support for qwen3 reranker (#15824) 3 months ago
  Gabe Goodhart 1d0125bcf1 feat : add conversion support in GraniteHybrid for non-hybrid (all attn) (#16177) 3 months ago
  Xuan-Son Nguyen 8f8f2274ee convert : add Llama4ForCausalLM (#16042) 4 months ago