Commit history

Author SHA1 Message Date
Georgi Gerganov ab336a9d5e code : normalize enum names (#5697) 1 year ago
Anas Ahouzi 69917dfa55 py : fix StableLM conversion after config.json changes (#5703) 1 year ago
Pierrick Hymbert 9e359a4f47 server: continue to update other slots on embedding concurrent request (#5699) 1 year ago
Kawrakow 4c4cb30736 IQ3_S: a much better alternative to Q3_K (#5676) 1 year ago
Pierrick Hymbert 525213d2f5 server: init functional tests (#5566) 1 year ago
AlpinDale fd43d66f46 server : add KV cache quantization options (#5684) 1 year ago
Jared Van Bortel 54fbcd2ce6 convert : fix missing ftype for gemma (#5690) 1 year ago
Jared Van Bortel 15499eb942 mpt : do not duplicate token_embd.weight on disk (#5670) 1 year ago
Georgi Gerganov 96633eeca1 gemma : use more bits for the token_embd.weight tensor (#5650) 1 year ago
Georgi Gerganov 847eedbdb2 py : add Gemma conversion from HF models (#5647) 1 year ago
Georgi Gerganov 7e4f339c40 ggml : always define ggml_fp16_t as uint16_t (#5666) 1 year ago
Georgi Gerganov 334f76fa38 sync : ggml 1 year ago
Georgi Gerganov efd56b1c21 ggml : 32-bit arm compat (whisper/1891) 1 year ago
Someone 201294ae17 nix: init singularity and docker images (#5056) 1 year ago
Georgi Gerganov 5a9e2f60ba py : minor fixes (#5668) 1 year ago
Xuan Son Nguyen 373ee3fbba Add Gemma chat template (#5665) 1 year ago
Someone 4cb4d8b22d workflows: nix: hardcode cachix ids, build unconditionally (#5663) 1 year ago
Georgi Gerganov 3a03541ced minor : fix trailing whitespace (#5638) 1 year ago
Georgi Gerganov 56d03d92be readme : update hot topics 1 year ago
Xuan Son Nguyen a46f50747b server : fallback to chatml, add AlphaMonarch chat template (#5628) 1 year ago
Alexey Parfenov c5688c6250 server : clarify some params in the docs (#5640) 1 year ago
Dat Quoc Nguyen 4ef245a92a mpt : add optional bias tensors (#5638) 1 year ago
slaren 973053d8b0 llama : fix loading models with shared tok_embd and output (#5651) 1 year ago
Xuan Son Nguyen 7c8bcc11dc Add docs for llama_chat_apply_template (#5645) 1 year ago
slaren 7fe4678b02 llama : fix session save/load with quantized KV (#5649) 1 year ago
slaren ba2135ccae gemma : allow offloading the output tensor (#5646) 1 year ago
Jared Van Bortel 89febfed93 examples : do not assume BOS when shifting context (#5622) 1 year ago
Georgi Gerganov 5022cf242d sync : ggml 1 year ago
Pierrick Hymbert 1ecea255eb server: health: fix race condition on slots data using tasks queue (#5634) 1 year ago
Ettore Di Giacinto a00a35cef9 readme : add LocalAI to the availables UI (#5629) 1 year ago