Commit History

| Author | SHA | Message | Date |
|---|---|---|---|
| fairydreaming | 6fcbf68235 | llama : implement Unigram tokenizer needed by T5 and FLAN-T5 model families (#5763) | 1 year ago |
| Daniel Bevenius | e6bf007744 | llama : return nullptr from llama_grammar_init (#8093) | 1 year ago |
| Olivier Chafik | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| slaren | dd047b476c | disable docker CI on pull requests (#8110) | 1 year ago |
| joecryptotoo | 925c30956d | Add healthchecks to llama-server containers (#8081) | 1 year ago |
| Brian | c8ad35955a | Gguf dump start data offset via --data-offset and some extra refactor (#8054) | 1 year ago |
| Xuan Son Nguyen | 49c03c79cd | cvector: better prompt handling, add "mean vector" method (#8069) | 1 year ago |
| Xuan Son Nguyen | 48e6b92cc3 | Add chat template support for llama-cli (#8068) | 1 year ago |
| HanishKVC | 3791ad2193 | SimpleChat v3.1: Boolean chat request options in Settings UI, cache_prompt (#7950) | 1 year ago |
| HatsuneMikuUwU33 | f702a90e24 | Update control vector help (#8104) | 1 year ago |
| Meng, Hengyu | 083bacce14 | [SYCL] Re-enabled mul_mat_batched_sycl (#8095) | 1 year ago |
| Johannes Gäßler | 2df373ac40 | CUDA: fix matrix multiplication algorithm choice (#8102) | 1 year ago |
| Johannes Gäßler | 3b099bcd9c | CUDA: fix MMQ writeback for int8 tensor cores (#8100) | 1 year ago |
| Johannes Gäßler | a818f3028d | CUDA: use MMQ instead of cuBLAS by default (#8075) | 1 year ago |
| fairydreaming | d62e4aaa02 | gguf-py : fix tensor groups for encoder-decoder models in gguf-dump.py (#8090) | 1 year ago |
| Johannes Gäßler | 9a590c8226 | CUDA: optimize MMQ int8 tensor core performance (#8062) | 1 year ago |
| Christian Zhou-Zheng | 52fc8705a0 | Option to split during conversion (#6942) | 1 year ago |
| slaren | 8cb508d0d5 | disable publishing the full-rocm docker image (#8083) | 1 year ago |
| Yann Follet | 646ef4a9cf | embedding : more cli arguments (#7458) | 1 year ago |
| fairydreaming | de0d6a68ac | gguf-py, convert-hf : model conversion support for T5 and FLAN-T5 model variants (#5763) | 1 year ago |
| slaren | 95f57bb5d5 | ggml : remove ggml_task_type and GGML_PERF (#8017) | 1 year ago |
| Eddie-Wang | e112b610a1 | llama : add support for BitnetForCausalLM (#7931) | 1 year ago |
| Aarni Koskela | 6a2f298bd7 | server : fix JSON-Scheme typo (#7975) | 1 year ago |
| Daniel Bevenius | 11318d9aa1 | Fix typo in llama_set_embeddings comment (#8077) | 1 year ago |
| slaren | b6b9a8e606 | fix CI failures (#8066) | 1 year ago |
| 0cc4m | 45c0e2e4c1 | Refactor Vulkan backend to allow multiple contexts (#7961) | 1 year ago |
| Clint Herron | b5a5f34efa | Removing extra blank lines that were breaking Lint. (#8067) | 1 year ago |
| Xuan Son Nguyen | 3e58b0ee35 | cvector: fix CI + correct help message (#8064) | 1 year ago |
| HatsuneMikuUwU33 | adf480c3ab | cvector-generator: Moe Moe Fixie-Fixie for Lots of Formats~! ♡(ᐢ ᴥ ᐢ)♡ (#8052) | 1 year ago |
| 0xspringtime | 3aa184a8c7 | convert-hf : change assert to exception (#8015) | 1 year ago |