Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Johannes Gäßler | c8771ab5f8 | CUDA: fix misaligned shared memory read (#8123) | 1 year ago |
| Eddie-Wang | 494165f3b6 | llama : extend llm_build_ffn() to support _scale tensors (#8103) | 1 year ago |
| Olivier Chafik | 9b2f16f805 | `json`: better support for "type" unions (e.g. nullable arrays w/ typed items) (#7863) | 1 year ago |
| Olivier Chafik | 6777c544bd | `json`: fix additionalProperties, allow space after enum/const (#7840) | 1 year ago |
| jukofyork | 163d50adaf | fixes #7999 (adds control vectors to all `build_XXX()` functions in `llama.cpp` [needs testing] (#8060) | 1 year ago |
| fairydreaming | 6fcbf68235 | llama : implement Unigram tokenizer needed by T5 and FLAN-T5 model families (#5763) | 1 year ago |
| Daniel Bevenius | e6bf007744 | llama : return nullptr from llama_grammar_init (#8093) | 1 year ago |
| Olivier Chafik | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| slaren | dd047b476c | disable docker CI on pull requests (#8110) | 1 year ago |
| joecryptotoo | 925c30956d | Add healthchecks to llama-server containers (#8081) | 1 year ago |
| Brian | c8ad35955a | Gguf dump start data offset via --data-offset and some extra refactor (#8054) | 1 year ago |
| Xuan Son Nguyen | 49c03c79cd | cvector: better prompt handling, add "mean vector" method (#8069) | 1 year ago |
| Xuan Son Nguyen | 48e6b92cc3 | Add chat template support for llama-cli (#8068) | 1 year ago |
| HanishKVC | 3791ad2193 | SimpleChat v3.1: Boolean chat request options in Settings UI, cache_prompt (#7950) | 1 year ago |
| HatsuneMikuUwU33 | f702a90e24 | Update control vector help (#8104) | 1 year ago |
| Meng, Hengyu | 083bacce14 | [SYCL] Re-enabled mul_mat_batched_sycl (#8095) | 1 year ago |
| Johannes Gäßler | 2df373ac40 | CUDA: fix matrix multiplication algorithm choice (#8102) | 1 year ago |
| Johannes Gäßler | 3b099bcd9c | CUDA: fix MMQ writeback for int8 tensor cores (#8100) | 1 year ago |
| Johannes Gäßler | a818f3028d | CUDA: use MMQ instead of cuBLAS by default (#8075) | 1 year ago |
| fairydreaming | d62e4aaa02 | gguf-py : fix tensor groups for encoder-decoder models in gguf-dump.py (#8090) | 1 year ago |
| Johannes Gäßler | 9a590c8226 | CUDA: optimize MMQ int8 tensor core performance (#8062) | 1 year ago |
| Christian Zhou-Zheng | 52fc8705a0 | Option to split during conversion (#6942) | 1 year ago |
| slaren | 8cb508d0d5 | disable publishing the full-rocm docker image (#8083) | 1 year ago |
| Yann Follet | 646ef4a9cf | embedding : more cli arguments (#7458) | 1 year ago |
| fairydreaming | de0d6a68ac | gguf-py, convert-hf : model conversion support for T5 and FLAN-T5 model variants (#5763) | 1 year ago |
| slaren | 95f57bb5d5 | ggml : remove ggml_task_type and GGML_PERF (#8017) | 1 year ago |
| Eddie-Wang | e112b610a1 | llama : add support for BitnetForCausalLM (#7931) | 1 year ago |
| Aarni Koskela | 6a2f298bd7 | server : fix JSON-Scheme typo (#7975) | 1 year ago |
| Daniel Bevenius | 11318d9aa1 | Fix typo in llama_set_embeddings comment (#8077) | 1 year ago |
| slaren | b6b9a8e606 | fix CI failures (#8066) | 1 year ago |