Commit history

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Xuan Son Nguyen | b115105f05 | add llama_lora_adapter_clear (#8653) | 1 year ago |
| Xuan Son Nguyen | de280085e7 | examples : Fix `llama-export-lora` example (#8607) | 1 year ago |
| Vali Malinoiu | b841d07408 | server : fix URL.parse in the UI (#8646) | 1 year ago |
| Joe Todd | 64cf50a0ed | sycl : Add support for non-release DPC++ & oneMKL (#8644) | 1 year ago |
| Georgi Gerganov | 938943cdbf | llama : move vocab, grammar and sampling into separate files (#8508) | 1 year ago |
| 0cc4m | 751fcfc6c3 | Vulkan IQ4_NL Support (#8613) | 1 year ago |
| Jeroen Mostert | 46e47417aa | Allow all RDNA2 archs to use sdot4 intrinsic (#8629) | 1 year ago |
| Georgi Gerganov | e7e6487ba0 | contrib : clarify PR squashing + module names (#8630) | 1 year ago |
| luoyu-intel | 063d99ad11 | [SYCL] fix scratch size of softmax (#8642) | 1 year ago |
| Keke Han | 081fe431aa | llama : fix codeshell support (#8599) | 1 year ago |
| Jason Stillerman | d94c6e0ccb | llama : add support for SmolLm pre-tokenizer (#8609) | 1 year ago |
| Jiří Podivín | 566daa5a5b | *.py: Stylistic adjustments for python (#8233) | 1 year ago |
| Georgi Gerganov | 6f11a83e4e | llama : allow overrides for tokenizer flags (#8614) | 1 year ago |
| Georgi Gerganov | e093dd2382 | tests : re-enable tokenizer tests (#8611) | 1 year ago |
| Douglas Hanley | 50e05353e8 | llama : add Mistral Nemo inference support (#8604) | 1 year ago |
| Jan Boon | 628154492a | server : update doc to clarify n_keep when there is bos token (#8619) | 1 year ago |
| Mark Zhuang | 04bab6b7da | ggml: fix compile error for RISC-V (#8623) | 1 year ago |
| devojony | b7c11d36e6 | examples: fix android example cannot be generated continuously (#8621) | 1 year ago |
| Georgi Gerganov | 45f2c19cc5 | flake.lock: Update (#8610) | 1 year ago |
| M-A | 22f281aa16 | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 1 year ago |
| compilade | 328884f421 | gguf-py : fix some metadata name extraction edge cases (#8591) | 1 year ago |
| compilade | c69c63039c | convert_hf : fix Gemma v1 conversion (#8597) | 1 year ago |
| Johannes Gäßler | 69c487f4ed | CUDA: MMQ code deduplication + iquant support (#8495) | 1 year ago |
| Georgi Gerganov | 07283b1a90 | gguf : handle null name during init (#8587) | 1 year ago |
| Michael Coppola | 940362224d | llama : add support for Tekken pre-tokenizer (#8579) | 1 year ago |
| Huifeng Ou | 69b9945b44 | llama.swiftui: fix end of generation bug (#8268) | 1 year ago |
| Brian | c3776cacab | gguf_dump.py: fix markddown kv array print (#8588) | 1 year ago |
| slaren | 87e397d00b | ggml : fix quant dot product with odd number of blocks (#8549) | 1 year ago |
| Brian | 57b1d4f9eb | convert-*.py: remove add_name from ChatGLMModel class (#8590) | 1 year ago |
| Georgi Gerganov | d197545530 | llama : bump max layers from 256 to 512 (#8530) | 1 year ago |