
Commit history

Author SHA1 Message Date
Junyang Lin 3fec68be4e convert : add support of codeqwen due to tokenizer (#6707) 1 year ago
liuwei-git c8297c6af5 llama : add phi3 support (#6852) 1 year ago
Anas Ahouzi 4e96a812b3 [SYCL] Windows default build instructions without -DLLAMA_SYCL_F16 flag activated (#6767) 1 year ago
Justine Tunney 192090bae4 llamafile : improve sgemm.cpp (#6796) 1 year ago
Dave Airlie e931888d50 ggml : fix calloc argument ordering. (#6820) 1 year ago
Georgi Gerganov 8960fe86ae llama : fix typo in <|im_end|> token text (#6745) 1 year ago
Pierrick Hymbert c0956b09ba ci: fix job are cancelling each other (#6781) 1 year ago
github-actions[bot] e9b4a1bf68 flake.lock: Update 1 year ago
Olivier Chafik 5cf5e7d490 `build`: generate hex dump of server assets during build (#6661) 1 year ago
Georgi Gerganov 40f74e4d73 llama : add option to render special/control tokens (#6807) 1 year ago
Georgi Gerganov b9cc76d87e ggml : fix ggml_backend_cpu_supports_op() for CPY (#0) 1 year ago
Wouter 7dbdba5690 llama : add llama-3 chat template (#6751) 1 year ago
pmysl c1386c936e gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761) 1 year ago
Jan Boon e8d35f47cb doc : add link to falcon (#6789) 1 year ago
Mohammadreza Hendiani 2cca09d509 readme : add Fedora instructions (#6783) 1 year ago
Justine Tunney 89b0bf0d5d llava : use logger in llava-cli (#6797) 1 year ago
Pedro Cuenca b97bc3966e llama : support Llama 3 HF conversion (#6745) 1 year ago
Jan Boon b8109bc013 doc : server tests require llama to be built with curl enabled (#6788) 1 year ago
Georgi Gerganov aed82f6837 common : try to fix Android CI (#6780) 1 year ago
loonerin 0e4802b2ec ci: add ubuntu latest release and fix missing build number (mac & ubuntu) (#6748) 1 year ago
Pierrick Hymbert 637e9a86c2 server: static: upstream upgrade (#6765) 1 year ago
nopperl 9958c81b79 Implement the OLMo architecture (#6741) 1 year ago
Austin 8b1b1f4982 train : add general name (#6752) 1 year ago
Neo Zhang bca40e9814 fix wrong parameter in cmd in readme-sycl.md (#6755) 1 year ago
slaren 0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505) 1 year ago
Sigbjørn Skjæret 03c0946d73 convert : support models with multiple chat templates (#6588) 1 year ago
Ren Xuancheng e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738) 1 year ago
slaren c71bfd736e llama : fix compatibility with old 2 expert models (#6735) 1 year ago
Georgi Gerganov 3b8f1ec4b1 llamafile : tmp disable + build sgemm.o when needed (#6716) 1 year ago
Yaroslav 8dd1ec8b3f readme : add UI (#6724) 1 year ago