Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| JidongZhang-THU | 15606309a0 | llava : add MobileVLM support (#5132) | 2 years ago |
| Jared Van Bortel | e8dc55d006 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | 2 years ago |
| Kawrakow | f4d7e54974 | SOTA 3-bit quants (#5196) | 2 years ago |
| 0cc4m | 2307523d32 | ggml : add Vulkan backend (#2059) | 2 years ago |
| Abhilash Majumder | 0f648573dd | ggml : add unified SYCL backend for Intel GPUs (#2690) | 2 years ago |
| Georgi Gerganov | 89758723c7 | minor : clean-up some warnings and style (#5094) | 2 years ago |
| XiaotaoChen | 3ce7e8f8e7 | llava : MobileVLM support (#4954) | 2 years ago |
| Georgi Gerganov | 38566680cd | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | 2 years ago |
| Georgi Gerganov | ba69bbc84c | imatrix : offload to GPU support (#4957) | 2 years ago |
| Justine Tunney | a0b3ac8c48 | ggml : introduce GGML_CALL function annotation (#4850) | 2 years ago |
| Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago |
| slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago |
| Kawrakow | 326b418b59 | Importance Matrix calculation (#4861) | 2 years ago |
| Kawrakow | 49662cbed3 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | 2 years ago |
| Timothy Cronin | f85a973aa1 | ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693) | 2 years ago |
| leejet | e739de7909 | ggml : change GGML_MAX_NAME at compile time (ggml/682) | 2 years ago |
| Kawrakow | dd5ae06405 | SOTA 2-bit quants (#4773) | 2 years ago |
| automaticcat | 24a447e20a | ggml : add ggml_cpu_has_avx_vnni() (#4589) | 2 years ago |
| slaren | 5bf3953d7e | cuda : improve cuda pool efficiency using virtual memory (#4606) | 2 years ago |
| bobqianic | 0137ef88ea | ggml : extend `enum ggml_log_level` with `GGML_LOG_LEVEL_DEBUG` (#4579) | 2 years ago |
| Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago |
| slaren | d232aca5a7 | llama : initial ggml-backend integration (#4520) | 2 years ago |
| Eric Sommerlade | 328b83de23 | ggml : fixed check for _MSC_VER (#4535) | 2 years ago |
| Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago |
| slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago |
| slaren | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago |
| LostRuins | 20a68a7030 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2 years ago |
| Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago |
| slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago |
| Taikono-Himazin | 41a11aaf99 | ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424) | 2 years ago |