Commit history

Author  SHA1  Message  Date
Sigbjørn Skjæret  74fef4129f  codeowners : update after refactor (#16905)  2 months ago
Jeff Bolz  5d8bb900bc  vulkan: Fix multi_add invalid descriptor usage (#16899)  2 months ago
Jeff Bolz  2e76e01360  vulkan: fuse mul_mat+add and mul_mat_id+add_id (#16868)  2 months ago
Oliver Simons  d3dc9dd898  CUDA: Remove unneded bias/gate dims in fused mmvq (#16858)  2 months ago
Piotr Wilkin (ilintar)  bea04522ff  refactor : llama-model.cpp (#16252)  2 months ago
Piotr Wilkin (ilintar)  0de0a01576  model : Minimax M2 (#16831)  2 months ago
Giuseppe Scrivano  e58d585604  model : add Granite Hybrid nano types (#16896)  2 months ago
Johannes Gäßler  31c511a968  CUDA: Volta tensor core support for MMF (#16843)  2 months ago
Georgi Gerganov  6d39015a74  sync : ggml  2 months ago
Aman Gupta  4146d6a1a6  CUDA: add expert reduce kernel (#16857)  2 months ago
Georgi Gerganov  8da3c0e200  batch : fix consistency checks for the input positions (#16890)  2 months ago
Georgi Gerganov  c22473b580  server : don't print user inputs to console (#16871)  2 months ago
Daniel Bevenius  0f715b4e75  server : fix typos in server.cpp comments [no ci] (#16883)  2 months ago
Jeff Bolz  d2d931f173  vulkan: disable spirv-opt for rope shaders (#16872)  2 months ago
Masato Nakasaka  2976b0374d  vulkan: Fix crash when FP16 mul_mat accumulation is not supported (#16796)  2 months ago
Ruben Ortlam  d2a2673dd1  vulkan: fix shmem overrun in mmq id shader (#16873)  2 months ago
l3utterfly  13002a0896  ggml-hexagon: respect input size when getting/setting tensor data (#16836)  2 months ago
Sigbjørn Skjæret  6eb208d17e  ci : enable free-disk-space on cuda docker build (#16877)  2 months ago
lhez  9984cbb61d  opencl: fix boundary handling for mul_mm (#16875)  2 months ago
RodriMora  ce18efeaf1  convert : update transformers requirements (#16866)  2 months ago
chansikpark  16724b5b68  server : bump request URI max length to 32768 (#16862)  2 months ago
Georgi Gerganov  b52edd2558  server : remove n_past (#16818)  2 months ago
Max Krasnyansky  517b7170e1  cpu: introduce chunking for repack matmuls and enable matmul-id chunking on ARM64 (#16833)  2 months ago
Shagun Bera  835e918d84  common: fix typo in cli help text (#16864)  2 months ago
JJJYmmm  d261223d24  model: add support for qwen3vl series (#16780)  2 months ago
Max Krasnyansky  dcca0d3ab8  cpu: introduce chunking for flash attention (#16829)  2 months ago
Tianyue-Zhao  bacddc049a  model: Add support for CogVLM model (#15002)  2 months ago
Sigbjørn Skjæret  229bf68628  cuda : fix argsort with 64k+ rows (#16849)  2 months ago
Jan Boon  d7395115ba  llama : use std::abs instead of abs (#16853)  2 months ago
Jeff Bolz  052df28b0e  vulkan: Handle argsort with a large number of rows (#16851)  2 months ago