Commit History

Author SHA1 Message Date
Johannes Gäßler 17c97fb062 CUDA: mul_mat_vec_q max. batch size 8 -> 4 (#5370) 2 years ago
Johannes Gäßler 2c516611f1 CUDA: mul_mat_vec_q for batch sizes > 1 (#5351) 2 years ago
slaren 8ca511cade cuda : fix LLAMA_CUDA_F16 (#5262) 2 years ago
JidongZhang-THU 15606309a0 llava : add MobileVLM support (#5132) 2 years ago
Georgi Gerganov 8f8ddfcfad sync : ggml (#0) 2 years ago
John Balis 625a699b54 `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) 2 years ago
Kawrakow f4d7e54974 SOTA 3-bit quants (#5196) 2 years ago
0cc4m 2307523d32 ggml : add Vulkan backend (#2059) 2 years ago
slaren 62fead3ea0 cuda : fix tensor size calculation for non-split buffer (#5145) 2 years ago
Engininja2 cd4fddb29f cuda : fix 2-bit quants on amd hip (#5105) 2 years ago
Johannes Gäßler 9ecdd12e95 CUDA: more info when no device code (#5088) 2 years ago
Kylin cca894f16a cuda : fix compile error in jetson platform (#4975) 2 years ago
Georgi Gerganov 38566680cd ggml : add IQ2 to test-backend-ops + refactoring (#4990) 2 years ago
Justine Tunney a0b3ac8c48 ggml : introduce GGML_CALL function annotation (#4850) 2 years ago
Georgi Gerganov ddb008d845 cuda : fix dequantize kernel names (#4938) 2 years ago
Kawrakow 4a3156de2f CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938) 2 years ago
Johannes Gäßler 3fe81781e3 CUDA: faster q8_0 -> f16 dequantization (#4895) 2 years ago
slaren e7e4df031b llama : ggml-backend integration (#4766) 2 years ago
Johannes Gäßler 1b280c9fff CUDA: fix softmax compile for old CUDA versions (#4862) 2 years ago
Kawrakow 49662cbed3 ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) 2 years ago
Erik Scholz f34432ca1e fix : cuda order of synchronization when setting a buffer (ggml/679) 2 years ago
Johannes Gäßler 8f900abfc0 CUDA: faster softmax via shared memory + fp16 math (#4742) 2 years ago
Kawrakow dd5ae06405 SOTA 2-bit quants (#4773) 2 years ago
Johannes Gäßler d5a410e855 CUDA: fixed redundant value dequantization (#4809) 2 years ago
Konstantin Zhuravlyov 63ee677efd ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787) 2 years ago
Finn Voorhees 1bf681f90e ggml : add error handling to graph_compute (whisper/1714) 2 years ago
Georgi Gerganov 7bed7eba35 cuda : simplify expression 2 years ago
Georgi Gerganov d55356d3ba cuda : mark I16 and I32 ops as unsupported 2 years ago
Johannes Gäßler 39d8bc71ed CUDA: fixed tensor cores not being used on RDNA3 (#4697) 2 years ago
Johannes Gäßler a20f3c7465 CUDA: fix tensor core logic for Pascal and HIP (#4682) 2 years ago