Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| leejet | 7d43c585dc | add some new ops, fix some operators and add batch operations to certain operators. (ggml/747) | 1 year ago |
| slaren | 67be2ce101 | cuda : fix data race in soft max (#5853) | 1 year ago |
| Kawrakow | bbde6eb256 | ggml : IQ3_S improvements (#5829) | 1 year ago |
| UEXTM.com | 5f70671856 | Introduce backend GUIDs (ggml/743) | 1 year ago |
| Kawrakow | 7c4263d426 | ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760) | 1 year ago |
| Kawrakow | 0becb22ac0 | IQ4_XS: a 4.25 bpw quantization (#5747) | 1 year ago |
| Engininja2 | c24a2a6e60 | cuda : replace remaining shfl_xor with calls to warp_reduce functions (#5744) | 1 year ago |
| Kawrakow | a33e6a0d2a | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) | 1 year ago |
| Johannes Gäßler | 47bb7b48c7 | CUDA: fix DEBUG_CUDA_MALLOC (#5729) | 1 year ago |
| Georgi Gerganov | ab336a9d5e | code : normalize enum names (#5697) | 1 year ago |
| Kawrakow | 4c4cb30736 | IQ3_S: a much better alternative to Q3_K (#5676) | 1 year ago |
| Georgi Gerganov | 7e4f339c40 | ggml : always define ggml_fp16_t as uint16_t (#5666) | 1 year ago |
| Kawrakow | a14679cc30 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | 1 year ago |
| slaren | 40c3a6c1e1 | cuda : ignore peer access already enabled errors (#5597) | 1 year ago |
| Georgi Gerganov | d0e3ce51f4 | ci : enable -Werror for CUDA builds (#5579) | 1 year ago |
| slaren | 3a9cb4ca64 | cuda, metal : fix nans in soft_max (#5574) | 1 year ago |
| Kawrakow | bd2d4e393b | 1.5 bit quantization (#5453) | 1 year ago |
| Georgi Gerganov | 8f1be0d42f | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | 1 year ago |
| slaren | 9060a1e9df | cuda : print message when initialization fails (#5512) | 1 year ago |
| Johannes Gäßler | 3bdc4cd0f5 | CUDA: mul_mat_vec_q tiling, refactor mul mat logic (#5434) | 1 year ago |
| Johannes Gäßler | 8e6a9d2de0 | CUDA: more warps for mmvq on NVIDIA (#5394) | 1 year ago |
| Johannes Gäßler | aa7ab99be2 | CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (#5386) | 1 year ago |
| Johannes Gäßler | 17c97fb062 | CUDA: mul_mat_vec_q max. batch size 8 -> 4 (#5370) | 1 year ago |
| Johannes Gäßler | 2c516611f1 | CUDA: mul_mat_vec_q for batch sizes > 1 (#5351) | 1 year ago |
| slaren | 8ca511cade | cuda : fix LLAMA_CUDA_F16 (#5262) | 2 years ago |
| JidongZhang-THU | 15606309a0 | llava : add MobileVLM support (#5132) | 2 years ago |
| Georgi Gerganov | 8f8ddfcfad | sync : ggml (#0) | 2 years ago |
| John Balis | 625a699b54 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | 2 years ago |
| Kawrakow | f4d7e54974 | SOTA 3-bit quants (#5196) | 2 years ago |
| 0cc4m | 2307523d32 | ggml : add Vulkan backend (#2059) | 2 years ago |