Commit History

Author SHA1 Message Date
Johannes Gäßler 7f9753fa12 CUDA GPU acceleration for LoRAs + f16 models (#1970) 2 years ago
Johannes Gäßler 254a7a7a5f CUDA full GPU acceleration, KV cache in VRAM (#1827) 2 years ago
Howard Su 58970a4c39 Leverage mmap for offloading tensors to GPU (#1597) 2 years ago
Johannes Gäßler 17366df842 Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) 2 years ago
Johannes Gäßler affc76edfd cuda : loading models directly into VRAM, norm calculation on GPU, broadcasting for ggml_mul (#1483) 2 years ago
Johannes Gäßler 905d87b70a ggml : GPU-accelerated token generation (#1412) 2 years ago
slaren 58b367c2d7 cuBLAS: refactor and optimize f16 mat mul performance (#1259) 2 years ago
slaren 7fc50c051a cuBLAS: use host pinned memory and dequantize while copying (#1207) 2 years ago
Henri Vasserman b1ee8f59b4 cuBLAS: non-contiguous tensor support (#1215) 2 years ago
Stephan Walter 36d19a603b Remove Q4_3 which is no better than Q5 (#1218) 2 years ago
Georgi Gerganov 574406dc7e ggml : add Q5_0 and Q5_1 quantization (#1187) 2 years ago
Georgi Gerganov 7a32fcb3b2 ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) 2 years ago
slaren 50cb666b8a Improve cuBLAS performance by using a memory pool (#1094) 2 years ago
slaren 2005469ea1 Add Q4_3 support to cuBLAS (#1086) 2 years ago
slaren 02d6988121 Improve cuBLAS performance by dequantizing on the GPU (#1065) 2 years ago