Commit History

Author           SHA1        Date         Message
Kawrakow         99009e72f8  2 years ago  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)
Georgi Gerganov  ecb217db4f  2 years ago  llama : Metal inference (#1642)
Georgi Gerganov  7552ac5863  2 years ago  ggml : sync cgraph import / export API
Georgi Gerganov  93618031c7  2 years ago  ggml : add ggml_tensor_overhead()
Georgi Gerganov  bdbda1b17a  2 years ago  ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name())
0cc4m            2e6cd4b025  2 years ago  OpenCL Token Generation Acceleration (#1459)
Georgi Gerganov  3de84b2606  2 years ago  ggml : add ggml_clamp() (#1539)
Georgi Gerganov  2d5db48371  2 years ago  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
Georgi Gerganov  13c351ad72  2 years ago  ggml : various fixes (#1450)
Georgi Gerganov  601a033475  2 years ago  ggml : add GGML_QNT_VERSION to track quantization format changes
Johannes Gäßler  905d87b70a  2 years ago  ggml : GPU-accelerated token generation (#1412)
xaedes           f954edda93  2 years ago  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)
Georgi Gerganov  b9fd7eee57  2 years ago  ggml : remove bit shuffling (#1405)
slaren           2d099e5193  2 years ago  ggml: add names to tensors (#1268)
slaren           58b367c2d7  2 years ago  cuBLAS: refactor and optimize f16 mat mul performance (#1259)
Georgi Gerganov  6bc4400e67  2 years ago  ggml : add Q5 WASM SIMD + GGML_FTYPE
Georgi Gerganov  0b5a935099  2 years ago  ggml : fix visibility and unused warnings
Stephan Walter   36d19a603b  2 years ago  Remove Q4_3 which is no better than Q5 (#1218)
Georgi Gerganov  55390bcaf2  2 years ago  ggml : sync ggml (ggml_alibi)
0cc4m            7296c961d9  2 years ago  ggml : add CLBlast support (#1164)
Georgi Gerganov  574406dc7e  2 years ago  ggml : add Q5_0 and Q5_1 quantization (#1187)
Georgi Gerganov  7a32fcb3b2  2 years ago  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)
Georgi Gerganov  8a0f8673ba  2 years ago  ggml : export symbols (#1155)
Georgi Gerganov  12b5900dbc  2 years ago  ggml : sync ggml (add GPT-NeoX RoPE implementation)
Kawrakow         38de86a711  2 years ago  llama : multi-threaded quantization (#1075)
Georgi Gerganov  e0305ead3a  2 years ago  ggml : add Q4_3 quantization (#1082)
slaren           8944a13296  2 years ago  Add NVIDIA cuBLAS support (#1044)
Georgi Gerganov  77a73403ca  2 years ago  ggml : add new Q4_2 quantization (ARM only) (#1046)
slaren           315a95a4d3  2 years ago  Add LoRA support (#820)
Ivan Komarov     f266259ad9  2 years ago  Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)