Commit history

Author SHA1 Message Date
Qingyou Meng 1d656d6360 ggml : change ggml_graph_compute() API to not require context (#1999) 2 years ago
Georgi Gerganov dfd9fce6d6 ggml : fix restrict usage 2 years ago
Stephan Walter 1b107b8550 ggml : generalize `quantize_fns` for simpler FP16 handling (#1237) 2 years ago
Georgi Gerganov ed9a54e512 ggml : sync latest (new ops, macros, refactoring) (#2106) 2 years ago
Qingyou Meng b1ca8f36a9 ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995) 2 years ago
Georgi Gerganov d9779021bd ggml : add support for ChatGLM RoPE 2 years ago
David Yang eaa6ca5a61 ggml : increase max tensor name + clean up compiler warnings in train-text (#1988) 2 years ago
zrm b853d45601 ggml : add NUMA support (#1556) 2 years ago
Georgi Gerganov bd34cdde38 ggml : sync latest ggml (custom operators) 2 years ago
slaren f2c754e1c3 ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) 2 years ago
Georgi Gerganov b97ca431db ggml : sync latest ggml repo (#1924) 2 years ago
Georgi Gerganov ce2c7d72e2 metal : handle buffers larger than device's maxBufferLength (#1826) 2 years ago
Johannes Gäßler 254a7a7a5f CUDA full GPU acceleration, KV cache in VRAM (#1827) 2 years ago
xaedes e32089b2c2 train : improved training-from-scratch example (#1652) 2 years ago
Johannes Gäßler 17366df842 Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) 2 years ago
Kawrakow 99009e72f8 ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) 2 years ago
Georgi Gerganov ecb217db4f llama : Metal inference (#1642) 2 years ago
Georgi Gerganov 7552ac5863 ggml : sync cgraph import / export API 2 years ago
Georgi Gerganov 93618031c7 ggml : add ggml_tensor_overhead() 2 years ago
Georgi Gerganov bdbda1b17a ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name()) 2 years ago
0cc4m 2e6cd4b025 OpenCL Token Generation Acceleration (#1459) 2 years ago
Georgi Gerganov 3de84b2606 ggml : add ggml_clamp() (#1539) 2 years ago
Georgi Gerganov 2d5db48371 ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) 2 years ago
Georgi Gerganov 13c351ad72 ggml : various fixes (#1450) 2 years ago
Georgi Gerganov 601a033475 ggml : add GGML_QNT_VERSION to track quantization format changes 2 years ago
Johannes Gäßler 905d87b70a ggml : GPU-accelerated token generation (#1412) 2 years ago
xaedes f954edda93 ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) 2 years ago
Georgi Gerganov b9fd7eee57 ggml : remove bit shuffling (#1405) 2 years ago
slaren 2d099e5193 ggml: add names to tensors (#1268) 2 years ago
slaren 58b367c2d7 cuBLAS: refactor and optimize f16 mat mul performance (#1259) 2 years ago