Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Georgi Gerganov | a9cae48003 | tests : add non-cont unary tests (#7857) | 1 year ago |
| Georgi Gerganov | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago |
| Johannes Gäßler | 9b596417af | CUDA: quantized KV support for FA vec (#7527) | 1 year ago |
| Georgi Gerganov | 55d62262a9 | metal : remove invalid asserts (#7617) | 1 year ago |
| Georgi Gerganov | 975ec63ff2 | metal : add missing asserts (#7617) | 1 year ago |
| Georgi Gerganov | fb76ec31a9 | ggml : fix YARN + add tests + add asserts (#7617) | 1 year ago |
| Georgi Gerganov | 0548a4187f | ggml : generalize GGML_OP_CONCAT (#7563) | 1 year ago |
| Georgi Gerganov | 1d8fca72ae | metal : add GGML_OP_REPEAT kernels (#7557) | 1 year ago |
| Georgi Gerganov | 62bfef5194 | metal : disable FA kernel for HS=256 (#7556) | 1 year ago |
| Georgi Gerganov | e84b71c2c6 | ggml : drop support for QK_K=64 (#7473) | 1 year ago |
| liuwei-git | 201cc11afa | llama : add phi3 128K model support (#7225) | 1 year ago |
| John Balis | 48aa8fd1f2 | ggml : add `ggml_upscale_ext` (ggml/814) | 1 year ago |
| Georgi Gerganov | e8a7fd4fb0 | metal : support FA without mask + add asserts (#7278) | 1 year ago |
| Georgi Gerganov | f308ea7059 | metal : tune soft_max number of threads (whisper/0) | 1 year ago |
| Georgi Gerganov | 6aeff24f8b | metal : fix indent (ggml/0) | 1 year ago |
| Justina Cho | f5ef34e428 | feat: implemented sigmoid function (ggml/806) | 1 year ago |
| Georgi Gerganov | 9cb317f77e | ggml : full ALiBi support (#7192) | 1 year ago |
| Georgi Gerganov | 18e437665c | metal : fix flash attention kernel requirements (#7169) | 1 year ago |
| Gilad S | 26458af1d6 | metal : use `vm_allocate` instead of `posix_memalign` on macOS (#7078) | 1 year ago |
| Justine Tunney | 3855416027 | ggml : introduce bfloat16 support (#6412) | 1 year ago |
| Kevin Gibbons | f364eb6fb5 | switch to using localizedDescription (#7010) | 1 year ago |
| Georgi Gerganov | 77e15bec62 | metal : remove deprecated error code (#7008) | 1 year ago |
| Kevin Gibbons | a68a1e7ed0 | metal : log more info on error (#6987) | 1 year ago |
| Georgi Gerganov | 9c67c2773d | ggml : add Flash Attention (#5021) | 1 year ago |
| slaren | 0d56246f4b | ggml : group all experts in a single ggml_mul_mat_id (#6505) | 1 year ago |
| Shijie | f4dea7da18 | llama : add qwen2moe (#6074) | 1 year ago |
| Dave | 422c2aff1c | Added support for GGML_OP_CLAMP in Metal (#6662) | 1 year ago |
| slaren | fbbc030ba9 | metal : unify mul_mv_id kernels (#6556) | 1 year ago |
| slaren | 08a0c02060 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | 1 year ago |
| compilade | 557410b8f0 | llama : greatly reduce output buffer memory usage (#6122) | 1 year ago |