
Commit history

Author SHA1 Message Date
Georgi Gerganov 0548a4187f ggml : generalize GGML_OP_CONCAT (#7563) 1 year ago
Georgi Gerganov 1d8fca72ae metal : add GGML_OP_REPEAT kernels (#7557) 1 year ago
Georgi Gerganov 62bfef5194 metal : disable FA kernel for HS=256 (#7556) 1 year ago
Georgi Gerganov e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago
liuwei-git 201cc11afa llama : add phi3 128K model support (#7225) 1 year ago
John Balis 48aa8fd1f2 ggml : add `ggml_upscale_ext` (ggml/814) 1 year ago
Georgi Gerganov e8a7fd4fb0 metal : support FA without mask + add asserts (#7278) 1 year ago
Georgi Gerganov f308ea7059 metal : tune soft_max number of threads (whisper/0) 1 year ago
Georgi Gerganov 6aeff24f8b metal : fix indent (ggml/0) 1 year ago
Justina Cho f5ef34e428 feat: implemented sigmoid function (ggml/806) 1 year ago
Georgi Gerganov 9cb317f77e ggml : full ALiBi support (#7192) 1 year ago
Georgi Gerganov 18e437665c metal : fix flash attention kernel requirements (#7169) 1 year ago
Gilad S 26458af1d6 metal : use `vm_allocate` instead of `posix_memalign` on macOS (#7078) 1 year ago
Justine Tunney 3855416027 ggml : introduce bfloat16 support (#6412) 1 year ago
Kevin Gibbons f364eb6fb5 switch to using localizedDescription (#7010) 1 year ago
Georgi Gerganov 77e15bec62 metal : remove deprecated error code (#7008) 1 year ago
Kevin Gibbons a68a1e7ed0 metal : log more info on error (#6987) 1 year ago
Georgi Gerganov 9c67c2773d ggml : add Flash Attention (#5021) 1 year ago
slaren 0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505) 1 year ago
Shijie f4dea7da18 llama : add qwen2moe (#6074) 1 year ago
Dave 422c2aff1c Added support for GGML_OP_CLAMP in Metal (#6662) 1 year ago
slaren fbbc030ba9 metal : unify mul_mv_id kernels (#6556) 1 year ago
slaren 08a0c02060 ggml : mul_mat_id use the same tensor for all the experts (#6387) 1 year ago
compilade 557410b8f0 llama : greatly reduce output buffer memory usage (#6122) 1 year ago
Kawrakow 55c1b2a3bb IQ1_M: 1.75 bpw quantization (#6302) 1 year ago
Georgi Gerganov b3e94f26ba metal : proper assert for mat-mat memory alignment (#6225) 1 year ago
Kawrakow 76aa30a263 Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183) 1 year ago
slaren 2bf8d0f7c4 backend : offload large batches to GPU (#6083) 1 year ago
Georgi Gerganov 381da2d9f0 metal : build metallib + fix embed path (#6015) 1 year ago
slaren f30ea47a87 llama : add pipeline parallelism support (#6017) 1 year ago