Commit History

Author | SHA1 | Message | Date
Georgi Gerganov | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago
Georgi Gerganov | 554c247caf | ggml : remove OpenCL (#7735) | 1 year ago
Georgi Gerganov | fb76ec31a9 | ggml : fix YARN + add tests + add asserts (#7617) | 1 year ago
Radoslav Gerganov | 210d99173d | llama-bench : add support for the RPC backend (#7435) | 1 year ago
Georgi Gerganov | 72de268bec | ggml : restore ggml_rope_xpos_inplace (ggml/0) | 1 year ago
Georgi Gerganov | 0548a4187f | ggml : generalize GGML_OP_CONCAT (#7563) | 1 year ago
Masaya, Kato | faa0e6979a | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433) | 1 year ago
Georgi Gerganov | d48c88cbd5 | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) | 1 year ago
Georgi Gerganov | 3e5faa8503 | cuda : fix rope + add tests (#7452) | 1 year ago
liuwei-git | 201cc11afa | llama : add phi3 128K model support (#7225) | 1 year ago
Srihari-mcw | 33c8d50acc | Add provisions for windows support for BF16 code including CMake provision for enabling AVX512_BF16 (#7258) | 1 year ago
slaren | 344f9126cc | ggml : tag ggml_tensor::backend as deprecated (#7290) | 1 year ago
John Balis | 48aa8fd1f2 | ggml : add `ggml_upscale_ext` (ggml/814) | 1 year ago
Georgi Gerganov | e8a7fd4fb0 | metal : support FA without mask + add asserts (#7278) | 1 year ago
Justina Cho | f5ef34e428 | feat: implemented sigmoid function (ggml/806) | 1 year ago
Georgi Gerganov | 9cb317f77e | ggml : full ALiBi support (#7192) | 1 year ago
Justine Tunney | 3855416027 | ggml : introduce bfloat16 support (#6412) | 1 year ago
Georgi Gerganov | 9c67c2773d | ggml : add Flash Attention (#5021) | 1 year ago
slaren | 017e6999b5 | add basic tensor data validation function (#6884) | 1 year ago
slaren | 0d56246f4b | ggml : group all experts in a single ggml_mul_mat_id (#6505) | 1 year ago
jiez | 91c736015b | llama : add gguf_remove_key + remove split meta during quantize (#6591) | 1 year ago
Carolinabanana | 5dc9dd7152 | llama : add Command R Plus support (#6491) | 1 year ago
slaren | 08a0c02060 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | 1 year ago
compilade | 557410b8f0 | llama : greatly reduce output buffer memory usage (#6122) | 1 year ago
Kawrakow | 55c1b2a3bb | IQ1_M: 1.75 bpw quantization (#6302) | 1 year ago
slaren | 280345968d | cuda : rename build flag to LLAMA_CUDA (#6299) | 1 year ago
Jared Van Bortel | 94d1b3b411 | use _wfopen instead of fopen on Windows (#6248) | 1 year ago
Ondřej Čertík | 7ce2c77f88 | gguf : add support for I64 and F64 arrays (#6062) | 1 year ago
Georgi Gerganov | 3fe8d7a17f | ggml : designate enum vals for integer types (#6050) | 1 year ago
Georgi Gerganov | 5b09797321 | ggml : remove old quantization functions (#5942) | 1 year ago