Commit history

Author SHA1 Message Date
  slaren 0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505) 1 year ago
  jiez 91c736015b llama : add gguf_remove_key + remove split meta during quantize (#6591) 1 year ago
  Carolinabanana 5dc9dd7152 llama : add Command R Plus support (#6491) 1 year ago
  slaren 08a0c02060 ggml : mul_mat_id use the same tensor for all the experts (#6387) 1 year ago
  compilade 557410b8f0 llama : greatly reduce output buffer memory usage (#6122) 1 year ago
  Kawrakow 55c1b2a3bb IQ1_M: 1.75 bpw quantization (#6302) 1 year ago
  slaren 280345968d cuda : rename build flag to LLAMA_CUDA (#6299) 1 year ago
  Jared Van Bortel 94d1b3b411 use _wfopen instead of fopen on Windows (#6248) 1 year ago
  Ondřej Čertík 7ce2c77f88 gguf : add support for I64 and F64 arrays (#6062) 1 year ago
  Georgi Gerganov 3fe8d7a17f ggml : designate enum vals for integer types (#6050) 1 year ago
  Georgi Gerganov 5b09797321 ggml : remove old quantization functions (#5942) 1 year ago
  compilade c2101a2e90 llama : support Mamba Selective State Space Models (#5328) 1 year ago
  Michael Podvitskiy 9fa2627347 ggml : introduce ggml_status (ggml/750) 1 year ago
  leejet 7d43c585dc add some new ops, fix some operators and add batch operations to certain operators. (ggml/747) 1 year ago
  UEXTM.com 5f70671856 Introduce backend GUIDs (ggml/743) 1 year ago
  Kawrakow 0becb22ac0 IQ4_XS: a 4.25 bpw quantization (#5747) 1 year ago
  Kawrakow a33e6a0d2a Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) 1 year ago
  Georgi Gerganov ab336a9d5e code : normalize enum names (#5697) 1 year ago
  Kawrakow 4c4cb30736 IQ3_S: a much better alternative to Q3_K (#5676) 1 year ago
  Georgi Gerganov 7e4f339c40 ggml : always define ggml_fp16_t as uint16_t (#5666) 1 year ago
  Kawrakow a14679cc30 IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) 1 year ago
  Kawrakow bd2d4e393b 1.5 bit quantization (#5453) 1 year ago
  Georgi Gerganov 8f1be0d42f ggml : add ALiBi support for ggml_soft_max_ext (#5488) 1 year ago
  bmwl f486f6e1e5 ggml : add numa options (#5377) 1 year ago
  Georgi Gerganov 3b169441df sync : ggml (#5452) 1 year ago
  snadampal a07d0fee1f ggml : add mmla kernels for quantized GEMM (#4966) 1 year ago
  Michael Podvitskiy 4633d93af0 ggml : add abort_callback for cpu backend (ggml/725) 1 year ago
  JidongZhang-THU 15606309a0 llava : add MobileVLM support (#5132) 1 year ago
  Jared Van Bortel e8dc55d006 kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) 1 year ago
  Kawrakow f4d7e54974 SOTA 3-bit quants (#5196) 2 years ago