Commit History

Author SHA1 Message Date
slaren 63351143b2 quantize : improve type name parsing (#9570) 1 year ago
compilade 9bc6db28d0 ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) 1 year ago
João Dinis Ferreira 8f824ffe8e quantize : fix typo in usage help of `quantize.cpp` (#9145) 1 year ago
Daniel Bevenius 725e3d9437 quantize : update usage comment in quantize.cpp (#8889) 1 year ago
Georgi Gerganov 0efec57787 llama : valign + remove unused ftype (#8502) 1 year ago
Dibakar Gope 0f1a39f343 ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) 1 year ago
ddh0 5b48cd53a8 Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) 1 year ago
Georgi Gerganov 6ff13987ad common : normalize naming style (#7462) 1 year ago
Fred Douglas 1ea2a0036e quantize : fix --keep-split check (#7374) 1 year ago
Justine Tunney 3855416027 ggml : introduce bfloat16 support (#6412) 1 year ago
Pierrick Hymbert 0c4d489e29 quantize: add imatrix and dataset metadata in GGUF (#6658) 1 year ago
jiez 1966eb2615 quantize : add '--keep-split' to quantize model into shards (#6688) 1 year ago
slaren 08a0c02060 ggml : mul_mat_id use the same tensor for all the experts (#6387) 1 year ago
Kawrakow 55c1b2a3bb IQ1_M: 1.75 bpw quantization (#6302) 1 year ago
Kawrakow d25b1c31b0 quantize : be able to override metadata by key (#6321) 1 year ago
Kawrakow 1d0331c12a quantize: options for output and token embedding tensors qtype (#6239) 1 year ago
Kawrakow 0becb22ac0 IQ4_XS: a 4.25 bpw quantization (#5747) 1 year ago
Kawrakow a33e6a0d2a Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) 1 year ago
Kawrakow 4c4cb30736 IQ3_S: a much better alternative to Q3_K (#5676) 1 year ago
Kawrakow a14679cc30 IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) 1 year ago
Kawrakow bd2d4e393b 1.5 bit quantization (#5453) 1 year ago
bmwl f486f6e1e5 ggml : add numa options (#5377) 1 year ago
Michael Klimenko 52bb63c708 refactor : switch to emplace_back to avoid extra object (#5291) 1 year ago
Kawrakow f4d7e54974 SOTA 3-bit quants (#5196) 1 year ago
Vladimir Malyutin 7359016c7c quantize : fix typo (#5211) 1 year ago
Kawrakow 66d575c45c llama : add Q3_K_XS (#5060) 2 years ago
Kawrakow 467a882fd2 Add ability to use importance matrix for all k-quants (#4930) 2 years ago
Kawrakow 147b17ac94 2-bit quantizations (#4897) 2 years ago
Kawrakow 469e75d0a3 llama : restore intended k-quants mixes for MoE models (#4872) 2 years ago
cebtenzzre b12fa0d1c1 build : link against build info instead of compiling against it (#3879) 2 years ago