Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Kawrakow | 66d575c45c | llama : add Q3_K_XS (#5060) | 2 years ago |
| Georgi Gerganov | 44a1a4a41a | backend : add eval callback (#4935) | 2 years ago |
| David Friehs | 4483396751 | llama : apply classifier-free guidance to logits directly (#4951) | 2 years ago |
| Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago |
| David Friehs | df845cc982 | llama : minimize size used for state save/load (#4820) | 2 years ago |
| slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago |
| Kawrakow | 469e75d0a3 | llama : restore intended k-quants mixes for MoE models (#4872) | 2 years ago |
| Kawrakow | 49662cbed3 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | 2 years ago |
| Kawrakow | dd5ae06405 | SOTA 2-bit quants (#4773) | 2 years ago |
| Georgi Gerganov | 52531fdff8 | main : add self-extend support (#4815) | 2 years ago |
| Georgi Gerganov | b0034d93ce | examples : add passkey test (#3856) | 2 years ago |
| Marcus Dunn | 0040d42eeb | llama : replace all API facing `int`'s with `int32_t` (#4577) | 2 years ago |
| crasm | c7e9701f86 | llama : add ability to cancel model loading (#4462) | 2 years ago |
| Marcus Dunn | 31f27758fa | llama : allow getting n_batch from llama_context in c api (#4540) | 2 years ago |
| slaren | c6c4fc081c | lora : add support for non-llama models (#3333) | 2 years ago |
| crasm | 6391817cd1 | llama : document logits_all deprecation (#4418) | 2 years ago |
| Georgi Gerganov | bcc0eb4591 | llama : per-layer KV cache + quantum K cache (#4309) | 2 years ago |
| Kerfuffle | 5aa365d88f | llama : allow overriding GGUF metadata when loading model (#4092) | 2 years ago |
| crasm | 3014b5415d | Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189) | 2 years ago |
| Georgi Gerganov | 6b0a7420d0 | llama : KV cache view API + better KV cache management (#4170) | 2 years ago |
| slaren | e85bb1a8e7 | llama : add functions to get the model's metadata (#4013) | 2 years ago |
| Kerfuffle | 91f6499393 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2 years ago |
| Georgi Gerganov | 05816027d6 | common : YAYF (yet another YARN fix) (#3925) | 2 years ago |
| cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago |
| kalomaze | 238657db23 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | 2 years ago |
| Kerfuffle | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago |
| Georgi Gerganov | d69d777c02 | ggml : quantization refactoring (#3833) | 2 years ago |
| Georgi Gerganov | ee1a0ec9cb | llama : add option for greedy sampling with probs (#3813) | 2 years ago |
| Georgi Gerganov | 2f9ec7e271 | cuda : improve text-generation and batched decoding performance (#3776) | 2 years ago |
| Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |