Commit History

Author             SHA1        Date         Message
Kawrakow           bd2d4e393b  1 year ago   1.5 bit quantization (#5453)
bmwl               f486f6e1e5  1 year ago   ggml : add numa options (#5377)
Douglas Hanley     4524290e87  1 year ago   Use correct type of pooling for embedding models (#5500)
Douglas Hanley     03bf161eb6  1 year ago   llama : support batched embeddings (#5466)
Douglas Hanley     2891c8aa9a  1 year ago   Add support for BERT embedding models (#5423)
Jared Van Bortel   1ec3332ade  1 year ago   YaRN : store rope scaling type as int32_t in memory (#5285)
Georgi Gerganov    5cb04dbc16  2 years ago  llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)
Kawrakow           f4d7e54974  2 years ago  SOTA 3-bit quants (#5196)
Jared Van Bortel   fbf1ddec69  2 years ago  Nomic Vulkan backend (#4456)
0cc4m              2307523d32  2 years ago  ggml : add Vulkan backend (#2059)
Abhilash Majumder  0f648573dd  2 years ago  ggml : add unified SYCL backend for Intel GPUs (#2690)
l3utterfly         5eaf9964fc  2 years ago  llama : dynamic temperature sampling (#4972)
Kawrakow           66d575c45c  2 years ago  llama : add Q3_K_XS (#5060)
Georgi Gerganov    44a1a4a41a  2 years ago  backend : add eval callback (#4935)
David Friehs       4483396751  2 years ago  llama : apply classifier-free guidance to logits directly (#4951)
Kawrakow           147b17ac94  2 years ago  2-bit quantizations (#4897)
David Friehs       df845cc982  2 years ago  llama : minimize size used for state save/load (#4820)
slaren             e7e4df031b  2 years ago  llama : ggml-backend integration (#4766)
Kawrakow           469e75d0a3  2 years ago  llama : restore intended k-quants mixes for MoE models (#4872)
Kawrakow           49662cbed3  2 years ago  ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)
Kawrakow           dd5ae06405  2 years ago  SOTA 2-bit quants (#4773)
Georgi Gerganov    52531fdff8  2 years ago  main : add self-extend support (#4815)
Georgi Gerganov    b0034d93ce  2 years ago  examples : add passkey test (#3856)
Marcus Dunn        0040d42eeb  2 years ago  llama : replace all API facing `int`'s with `int32_t` (#4577)
crasm              c7e9701f86  2 years ago  llama : add ability to cancel model loading (#4462)
Marcus Dunn        31f27758fa  2 years ago  llama : allow getting n_batch from llama_context in c api (#4540)
slaren             c6c4fc081c  2 years ago  lora : add support for non-llama models (#3333)
crasm              6391817cd1  2 years ago  llama : document logits_all deprecation (#4418)
Georgi Gerganov    bcc0eb4591  2 years ago  llama : per-layer KV cache + quantum K cache (#4309)
Kerfuffle          5aa365d88f  2 years ago  llama : allow overriding GGUF metadata when loading model (#4092)