Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| slaren | 226460cc0d | llama-bench : add no-kv-offload parameter (#4812) | 2 years ago |
| Georgi Gerganov | bcc0eb4591 | llama : per-layer KV cache + quantum K cache (#4309) | 2 years ago |
| cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago |
| Kerfuffle | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago |
| Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| Cebtenzzre | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago |
| slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago |
| Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago |
| Rickard Hallerbäck | dc6897404e | metal : reusing llama.cpp logging (#3152) | 2 years ago |
| Georgi Gerganov | 8c00b7a6ff | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | 2 years ago |
| slaren | 15b67a66c2 | llama-bench : use two tokens in the warmup run for prompt evals (#3059) | 2 years ago |
| Cebtenzzre | de2fe892af | examples : replace fprintf to stdout with printf (#3017) | 2 years ago |
| Cebtenzzre | 3103568144 | llama-bench : make cpp file non-executable (#2999) | 2 years ago |
| slaren | 43033b7bb4 | llama-bench : set locale to utf8 (#2832) | 2 years ago |
| slaren | 154725c543 | llama-bench : add model sizes (#2771) | 2 years ago |
| Henri Vasserman | 6bbc598a63 | ROCm Port (#1087) | 2 years ago |
| slaren | 8e4364f2af | llama-bench : minor fixes (#2695) | 2 years ago |
| Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| slaren | 097e121e2f | llama : add benchmark example (#2626) | 2 years ago |