Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago |
| Georgi Gerganov | 8a203f9fa1 | llama : fix compile warnings in llama_set_state_data() | 2 years ago |
| Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago |
| Stephan Walter | dc271c52ed | Remove unused n_parts parameter (#1509) | 2 years ago |
| Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago |
| Georgi Gerganov | 738ace394a | llama : free ggml context in set / copy state data (close #1425) | 2 years ago |
| Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago |
| Jed Fox | 3924088512 | Remove default arguments from sampling functions (#1343) | 2 years ago |
| Evan Jones | e216aa0463 | llama : only copy used KV cache in get / set state (#1272) | 2 years ago |
| Georgi Gerganov | 0e6cbff1b7 | llama : fix compile warnings | 2 years ago |
| Robert Brisita | 2bb992f034 | llama : allow 0 as a seed number. (#1275) | 2 years ago |
| Georgi Gerganov | 70269cae37 | llama : fix session load / save (#1263) | 2 years ago |
| Alex Klinkhamer | 90b19bd6ee | llama : let context be const when accessing const data (#1261) | 2 years ago |
| Ivan Stepanov | dd7eff57d8 | llama : new sampling algorithms (#1126) | 2 years ago |
| Stephan Walter | 36d19a603b | Remove Q4_3 which is no better than Q5 (#1218) | 2 years ago |
| Evan Jones | 1481a9cf25 | llama : add session file format and saved sessions in main (#1169) | 2 years ago |
| Georgi Gerganov | 574406dc7e | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago |
| Ásgeir Bjarni Ingvarsson | 87a6f846d3 | Allow setting the rng seed after initialization. (#1184) | 2 years ago |
| Georgi Gerganov | 7a32fcb3b2 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago |
| Georgi Gerganov | c4fe84fb0d | llama : refactor get / set state + remove redundant kv cache API (#1143) | 2 years ago |
| xaedes | b6e7f9b09e | llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105) | 2 years ago |
| Kawrakow | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago |
| Georgi Gerganov | e0305ead3a | ggml : add Q4_3 quantization (#1082) | 2 years ago |
| Georgi Gerganov | 77a73403ca | ggml : add new Q4_2 quantization (ARM only) (#1046) | 2 years ago |
| slaren | 315a95a4d3 | Add LoRA support (#820) | 2 years ago |
| Georgi Gerganov | 9190e8eac8 | llama : merge llama_internal.h into llama.h | 2 years ago |
| Stephan Walter | e7f6997f89 | Don't crash on ftype (formerly f16) == 4 (#917) | 2 years ago |
| Stephan Walter | 3e6e70d8e8 | Add enum llama_ftype, sync ggml_type to model files (#709) | 2 years ago |
| comex | f963b63afa | Rewrite loading code to try to satisfy everyone: | 2 years ago |
| unbounded | 62cfc54f77 | Add quantize-stats command for testing quantization (#728) | 2 years ago |