Commit history

Author SHA1 Message Date
Georgi Gerganov 738ace394a llama : free ggml context in set / copy state data (close #1425) 2 years ago
Georgi Gerganov b9fd7eee57 ggml : remove bit shuffling (#1405) 2 years ago
Jed Fox 3924088512 Remove default arguments from sampling functions (#1343) 2 years ago
Evan Jones e216aa0463 llama : only copy used KV cache in get / set state (#1272) 2 years ago
Georgi Gerganov 0e6cbff1b7 llama : fix compile warnings 2 years ago
Robert Brisita 2bb992f034 llama : allow 0 as a seed number. (#1275) 2 years ago
Georgi Gerganov 70269cae37 llama : fix session load / save (#1263) 2 years ago
Alex Klinkhamer 90b19bd6ee llama : let context be const when accessing const data (#1261) 2 years ago
Ivan Stepanov dd7eff57d8 llama : new sampling algorithms (#1126) 2 years ago
Stephan Walter 36d19a603b Remove Q4_3 which is no better than Q5 (#1218) 2 years ago
Evan Jones 1481a9cf25 llama : add session file format and saved sessions in main (#1169) 2 years ago
Georgi Gerganov 574406dc7e ggml : add Q5_0 and Q5_1 quantization (#1187) 2 years ago
Ásgeir Bjarni Ingvarsson 87a6f846d3 Allow setting the rng seed after initialization. (#1184) 2 years ago
Georgi Gerganov 7a32fcb3b2 ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) 2 years ago
Georgi Gerganov c4fe84fb0d llama : refactor get / set state + remove redundant kv cache API (#1143) 2 years ago
xaedes b6e7f9b09e llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105) 2 years ago
Kawrakow 38de86a711 llama : multi-threaded quantization (#1075) 2 years ago
Georgi Gerganov e0305ead3a ggml : add Q4_3 quantization (#1082) 2 years ago
Georgi Gerganov 77a73403ca ggml : add new Q4_2 quantization (ARM only) (#1046) 2 years ago
slaren 315a95a4d3 Add LoRA support (#820) 2 years ago
Georgi Gerganov 9190e8eac8 llama : merge llama_internal.h into llama.h 2 years ago
Stephan Walter e7f6997f89 Don't crash on ftype (formerly f16) == 4 (#917) 2 years ago
Stephan Walter 3e6e70d8e8 Add enum llama_ftype, sync ggml_type to model files (#709) 2 years ago
comex f963b63afa Rewrite loading code to try to satisfy everyone: 2 years ago
unbounded 62cfc54f77 Add quantize-stats command for testing quantization (#728) 2 years ago
Christian Falch e986f94829 Added api for getting/setting the kv_cache (#685) 2 years ago
Justine Tunney 78ca9838ee Make loading weights 10-100x faster 2 years ago
anzz1 a5c42c4b13 Fix typo in llama.h (#593) 2 years ago
anzz1 7f4c5c6651 llama : fix linkage with mingw (#551) 2 years ago
Stephan Walter 436e561931 all : be more strict about converting float to double (#458) 2 years ago