Commit history

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Bach Le | 7513b7b0a1 | llama : add functions that work directly on model (#2197) | 2 years ago |
| Bach Le | c9c74b4e3f | llama : add classifier-free guidance (#2135) | 2 years ago |
| Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago |
| Tobias Lütke | 31cfbb1013 | Expose generation timings from server & update completions.js (#2116) | 2 years ago |
| Howard Su | b8c8dda75f | Use unsigned for random seed (#2006) | 2 years ago |
| ningshanwutuobang | cfa0750bc9 | llama : support input embeddings directly (#1910) | 2 years ago |
| zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago |
| Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| Ettore Di Giacinto | aacdbd4056 | llama : fix params struct slignment (#1936) | 2 years ago |
| yangli2 | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| xaedes | e32089b2c2 | train : improved training-from-scratch example (#1652) | 2 years ago |
| Kerfuffle | 4f0154b0ba | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago |
| Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago |
| Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago |
| Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago |
| Juuso Alasuutari | 29cf5596fe | llama : define magic numbers as integer constants (#1518) (#1520) | 2 years ago |
| Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago |
| Georgi Gerganov | 8a203f9fa1 | llama : fix compile warnings in llama_set_state_data() | 2 years ago |
| Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago |
| Stephan Walter | dc271c52ed | Remove unused n_parts parameter (#1509) | 2 years ago |
| Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago |
| Georgi Gerganov | 738ace394a | llama : free ggml context in set / copy state data (close #1425) | 2 years ago |
| Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago |
| Jed Fox | 3924088512 | Remove default arguments from sampling functions (#1343) | 2 years ago |
| Evan Jones | e216aa0463 | llama : only copy used KV cache in get / set state (#1272) | 2 years ago |
| Georgi Gerganov | 0e6cbff1b7 | llama : fix compile warnings | 2 years ago |
| Robert Brisita | 2bb992f034 | llama : allow 0 as a seed number. (#1275) | 2 years ago |
| Georgi Gerganov | 70269cae37 | llama : fix session load / save (#1263) | 2 years ago |