Commit History

Author | SHA1 | Message | Date
Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago
Kawrakow | eb542d3932 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2 years ago
slaren | 41c674161f | make rms_norm_eps a parameter (#2374) | 2 years ago
Evan Jones | 84e09a7d8b | llama : add grammar-based sampling (#1773) | 2 years ago
Georgi Gerganov | e76d630df1 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | 2 years ago
Guillaume "Vermeille" Sanchez | ab0e26bdfb | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | 2 years ago
Georgi Gerganov | ae178ab46b | llama : make tensor_split ptr instead of array (#2272) | 2 years ago
Rinne | 294f424554 | llama : extend API to get max devices at runtime (#2253) | 2 years ago
Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago
Bach Le | 7513b7b0a1 | llama : add functions that work directly on model (#2197) | 2 years ago
Bach Le | c9c74b4e3f | llama : add classifier-free guidance (#2135) | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
Tobias Lütke | 31cfbb1013 | Expose generation timings from server & update completions.js (#2116) | 2 years ago
Howard Su | b8c8dda75f | Use unsigned for random seed (#2006) | 2 years ago
ningshanwutuobang | cfa0750bc9 | llama : support input embeddings directly (#1910) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago
Ettore Di Giacinto | aacdbd4056 | llama : fix params struct alignment (#1936) | 2 years ago
yangli2 | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago
Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago
xaedes | e32089b2c2 | train : improved training-from-scratch example (#1652) | 2 years ago
Kerfuffle | 4f0154b0ba | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago
Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago
Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago
Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago
Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago
Juuso Alasuutari | 29cf5596fe | llama : define magic numbers as integer constants (#1518) (#1520) | 2 years ago
Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago
Georgi Gerganov | 8a203f9fa1 | llama : fix compile warnings in llama_set_state_data() | 2 years ago
Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago