Commit history

Author SHA1 Message Date
Bach Le c9c74b4e3f llama : add classifier-free guidance (#2135) 2 years ago
WangHaoranRobin d7d2e6a0f0 server: add option to output probabilities for completion (#1962) 2 years ago
Howard Su b8c8dda75f Use unsigned for random seed (#2006) 2 years ago
zrm b853d45601 ggml : add NUMA support (#1556) 2 years ago
Didzis Gosko 527b6fba1d llama : make model stateless and context stateful (llama_state) (#1797) 2 years ago
Johannes Gäßler 254a7a7a5f CUDA full GPU acceleration, KV cache in VRAM (#1827) 2 years ago
Kerfuffle fa84c4b3e8 Fix issue where interactive mode crashes when input exceeds ctx size (#1789) 2 years ago
Willy Tarreau 35a84916fb main: add the possibility to open the prompt cache read-only (#1640) 2 years ago
Johannes Gäßler 17366df842 Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) 2 years ago
Georgi Gerganov ecb217db4f llama : Metal inference (#1642) 2 years ago
Vladimir Zorin 337aea1139 examples : add --alias option to gpt_params to set use friendly model name (#1614) 2 years ago
Georgi Gerganov 4b7e245adf minor : fix compile warnings 2 years ago
Stephan Walter dc271c52ed Remove unused n_parts parameter (#1509) 2 years ago
András Salamon 9560655409 define default model path once, sync path with readme (#1366) 2 years ago
Johannes Gäßler 905d87b70a ggml : GPU-accelerated token generation (#1412) 2 years ago
Evan Jones cf348a60e0 main : add option to save full output to session (#1338) 2 years ago
DannyDaemonic 41654efea8 Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040) 2 years ago
44670 2edbdb0f99 main : add --in-suffix option (#1318) 2 years ago
Ron Evans 67c77799e0 examples : add llama_init_from_gpt_params() common function (#1290) 2 years ago
jon-chuang a5d30b1f53 common : better default number of threads (#934) 2 years ago
Georgi Gerganov 334637e43e common : change default parameters to pre-#1126 (#1223) 2 years ago
Ivan Stepanov dd7eff57d8 llama : new sampling algorithms (#1126) 2 years ago
Evan Jones 1481a9cf25 llama : add session file format and saved sessions in main (#1169) 2 years ago
mgroeber9110 9b0a4d4214 examples/main README improvements and some light refactoring (#1131) 2 years ago
eiery 10f19c1121 llama : have n_batch default to 512 (#1091) 2 years ago
slaren 315a95a4d3 Add LoRA support (#820) 2 years ago
Pavol Rusnak c85e03d12e Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) 2 years ago
Tomáš Pazdiora f4d277ae17 main : alternative instruct mode (Vicuna support, etc.) (#863) 2 years ago
comex f963b63afa Rewrite loading code to try to satisfy everyone: 2 years ago
Tomáš Pazdiora aaf3b23deb fix for windows utf-8 input (#840) 2 years ago