Commit history

Author SHA1 Message Date
  Johannes Gäßler 2c9380dd2f Only one CUDA stream per device for async compute (#1898) 2 years ago
  Borislav Stanimirov 9cbf50c041 build : fix and ignore MSVC warnings (#1889) 2 years ago
  Johannes Gäßler 6b8312e797 Better error when using both LoRA + GPU layers (#1861) 2 years ago
  Johannes Gäßler 254a7a7a5f CUDA full GPU acceleration, KV cache in VRAM (#1827) 2 years ago
  Kerfuffle fa84c4b3e8 Fix issue where interactive mode crashes when input exceeds ctx size (#1789) 2 years ago
  Willy Tarreau 35a84916fb main: add the possibility to open the prompt cache read-only (#1640) 2 years ago
  Johannes Gäßler 17366df842 Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) 2 years ago
  Georgi Gerganov ecb217db4f llama : Metal inference (#1642) 2 years ago
  Kerfuffle 1b78ed2081 Only show -ngl option when relevant + other doc/arg handling updates (#1625) 2 years ago
  Vladimir Zorin 337aea1139 examples : add --alias option to gpt_params to set use friendly model name (#1614) 2 years ago
  DannyDaemonic d2c59b8ba4 Fix for mingw (#1462) 2 years ago
  Jason McCartney 7694b52b9a main : make reverse prompt option act as a stop token in non-interactive mode (#1032) 2 years ago
  Georgi Gerganov 4b7e245adf minor : fix compile warnings 2 years ago
  Stephan Walter dc271c52ed Remove unused n_parts parameter (#1509) 2 years ago
  zrm 63d20469b8 fix get_num_physical_cores() (#1436) 2 years ago
  Johannes Gäßler 905d87b70a ggml : GPU-accelerated token generation (#1412) 2 years ago
  Johannes Gäßler 773ee249fb CLI args use - instead of _, backwards compatible (#1416) 2 years ago
  Evan Jones cf348a60e0 main : add option to save full output to session (#1338) 2 years ago
  DannyDaemonic e6a46b0ed1 Locale fix for Windows (#1379) 2 years ago
  DannyDaemonic 41654efea8 Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040) 2 years ago
  Georgi Gerganov f9a6364912 llama : require first token to be BOS (#1303) 2 years ago
  Johannes Gäßler 1f48b0abcf Documented CUDA reproducibility, added warning (#1346) 2 years ago
  44670 2edbdb0f99 main : add --in-suffix option (#1318) 2 years ago
  DannyDaemonic db1080876a Only escape prompts when used with `-e` (#1311) 2 years ago
  DannyDaemonic 2485d7a4d3 Process escape sequences given in prompts (#1173) 2 years ago
  slaren bf4b22ffe4 fix missing parameters in `llama_init_from_gpt_params` (#1293) 2 years ago
  Ron Evans 67c77799e0 examples : add llama_init_from_gpt_params() common function (#1290) 2 years ago
  Robert Brisita 2bb992f034 llama : allow 0 as a seed number. (#1275) 2 years ago
  jon-chuang a5d30b1f53 common : better default number of threads (#934) 2 years ago
  Ivan Stepanov dd7eff57d8 llama : new sampling algorithms (#1126) 2 years ago