Change history

Author SHA1 Message Date
zrm b853d45601 ggml : add NUMA support (#1556) 2 years ago
Didzis Gosko 527b6fba1d llama : make model stateless and context stateful (llama_state) (#1797) 2 years ago
Johannes Gäßler 2c9380dd2f Only one CUDA stream per device for async compute (#1898) 2 years ago
Borislav Stanimirov 9cbf50c041 build : fix and ignore MSVC warnings (#1889) 2 years ago
Johannes Gäßler 6b8312e797 Better error when using both LoRA + GPU layers (#1861) 2 years ago
Johannes Gäßler 254a7a7a5f CUDA full GPU acceleration, KV cache in VRAM (#1827) 2 years ago
Kerfuffle fa84c4b3e8 Fix issue where interactive mode crashes when input exceeds ctx size (#1789) 2 years ago
Willy Tarreau 35a84916fb main: add the possibility to open the prompt cache read-only (#1640) 2 years ago
Johannes Gäßler 17366df842 Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) 2 years ago
Georgi Gerganov ecb217db4f llama : Metal inference (#1642) 2 years ago
Kerfuffle 1b78ed2081 Only show -ngl option when relevant + other doc/arg handling updates (#1625) 2 years ago
Vladimir Zorin 337aea1139 examples : add --alias option to gpt_params to set use friendly model name (#1614) 2 years ago
DannyDaemonic d2c59b8ba4 Fix for mingw (#1462) 2 years ago
Jason McCartney 7694b52b9a main : make reverse prompt option act as a stop token in non-interactive mode (#1032) 2 years ago
Georgi Gerganov 4b7e245adf minor : fix compile warnings 2 years ago
Stephan Walter dc271c52ed Remove unused n_parts parameter (#1509) 2 years ago
zrm 63d20469b8 fix get_num_physical_cores() (#1436) 2 years ago
Johannes Gäßler 905d87b70a ggml : GPU-accelerated token generation (#1412) 2 years ago
Johannes Gäßler 773ee249fb CLI args use - instead of _, backwards compatible (#1416) 2 years ago
Evan Jones cf348a60e0 main : add option to save full output to session (#1338) 2 years ago
DannyDaemonic e6a46b0ed1 Locale fix for Windows (#1379) 2 years ago
DannyDaemonic 41654efea8 Interface improvements and `--multiline-input` (previously `--multiline-mode`) (#1040) 2 years ago
Georgi Gerganov f9a6364912 llama : require first token to be BOS (#1303) 2 years ago
Johannes Gäßler 1f48b0abcf Documented CUDA reproducibility, added warning (#1346) 2 years ago
44670 2edbdb0f99 main : add --in-suffix option (#1318) 2 years ago
DannyDaemonic db1080876a Only escape prompts when used with `-e` (#1311) 2 years ago
DannyDaemonic 2485d7a4d3 Process escape sequences given in prompts (#1173) 2 years ago
slaren bf4b22ffe4 fix missing parameters in `llama_init_from_gpt_params` (#1293) 2 years ago
Ron Evans 67c77799e0 examples : add llama_init_from_gpt_params() common function (#1290) 2 years ago
Robert Brisita 2bb992f034 llama : allow 0 as a seed number. (#1275) 2 years ago