Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago |
| Evan Jones | cf348a60e0 | main : add option to save full output to session (#1338) | 2 years ago |
| DannyDaemonic | 41654efea8 | Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040) | 2 years ago |
| 44670 | 2edbdb0f99 | main : add --in-suffix option (#1318) | 2 years ago |
| Ron Evans | 67c77799e0 | examples : add llama_init_from_gpt_params() common function (#1290) | 2 years ago |
| jon-chuang | a5d30b1f53 | common : better default number of threads (#934) | 2 years ago |
| Georgi Gerganov | 334637e43e | common : change default parameters to pre-#1126 (#1223) | 2 years ago |
| Ivan Stepanov | dd7eff57d8 | llama : new sampling algorithms (#1126) | 2 years ago |
| Evan Jones | 1481a9cf25 | llama : add session file format and saved sessions in main (#1169) | 2 years ago |
| mgroeber9110 | 9b0a4d4214 | examples/main README improvements and some light refactoring (#1131) | 2 years ago |
| eiery | 10f19c1121 | llama : have n_batch default to 512 (#1091) | 2 years ago |
| slaren | 315a95a4d3 | Add LoRA support (#820) | 2 years ago |
| Pavol Rusnak | c85e03d12e | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | 2 years ago |
| Tomáš Pazdiora | f4d277ae17 | main : alternative instruct mode (Vicuna support, etc.) (#863) | 2 years ago |
| comex | f963b63afa | Rewrite loading code to try to satisfy everyone: | 2 years ago |
| Tomáš Pazdiora | aaf3b23deb | fix for windows utf-8 input (#840) | 2 years ago |
| anzz1 | 7b8dbcb78b | main.cpp fixes, refactoring (#571) | 2 years ago |
| Georgi Gerganov | e2d490dafd | Inifinite generation via context swapping (#71) | 2 years ago |
| Georgi Gerganov | a316a425d0 | Overhaul the examples structure | 2 years ago |