Commit History

Author           SHA1        Message                                                                          Date
Georgi Gerganov  dae06c06e5  Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"               2 years ago
Clark Saben      05e8301e45  finetune : add --n-gpu-layers flag info to --help (#4128)                        2 years ago
Jiří Podivín     ba4cf5c0bf  train : move number of gpu layers argument parsing to common/train.cpp (#4074)   2 years ago
Andrew Godfrey   947f64f163  finetune : zero the loraB initial vectors (#4082)                                2 years ago
Georgi Gerganov  4760e7cc0b  sync : ggml (backend v2) (#3912)                                                 2 years ago
xaedes           e9c1cecb9d  ggml : fix backward rope after YaRN (#3974)                                      2 years ago
cebtenzzre       898aeca90a  llama : implement YaRN RoPE scaling (#2268)                                      2 years ago
Andrew Godfrey   73bdcb395e  finetune : add -ngl parameter (#3762)                                            2 years ago
slaren           424b6381c4  ggml : add context enumeration functions (#3605)                                 2 years ago
xaedes           a03ce38455  finetune : fix #3404 (#3437)                                                     2 years ago
Georgi Gerganov  bc34dd4f5b  train : fix KQ_pos allocation (#3392)                                            2 years ago
slaren           16bc66d947  llama.cpp : split llama_context_params into model and context params (#3301)     2 years ago
xaedes           0e76a8992c  train : finetune LORA (#2632)                                                    2 years ago