Commit history

Author SHA1 Message Date
  ddh0 13f1e4a9ca llama : add adaptive-p sampler (#17927) 2 weeks ago
  Georgi Gerganov 39173bcacb context : reserve new scheduler when graph topology changes (#18547) 2 weeks ago
  Xuan-Son Nguyen a7e6ddb8bd lora: make sure model keep track of associated adapters (#18490) 2 weeks ago
  Georgi Gerganov f5f8812f7c server : use different seeds for child completions (#18700) 3 weeks ago
  Johannes Gäßler 64848deb18 llama-fit-params: free memory target per device (#18679) 3 weeks ago
  Julius Tischbein 2038101bd9 llama : add `use_direct_io` flag for model loading (#18166) 3 weeks ago
  Tarek Dakhran 73d284a250 model : add LFM2-ColBert-350M (#18607) 3 weeks ago
  Daniel Bevenius d3dce4e0a5 sampling : add support for backend sampling (#17004) 3 weeks ago
  Xuan-Son Nguyen cd78e57c3a lora: count lora nodes in graph_max_nodes (#18469) 1 month ago
  Johannes Gäßler 026d2ad472 llama: fix magic number of 999 for GPU layers (#18266) 1 month ago
  Johannes Gäßler a52dc60ba3 llama_fit_params: return enum for fail vs. error (#18374) 1 month ago
  Johannes Gäßler b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
  Aaron Teo 877566d512 llama: introduce support for model-embedded sampling parameters (#17120) 2 months ago
  Sigbjørn Skjæret 9008027aa3 hparams : add n_embd_inp() to support extended embed (#16928) 2 months ago
  Georgi Gerganov 16bcc1259d kv-cache : pad the cache size to 256 for performance (#17046) 2 months ago
  Georgi Gerganov cd5e3b5754 server : support unified cache across slots (#16736) 3 months ago
  Adrian Lundberg 76af40aaaa docs: remove llama_sampler_accept reference in sampling sample usage (#16920) 3 months ago
  JJJYmmm d261223d24 model: add support for qwen3vl series (#16780) 3 months ago
  Gadflyii 3df2244df4 llama : add --no-host to disable host buffers (#16310) 3 months ago
  ddh0 f6dcda3900 server : context checkpointing for hybrid and recurrent models (#16382) 4 months ago
  Johannes Gäßler e789095502 llama: print memory breakdown on exit (#15860) 4 months ago
  Gabe Goodhart fd621880f3 aLoRA Support (#15327) 4 months ago
  Georgi Gerganov e92d53b29e sampling : optimize samplers by reusing bucket sort (#15665) 5 months ago
  Johannes Gäßler e81b8e4b7f llama: use FA + max. GPU layers by default (#15434) 5 months ago
  Sigbjørn Skjæret 84ab83cc0b model : jina-embeddings-v3 support (#13693) 5 months ago
  Georgi Gerganov 9ebebef62f llama : remove KV cache defragmentation logic (#15473) 5 months ago
  Georgi Gerganov cd36b5e5c7 llama : remove deprecated llama_kv_self API (#15472) 5 months ago
  Georgi Gerganov 715a6db02c kv-cache : drop the "unified" prefix (#15467) 5 months ago
  Georgi Gerganov d32e03f449 server : add SWA checkpoints (#15293) 5 months ago
  Jonathan Graehl 5cdb27e091 finetune: SGD optimizer, more CLI args (#13873) 5 months ago