Commit History

Author SHA1 Message Date
RunningLeon 3807c3de04 server : respect `--special` cli arg (#8553) 1 year ago
Douglas Hanley c3ebcfa148 server : ensure batches are either all embed or all completion (#8420) 1 year ago
Clint Herron 278d0e1846 Initialize default slot sampling parameters from the global context. (#8418) 1 year ago
Clint Herron a59f8fdc85 Server: Enable setting default sampling parameters via command-line (#8402) 1 year ago
Bjarke Viksøe cb4d86c4d7 server: Retrieve prompt template in /props (#8337) 1 year ago
Sigbjørn Skjæret 38373cfbab Add SPM infill support (#8016) 1 year ago
Xuan Son Nguyen 48e6b92cc3 Add chat template support for llama-cli (#8068) 1 year ago
sasha0552 ba58993152 server : fix smart slot selection (#8020) 1 year ago
Sigbjørn Skjæret 91c188d6c2 Only use FIM middle token if it exists (#7648) 1 year ago
Georgi Gerganov 704a35b183 server : restore numeric prompts (#7883) 1 year ago
Georgi Gerganov d9da0e4986 server : improve "prompt" handling (#7847) 1 year ago
sasha0552 7a16ce7db2 server : smart slot selection using Longest Common Prefix (#7728) 1 year ago
woodx a5cabd7649 server : do not get prompt in infill mode (#7286) 1 year ago
Georgi Gerganov f83351f9a6 imatrix : migrate to gpt_params (#7771) 1 year ago
Georgi Gerganov 1442677f92 common : refactor cli arg parsing (#7675) 1 year ago
Yazan Agha-Schrader 2e666832e6 server : new UI (#7633) 1 year ago
Georgi Gerganov 6ff13987ad common : normalize naming style (#7462) 1 year ago
Georgi Gerganov e932094d58 server : return error on too large embedding input (#7389) 1 year ago
Johannes Gäßler 41858392e1 server: fix seed being reported back (#7382) 1 year ago
Radoslav Gerganov ee94172d33 server : add support for the RPC backend (#7305) 1 year ago
Steve Grubb 4f0263633b server: free sampling contexts on exit (#7264) 1 year ago
Xuan Son Nguyen 72c177c1f6 fix system prompt handling (#7153) 1 year ago
Steve Grubb 988631335a server : free llama_batch on exit (#7212) 1 year ago
Johannes Gäßler 5ae3426b0b server: fix reported top tokens for temperature 0 (#7203) 1 year ago
Johannes Gäßler c12452c7ae JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) 1 year ago
Johan 911b3900dd server : add_special option for tokenize endpoint (#7059) 1 year ago
Johannes Gäßler af0a5b6163 server: fix incorrectly reported token probabilities (#7125) 1 year ago
maor-ps 03fb8a002d If first token generated from the server is the stop word the server will crash (#7038) 1 year ago
Georgi Gerganov 9c67c2773d ggml : add Flash Attention (#5021) 1 year ago
Olivier Chafik 8843a98c2b Improve usability of --model-url & related flags (#6930) 1 year ago