Change history

Author SHA1 Message Date
slaren 16bc66d947 llama.cpp : split llama_context_params into model and context params (#3301) 2 years ago
xaedes 0e76a8992c train : finetune LORA (#2632) 2 years ago
Georgi Gerganov ec893798b7 llama : custom attention mask + parallel decoding + no context swaps (#3228) 2 years ago
Cebtenzzre a5661d7e71 llama : allow gguf RoPE keys to be overridden with defaults (#3240) 2 years ago
Cebtenzzre 3aefaab9e5 check C++ code with -Wmissing-declarations (#3184) 2 years ago
Cebtenzzre 00d62adb79 fix some warnings from gcc and clang-tidy (#3038) 2 years ago
Cebtenzzre de2fe892af examples : replace fprintf to stdout with printf (#3017) 2 years ago
Jhen-Jie Hong 571083f508 server : avoid aniprompt in probabilities of final response (#2849) 2 years ago
Cebtenzzre ef15649972 build : fix most gcc and clang warnings (#2861) 2 years ago
Johannes Gäßler 6b73ef1201 YAML result logging + preset script (#2657) 2 years ago
Georgi Gerganov edd4c14817 llama : more tokenizer fixes (#2810) 2 years ago
Bruce MacDonald c1ac54b77a server : add `/detokenize` endpoint (#2802) 2 years ago
Matt Pulver c82742ac9c llama : add llama_beam_search() (#2267) 2 years ago
Jhen-Jie Hong 29674ab4e8 server : display token probabilities in the UI (#2489) 2 years ago
Xiao-Yong Jin b8ad1b66b2 server : allow json array in prompt or content for direct token input (#2306) 2 years ago
Johannes Gäßler c63bb1d16a CUDA: use mul_mat_q kernels by default (#2683) 2 years ago
Jhen-Jie Hong 226255b44e server : fallback to default if client param is null (#2688) 2 years ago
Georgi Gerganov 6381d4e110 gguf : new file format with flexible meta data (beta) (#2398) 2 years ago
Jhen-Jie Hong 3ebb00935f server : add missing /json-schema-to-grammar.mjs (#2616) 2 years ago
Cheng Shao d75561df20 server : add --numa support (#2524) 2 years ago
Equim 53dc399472 server: fixed wrong variable name in timing json (#2579) 2 years ago
Martin Krasser 1638757767 Fix grammar-based sampling issue in server (#2566) 2 years ago
Martin Krasser f5bfea0580 Allow passing grammar to completion endpoint (#2532) 2 years ago
Stephen Nichols 5f631c2679 Fixing race condition in server and partial stream handling in frontend. (#2391) 2 years ago
Johannes Gäßler 0728c5a8b9 CUDA: mmq CLI option, fixed mmq build issues (#2453) 2 years ago
slaren d5512b782b server: add rms_norm_eps parameter (#2380) 2 years ago
IgnacioFDM 4f06592cc6 Add gqa parameter support to the server (#2351) 2 years ago
Xiao-Yong Jin 6e7cca4047 llama : add custom RoPE (#2054) 2 years ago
Howard Su 32c5411631 Revert "Support using mmap when applying LoRA (#2095)" (#2206) 2 years ago
Howard Su 2347463201 Support using mmap when applying LoRA (#2095) 2 years ago