slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago
xaedes | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
Cebtenzzre | a5661d7e71 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | 2 years ago
Cebtenzzre | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago
Cebtenzzre | 00d62adb79 | fix some warnings from gcc and clang-tidy (#3038) | 2 years ago
Cebtenzzre | de2fe892af | examples : replace fprintf to stdout with printf (#3017) | 2 years ago
Jhen-Jie Hong | 571083f508 | server : avoid aniprompt in probabilities of final response (#2849) | 2 years ago
Cebtenzzre | ef15649972 | build : fix most gcc and clang warnings (#2861) | 2 years ago
Johannes Gäßler | 6b73ef1201 | YAML result logging + preset script (#2657) | 2 years ago
Georgi Gerganov | edd4c14817 | llama : more tokenizer fixes (#2810) | 2 years ago
Bruce MacDonald | c1ac54b77a | server : add `/detokenize` endpoint (#2802) | 2 years ago
Matt Pulver | c82742ac9c | llama : add llama_beam_search() (#2267) | 2 years ago
Jhen-Jie Hong | 29674ab4e8 | server : display token probabilities in the UI (#2489) | 2 years ago
Xiao-Yong Jin | b8ad1b66b2 | server : allow json array in prompt or content for direct token input (#2306) | 2 years ago
Johannes Gäßler | c63bb1d16a | CUDA: use mul_mat_q kernels by default (#2683) | 2 years ago
Jhen-Jie Hong | 226255b44e | server : fallback to default if client param is null (#2688) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Jhen-Jie Hong | 3ebb00935f | server : add missing /json-schema-to-grammar.mjs (#2616) | 2 years ago
Cheng Shao | d75561df20 | server : add --numa support (#2524) | 2 years ago
Equim | 53dc399472 | server: fixed wrong variable name in timing json (#2579) | 2 years ago
Martin Krasser | 1638757767 | Fix grammar-based sampling issue in server (#2566) | 2 years ago
Martin Krasser | f5bfea0580 | Allow passing grammar to completion endpoint (#2532) | 2 years ago
Stephen Nichols | 5f631c2679 | Fixing race condition in server and partial stream handling in frontend. (#2391) | 2 years ago
Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago
slaren | d5512b782b | server: add rms_norm_eps parameter (#2380) | 2 years ago
IgnacioFDM | 4f06592cc6 | Add gqa parameter support to the server (#2351) | 2 years ago
Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago
Howard Su | 32c5411631 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | 2 years ago
Howard Su | 2347463201 | Support using mmap when applying LoRA (#2095) | 2 years ago