| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| AdithyanI | 8edd2b40fd | server : fix grammar being ignored (#4494) | 2 years ago |
| Alexey Parfenov | eb16dae7e7 | server : fix possible ambiguity in content type charset (#4501) | 2 years ago |
| mzcu | 62bd52b7bf | server : allow requests larger than 8K (#4500) | 2 years ago |
| Bach Le | 5daa5f54fd | Link to cublas dynamically on Windows even with LLAMA_STATIC (#4506) | 2 years ago |
| slaren | c6c4fc081c | lora : add support for non-llama models (#3333) | 2 years ago |
| Jared Van Bortel | 8a5be3bd58 | llama : sanity checks for access to logits (#4274) | 2 years ago |
| ShadovvBeast | 88ae8952b6 | server : add optional API Key Authentication example (#4441) | 2 years ago |
| slaren | ee4725a686 | ggml : group mul_mat_id rows by matrix (cpu only) (#4480) | 2 years ago |
| slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago |
| slaren | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago |
| wonjun Jang | c50e400163 | py : add protobuf dependency (#4466) | 2 years ago |
| LostRuins | 20a68a7030 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2 years ago |
| Georgi Gerganov | 55e87c3749 | ggml : fix OpenCL broadcast requirement for ggml_mul (close #4453) | 2 years ago |
| wonjun Jang | 873637afc7 | convert : support loading vocab from fast tokenizer config (#3633) | 2 years ago |
| BarfingLemurs | 0353a18401 | readme : update supported model list (#4457) | 2 years ago |
| shibe2 | 948ff137ec | server : fix handling of characters that span multiple tokens when streaming (#4446) | 2 years ago |
| Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago |
| Jared Van Bortel | 70f806b821 | build : detect host compiler and cuda compiler separately (#4414) | 2 years ago |
| Siwen Yu | 9fb13f9584 | common : add `--version` option to show build info in CLI (#4433) | 2 years ago |
| Georgi Gerganov | 113f9942fc | readme : update hot topics | 2 years ago |
| slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago |
| kalomaze | fecac45658 | server : tweak default sampling parameters (#4367) | 2 years ago |
| Richard Kiss | 9494d7c477 | english : use `typos` to fix comments and logs (#4354) | 2 years ago |
| Jared Van Bortel | 6138963fb2 | build : target Windows 8 for standard mingw-w64 (#4405) | 2 years ago |
| crasm | 6391817cd1 | llama : document logits_all deprecation (#4418) | 2 years ago |
| Vladimir Zorin | d9d4cfef64 | server : fix local model name in server (#4420) | 2 years ago |
| Taikono-Himazin | 41a11aaf99 | ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424) | 2 years ago |
| Yueh-Po Peng | 8a7b2fa528 | Update README.md (#4388) | 2 years ago |
| Xiang (Kevin) Li | e18f7345a3 | grammar : revert the replacement of llama_token_to_piece with id_to_token (#4396) | 2 years ago |
| Georgi Gerganov | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago |