Author | Commit | Message | Age
Georgi Gerganov | 6ff39b129d | llama.swiftui : add more models | 2 years ago
Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago
hankcs | 3c04bf6da8 | llama : fix try_override for bool_value which always return true (#4519) | 2 years ago
Jared Van Bortel | 2994f0c5a2 | decode : fix logits_valid for legacy API (#4516) | 2 years ago
Georgi Gerganov | b1306c4394 | readme : update hot topics | 2 years ago
Georgi Gerganov | 800a489e4a | llama.swiftui : add bench functionality (#4483) | 2 years ago
Jared Van Bortel | f7f468a97d | gguf-py : fail fast on nonsensical special token IDs (#4489) | 2 years ago
Matheus Gabriel Alves Silva | 919c40660f | build : Check the ROCm installation location (#4485) | 2 years ago
slaren | 45668633fd | finetune : keep allocs alive until all allocations are done (#4486) | 2 years ago
olexiyb | 0ffc92d2d2 | server : disable llm logs if SERVER_VERBOSE is off (#3792) | 2 years ago
AdithyanI | 8edd2b40fd | server : fix grammar being ignored (#4494) | 2 years ago
Alexey Parfenov | eb16dae7e7 | server : fix possible ambiguity in content type charset (#4501) | 2 years ago
mzcu | 62bd52b7bf | server : allow requests larger than 8K (#4500) | 2 years ago
Bach Le | 5daa5f54fd | Link to cublas dynamically on Windows even with LLAMA_STATIC (#4506) | 2 years ago
slaren | c6c4fc081c | lora : add support for non-llama models (#3333) | 2 years ago
Jared Van Bortel | 8a5be3bd58 | llama : sanity checks for access to logits (#4274) | 2 years ago
ShadovvBeast | 88ae8952b6 | server : add optional API Key Authentication example (#4441) | 2 years ago
slaren | ee4725a686 | ggml : group mul_mat_id rows by matrix (cpu only) (#4480) | 2 years ago
slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago
slaren | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago
wonjun Jang | c50e400163 | py : add protobuf dependency (#4466) | 2 years ago
LostRuins | 20a68a7030 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2 years ago
Georgi Gerganov | 55e87c3749 | ggml : fix OpenCL broadcast requirement for ggml_mul (close #4453) | 2 years ago
wonjun Jang | 873637afc7 | convert : support loading vocab from fast tokenizer config (#3633) | 2 years ago
BarfingLemurs | 0353a18401 | readme : update supported model list (#4457) | 2 years ago
shibe2 | 948ff137ec | server : fix handling of characters that span multiple tokens when streaming (#4446) | 2 years ago
Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago
Jared Van Bortel | 70f806b821 | build : detect host compiler and cuda compiler separately (#4414) | 2 years ago
Siwen Yu | 9fb13f9584 | common : add `--version` option to show build info in CLI (#4433) | 2 years ago
Georgi Gerganov | 113f9942fc | readme : update hot topics | 2 years ago