| Author | Commit | Message | Date |
|---|---|---|---|
| Pedro Cuenca | b97bc3966e | llama : support Llama 3 HF conversion (#6745) | 1 year ago |
| bmwl | f486f6e1e5 | ggml : add numa options (#5377) | 1 year ago |
| Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago |
| Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago |
| Cebtenzzre | e6616cf0db | examples : add compiler version and target to build info (#2998) | 2 years ago |
| Cebtenzzre | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago |
| Przemysław Pawełczyk | cb6c44c5e0 | build : do not use _GNU_SOURCE gratuitously (#2035) | 2 years ago |
| Cebtenzzre | ef15649972 | build : fix most gcc and clang warnings (#2861) | 2 years ago |
| Georgi Gerganov | c90d135eb4 | examples : fix underscore in beam-search + .gitignore (close #2900) | 2 years ago |