| File | Commit | Message | Last updated |
|---|---|---|---|
| build-info.sh | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| check-requirements.sh | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| ci-run.sh | 413e7b0559 | ci : add model tests + script wrapper (#4586) | 2 years ago |
| compare-commits.sh | 87cf323cef | scripts : change build path to "build-bench" for compare-commits.sh (#10836) | 1 year ago |
| compare-llama-bench.py | 6dde178248 | scripts: fix compare-llama-bench commit hash logic (#11891) | 11 months ago |
| debug-test.sh | fa42aa6d89 | scripts : fix spelling typo in messages and comments (#9782) | 1 year ago |
| fetch_server_test_models.py | 8b576b6c55 | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 11 months ago |
| gen-authors.sh | e11a8999b5 | license : update copyright notice + add AUTHORS (#6405) | 1 year ago |
| gen-unicode-data.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| get-flags.mk | a0c2dad9d4 | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 1 year ago |
| get-hellaswag.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| get-pg.sh | 9a818f7c42 | scripts : improve get-pg.sh (#4838) | 2 years ago |
| get-wikitext-103.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| get-wikitext-2.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| get-winogrande.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| get_chat_template.py | 5137da7b8c | scripts: corrected encoding when getting chat template (#11866) (#11907) | 11 months ago |
| hf.sh | f26c874179 | scripts : restore hf.sh (#11288) | 1 year ago |
| install-oneapi.bat | 01684139c3 | support SYCL backend windows build (#5208) | 1 year ago |
| qnt-all.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| run-all-perf.sh | 611363ac79 | scripts : add pipefail | 2 years ago |
| run-all-ppl.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| sync-ggml-am.sh | aede2074f6 | scripts : sync-ggml-am.sh fix | 10 months ago |
| sync-ggml.last | 3d1cf3cf33 | sync : ggml | 10 months ago |
| sync-ggml.sh | 48e1ae0e61 | scripts : sync gguf | 1 year ago |
| verify-checksum-models.py | a2ac89d6ef | convert.py : add python logging instead of print() (#6511) | 1 year ago |
| xxd.cmake | 5cf5e7d490 | `build`: generate hex dump of server assets during build (#6661) | 1 year ago |