Johannes Gäßler | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago
Romain Biessy | 57f8355b29 | sycl: Update Intel docker images to use DPC++ 2025.0 (#10305) | 1 year ago
Xuan Son Nguyen | a77feb5d71 | server : add some missing env variables (#9116) | 1 year ago
Joe Todd | f19bf99c01 | Build Llama SYCL Intel with static libs (#8668) | 1 year ago
Al Mochkin | b3283448ce | build : Fix docker build warnings (#8535) (#8537) | 1 year ago
Georgi Gerganov | 0e814dfc42 | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 1 year ago
joecryptotoo | 925c30956d | Add healthchecks to llama-server containers (#8081) | 1 year ago
Olivier Chafik | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago