Latest commit: Rudi Servo · 7c0e285858 · devops : add docker-multi-stage builds (#10832) · 1 year ago

Name                       Commit      Last commit message                                                                                     Last change
nix                        e52aba537a  nix: allow to override rocm gpu targets (#10794)                                                        1 year ago
cloud-v-pipeline           1c641e6aac  `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   1 year ago
cpu.Dockerfile             7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago
cuda.Dockerfile            7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago
intel.Dockerfile           7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago
llama-cli-cann.Dockerfile  75207b3a88  docker: use GGML_NATIVE=OFF (#10368)                                                                    1 year ago
llama-cpp-cuda.srpm.spec   0e814dfc42  devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139)                                               1 year ago
llama-cpp.srpm.spec        1c641e6aac  `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)   1 year ago
musa.Dockerfile            7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago
rocm.Dockerfile            7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago
tools.sh                   11e07fd63b  fix: graceful shutdown for Docker images (#10815)                                                       1 year ago
vulkan.Dockerfile          7c0e285858  devops : add docker-multi-stage builds (#10832)                                                         1 year ago