Latest commit: Evgeny Kurnevsky e52aba537a nix: allow to override rocm gpu targets (#10794) 1 year ago
nix e52aba537a nix: allow to override rocm gpu targets (#10794) 1 year ago
cloud-v-pipeline 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
full-cuda.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
full-musa.Dockerfile 249cd93da3 mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) 1 year ago
full-rocm.Dockerfile 6f1d9d71f4 Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) 1 year ago
full.Dockerfile 59f4db1088 ggml : add predefined list of CPU backend variants to build (#10626) 1 year ago
llama-cli-cann.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-cli-cuda.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-cli-intel.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-cli-musa.Dockerfile 249cd93da3 mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) 1 year ago
llama-cli-rocm.Dockerfile 6f1d9d71f4 Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) 1 year ago
llama-cli-vulkan.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-cli.Dockerfile 59f4db1088 ggml : add predefined list of CPU backend variants to build (#10626) 1 year ago
llama-cpp-cuda.srpm.spec 0e814dfc42 devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) 1 year ago
llama-cpp.srpm.spec 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
llama-server-cuda.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-server-intel.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-server-musa.Dockerfile 249cd93da3 mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) 1 year ago
llama-server-rocm.Dockerfile 6f1d9d71f4 Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) 1 year ago
llama-server-vulkan.Dockerfile 75207b3a88 docker: use GGML_NATIVE=OFF (#10368) 1 year ago
llama-server.Dockerfile 59f4db1088 ggml : add predefined list of CPU backend variants to build (#10626) 1 year ago
tools.sh 11e07fd63b fix: graceful shutdown for Docker images (#10815) 1 year ago