| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| nix | 45abe0f74e | server : replace behave with pytest (#10416) | 1 year ago |
| cloud-v-pipeline | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| full-cuda.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| full-musa.Dockerfile | 249cd93da3 | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 1 year ago |
| full-rocm.Dockerfile | 6f1d9d71f4 | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 1 year ago |
| full.Dockerfile | b3283448ce | build : Fix docker build warnings (#8535) (#8537) | 1 year ago |
| llama-cli-cann.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-cli-cuda.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-cli-intel.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-cli-musa.Dockerfile | 249cd93da3 | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 1 year ago |
| llama-cli-rocm.Dockerfile | 6f1d9d71f4 | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 1 year ago |
| llama-cli-vulkan.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-cli.Dockerfile | b3283448ce | build : Fix docker build warnings (#8535) (#8537) | 1 year ago |
| llama-cpp-cuda.srpm.spec | 0e814dfc42 | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 1 year ago |
| llama-cpp.srpm.spec | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-server-cuda.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-server-intel.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-server-musa.Dockerfile | 249cd93da3 | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 1 year ago |
| llama-server-rocm.Dockerfile | 6f1d9d71f4 | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 1 year ago |
| llama-server-vulkan.Dockerfile | 75207b3a88 | docker: use GGML_NATIVE=OFF (#10368) | 1 year ago |
| llama-server.Dockerfile | 3420909dff | ggml : automatic selection of best CPU backend (#10606) | 1 year ago |
| tools.sh | be6d7c0791 | examples : remove `finetune` and `train-text-from-scratch` (#8669) | 1 year ago |
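Dockerfiles like these are normally consumed directly with `docker build`. A minimal sketch, assuming the files sit under a `.devops/` directory of the llama.cpp repository and that the build context is the repository root (both the path and the image tag are illustrative assumptions, not confirmed by this listing):

```sh
# Build the CUDA-enabled server image from the repository root.
# The .devops/ path and the image tag are assumptions for illustration.
docker build -t llama-server-cuda -f .devops/llama-server-cuda.Dockerfile .

# Run the resulting image; the exposed port and GPU flags depend on
# the actual Dockerfile contents (8080 is assumed here).
docker run --gpus all -p 8080:8080 llama-server-cuda
```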