
fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to the llama-cpp.cachix.org cache with a comment
claiming it is 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.

Users can still manually add the llama-cpp cache if it becomes functional in the future.
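
For reference, a minimal sketch (not part of this change) of what opting back in
would look like, reusing the exact substituter URL and public key removed below.
Note that the whole nixConfig block in flake.nix ships commented out, so this has
always been opt-in:

```nix
# Sketch only: re-adds the removed llama-cpp cache
# alongside the kept cuda-maintainers entries.
nixConfig = {
  extra-substituters = [
    "https://llama-cpp.cachix.org"
  ];
  extra-trusted-public-keys = [
    "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
  ];
};
```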
Bas Nijholt, 5 months ago
commit 1adc9812bd
1 file changed with 0 additions and 5 deletions

flake.nix  +0 −5

@@ -36,9 +36,6 @@
   # ```
   # nixConfig = {
   #   extra-substituters = [
-  #     # Populated by the CI in ggml-org/llama.cpp
-  #     "https://llama-cpp.cachix.org"
-  #
   #     # A development cache for nixpkgs imported with `config.cudaSupport = true`.
   #     # Populated by https://hercules-ci.com/github/SomeoneSerge/nixpkgs-cuda-ci.
   #     # This lets one skip building e.g. the CUDA-enabled openmpi.
@@ -47,10 +44,8 @@
   #   ];
   #
   #   # Verify these are the same keys as published on
-  #   # - https://app.cachix.org/cache/llama-cpp
   #   # - https://app.cachix.org/cache/cuda-maintainers
   #   extra-trusted-public-keys = [
-  #     "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
   #     "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
   #   ];
   # };
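
Usage note: the remaining cache can also be enabled ad hoc, without editing
flake.nix, by passing the same values as command-line settings. A sketch,
assuming the conventional https://cuda-maintainers.cachix.org substituter URL
(inferred from the key name; only the key itself appears in the comment above)
and a `.#default` flake output; on multi-user installs these flags only take
effect for trusted users:

```sh
# Pull CUDA dependencies from the cuda-maintainers cache for a single build.
nix build .#default \
  --extra-substituters "https://cuda-maintainers.cachix.org" \
  --extra-trusted-public-keys "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
```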