
flake : build llama.cpp on Intel with nix (#2795)

Problem
-------
`nix build` fails with missing `Accelerate.h`.

Changes
-------
- Fix the llama.cpp nix build on Intel macOS: add the same Apple SDK frameworks as
for ARM
- Add the `quantize` app to the nix flake outputs
- Extend the nix devShell with llama-python so the convert script can be used
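
For context, the platform-specific dependency selection in flake.nix ends up shaped roughly like this after the change (a sketch reconstructed around the hunk; the binding name `osSpecific` and the exact SDK attribute used in the aarch64 branch are assumptions, only the new `else if isDarwin` branch is taken verbatim from the diff):

```nix
osSpecific =
  if isAarch64 && isDarwin then
    # ARM macOS branch (SDK attribute assumed from surrounding context)
    with pkgs.darwin.apple_sdk.frameworks; [
      Accelerate
      CoreGraphics
      CoreVideo
    ]
  else if isDarwin then
    # Intel macOS: same Apple SDK frameworks as for ARM,
    # which provides the previously missing Accelerate.h
    with pkgs.darwin.apple_sdk.frameworks; [
      Accelerate
      CoreGraphics
      CoreVideo
    ]
  else
    # Non-Darwin platforms keep using openblas
    with pkgs; [ openblas ];
```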

Testing
-------
Testing the steps with nix:
1. `nix build`
2. Get the model, then `nix develop` and run `python convert.py models/llama-2-7b.ggmlv3.q4_0.bin`
3. `nix run llama.cpp#quantize -- open_llama_7b/ggml-model-f16.gguf ./models/ggml-model-q4_0.bin 2`
4. `nix run llama.cpp#llama -- -m models/ggml-model-q4_0.bin -p "What is nix?" -n 400 --temp 0.8 -e -t 8`

Co-authored-by: Volodymyr Vitvitskyi <volodymyrvitvitskyi@SamsungPro.local>
Volodymyr Vitvitskyi committed 2 years ago
commit f305bad11e
1 file changed, 11 insertions, 0 deletions

flake.nix (+11, -0)

@@ -21,6 +21,12 @@
               CoreGraphics
               CoreVideo
             ]
+          else if isDarwin then
+            with pkgs.darwin.apple_sdk.frameworks; [
+              Accelerate
+              CoreGraphics
+              CoreVideo
+            ]
           else
             with pkgs; [ openblas ]
         );
@@ -80,8 +86,13 @@
           type = "app";
           program = "${self.packages.${system}.default}/bin/llama";
         };
+        apps.quantize = {
+          type = "app";
+          program = "${self.packages.${system}.default}/bin/quantize";
+        };
         apps.default = self.apps.${system}.llama;
         devShells.default = pkgs.mkShell {
+          buildInputs = [ llama-python ];
           packages = nativeBuildInputs ++ osSpecific;
         };
       });