llama.cpp for SYCL

Background

SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators—such as CPUs, GPUs, and FPGAs. It is a single-source embedded domain-specific language based on pure C++17.

oneAPI is a specification that is open and standards-based, supporting multiple architecture types including but not limited to GPU, CPU, and FPGA. The spec has both direct programming and API-based programming paradigms.

Intel uses SYCL as the direct programming language to support CPUs, GPUs, and FPGAs.

To avoid reinventing the wheel, this code follows the other backend code paths in llama.cpp (such as OpenBLAS, cuBLAS, and CLBlast). The open-source tool SYCLomatic (commercially released as the Intel® DPC++ Compatibility Tool) was used to migrate the code to SYCL.

llama.cpp for SYCL is used to support Intel GPUs.

For Intel CPUs, we recommend using the x86 build of llama.cpp instead (built with Intel MKL).
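
For reference, a minimal sketch of such a CPU build on Linux, assuming the Intel oneMKL/BLAS options documented in the main README (LLAMA_BLAS, LLAMA_BLAS_VENDOR) apply to your checkout:

mkdir -p build && cd build
source /opt/intel/oneapi/setvars.sh
# assumed flags, taken from the main README's Intel oneMKL section
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build . --config Release
cd ..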

News

  • 2024.3

    • New base line is ready: tag b2437.
    • Support multiple cards: --split-mode: [none|layer]; [row] is not supported yet and is still under development.
    • Support assigning the main GPU with --main-gpu, replacing $GGML_SYCL_DEVICE.
    • Support detecting all Level Zero GPUs that share the same top Max compute units.
    • Support OPs
      • hardsigmoid
      • hardswish
      • pool2d
  • 2024.1

    • Create SYCL backend for Intel GPU.
    • Support Windows build

OS

|OS|Status|Verified|
|-|-|-|
|Linux|Support|Ubuntu 22.04, Fedora Silverblue 39|
|Windows|Support|Windows 11|

Intel GPU

Verified

|Intel GPU| Status | Verified Model|
|-|-|-|
|Intel Data Center Max Series| Support| Max 1550|
|Intel Data Center Flex Series| Support| Flex 170|
|Intel Arc Series| Support| Arc 770, 730M|
|Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
|Intel iGPU| Support| iGPU in i5-1250P, i7-1260P, i7-1165G7|

Note: If the iGPU has fewer than 80 EUs (Execution Units), inference will be too slow for practical use.

Memory

GPU memory is the main limitation when running LLMs on a GPU.

When llama.cpp runs, it prints how much memory is allocated on the GPU, so you can see the requirement for your case, for example: llm_load_tensors: buffer size = 3577.56 MiB.
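
For example, you can surface that line directly when launching (a minimal sketch; the model path is just an example):

./build/bin/main -m models/llama-2-7b.Q4_0.gguf -n 32 -ngl 33 2>&1 | grep "buffer size"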

For an iGPU, make sure enough host memory can be shared with the GPU. For llama-2-7b.Q4_0, 8GB+ of host memory is recommended.

For a dGPU, make sure the device memory is sufficient. For llama-2-7b.Q4_0, 4GB+ of device memory is recommended.
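
As a rough sanity check: the llama-2-7b.Q4_0 weights alone occupy about 3.5 GiB (the 3577.56 MiB buffer shown above), and the KV cache and compute buffers add several hundred MiB more, which is where the 4GB+ (dGPU) and 8GB+ (shared iGPU) recommendations come from.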

Nvidia GPU

Verified

|Nvidia GPU| Status | Verified Model|
|-|-|-|
|Ampere Series| Support| A100|

oneMKL for CUDA

The current oneMKL release does not contain the cuBLAS backend. As a result, for Nvidia GPUs, oneMKL must be built from source.

git clone https://github.com/oneapi-src/oneMKL
cd oneMKL
mkdir build
cd build
cmake -G Ninja .. -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF -DENABLE_CUBLAS_BACKEND=ON
ninja
# Add paths as necessary
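
A hypothetical example of exposing the freshly built libraries before building llama.cpp (the exact paths depend on your install prefix):

# example only: make the locally built oneMKL visible to the loader and to CMake
export LD_LIBRARY_PATH=$PWD/lib:$LD_LIBRARY_PATH
export CMAKE_PREFIX_PATH=$PWD:$CMAKE_PREFIX_PATH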

Docker

Note:

  • Only Docker on Linux is tested. Docker on WSL may not work.
  • You may need to install the Intel GPU driver on the host machine (see the Linux section for how to do that).

Build the image

You can choose between F16 and F32 build. F16 is faster for long-prompt inference.

# For F16:
#docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/main-intel.Dockerfile .

# Or, for F32:
docker build -t llama-cpp-sycl -f .devops/main-intel.Dockerfile .

# Note: you can also use the ".devops/main-server.Dockerfile", which compiles the "server" example
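
For example, a sketch of building the server image using the Dockerfile mentioned in the note above:

docker build -t llama-cpp-sycl-server -f .devops/main-server.Dockerfile .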

Run

# Firstly, find all the DRI cards:
ls -la /dev/dri
# Then, pick the card that you want to use.

# For example with "/dev/dri/card1"
docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33

Linux

Setup Environment

  1. Install Intel GPU driver.

a. Please install the Intel GPU driver following the official guide: Install GPU Drivers.

Note: for iGPU, please install the client GPU driver.

b. Add your user to the video and render groups:

sudo usermod -aG render username
sudo usermod -aG video username

Note: log out and log back in for the group changes to take effect.

c. Check

sudo apt install clinfo
sudo clinfo -l

Output (example):

Platform #0: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Arc(TM) A770 Graphics


Platform #0: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]

  2. Install Intel® oneAPI Base toolkit.

a. Please follow the procedure in Get the Intel® oneAPI Base Toolkit.

We recommend installing to the default folder: /opt/intel/oneapi.

The following guide uses the default folder as an example. If you use a different folder, adjust the paths accordingly.

b. Check

source /opt/intel/oneapi/setvars.sh

sycl-ls

There should be one or more Level Zero devices. Please confirm that at least one GPU is present, for example [ext_oneapi_level_zero:gpu:0].

Output (example):

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [23.30.26918.50]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918]

  3. Build locally:

Note:

  • You can choose between the F16 and F32 builds. F16 is faster for long-prompt inference.
  • By default, all binaries are built, which takes more time. To reduce build time, we recommend building only example/main.

    mkdir -p build
    cd build
    source /opt/intel/oneapi/setvars.sh
    
    # For FP16:
    #cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON
    
    # Or, for FP32:
    cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
    
    # For Nvidia GPUs
    cmake .. -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
    
    # Build example/main only
    #cmake --build . --config Release --target main
    
    # Or, build all binary
    cmake --build . --config Release -v
    
    cd ..
    

or

./examples/sycl/build.sh

Run

  1. Put the model file in the models folder

You can download llama-2-7b.Q4_0.gguf as an example.
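
One way to fetch it (a hypothetical mirror; any llama-2-7b Q4_0 GGUF file works):

wget -P models https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf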

  2. Enable the oneAPI runtime environment

    source /opt/intel/oneapi/setvars.sh
    
  3. List the device IDs

Run without parameters:

./build/bin/ls-sycl-device

# or run the "main" executable and check the startup log:

./build/bin/main

Check the device IDs in the startup log, for example:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|

|Attribute|Note|
|-|-|
|compute capability 1.3|Level Zero runtime, recommended|
|compute capability 3.0|OpenCL runtime, slower than Level Zero in most cases|

  4. Device selection and execution of llama.cpp

There are two device selection modes:

  • Single device: use one device assigned by the user.
  • Multiple devices: automatically use all devices that share the largest Max compute units value.

|Device selection|Parameter|
|-|-|
|Single device|--split-mode none --main-gpu DEVICE_ID|
|Multiple devices|--split-mode layer (default)|

Examples:

  • Use device 0:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm none -mg 0
    

or run by script:

./examples/sycl/run_llama2.sh 0

  • Use multiple devices:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm layer
    

or run by script:

./examples/sycl/run_llama2.sh

Note:

  • By default, mmap is used to read the model file. In some cases, this causes a hang at startup. We recommend passing --no-mmap to disable mmap() and avoid this issue.

  5. Verify the device ID in the output

Check that the selected GPU is shown in the output, for example:

detect 1 SYCL GPUs: [0] with top Max compute units:512

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Windows

Setup Environment

  1. Install Intel GPU driver.

Please install the Intel GPU driver following the official guide: Install GPU Drivers.

Note: The driver is mandatory for the compute functions to work.

  2. Install Visual Studio.

Please install Visual Studio, which is required to enable the oneAPI environment on Windows.

  3. Install Intel® oneAPI Base toolkit.

a. Please follow the procedure in Get the Intel® oneAPI Base Toolkit.

We recommend installing to the default folder: C:\Program Files (x86)\Intel\oneAPI.

The following guide uses the default folder as an example. If you use a different folder, adjust the paths accordingly.

b. Enable oneAPI running environment:

  • In Search, input 'oneAPI'.

Search & open "Intel oneAPI command prompt for Intel 64 for Visual Studio 2022"

  • In Run:

In CMD:

"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64

c. Check GPU

In oneAPI command line:

sycl-ls

There should be one or more Level Zero devices. Please confirm that at least one GPU is present, for example [ext_oneapi_level_zero:gpu:0].

Output (example):

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Iris(R) Xe Graphics OpenCL 3.0 NEO  [31.0.101.5186]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Iris(R) Xe Graphics 1.3 [1.3.28044]

  4. Install cmake & make

a. Download & install cmake for Windows: https://cmake.org/download/

b. Download & install mingw-w64 make for Windows provided by w64devkit

  • Download the 1.19.0 version of w64devkit.

  • Extract w64devkit on your PC.

  • Add the bin folder path in the Windows system PATH environment, like C:\xxx\w64devkit\bin\.

Build locally:

In oneAPI command line window:

mkdir -p build
cd build
@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

::  for FP16
::  faster for long-prompt inference
::  cmake -G "MinGW Makefiles" ..  -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx  -DCMAKE_BUILD_TYPE=Release -DLLAMA_SYCL_F16=ON

::  for FP32
cmake -G "MinGW Makefiles" ..  -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx  -DCMAKE_BUILD_TYPE=Release


::  build example/main only
::  make main

::  build all binary
make -j
cd ..

or

.\examples\sycl\win-build-sycl.bat

Note:

  • By default, all binaries are built, which takes more time. To reduce build time, we recommend building only example/main.

Run

  1. Put the model file in the models folder

You can download llama-2-7b.Q4_0.gguf as an example.

  2. Enable the oneAPI runtime environment
  • In Search, input 'oneAPI'.

Search & open "Intel oneAPI command prompt for Intel 64 for Visual Studio 2022"

  • In Run:

In CMD:

"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64

  3. List the device IDs

Run without parameter:

build\bin\ls-sycl-device.exe

or

build\bin\main.exe

Check the device IDs in the startup log, for example:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|

|Attribute|Note|
|-|-|
|compute capability 1.3|Level Zero runtime, recommended|
|compute capability 3.0|OpenCL runtime, slower than Level Zero in most cases|

  4. Device selection and execution of llama.cpp

There are two device selection modes:

  • Single device: use one device assigned by the user.
  • Multiple devices: automatically use all devices that share the largest Max compute units value.

|Device selection|Parameter|
|-|-|
|Single device|--split-mode none --main-gpu DEVICE_ID|
|Multiple devices|--split-mode layer (default)|

Examples:

  • Use device 0:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm none -mg 0
    
  • Use multiple devices:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm layer
    

or run by script:

.\examples\sycl\win-run-llama2.bat

Note:

  • By default, mmap is used to read the model file. In some cases, this causes a hang at startup. We recommend passing --no-mmap to disable mmap() and avoid this issue.

  5. Verify the device ID in the output

Check that the selected GPU is shown in the output, for example:

detect 1 SYCL GPUs: [0] with top Max compute units:512

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Environment Variable

Build

|Name|Value|Function|
|-|-|-|
|LLAMA_SYCL|ON (mandatory)|Enable the SYCL build. LLAMA_SYCL=ON is mandatory for both FP32 and FP16.|
|LLAMA_SYCL_F16|ON (optional)|Enable the FP16 SYCL build; faster for long-prompt inference. Do not set it for FP32.|
|CMAKE_C_COMPILER|icx|Use the icx compiler for the SYCL code path|
|CMAKE_CXX_COMPILER|icpx (Linux), icx (Windows)|Use icpx/icx for the SYCL code path|

Running

|Name|Value|Function|
|-|-|-|
|GGML_SYCL_DEBUG|0 (default) or 1|Enable debug logging via the GGML_SYCL_DEBUG macro|
|ZES_ENABLE_SYSMAN|0 (default) or 1|Allow querying the free GPU memory via sycl::aspect::ext_intel_free_memory. Recommended when --split-mode=layer|
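
For example, a minimal sketch on Linux combining the runtime variables above (the model path and prompt are only examples):

source /opt/intel/oneapi/setvars.sh
GGML_SYCL_DEBUG=1 ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -e -ngl 33 -sm layer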

Known Issue

  • Hang during startup

llama.cpp uses mmap by default to read the model file and copy it to the GPU. On some systems, the memcpy misbehaves and blocks.

Solution: add --no-mmap or --mmap 0.
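
For example, adapting the Linux run command from above:

ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm none -mg 0 --no-mmap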

  • Split-mode [row] is not supported

It is still under development.

Q&A

Note: please add the prefix [SYCL] to the issue title, so that we can check it as soon as possible.

  • Error: error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory.

The oneAPI runtime environment is not enabled.

Install the oneAPI Base Toolkit and enable it with: source /opt/intel/oneapi/setvars.sh.

  • On Windows: no output and no error.

The oneAPI runtime environment is not enabled.

  • Compile errors.

Remove the build folder and try again.

  • I cannot see [ext_oneapi_level_zero:gpu:0] after installing the GPU driver on Linux.

Please run sudo sycl-ls.

If the device appears in that result, add your user to the video and render groups:

  sudo usermod -aG render username
  sudo usermod -aG video username

Then log out and log back in.

If you still do not see the device, please check the GPU driver installation steps again.

Todo

  • Support row split mode (--split-mode row) for multi-card runs.