README-sycl.md

llama.cpp for SYCL

Background

SYCL is a high-level parallel programming model designed to improve developer productivity when writing code for various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing, based on standard C++17.

oneAPI is an open ecosystem and a standards-based specification, supporting multiple architectures including but not limited to Intel CPUs, GPUs, and FPGAs. The key components of the oneAPI ecosystem include:

  • DPCPP (Data Parallel C++): The primary oneAPI SYCL implementation, which includes the icpx/icx compilers.
  • oneAPI Libraries: A set of highly optimized libraries targeting multiple domains (e.g. oneMKL, the Math Kernel Library).
  • oneAPI Level Zero: A high-performance, low-level interface for fine-grained control over Intel iGPUs and dGPUs.
  • Nvidia & AMD Plugins: Plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets.

Llama.cpp + SYCL

The llama.cpp SYCL backend was designed primarily for Intel GPUs. Thanks to SYCL's cross-platform nature, it can also support GPUs from other vendors: Nvidia GPUs are supported, with AMD GPU support coming.

When targeting Intel CPUs, it is recommended to use llama.cpp with the Intel oneMKL backend.

Its design is similar to the other llama.cpp BLAS-based paths such as OpenBLAS, cuBLAS, and CLBlast. In the initial stage of development, oneAPI's SYCLomatic open-source migration tool (commercially released as the Intel® DPC++ Compatibility Tool) was used for the port.

News

  • 2024.4

    • Support data types: GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M.
  • 2024.3

    • Released Windows binary files.
    • Published a blog post, Run LLM on all Intel GPUs Using llama.cpp: intel.com or medium.com.
    • New baseline is ready: tag b2437.
    • Support for multiple cards via --split-mode: [none|layer]; [row] is not supported yet (still in development).
    • The main GPU can be assigned with --main-gpu, replacing $GGML_SYCL_DEVICE.
    • Detect all GPUs with Level-Zero and the same top Max compute units.
    • Newly supported ops:
      • hardsigmoid
      • hardswish
      • pool2d
  • 2024.1

    • Created the SYCL backend for Intel GPUs.
    • Added Windows build support.

OS

| OS      | Status  | Verified                           |
|---------|---------|------------------------------------|
| Linux   | Support | Ubuntu 22.04, Fedora Silverblue 39 |
| Windows | Support | Windows 11                         |

Hardware

Intel GPU

Verified devices

| Intel GPU                     | Status  | Verified Model                        |
|-------------------------------|---------|---------------------------------------|
| Intel Data Center Max Series  | Support | Max 1550, 1100                        |
| Intel Data Center Flex Series | Support | Flex 170                              |
| Intel Arc Series              | Support | Arc 770, 730M                         |
| Intel built-in Arc GPU        | Support | built-in Arc GPU in Meteor Lake       |
| Intel iGPU                    | Support | iGPU in i5-1250P, i7-1260P, i7-1165G7 |

Notes:

  • Memory

    • Device memory is a limitation when running a large model. The loaded model size (llm_load_tensors: buffer_size) is displayed in the log when running ./bin/main.

    • Please make sure the GPU memory (shared from the host for iGPUs) is large enough for the model. For example, llama-2-7b.Q4_0 requires at least 8.0 GB on an integrated GPU and 4.0 GB on a discrete GPU.

  • Execution Unit (EU)

    • If the iGPU has less than 80 EUs, the inference speed will likely be too slow for practical use.
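As a back-of-the-envelope check of the memory figures above, the weight buffer of a Q4_0 model can be estimated from llama.cpp's Q4_0 block layout (each block of 32 weights occupies 18 bytes: 16 bytes of 4-bit quants plus a 2-byte fp16 scale). This is only a sketch of the weight tensors; it excludes the KV cache and compute buffers, which is why the iGPU (host-shared memory) figure quoted above is higher:

```shell
# Rough estimate of the Q4_0 weight buffer for a ~7B-parameter model.
# Q4_0 stores each block of 32 weights in 18 bytes (~4.5 bits/weight).
params=7000000000
bytes=$(( params * 18 / 32 ))
echo "estimated weight buffer: ${bytes} bytes"   # ~3.94 GB, weights only
```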

Other Vendor GPU

Verified devices

| Nvidia GPU             | Status  | Verified Model |
|------------------------|---------|----------------|
| Ampere Series          | Support | A100, A4000    |
| Ampere Series (Mobile) | Support | RTX 40 Series  |

Docker

The docker build option is currently limited to Intel GPU targets.

Build image

# Using FP16
docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/main-intel.Dockerfile .

Notes:

To build with the default FP32 (slower than the FP16 alternative), remove the --build-arg="LLAMA_SYCL_F16=ON" argument from the previous command.

You can also use the .devops/server-intel.Dockerfile, which builds the "server" alternative.
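For instance, the server image can be built the same way; the llama-cpp-sycl-server tag below is an arbitrary name chosen for this sketch, not something the project prescribes:

```shell
# Build the "server" variant of the image (FP16); the image tag is arbitrary
docker build -t llama-cpp-sycl-server --build-arg="LLAMA_SYCL_F16=ON" \
    -f .devops/server-intel.Dockerfile .
```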

Run container

# First, find all the DRI cards
ls -la /dev/dri
# Then, pick the card that you want to use (e.g. /dev/dri/card1 here).
docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33

Notes:

  • Docker has been tested successfully on native Linux. WSL support has not been verified yet.
  • You may need to install Intel GPU driver on the host machine (Please refer to the Linux configuration for details).

Linux

I. Setup Environment

  1. Install GPU drivers

    • Intel GPU

The installation guide and download page for Intel data center GPU drivers can be found here: Get intel dGPU Drivers.

Note: for client GPUs (iGPU & Arc A-Series), please refer to the client iGPU driver installation.

Once installed, add the user(s) to the video and render groups.

sudo usermod -aG render $USER
sudo usermod -aG video $USER

Note: log out and log back in for the changes to take effect.

Verify installation through clinfo:

sudo apt install clinfo
sudo clinfo -l

Sample output:

Platform #0: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Arc(TM) A770 Graphics

Platform #0: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
  • Nvidia GPU

In order to target Nvidia GPUs through SYCL, please make sure the CUDA/cuBLAS native requirements (found here) are installed.

  2. Install Intel® oneAPI Base toolkit
  • For Intel GPU

The base toolkit can be obtained from the official Intel® oneAPI Base Toolkit page.

Please follow the instructions for downloading and installing the Toolkit for Linux, and preferably keep the default installation values unchanged, notably the installation path (/opt/intel/oneapi by default).

The following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.

Upon a successful installation, SYCL is enabled for the available Intel devices, along with relevant libraries such as oneAPI MKL for Intel GPUs.

  • Adding support to Nvidia GPUs

oneAPI Plugin: In order to enable SYCL support on Nvidia GPUs, please install the Codeplay oneAPI Plugin for Nvidia GPUs. Also make sure the plugin version matches that of the base toolkit installed in the previous step, for a seamless "oneAPI on Nvidia GPU" setup.

oneMKL for cuBLAS: The current oneMKL releases (shipped with the oneAPI base toolkit) do not contain the cuBLAS backend. A from-source build of the upstream oneMKL with the cuBLAS backend enabled is therefore required to run on Nvidia GPUs.

git clone https://github.com/oneapi-src/oneMKL
cd oneMKL
cmake -B buildWithCublas -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF -DENABLE_CUBLAS_BACKEND=ON -DTARGET_DOMAINS=blas
cmake --build buildWithCublas --config Release
  3. Verify installation and environment

To check the available SYCL devices on the machine, use the sycl-ls command.

source /opt/intel/oneapi/setvars.sh
sycl-ls
  • Intel GPU

When targeting an Intel GPU, the user should expect one or more Level-Zero devices among the available SYCL devices. Please make sure that at least one GPU is present, for instance [ext_oneapi_level_zero:gpu:0] in the sample output below:

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [23.30.26918.50]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918]
  • Nvidia GPU

Similarly, users targeting Nvidia GPUs should expect at least one SYCL-CUDA device ([ext_oneapi_cuda:gpu]), as below:

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]
[ext_oneapi_cuda:gpu:0] NVIDIA CUDA BACKEND, NVIDIA A100-PCIE-40GB 8.0 [CUDA 12.2]

II. Build llama.cpp

Intel GPU

# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh

# Build LLAMA with MKL BLAS acceleration for intel GPU

# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Option 2: Use FP16
cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON

# build all binary
cmake --build build --config Release -j -v

Nvidia GPU

# Export relevant ENV variables
export LD_LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LIBRARY_PATH
export CPLUS_INCLUDE_DIR=/path/to/oneMKL/buildWithCublas/include:$CPLUS_INCLUDE_DIR
export CPLUS_INCLUDE_DIR=/path/to/oneMKL/include:$CPLUS_INCLUDE_DIR

# Build LLAMA with Nvidia BLAS acceleration through SYCL

# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Option 2: Use FP16
cmake -B build -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON

# build all binary
cmake --build build --config Release -j -v

III. Run the inference

  1. Retrieve and prepare model

You can refer to the general Prepare and Quantize guide for model preparation, or simply download the llama-2-7b.Q4_0.gguf model as an example.
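As a sketch, the example model can be fetched from the Hugging Face Hub; the TheBloke/Llama-2-7B-GGUF repository used below is one common source (an assumption of this example, not something this guide prescribes):

```shell
# Download the example model into ./models (the Hugging Face repo is an assumption)
mkdir -p models
curl -L -o models/llama-2-7b.Q4_0.gguf \
  https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf
```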

  2. Enable oneAPI running environment

    source /opt/intel/oneapi/setvars.sh
    
  3. List device information

Similar to the native sycl-ls, the available SYCL devices can be queried as follows:

./build/bin/ls-sycl-device

An example of such a log on a system with one Intel CPU and one Intel GPU looks like the following:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|
| Attribute              | Note                                                        |
|------------------------|-------------------------------------------------------------|
| compute capability 1.3 | Level-Zero driver/runtime, recommended                      |
| compute capability 3.0 | OpenCL driver/runtime, slower than Level-Zero in most cases |
  4. Launch inference

There are two device selection modes:

  • Single device: Use one device specified by the user.
  • Multiple devices: Automatically select the devices with the same largest Max compute units.
| Device selection | Parameter                              |
|------------------|----------------------------------------|
| Single device    | --split-mode none --main-gpu DEVICE_ID |
| Multiple devices | --split-mode layer (default)           |

Examples:

  • Use device 0:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm none -mg 0
    

or run via the script:

./examples/sycl/run_llama2.sh 0
  • Use multiple devices:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm layer
    

Otherwise, you can run the script:

./examples/sycl/run_llama2.sh

Notes:

  • Upon execution, verify the selected device ID(s) in the output log, which can for instance be displayed as follows:

    detect 1 SYCL GPUs: [0] with top Max compute units:512
    

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Windows

I. Setup Environment

  1. Install GPU driver

The instruction guide and download page for Intel GPU drivers can be found here: Get intel GPU Drivers.

  2. Install Visual Studio

If you already have a recent version of Microsoft Visual Studio, you can skip this step. Otherwise, please refer to the official download page for Microsoft Visual Studio.

  3. Install Intel® oneAPI Base toolkit

The base toolkit can be obtained from the official Intel® oneAPI Base Toolkit page.

Please follow the instructions for downloading and installing the Toolkit for Windows, and preferably keep the default installation values unchanged, notably the installation path (C:\Program Files (x86)\Intel\oneAPI by default).

The following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.

b. Enable oneAPI running environment:

  • Type "oneAPI" in the search bar, then open the Intel oneAPI command prompt for Intel 64 for Visual Studio 2022 App.

  • On the command prompt, enable the runtime environment with the following:

    "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
    

c. Verify installation

In the oneAPI command line, run the following to print the available SYCL devices:

sycl-ls

There should be one or more Level-Zero GPU devices displayed as [ext_oneapi_level_zero:gpu]. Below is an example of such output, detecting an Intel Iris Xe GPU as a Level-Zero SYCL device:

Output (example):

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Iris(R) Xe Graphics OpenCL 3.0 NEO  [31.0.101.5186]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Iris(R) Xe Graphics 1.3 [1.3.28044]
  4. Install build tools

a. Download & install cmake for Windows: https://cmake.org/download/

b. Download & install mingw-w64 make for Windows provided by w64devkit

  • Download the 1.19.0 version of w64devkit.

  • Extract w64devkit on your PC.

  • Add the bin folder path to the Windows system PATH environment variable (e.g. C:\xxx\w64devkit\bin\).

II. Build llama.cpp

On the oneAPI command line window, step into the llama.cpp main directory and run the following:

@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

# Option 1: Use FP32 (recommended for better performance in most cases)
cmake -B build -G "MinGW Makefiles" -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx  -DCMAKE_BUILD_TYPE=Release

# Option 2: Or FP16
cmake -B build -G "MinGW Makefiles" -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx  -DCMAKE_BUILD_TYPE=Release -DLLAMA_SYCL_F16=ON

cmake --build build --config Release -j

Otherwise, run the win-build-sycl.bat wrapper which encapsulates the former instructions:

.\examples\sycl\win-build-sycl.bat

Notes:

  • By default, calling make will build all target binary files. For a minimal experimental setup, the user can build only the inference executable with make main.

III. Run the inference

  1. Retrieve and prepare model

You can refer to the general Prepare and Quantize guide for model preparation, or simply download the llama-2-7b.Q4_0.gguf model as an example.

  2. Enable oneAPI running environment

On the oneAPI command line window, run the following and step into the llama.cpp directory:

"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
  3. List device information

Similar to the native sycl-ls, the available SYCL devices can be queried as follows:

build\bin\ls-sycl-device.exe

The output of this command on a system with one Intel CPU and one Intel GPU would look like the following:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|

| Attribute              | Note                                                        |
|------------------------|-------------------------------------------------------------|
| compute capability 1.3 | Level-Zero driver/runtime, recommended                      |
| compute capability 3.0 | OpenCL driver/runtime, slower than Level-Zero in most cases |
  4. Launch inference

There are two device selection modes:

  • Single device: Use one device specified by the user.
  • Multiple devices: Automatically select the devices with the same largest Max compute units.
| Device selection | Parameter                              |
|------------------|----------------------------------------|
| Single device    | --split-mode none --main-gpu DEVICE_ID |
| Multiple devices | --split-mode layer (default)           |

Examples:

  • Use device 0:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm none -mg 0
    
  • Use multiple devices:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm layer
    

Otherwise, run the following wrapper script:

.\examples\sycl\win-run-llama2.bat

Note:

  • Upon execution, verify the selected device ID(s) in the output log, which can for instance be displayed as follows:

    detect 1 SYCL GPUs: [0] with top Max compute units:512
    

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Environment Variable

Build

| Name               | Value                          | Function                                  |
|--------------------|--------------------------------|-------------------------------------------|
| LLAMA_SYCL         | ON (mandatory)                 | Enable build with SYCL code path.         |
| LLAMA_SYCL_TARGET  | INTEL (default) or NVIDIA      | Set the SYCL target device type.          |
| LLAMA_SYCL_F16     | OFF (default) or ON (optional) | Enable FP16 build with SYCL code path.    |
| CMAKE_C_COMPILER   | icx                            | Set icx compiler for SYCL code path.      |
| CMAKE_CXX_COMPILER | icpx (Linux), icx (Windows)    | Set icpx/icx compiler for SYCL code path. |

Runtime

| Name              | Value            | Function                                                                                            |
|-------------------|------------------|-----------------------------------------------------------------------------------------------------|
| GGML_SYCL_DEBUG   | 0 (default) or 1 | Enable debug logging via the GGML_SYCL_DEBUG macro.                                                 |
| ZES_ENABLE_SYSMAN | 0 (default) or 1 | Support getting free GPU memory via sycl::aspect::ext_intel_free_memory. Recommended with --split-mode layer. |
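Putting the runtime variables together, a debug-enabled multi-device launch might look like the following sketch (the model path and prompt are just the example values used throughout this guide, and the run requires a built binary and an Intel GPU):

```shell
# Enable SYCL debug logging and sysman-based free-memory queries for one run
GGML_SYCL_DEBUG=1 ZES_ENABLE_SYSMAN=1 \
  ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -e -ngl 33 -sm layer
```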

Known Issues

  • Split-mode [row] is not supported.

Q&A

  • Error: error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory.

    • Potential cause: oneAPI is not installed, or its environment variables are not set.
    • Solution: Install the oneAPI base toolkit and enable its environment with: source /opt/intel/oneapi/setvars.sh.
  • General compiler error:

    • Remove the build folder or try a clean build.
  • I cannot see [ext_oneapi_level_zero:gpu] after installing the GPU driver on Linux.

Please double-check with sudo sycl-ls.

If it's present in the list, add your user to the video and render groups, then log out/log in or restart your system:

  sudo usermod -aG render $USER
  sudo usermod -aG video $USER

Otherwise, please double-check the GPU driver installation steps.

GitHub contributions:

Please add the [SYCL] prefix/tag to issue and PR titles to help the SYCL team check and address them without delay.

TODO

  • Support row split mode (--split-mode row) for multi-card runs.