
README-sycl.md

llama.cpp for SYCL

Background

SYCL is a high-level parallel programming model designed to improve developers' productivity when writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing, based on standard C++17.

oneAPI is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to Intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include:

  • DPCPP (Data Parallel C++): The primary oneAPI SYCL implementation, which includes the icpx/icx compilers.
  • oneAPI Libraries: A set of highly optimized libraries targeting multiple domains (e.g. oneMKL - Math Kernel Library).
  • oneAPI Level Zero: A high-performance, low-level interface for fine-grained control over Intel iGPUs and dGPUs.
  • Nvidia & AMD Plugins: Plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets.

Llama.cpp + SYCL

This SYCL "backend" follows the same design as the other llama.cpp BLAS-based paths such as OpenBLAS, cuBLAS, CLBlast, etc. oneAPI's open-source SYCLomatic migration tool (commercially released as the Intel® DPC++ Compatibility Tool) was used to port the existing CUDA code to SYCL.

The llama.cpp SYCL backend supports:

  • Intel GPUs.
  • Nvidia GPUs.

Upcoming support: AMD GPUs.

When targeting Intel CPUs, it is recommended to use the llama.cpp for x86_64 approach instead.

News

  • 2024.3

    • A blog post was published: Run LLM on all Intel GPUs Using llama.cpp: intel.com or medium.com.
    • A new baseline is ready: tag b2437.
    • Support for multiple cards via --split-mode: [none|layer]; [row] is not supported yet and is under development.
    • Support for assigning the main GPU via --main-gpu, replacing $GGML_SYCL_DEVICE.
    • Support for detecting all level-zero GPUs that share the same top Max compute units.
    • Newly supported ops: hardsigmoid, hardswish and pool2d.
  • 2024.1

    • Created the SYCL backend for Intel GPUs.
    • Added Windows build support.

OS

|OS|Status|Verified|
|-|-|-|
|Linux|Support|Ubuntu 22.04, Fedora Silverblue 39|
|Windows|Support|Windows 11|

Supported devices

Intel GPUs

The oneAPI Math Kernel Library, which is included in the oneAPI Base Toolkit, supports Intel GPUs. In order to make it "visible", simply run the following:

source /opt/intel/oneapi/setvars.sh
  • Tested devices

|Intel GPU|Status|Verified Model|
|-|-|-|
|Intel Data Center Max Series|Support|Max 1550|
|Intel Data Center Flex Series|Support|Flex 170|
|Intel Arc Series|Support|Arc 770, 730M|
|Intel built-in Arc GPU|Support|built-in Arc GPU in Meteor Lake|
|Intel iGPU|Support|iGPU in i5-1250P, i7-1260P, i7-1165G7|

Notes:

  • Device memory can be a limitation when running a large model on an Intel GPU. The loaded model size, llm_load_tensors: buffer_size, is displayed in the log when running ./bin/main.

  • Please make sure the GPU shared memory from the host is large enough to account for the model's size. For example, llama-2-7b.Q4_0 requires at least 8.0GB for integrated GPUs and 4.0GB for discrete GPUs.

  • If the iGPU has fewer than 80 EUs (Execution Units), the inference speed will likely be too slow for practical use.
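As a rough sanity check for the notes above, the weight memory of a Q4_0 model can be estimated from its parameter count: Q4_0 packs each block of 32 weights into 18 bytes (a 2-byte fp16 scale plus 16 bytes of 4-bit quants, i.e. about 4.5 bits per weight). The snippet below is an illustrative sketch only; it ignores the KV cache, compute buffers, and non-quantized tensors:

```shell
# Rough Q4_0 weight-memory estimate (illustrative; ignores KV cache and compute buffers)
n_params=7000000000                # e.g. a 7B-parameter model
bytes=$(( n_params / 32 * 18 ))    # Q4_0: 18 bytes per block of 32 weights
echo "~${bytes} bytes for the weights alone"
```

For a 7B model this gives roughly 3.9GB, consistent with the 4.0GB figure quoted above for discrete GPUs.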

Nvidia GPUs

The BLAS acceleration on Nvidia GPUs through oneAPI can be obtained using the Nvidia plugins for oneAPI and the cuBLAS backend of the upstream oneMKL library. Details and instructions on how to set up the runtime and library can be found in this section.

  • Tested devices

|Nvidia GPU|Status|Verified Model|
|-|-|-|
|Ampere Series|Support|A100, A4000|
|Ampere Series (Mobile)|Support|RTX 40 Series|

Notes:

  • Support for Nvidia targets through oneAPI is currently limited to Linux platforms.

  • Please make sure the native oneAPI MKL (dedicated to Intel CPUs and GPUs) is not "visible" at this stage, so that the built-from-source oneMKL with the cuBLAS backend is properly set up and used by llama.cpp for Nvidia GPUs.

Docker

The docker build option is currently limited to Intel GPU targets.

Build image

# Using FP16
docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/main-intel.Dockerfile .

Notes:

To build the default FP32 image (slower than the FP16 alternative), remove the --build-arg="LLAMA_SYCL_F16=ON" argument from the previous command.

You can also use the .devops/server-intel.Dockerfile, which builds the "server" alternative.

Run container

# First, find all the DRI cards
ls -la /dev/dri
# Then, pick the card that you want to use (e.g. /dev/dri/card1).
docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33

Notes:

  • Docker has been tested successfully on native Linux. WSL support has not been verified yet.
  • You may need to install the Intel GPU driver on the host machine (please refer to the Linux configuration for details).

Linux

I. Setup Environment

  1. Install GPU drivers

    • Intel GPU

The installation guide and download page for Intel data center GPU drivers can be found here: Get intel dGPU Drivers.

Note: for client GPUs (iGPU & Arc A-Series), please refer to the client iGPU driver installation.

Once installed, add the user(s) to the video and render groups.

sudo usermod -aG render $USER
sudo usermod -aG video $USER

Note: log out and log back in for the changes to take effect.

Verify installation through clinfo:

sudo apt install clinfo
sudo clinfo -l

Sample output:

Platform #0: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Arc(TM) A770 Graphics

Platform #0: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
  • Nvidia GPU

In order to target Nvidia GPUs through SYCL, please make sure the CUDA/CUBLAS native requirements -found here- are installed. Installation can be verified by running the following:

nvidia-smi

Please make sure at least one CUDA device is available, which can be displayed like this (here an A100-40GB Nvidia GPU):

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:8D:00.0 Off |                    0 |
| N/A   36C    P0              57W / 250W |      4MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
  2. Install Intel® oneAPI Base toolkit
  • Base installation

The base toolkit can be obtained from the official Intel® oneAPI Base Toolkit page.

Please follow the instructions for downloading and installing the Toolkit for Linux, and preferably keep the default installation values unchanged, notably the installation path (/opt/intel/oneapi by default).

The following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.

Upon a successful installation, SYCL is enabled for the available Intel devices, along with relevant libraries such as oneAPI MKL for Intel GPUs.

  • Adding support to Nvidia GPUs

oneAPI: In order to enable SYCL support on Nvidia GPUs, please install the Codeplay oneAPI Plugin for Nvidia GPUs. Users should also make sure the plugin version matches the installed base toolkit one (previous step) for a seamless "oneAPI on Nvidia GPU" setup.

oneMKL: The current oneMKL releases (shipped with the oneAPI base toolkit) do not contain the cuBLAS backend. A build from source of the upstream oneMKL with the cuBLAS backend enabled is thus required to run llama.cpp on Nvidia GPUs.

git clone https://github.com/oneapi-src/oneMKL
cd oneMKL
mkdir -p buildWithCublas && cd buildWithCublas
cmake ../ -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF -DENABLE_CUBLAS_BACKEND=ON -DTARGET_DOMAINS=blas
make
  3. Verify installation and environment

In order to check the available SYCL devices on the machine, please use the sycl-ls command.

source /opt/intel/oneapi/setvars.sh
sycl-ls
  • Intel GPU

When targeting an Intel GPU, the user should expect one or more level-zero devices among the available SYCL devices. Please make sure that at least one GPU is present, for instance [ext_oneapi_level_zero:gpu:0] in the sample output below:

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [23.30.26918.50]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918]
  • Nvidia GPU

Similarly, users targeting Nvidia GPUs should expect at least one SYCL-CUDA device [ext_oneapi_cuda:gpu], as below:

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.12.0.12_195853.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]
[ext_oneapi_cuda:gpu:0] NVIDIA CUDA BACKEND, NVIDIA A100-PCIE-40GB 8.0 [CUDA 12.2]

II. Build llama.cpp

Intel GPU

# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh

# Build LLAMA with MKL BLAS acceleration for Intel GPU
mkdir -p build && cd build

# Option 1: Use FP16 for better performance in long-prompt inference
cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON

# Option 2: Use FP32 by default
cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

Nvidia GPU

# Export relevant ENV variables
export LD_LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LIBRARY_PATH
export CPLUS_INCLUDE_DIR=/path/to/oneMKL/buildWithCublas/include:$CPLUS_INCLUDE_DIR
export CPLUS_INCLUDE_DIR=/path/to/oneMKL/include:$CPLUS_INCLUDE_DIR

# Build LLAMA with Nvidia BLAS acceleration through SYCL
mkdir -p build && cd build

# Option 1: Use FP16 for better performance in long-prompt inference
cmake .. -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON

# Option 2: Use FP32 by default
cmake .. -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

III. Run the inference

  1. Retrieve and prepare model

You can refer to the general Prepare and Quantize guide for model preparation, or simply download the llama-2-7b.Q4_0.gguf model as an example.

  2. Enable oneAPI running environment

    source /opt/intel/oneapi/setvars.sh
    
  3. List devices information

Similar to the native sycl-ls, available SYCL devices can be queried as follows:

./build/bin/ls-sycl-device

An example of such a log on a system with one Intel CPU and one Intel GPU looks like the following:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|

|Attribute|Note|
|-|-|
|compute capability 1.3|Level-zero driver/runtime, recommended|
|compute capability 3.0|OpenCL driver/runtime, slower than level-zero in most cases|

  4. Launch inference

There are two device selection modes:

  • Single device: Use one device target specified by the user.
  • Multiple devices: Automatically select all devices that share the same largest Max compute units.

|Device selection|Parameter|
|-|-|
|Single device|--split-mode none --main-gpu DEVICE_ID|
|Multiple devices|--split-mode layer (default)|

Examples:

  • Use device 0:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm none -mg 0
    

or run the script:

./examples/sycl/run_llama2.sh 0
  • Use multiple devices:

    ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm layer
    

Otherwise, you can run the script:

./examples/sycl/run_llama2.sh

Notes:

  • By default, mmap is used to read the model file. In some cases, it causes runtime hang issues. Please disable it by passing --no-mmap to ./bin/main if faced with the issue.
  • Upon execution, verify the selected device ID(s) in the output log, which can for instance be displayed as follows:

    detect 1 SYCL GPUs: [0] with top Max compute units:512
    

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Windows

I. Setup Environment

  1. Install GPU driver

The Intel GPU driver instructions guide and download page can be found here: Get intel GPU Drivers.

  2. Install Visual Studio

If you already have a recent version of Microsoft Visual Studio, you can skip this step. Otherwise, please refer to the official download page for Microsoft Visual Studio.

  3. Install Intel® oneAPI Base toolkit

The base toolkit can be obtained from the official Intel® oneAPI Base Toolkit page.

Please follow the instructions for downloading and installing the Toolkit for Windows, and preferably keep the default installation values unchanged, notably the installation path (C:\Program Files (x86)\Intel\oneAPI by default).

The following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable.

b. Enable oneAPI running environment:

  • Type "oneAPI" in the search bar, then open the Intel oneAPI command prompt for Intel 64 for Visual Studio 2022 App.

  • On the command prompt, enable the runtime environment with the following:

    "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
    

c. Verify installation

In the oneAPI command line, run the following to print the available SYCL devices:

sycl-ls

There should be one or more level-zero GPU devices displayed as [ext_oneapi_level_zero:gpu]. Below is an example of such output, detecting an Intel Iris Xe GPU as a level-zero SYCL device:

Output (example):

[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2023.16.10.0.17_160000]
[opencl:cpu:1] Intel(R) OpenCL, 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Iris(R) Xe Graphics OpenCL 3.0 NEO  [31.0.101.5186]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Iris(R) Xe Graphics 1.3 [1.3.28044]
  4. Install build tools

a. Download & install cmake for Windows: https://cmake.org/download/

b. Download & install mingw-w64 make for Windows provided by w64devkit

  • Download the 1.19.0 version of w64devkit.

  • Extract w64devkit on your PC.

  • Add the bin folder path to the Windows system PATH environment variable (e.g. C:\xxx\w64devkit\bin\).

II. Build llama.cpp

On the oneAPI command line window, step into the llama.cpp main directory and run the following:

mkdir -p build
cd build
@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

cmake -G "MinGW Makefiles" ..  -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx  -DCMAKE_BUILD_TYPE=Release -DLLAMA_SYCL_F16=ON

make

Otherwise, run the win-build-sycl.bat wrapper which encapsulates the former instructions:

.\examples\sycl\win-build-sycl.bat

Notes:

  • By default, calling make will build all target binary files. For a minimal experimental setup, the user can build only the inference executable via make main.

III. Run the inference

  1. Retrieve and prepare model

You can refer to the general Prepare and Quantize guide for model preparation, or simply download the llama-2-7b.Q4_0.gguf model as an example.

  2. Enable oneAPI running environment

On the oneAPI command line window, run the following and step into the llama.cpp directory:

"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
  3. List devices information

Similar to the native sycl-ls, available SYCL devices can be queried as follows:

build\bin\ls-sycl-device.exe

The output of this command on a system with one Intel CPU and one Intel GPU would look like the following:

found 6 SYCL devices:
|  |                  |                                             |Compute   |Max compute|Max work|Max sub|               |
|ID|       Device Type|                                         Name|capability|units      |group   |group  |Global mem size|
|--|------------------|---------------------------------------------|----------|-----------|--------|-------|---------------|
| 0|[level_zero:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       1.3|        512|    1024|     32|    16225243136|
| 1|[level_zero:gpu:1]|                    Intel(R) UHD Graphics 770|       1.3|         32|     512|     32|    53651849216|
| 2|    [opencl:gpu:0]|               Intel(R) Arc(TM) A770 Graphics|       3.0|        512|    1024|     32|    16225243136|
| 3|    [opencl:gpu:1]|                    Intel(R) UHD Graphics 770|       3.0|         32|     512|     32|    53651849216|
| 4|    [opencl:cpu:0]|         13th Gen Intel(R) Core(TM) i7-13700K|       3.0|         24|    8192|     64|    67064815616|
| 5|    [opencl:acc:0]|               Intel(R) FPGA Emulation Device|       1.2|         24|67108864|     64|    67064815616|

|Attribute|Note|
|-|-|
|compute capability 1.3|Level-zero runtime, recommended|
|compute capability 3.0|OpenCL runtime, slower than level-zero in most cases|

  4. Launch inference

There are two device selection modes:

  • Single device: Use one device specified by the user.
  • Multiple devices: Automatically select all devices that share the same largest Max compute units.

|Device selection|Parameter|
|-|-|
|Single device|--split-mode none --main-gpu DEVICE_ID|
|Multiple devices|--split-mode layer (default)|

Examples:

  • Use device 0:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm none -mg 0
    
  • Use multiple devices:

    build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 -sm layer
    

Otherwise, run the following wrapper script:

.\examples\sycl\win-run-llama2.bat

Note:

  • By default, mmap is used to read the model file. In some cases, it causes runtime hang issues. Please disable it by passing --no-mmap to main.exe if faced with the issue.
  • Upon execution, verify the selected device ID(s) in the output log, which can for instance be displayed as follows:

    detect 1 SYCL GPUs: [0] with top Max compute units:512
    

Or

use 1 SYCL GPUs: [0] with Max compute units:512

Environment Variable

Build

|Name|Value|Function|
|-|-|-|
|LLAMA_SYCL|ON (mandatory)|Enable build with SYCL code path.|
|LLAMA_SYCL_TARGET|INTEL (default) \| NVIDIA|Set the SYCL target device type.|
|LLAMA_SYCL_F16|OFF (default) \| ON (optional)|Enable FP16 build with SYCL code path.|
|CMAKE_C_COMPILER|icx|Set icx compiler for SYCL code path.|
|CMAKE_CXX_COMPILER|icpx (Linux), icx (Windows)|Set icpx/icx compiler for SYCL code path.|

Runtime

|Name|Value|Function|
|-|-|-|
|GGML_SYCL_DEBUG|0 (default) or 1|Enable log output via the GGML_SYCL_DEBUG macro.|
|ZES_ENABLE_SYSMAN|0 (default) or 1|Support querying the GPU's free memory via sycl::aspect::ext_intel_free_memory. Recommended when --split-mode = layer.|
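For example, both runtime variables can be combined with a regular layer-split run (the model path and prompt below are illustrative):

```shell
# Enable SYCL debug logging and free-memory queries for a layer-split run
export GGML_SYCL_DEBUG=1
export ZES_ENABLE_SYSMAN=1
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -e -ngl 33 -sm layer
```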

Known Issues

  • Hanging during startup

llama.cpp uses mmap as the default mode for reading the model file and copying it to the GPU. On some systems, memcpy might behave abnormally and therefore hang.

  • Solution: add --no-mmap or --mmap 0 flag to the main executable.

  • Split-mode [row] is not supported.

Q&A

  • Error: error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory.

    • Potential cause: oneAPI is not installed, or its environment variables are not set.
    • Solution: Install the oneAPI base toolkit and enable its environment through: source /opt/intel/oneapi/setvars.sh.
  • General compiler error:

    • Remove the build folder or try a clean build.
  • I cannot see [ext_oneapi_level_zero:gpu] after installing the GPU driver on Linux.

Please double-check with sudo sycl-ls.

If it's present in the list, please add your user to the video and render groups, then log out/in or restart your system:

  sudo usermod -aG render $USER
  sudo usermod -aG video $USER

Otherwise, please double-check the GPU driver installation steps.

GitHub contribution:

Please add the [SYCL] prefix/tag in issue/PR titles to help the SYCL team check and address them without delay.

Todo

  • Support the row split mode for multi-card runs.