> [!IMPORTANT]
> This build documentation is specific only to IBM Z & LinuxONE mainframes (s390x). You can find the build documentation for other architectures in [build.md](build.md).
The main product of this project is the llama library. Its C-style interface can be found in include/llama.h.
The project also includes many example programs and tools using the llama library. The examples range from simple, minimal code snippets to sophisticated sub-projects such as an OpenAI-compatible HTTP server.
To get the code:

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```
Building llama.cpp with BLAS support is highly recommended, as it has been shown to provide performance improvements.
```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS

cmake --build build --config Release -j $(nproc)
```
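As a quick sanity check after the build, you can run one of the produced binaries; this is only a sketch, the model path is a placeholder and the `build/bin` output directory assumes the default CMake layout used above.

```bash
# Placeholder model path: any big-endian GGUF model will do
# (see the model conversion section below).
./build/bin/llama-cli -m /path/to/model-be.f16.gguf -p "Hello" -n 32
# The startup log prints a "system_info" line; with the flags above it should
# report BLAS support, and VXE acceleration on supported hardware.
```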
Notes:
By default, VXE/VXE2 is enabled. To disable it (not recommended):
```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS \
    -DGGML_VXE=OFF

cmake --build build --config Release -j $(nproc)
```
By default, NNPA is enabled when available. To disable it (not recommended):
```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS \
    -DGGML_NNPA=OFF

cmake --build build --config Release -j $(nproc)
```
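Before toggling these flags you can check which facilities your machine actually exposes; this is a generic Linux check, and the exact feature names (`vx`, `vxe`, `nnpa`) are assumptions based on common s390x kernels.

```bash
# Print the hardware feature list the kernel reports for this machine.
grep -m1 features /proc/cpuinfo
# Look for "vx"/"vxe" (vector facilities) and "nnpa" (z16 NNPA) in the output.
```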
For debug builds:
```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Debug \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS

cmake --build build --config Debug -j $(nproc)
```
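A Debug build can then be run under a debugger for symbolized backtraces; the binary and model path below are placeholders.

```bash
# Run the CLI under gdb; "run" starts it, "bt" prints a backtrace on a crash.
gdb --args ./build/bin/llama-cli -m /path/to/model-be.f16.gguf -p "Hello"
```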
For static builds, add `-DBUILD_SHARED_LIBS=OFF`:

```bash
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_BLAS=ON \
    -DGGML_BLAS_VENDOR=OpenBLAS \
    -DBUILD_SHARED_LIBS=OFF

cmake --build build --config Release -j $(nproc)
```
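To verify that the llama and ggml libraries were indeed linked statically, you can inspect one of the resulting binaries; this is a generic check, not something llama.cpp provides itself.

```bash
# With -DBUILD_SHARED_LIBS=OFF, no libllama/libggml shared objects should
# appear in the dependency list (system libraries such as libc will remain).
ldd build/bin/llama-cli
```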
All models need to be converted to Big-Endian. You can achieve this in three ways:
**1. Use pre-converted models verified for use on IBM Z & LinuxONE (easiest)**
You can find popular models pre-converted and verified at s390x Ready Models.
These models and their respective tokenizers are verified to run correctly on IBM Z & LinuxONE.
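One way to fetch such a model is with the Hugging Face CLI; the repository id below is purely a placeholder, substitute the actual s390x-ready model you want.

```bash
# Requires the huggingface_hub package (pip install huggingface_hub).
# <org>/<model-be-gguf> is a placeholder repository id.
huggingface-cli download <org>/<model-be-gguf> --local-dir ./models
```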
**2. Convert a safetensors model to GGUF Big-Endian directly (recommended)**
```bash
python3 convert_hf_to_gguf.py \
    --outfile model-name-be.f16.gguf \
    --outtype f16 \
    --bigendian \
    model-directory/
```
For example:

```bash
python3 convert_hf_to_gguf.py \
    --outfile granite-3.3-2b-instruct-be.f16.gguf \
    --outtype f16 \
    --bigendian \
    granite-3.3-2b-instruct/
```
**3. Convert an existing GGUF Little-Endian model to Big-Endian**
```bash
python3 gguf-py/gguf/scripts/gguf_convert_endian.py model-name.f16.gguf BIG
```
For example:

```bash
python3 gguf-py/gguf/scripts/gguf_convert_endian.py granite-3.3-2b-instruct-le.f16.gguf BIG
mv granite-3.3-2b-instruct-le.f16.gguf granite-3.3-2b-instruct-be.f16.gguf
```
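Whichever route you take, you can quickly confirm the endianness of the resulting file by inspecting the GGUF version field, the 32-bit integer that directly follows the 4-byte `GGUF` magic; `xxd` here is just one convenient way to look at those bytes, and GGUF version 3 is assumed.

```bash
# Dump the 4-byte GGUF version field that follows the "GGUF" magic.
xxd -s 4 -l 4 granite-3.3-2b-instruct-be.f16.gguf
# Big-endian file:    00 00 00 03  (version 3, most significant byte first)
# Little-endian file: 03 00 00 00
```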
Notes:
- Only available on IBM z15 or later systems with the `-DGGML_VXE=ON` compile flag (enabled by default). No hardware acceleration is possible with llama.cpp on older systems such as IBM z14/arch12; on those systems the APIs still run, but fall back to a scalar implementation.
- Only available on IBM z16 or later systems with the `-DGGML_NNPA=ON` compile flag (enabled when available). No hardware acceleration is possible with llama.cpp on older systems such as IBM z15/arch13; on those systems the APIs still run, but fall back to a scalar implementation.
- Only available on IBM z16 or later systems. No direction at the moment.
- No direction at the moment.
It is strongly recommended to use only LPAR (Type-1) virtualization to get the most performance.
Note: Type-2 virtualization is not supported at the moment. While you can get it running, the performance will not be the best.
It is recommended to allocate a minimum of 8 shared IFLs to the LPAR. Increasing the IFL count past 8 shared IFLs will only improve Prompt Processing performance, not Token Generation.
Note: IFL count does not equate to vCPU count.
It is strongly recommended to disable SMT via the kernel boot parameters as it negatively affects performance. Please refer to your Linux distribution's guide on disabling SMT via kernel boot parameters.
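You can check whether SMT is currently active through the standard Linux sysfs interface before changing any boot parameters; this is generic Linux tooling, not specific to llama.cpp.

```bash
# "on" means SMT is active; "off" or "notsupported" means it is not.
cat /sys/devices/system/cpu/smt/control
# Cross-check the topology reported by lscpu.
lscpu | grep -i 'thread(s) per core'
```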
IBM VXE/VXE2 SIMD acceleration depends on the BLAS implementation. It is strongly recommended to use BLAS.
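If OpenBLAS is not installed yet, the development package is normally available from your distribution; the package names below are the common Debian/Ubuntu and RHEL/Fedora ones and are assumptions that may differ on your system.

```bash
# Debian/Ubuntu based distributions
sudo apt install -y libopenblas-dev
# RHEL/Fedora based distributions
sudo dnf install -y openblas-devel
```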
**Bugs, Feature Requests**
Please file an issue in llama.cpp and ensure that the title contains "s390x".
**Other Questions**
Please reach out directly to aionz@us.ibm.com.