
llama.cpp

Manifesto / ggml / ops

LLM inference in C/C++

Recent API changes

Hot topics


Quick start

Getting started with llama.cpp is straightforward. It can be installed in several ways, e.g. from prebuilt release binaries, through package managers, or by building from source.

Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.

Example command:

```bash
# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```

Description

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.

  • Plain C/C++ implementation without any dependencies
  • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
  • AVX, AVX2, AVX512 and AMX support for x86 architectures
  • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
  • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
  • Vulkan and SYCL backend support
  • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
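
For example, a minimal sketch of hybrid offload with llama-cli (the layer count is illustrative; -ngl / --n-gpu-layers sets how many layers are placed on the GPU, with the remainder staying on the CPU):

```bash
# offload 20 transformer layers to the GPU, keep the rest on the CPU
llama-cli -m model.gguf -ngl 20 -p "Hello"
```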

The llama.cpp project is the main playground for developing new features for the ggml library.

Models

Typically, finetunes of the base models below are supported as well.

Instructions for adding support for new models: [HOWTO-add-model.md](docs/development/HOWTO-add-model.md)

Text-only

  • LLaMA 🦙
  • LLaMA 2 🦙🦙
  • LLaMA 3 🦙🦙🦙
  • [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)
  • [Mixtral MoE](https://huggingface.co/models?search=mistral-ai/Mixtral)
  • [DBRX](https://huggingface.co/databricks/dbrx-instruct)
  • [Falcon](https://huggingface.co/models?search=tiiuae/falcon)
  • [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and [Chinese LLaMA-2 / Alpaca-2](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)
  • [Vigogne (French)](https://github.com/bofenghuang/vigogne)
  • [BERT](https://github.com/ggml-org/llama.cpp/pull/5423)
  • [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/)
  • [Baichuan 1 & 2](https://huggingface.co/models?search=baichuan-inc/Baichuan) + [derivations](https://huggingface.co/hiyouga/baichuan-7b-sft)
  • [Aquila 1 & 2](https://huggingface.co/models?search=BAAI/Aquila)
  • [Starcoder models](https://github.com/ggml-org/llama.cpp/pull/3187)
  • [Refact](https://huggingface.co/smallcloudai/Refact-1_6B-fim)
  • [MPT](https://github.com/ggml-org/llama.cpp/pull/3417)
  • [Bloom](https://github.com/ggml-org/llama.cpp/pull/3553)
  • [Yi models](https://huggingface.co/models?search=01-ai/Yi)
  • [StableLM models](https://huggingface.co/stabilityai)
  • [Deepseek models](https://huggingface.co/models?search=deepseek-ai/deepseek)
  • [Qwen models](https://huggingface.co/models?search=Qwen/Qwen)
  • [PLaMo-13B](https://github.com/ggml-org/llama.cpp/pull/3557)
  • [Phi models](https://huggingface.co/models?search=microsoft/phi)
  • [PhiMoE](https://github.com/ggml-org/llama.cpp/pull/11003)
  • [GPT-2](https://huggingface.co/gpt2)
  • [Orion 14B](https://github.com/ggml-org/llama.cpp/pull/5118)
  • [InternLM2](https://huggingface.co/models?search=internlm2)
  • [CodeShell](https://github.com/WisdomShell/codeshell)
  • [Gemma](https://ai.google.dev/gemma)
  • [Mamba](https://github.com/state-spaces/mamba)
  • [Grok-1](https://huggingface.co/keyfan/grok-1-hf)
  • [Xverse](https://huggingface.co/models?search=xverse)
  • [Command-R models](https://huggingface.co/models?search=CohereForAI/c4ai-command-r)
  • [SEA-LION](https://huggingface.co/models?search=sea-lion)
  • [GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) + [GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B)
  • [OLMo](https://allenai.org/olmo)
  • [OLMo 2](https://allenai.org/olmo)
  • [OLMoE](https://huggingface.co/allenai/OLMoE-1B-7B-0924)
  • [Granite models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec3)
  • [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) + [Pythia](https://github.com/EleutherAI/pythia)
  • [Snowflake-Arctic MoE](https://huggingface.co/collections/Snowflake/arctic-66290090ab)
  • [Smaug](https://huggingface.co/models?search=Smaug)
  • [Poro 34B](https://huggingface.co/LumiOpen/Poro-34B)
  • [Bitnet b1.58 models](https://huggingface.co/1bitLLM)
  • [Flan T5](https://huggingface.co/models?search=flan-t5)
  • [Open Elm models](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d)
  • [ChatGLM3-6b](https://huggingface.co/THUDM/chatglm3-6b) + [ChatGLM4-9b](https://huggingface.co/THUDM/glm-4-9b) + [GLMEdge-1.5b](https://huggingface.co/THUDM/glm-edge-1.5b-chat) + [GLMEdge-4b](https://huggingface.co/THUDM/glm-edge-4b-chat)
  • [GLM-4-0414](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34)
  • [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad)
  • [EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)
  • [FalconMamba Models](https://huggingface.co/collections/tiiuae/falconmamba-7b-66b9a58032)
  • [Jais](https://huggingface.co/inceptionai/jais-13b-chat)
  • [Bielik-11B-v2.3](https://huggingface.co/collections/speakleash/bielik-11b-v23-66ee813238)
  • [RWKV-6](https://github.com/BlinkDL/RWKV-LM)
  • [QRWKV-6](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1)
  • [GigaChat-20B-A3B](https://huggingface.co/ai-sage/GigaChat-20B-A3B-instruct)
  • [Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview)
  • [Ling models](https://huggingface.co/collections/inclusionAI/ling-67c51c85b3)
  • [LFM2 models](https://huggingface.co/collections/LiquidAI/lfm2-686d721927)
  • [Hunyuan models](https://huggingface.co/collections/tencent/hunyuan-dense-model-6890632cda)

Multimodal

  • [LLaVA 1.5 models](https://huggingface.co/collections/liuhaotian/llava-15-653aac15d9), [LLaVA 1.6 models](https://huggingface.co/collections/liuhaotian/llava-16-65b9e40155)
  • [BakLLaVA](https://huggingface.co/models?search=SkunkworksAI/Bakllava)
  • [Obsidian](https://huggingface.co/NousResearch/Obsidian-3B-V0.5)
  • [ShareGPT4V](https://huggingface.co/models?search=Lin-Chen/ShareGPT4V)
  • [MobileVLM 1.7B/3B models](https://huggingface.co/models?search=mobileVLM)
  • [Yi-VL](https://huggingface.co/models?search=Yi-VL)
  • [Mini CPM](https://huggingface.co/models?search=MiniCPM)
  • [Moondream](https://huggingface.co/vikhyatk/moondream2)
  • [Bunny](https://github.com/BAAI-DCAI/Bunny)
  • [GLM-EDGE](https://huggingface.co/models?search=glm-edge)
  • [Qwen2-VL](https://huggingface.co/collections/Qwen/qwen2-vl-66cee74555)
  • [LFM2-VL](https://huggingface.co/collections/LiquidAI/lfm2-vl-68963bbc84)

Bindings

  • Python: [ddh0/easy-llama](https://github.com/ddh0/easy-llama)
  • Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
  • Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
  • Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
  • JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
  • JS/TS (Programmable Prompt Engine CLI): [offline-ai/cli](https://github.com/offline-ai/cli)
  • JavaScript/Wasm (works in browser): [tangledgroup/llama-cpp-wasm](https://github.com/tangledgroup/llama-cpp-wasm)
  • Typescript/Wasm (nicer API, available on npm): [ngxson/wllama](https://github.com/ngxson/wllama)
  • Ruby: [yoshoku/llama_cpp.rb](https://github.com/yoshoku/llama_cpp.rb)
  • Rust (more features): [edgenai/llama_cpp-rs](https://github.com/edgenai/llama_cpp-rs)
  • Rust (nicer API): [mdrokz/rust-llama.cpp](https://github.com/mdrokz/rust-llama.cpp)
  • Rust (more direct bindings): [utilityai/llama-cpp-rs](https://github.com/utilityai/llama-cpp-rs)
  • Rust (automated build from crates.io): [ShelbyJenkins/llm_client](https://github.com/ShelbyJenkins/llm_client)
  • C#/.NET: [SciSharp/LLamaSharp](https://github.com/SciSharp/LLamaSharp)
  • C#/VB.NET (more features - community license): [LM-Kit.NET](https://docs.lm-kit.com/lm-kit-net/index.html)
  • Scala 3: [donderom/llm4s](https://github.com/donderom/llm4s)
  • Clojure: [phronmophobic/llama.clj](https://github.com/phronmophobic/llama.clj)
  • React Native: [mybigday/llama.rn](https://github.com/mybigday/llama.rn)
  • Java: [kherud/java-llama.cpp](https://github.com/kherud/java-llama.cpp)
  • Zig: [deins/llama.cpp.zig](https://github.com/Deins/llama.cpp.zig)
  • Flutter/Dart: [netdur/llama_cpp_dart](https://github.com/netdur/llama_cpp_dart)
  • Flutter: [xuegao-tzx/Fllama](https://github.com/xuegao-tzx/Fllama)
  • PHP (API bindings and features built on top of llama.cpp): [distantmagic/resonance](https://github.com/distantmagic/resonance) [(more info)](https://github.com/ggml-org/llama.cpp/pull/6326)
  • Guile Scheme: [guile_llama_cpp](https://savannah.nongnu.org/projects/guile-llama-cpp)
  • Swift: [srgtuszy/llama-cpp-swift](https://github.com/srgtuszy/llama-cpp-swift)
  • Swift: [ShenghaiWang/SwiftLlama](https://github.com/ShenghaiWang/SwiftLlama)
  • Delphi: [Embarcadero/llama-cpp-delphi](https://github.com/Embarcadero/llama-cpp-delphi)

UIs

(To have a project listed here, it should clearly state that it depends on `llama.cpp`.)

  • [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
  • [cztomsik/ava](https://github.com/cztomsik/ava) (MIT)
  • [Dot](https://github.com/alexpinel/Dot) (GPL)
  • [eva](https://github.com/ylsdamxssjxxdd/eva) (MIT)
  • [iohub/collama](https://github.com/iohub/coLLaMA) (Apache-2.0)
  • [janhq/jan](https://github.com/janhq/jan) (AGPL)
  • [johnbean393/Sidekick](https://github.com/johnbean393/Sidekick) (MIT)
  • [KanTV](https://github.com/zhouwg/kantv?tab=readme-ov-file) (Apache-2.0)
  • [KodiBot](https://github.com/firatkiral/kodibot) (GPL)
  • [llama.vim](https://github.com/ggml-org/llama.vim) (MIT)
  • [LARS](https://github.com/abgulati/LARS) (AGPL)
  • [Llama Assistant](https://github.com/vietanhdev/llama-assistant) (GPL)
  • [LLMFarm](https://github.com/guinmoon/LLMFarm?tab=readme-ov-file) (MIT)
  • [LLMUnity](https://github.com/undreamai/LLMUnity) (MIT)
  • [LMStudio](https://lmstudio.ai/) (proprietary)
  • [LocalAI](https://github.com/mudler/LocalAI) (MIT)
  • [LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp) (AGPL)
  • [MindMac](https://mindmac.app) (proprietary)
  • [MindWorkAI/AI-Studio](https://github.com/MindWorkAI/AI-Studio) (FSL-1.1-MIT)
  • [Mobile-Artificial-Intelligence/maid](https://github.com/Mobile-Artificial-Intelligence/maid) (MIT)
  • [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile) (Apache-2.0)
  • [nat/openplayground](https://github.com/nat/openplayground) (MIT)
  • [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) (MIT)
  • [ollama/ollama](https://github.com/ollama/ollama) (MIT)
  • [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (AGPL)
  • [PocketPal AI](https://github.com/a-ghorbani/pocketpal-ai) (MIT)
  • [psugihara/FreeChat](https://github.com/psugihara/FreeChat) (MIT)
  • [ptsochantaris/emeltal](https://github.com/ptsochantaris/emeltal) (MIT)
  • [pythops/tenere](https://github.com/pythops/tenere) (AGPL)
  • [ramalama](https://github.com/containers/ramalama) (MIT)
  • [semperai/amica](https://github.com/semperai/amica) (MIT)
  • [withcatai/catai](https://github.com/withcatai/catai) (MIT)
  • [Autopen](https://github.com/blackhole89/autopen) (GPL)

Tools

  • [akx/ggify](https://github.com/akx/ggify) - download PyTorch models from HuggingFace Hub and convert them to GGML
  • [akx/ollama-dl](https://github.com/akx/ollama-dl) - download models from the Ollama library to be used directly with llama.cpp
  • [crashr/gppm](https://github.com/crashr/gppm) - launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
  • [gpustack/gguf-parser](https://github.com/gpustack/gguf-parser-go/tree/main/cmd/gguf-parser) - review/check the GGUF file and estimate the memory usage
  • [Styled Lines](https://marketplace.unity.com/packages/tools/generative-ai/styled-lines-llama-cpp-model-292902) - proprietary licensed, async wrapper of the inference part for game development in Unity3d, with pre-built mobile and web platform wrappers and a model example

Infrastructure

  • [Paddler](https://github.com/intentee/paddler) - open-source LLMOps platform for hosting and scaling AI in your own infrastructure
  • [GPUStack](https://github.com/gpustack/gpustack) - manage GPU clusters for running LLMs
  • [llama_cpp_canister](https://github.com/onicai/llama_cpp_canister) - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
  • [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy that adds automatic model switching with llama-server
  • [Kalavai](https://github.com/kalavai-net/kalavai-client) - crowdsource end-to-end LLM deployment at any scale
  • [llmaz](https://github.com/InftyAI/llmaz) - ☸️ easy, advanced inference platform for large language models on Kubernetes

Games

  • [Lucy's Labyrinth](https://github.com/MorganRO8/Lucys_Labyrinth) - a simple maze game where agents controlled by an AI model will try to trick you.

Supported backends

| Backend | Target devices |
| ------- | -------------- |
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| MUSA | Moore Threads GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| WebGPU [In Progress] | All |
| RPC | All |
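
Backends are selected at build time via CMake options. As a sketch for building with the CUDA backend enabled (the GGML_CUDA option follows the current ggml naming; see docs/build.md for the options for other backends):

```bash
# configure with the CUDA backend and build in Release mode
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```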

Obtaining and quantizing models

The Hugging Face platform hosts a number of LLMs compatible with llama.cpp.

You can either manually download a GGUF file or directly use any llama.cpp-compatible model from Hugging Face or other model hosting sites, such as ModelScope, via the CLI argument -hf <user>/<model>[:quant]. For example:

```bash
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```

By default, the CLI downloads from Hugging Face. You can switch to another endpoint with the MODEL_ENDPOINT environment variable; for example, set MODEL_ENDPOINT=https://www.modelscope.cn/ to download model checkpoints from ModelScope or other model sharing communities.
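
For example, a sketch of pointing the downloader at ModelScope (assuming the same repository name exists on that endpoint):

```bash
# download the same model from ModelScope instead of Hugging Face
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```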

After downloading a model, use the CLI tools to run it locally - see below.

llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
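
As a sketch of the conversion step (paths and the output type are illustrative; see the script's --help for the full set of options):

```bash
# convert a Hugging Face checkpoint directory to a 16-bit GGUF file
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16
```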

The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:

To learn more about model quantization, read this documentation

llama-cli

A CLI tool for accessing and experimenting with most of llama.cpp's functionality.

  • Run in conversation mode

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding `-cnv` and specifying a suitable chat template with `--chat-template NAME`.

    ```bash
    llama-cli -m model.gguf

    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
    ```

  • Run in conversation mode with a custom chat template

    ```bash
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml

    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
    ```

  • Run simple text completion

    To disable conversation mode explicitly, use `-no-cnv`.

    ```bash
    llama-cli -m model.gguf -p "I believe the meaning of life is" -n 128 -no-cnv

    # I believe the meaning of life is to find your own truth and to live in accordance with it. For me, this means being true to myself and following my passions, even if they don't align with societal expectations. I think that's what I love about yoga – it's not just a physical practice, but a spiritual one too. It's about connecting with yourself, listening to your inner voice, and honoring your own unique journey.
    ```

  • Constrain the output with a custom grammar

    ```bash
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
    ```

    The [grammars/](grammars/) folder contains a handful of sample grammars. To write your own, check out the [GBNF Guide](grammars/README.md). For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/. A minimal end-to-end sketch follows this list.
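
As a minimal end-to-end grammar sketch (the file name and prompt are illustrative), the grammar below restricts the model to answering "yes" or "no":

```bash
# write a two-alternative GBNF grammar and sample with it
cat > yesno.gbnf <<'EOF'
root ::= "yes" | "no"
EOF

llama-cli -m model.gguf --grammar-file yesno.gbnf -p "Is the sky blue? Answer:" -n 4 -no-cnv
```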

llama-server

A lightweight, OpenAI API-compatible HTTP server for serving LLMs.

  • Start a local HTTP server with default configuration on port 8080

    ```bash
    llama-server -m model.gguf --port 8080

    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
    ```

  • Support multiple users and parallel decoding

    ```bash
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
    ```

  • Enable speculative decoding

    ```bash
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
    ```

  • Serve an embedding model

    ```bash
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
    ```

  • Serve a reranking model

    ```bash
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
    ```

  • Constrain all outputs with a grammar

    ```bash
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf

    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf
    ```
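
Once the server is running, the chat completion endpoint can be exercised with any OpenAI-compatible client, or with plain curl; the request body follows the OpenAI chat schema:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a haiku about llamas."}
        ]
      }'
```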

llama-perplexity

A tool for measuring the perplexity of a model over a given text.
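
Concretely, for a tokenized text of length N, perplexity is the standard exponentiated average negative log-likelihood under the model (lower is better; the "Final estimate" line in the output below reports this value with its standard error):

$$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\right)$$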

  • Measure the perplexity over a text file

    ```bash
    llama-perplexity -m model.gguf -f file.txt

    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
    ```

  • Measure KL divergence

    ```bash
    # TODO
    ```

llama-bench

Benchmark the performance of the inference for various parameters.

  • Run default benchmark

    ```bash
    llama-bench -m model.gguf

    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)
    ```

llama-run

A comprehensive example for running llama.cpp models. Useful for inference. Used with RamaLama.

  • Run a model with a specific prompt (by default it's pulled from the Ollama registry)

    ```bash
    llama-run granite-code
    ```

llama-simple

A minimal example for implementing apps with llama.cpp. Useful for developers.

  • Basic text completion

    ```bash
    llama-simple -m model.gguf

    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
    ```

Contributing

  • Contributors can open PRs
  • Collaborators can push to branches in the llama.cpp repo and merge PRs into the master branch
  • Collaborators will be invited based on contributions
  • Any help with managing issues, PRs and projects is very appreciated!
  • See good first issues for tasks suitable for first contributions
  • Read the CONTRIBUTING.md for more information
  • Make sure to read this: Inference at the edge
  • A bit of backstory for those who are interested: Changelog podcast

Other documentation

Development documentation

Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:

```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The example above uses the intermediate build b5046 of the library. To use a different version, update the URL and checksum accordingly.

Completions

Command-line completion is available for some environments.

Bash Completion

```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your .bashrc or .bash_profile to load it automatically. For example:

```bash
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

Dependencies

  • yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
  • stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
  • nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
  • minja - Minimal Jinja parser in C++, used by various tools/examples - MIT License
  • linenoise.cpp - C++ library that provides readline-like line editing capabilities, used by llama-run - BSD 2-Clause License
  • curl - Client-side URL transfer library, used by various tools/examples - CURL License
  • miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain