This directory provides multimodal capabilities for llama.cpp. Initially intended as a showcase for running LLaVA models, its scope has expanded significantly over time to include various other vision-capable models. As a result, LLaVA is no longer the only multimodal architecture supported.
> [!IMPORTANT]
>
> Multimodal support can be viewed as a sub-project within llama.cpp. It is under **very heavy development**, and **breaking changes are expected**.
The naming and structure related to multimodal support have evolved, which might cause some confusion. Here's a brief timeline to clarify:
- Initial LLaVA support introduced `llava.cpp` and `clip.cpp`. The `llava-cli` binary was created for model interaction.
- Subsequent vision models were built on the existing `llava.cpp`, `clip.cpp`, and `llava-cli` infrastructure.
- As more models were added, `llava-cli` lacked support for the increasingly complex chat templates required by these models. This led to the creation of model-specific binaries like `qwen2vl-cli`, `minicpmv-cli`, and `gemma3-cli`. While functional, this proliferation of command-line tools became confusing for users.
- `libmtmd` was introduced as a replacement for `llava.cpp`. Its goals include providing a single, unified command-line interface, improving the user/developer experience (UX/DX), and supporting both audio and image inputs.
- `mtmd-cli` was added, consolidating the various model-specific CLIs into a single tool powered by `libmtmd`.

## Pre-quantized models

See the list of pre-quantized models here.
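As an illustration, pre-quantized models can typically be run directly with `mtmd-cli`. This is a sketch assuming the binary is built as `llama-mtmd-cli` and that the common `-hf` download flag is available; the repository name is only an example:

```sh
# Fetch a pre-quantized multimodal model from Hugging Face and start an interactive session.
llama-mtmd-cli -hf ggml-org/gemma-3-4b-it-GGUF
```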
## How it works and what is `mmproj`?

Multimodal support in llama.cpp works by encoding images into embeddings using a separate model component, and then feeding these embeddings into the language model.
This approach keeps the multimodal components distinct from the core `libllama` library. Separating these allows for faster, independent development cycles. While many modern vision models are based on Vision Transformers (ViTs), their specific pre-processing and projection steps can vary significantly. Integrating this diverse complexity directly into `libllama` is currently challenging.
Consequently, running a multimodal model typically requires two GGUF files:

- The standard language model file.
- A corresponding multimodal projector (`mmproj`) file, which handles the image encoding and projection.
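For example, both files can be passed explicitly on the command line. This is a sketch assuming a local `llama-mtmd-cli` build; the file names are placeholders:

```sh
# -m:       the standard language model GGUF
# --mmproj: the multimodal projector GGUF (image encoding + projection)
# --image:  the image to feed into the model
llama-mtmd-cli -m model.gguf --mmproj mmproj.gguf \
    --image input.jpg -p "Describe this image."
```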
## What is `libmtmd`?

As outlined in the history, `libmtmd` is the modern library designed to replace the original `llava.cpp` implementation for handling multimodal inputs.
Built upon `clip.cpp` (similar to `llava.cpp`), `libmtmd` offers several advantages:
- A single, unified interface for both image and audio inputs.
- An improved user/developer experience (UX/DX), with a more intuitive API inspired by the `Processor` class in the Hugging Face `transformers` library.

## How to obtain `mmproj`

Multimodal projector (`mmproj`) files are specific to each model architecture.
For the following models, you can use `convert_hf_to_gguf.py` with the `--mmproj` flag to get the `mmproj` file (see the example after the list):
- models that only work with a `transformers`-compatible checkpoint
- InternVL (the `InternVL3-*-hf` model is not supported, only the non-HF version ; the `InternLM2Model` text model is not supported)
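A sketch of the conversion flow (the model path is a placeholder, and output file names may differ per model):

```sh
# Convert the language model to GGUF as usual.
python convert_hf_to_gguf.py path/to/hf-model

# Run the script again with --mmproj to produce the multimodal projector file.
python convert_hf_to_gguf.py path/to/hf-model --mmproj
```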
For older models, please refer to the relevant guide for instructions on how to obtain or create them.

NOTE: conversion scripts are located under `tools/mtmd/legacy-models`