
# Quantizing CLIP Visual Projector

This is the tool for quantizing the CLIP visual projector model. Quantization reduces the precision of the model's weights, which can significantly decrease the model size and improve inference speed, often with minimal impact on performance.

## Usage

To quantize a CLIP visual projector model, use the following command:

```sh
./bin/llama-llava-clip-quantize-cli /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf <type>
```

After quantization, the visual projector can be used freely with the existing LLaVA CLI tools (LLaVA, Qwen2VL, etc.).
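For illustration, a minimal invocation might look like the sketch below; the text-model, image, and prompt are placeholders, and the exact CLI binary name can vary between llama.cpp versions (newer builds merge the LLaVA tools into `llama-mtmd-cli`):

```sh
# Hypothetical example: use the quantized projector with the LLaVA CLI.
# All paths below are placeholders; adjust them to your local files.
./bin/llama-llava-cli \
    -m /path/to/llava-text-model.gguf \
    --mmproj /path/to/ggml-model-quantized.gguf \
    --image /path/to/an-image.jpg \
    -p "Describe this image."
```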

## Arguments

- `/path/to/ggml-model-f32.gguf`: The path to the input model file in FP32 or FP16 format.
- `/path/to/ggml-model-quantized.gguf`: The path where the quantized model will be saved.
- `<type>`: The quantization type to apply. This should be an integer corresponding to one of the quantization types defined in the `ggml_type` enum.

## Quantization Types

The following quantization types are supported, based on the `ggml_type` enum definition:

- `2` - `q4_0`: 4-bit quantization with a single scale value per block.
- `3` - `q4_1`: 4-bit quantization with a scale and a minimum (offset) value per block.
- `6` - `q5_0`: 5-bit quantization with a single scale value per block.
- `7` - `q5_1`: 5-bit quantization with a scale and a minimum (offset) value per block.
- `8` - `q8_0`: 8-bit quantization with a single scale value per block.
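If you are unsure which type to choose, one practical approach is to quantize the projector at every supported type and compare the resulting file sizes (and, separately, the output quality). A minimal sketch, reusing the placeholder paths from above:

```sh
# Sketch: produce one quantized projector per supported type and compare sizes.
for t in 2 3 6 7 8; do
    ./bin/llama-llava-clip-quantize-cli \
        /path/to/ggml-model-f32.gguf /path/to/ggml-model-q${t}.gguf ${t}
done
ls -lh /path/to/ggml-model-q*.gguf
```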

## Example

To quantize a model using the `q4_0` quantization type, run:

```sh
./bin/llama-llava-clip-quantize-cli /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf 2
```

This command will generate a quantized model at `/path/to/ggml-model-quantized.gguf` using the `q4_0` quantization method.

## Notes

- Quantization can lead to a loss in model accuracy, depending on the chosen quantization type. It is recommended to evaluate the quantized model's output quality on your specific task to ensure it meets your requirements (see the comparison sketch after these notes).
- The quantized model will typically be smaller and faster to run, making it more suitable for deployment in resource-constrained environments.
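One lightweight way to evaluate a quantized projector is to run the same text model, image, and prompt once with the original projector and once with the quantized one, then compare the generated descriptions. A rough sketch, assuming the placeholder paths used earlier:

```sh
# Rough comparison sketch: same text model, image, and prompt, two projectors.
for proj in /path/to/ggml-model-f32.gguf /path/to/ggml-model-quantized.gguf; do
    echo "== ${proj} =="
    ./bin/llama-llava-cli \
        -m /path/to/llava-text-model.gguf \
        --mmproj "${proj}" \
        --image /path/to/an-image.jpg \
        -p "Describe this image."
done
```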