SAMI | 1ec208083c | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 11 months ago
HimariO | ba1cb19cdd | llama : add Qwen2VL support + multimodal RoPE (#10361) | 1 year ago
Diego Devesa | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago
tc-mb | 3071c0a5f2 | llava : support MiniCPM-V-2.5 (#7599) | 1 year ago
Olivier Chafik | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago
Steward Garcia | ce18d727a4 | clip : enable gpu backend (#4205) | 2 years ago
Cuong Trinh Manh | 97bbca6e85 | cmake : fix ld warning duplicate libraries libllama.a (#4671) | 2 years ago
Damian Stewart | 381efbf480 | llava : expose as a shared library for downstream projects (#3613) | 2 years ago
cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago
Georgi Gerganov | 438c2ca830 | server : parallel decoding and multimodal (#3677) | 2 years ago
M. Yusuf Sarıgöz | 370359e5ba | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2 years ago