## Requirement

Build `libembdinput.so` by running the following command in the main directory (`../../`):

```
make
```
## LLaVA example (llava.py)

`llava_projection.pth` is extracted from `pytorch_model-00003-of-00003.bin` of the LLaVA-13b-delta-v1-1 checkpoint:

```python
import torch

bin_path = "../LLaVA-13b-delta-v1-1/pytorch_model-00003-of-00003.bin"
pth_path = "./examples/embd-input/llava_projection.pth"

# Load the checkpoint shard and keep only the multimodal projector weights.
dic = torch.load(bin_path)
used_key = ["model.mm_projector.weight", "model.mm_projector.bias"]
torch.save({k: dic[k] for k in used_key}, pth_path)
```

Check the path of the LLaVA model and `llava_projection.pth` in `llava.py`.
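The extraction script above is just a key filter over the checkpoint's state dict. A minimal self-contained sketch of the same pattern, with placeholder strings standing in for the real checkpoint tensors:

```python
# Sketch of the state-dict filtering done above; the values here are
# placeholders, not real tensors.
state_dict = {
    "model.mm_projector.weight": "projector_weight_tensor",
    "model.mm_projector.bias": "projector_bias_tensor",
    "model.layers.0.self_attn.q_proj.weight": "unrelated_tensor",
}

used_key = ["model.mm_projector.weight", "model.mm_projector.bias"]
subset = {k: state_dict[k] for k in used_key}

print(sorted(subset))  # only the two projector entries survive
```

Everything not named in `used_key` is dropped, which is why the resulting `.pth` file is far smaller than the original shard.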
## PandaGPT example (panda_gpt.py)

1. Obtain the PandaGPT lora model from https://github.com/yxuansu/PandaGPT. Rename the file to `adapter_model.bin`. Use `convert-lora-to-ggml.py` to convert it to ggml format. The `adapter_config.json` is:

   ```json
   {
     "peft_type": "LORA",
     "fan_in_fan_out": false,
     "bias": null,
     "modules_to_save": null,
     "r": 32,
     "lora_alpha": 32,
     "lora_dropout": 0.1,
     "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
   }
   ```

2. Prepare the vicuna v0 model.
3. Obtain the ImageBind model.
4. Clone the PandaGPT source:

   ```
   git clone https://github.com/yxuansu/PandaGPT
   ```

5. Install the requirements of PandaGPT.
6. Check the paths of the PandaGPT source, ImageBind model, lora model, and vicuna model in `panda_gpt.py`.
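For orientation, the `r` and `lora_alpha` fields in `adapter_config.json` describe a low-rank update `delta_W = (lora_alpha / r) * B @ A` that gets added to each weight listed in `target_modules`. A tiny pure-Python sketch of that merge, using `r = 1` and toy 2x2 matrices rather than the adapter's real `r = 32` weights:

```python
# Toy illustration of a LoRA merge: W_merged = W + (lora_alpha / r) * B @ A.
# Shapes: B is (out_features x r), A is (r x in_features).
def matmul(B, A):
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

r, lora_alpha = 1, 2
scale = lora_alpha / r                 # scaling factor applied to the update
B = [[1.0], [0.0]]                     # learned down... up factors (toy values)
A = [[0.5, 0.5]]
W = [[1.0, 0.0], [0.0, 1.0]]           # frozen base weight

delta = matmul(B, A)
W_merged = [[W[i][j] + scale * delta[i][j] for j in range(2)] for i in range(2)]
```

With `r = 32` and `lora_alpha = 32` as in the config above, the scale is 1.0, so the update is applied unscaled to `q_proj`, `k_proj`, `v_proj`, and `o_proj`.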
## MiniGPT-4 example (minigpt4.py)

1. Obtain the MiniGPT-4 model and put it in `embd-input`.
2. Clone the MiniGPT-4 source:

   ```
   git clone https://github.com/Vision-CAIR/MiniGPT-4/
   ```

3. Install the requirements of MiniGPT-4.
4. Prepare the vicuna v0 model.
5. Check the paths of the MiniGPT-4 source, MiniGPT-4 model, and vicuna model in `minigpt4.py`.