
readme : add note that LLaMA 3 is not supported with convert.py (#7065)

Lyle Dean, 1 year ago
Commit ca36326020
1 changed file with 2 additions and 0 deletions

README.md +2 -0

@@ -712,6 +712,8 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
+Note: `convert.py` does not support LLaMA 3; you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
+
 ```bash
 # obtain the official LLaMA model weights and place them in ./models
 ls ./models
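 
 # Not part of the original README excerpt: a hedged sketch of the workflow the
 # note above describes, i.e. converting a LLaMA 3 checkpoint downloaded from
 # Hugging Face with convert-hf-to-gguf.py (since convert.py does not support
 # LLaMA 3). The model directory, output path, and f16 output type below are
 # illustrative assumptions, not names taken from this commit.
 python3 convert-hf-to-gguf.py ./models/Meta-Llama-3-8B \
   --outfile ./models/meta-llama-3-8b.gguf --outtype f16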