@@ -1,6 +1,6 @@
# llama.cpp

-
+

[](https://github.com/ggerganov/llama.cpp/actions)
[](https://opensource.org/licenses/MIT)
@@ -10,7 +10,6 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
**Hot topics:**

- [Roadmap (short-term)](https://github.com/ggerganov/llama.cpp/discussions/457)
-- Support for [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)

## Description

@@ -28,20 +27,31 @@ Please do not make conclusions about the models based on the results from this i
For all I know, it can be completely wrong. This project is for educational purposes.
New features will probably be added mostly through community contributions.

-Supported platforms:
+**Supported platforms:**

- [X] Mac OS
- [X] Linux
- [X] Windows (via CMake)
- [X] Docker

-Supported models:
+**Supported models:**

- [X] LLaMA 🦙
- [X] [Alpaca](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
- [X] [GPT4All](https://github.com/ggerganov/llama.cpp#using-gpt4all)
- [X] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [X] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
+- [X] [Vicuna](https://github.com/ggerganov/llama.cpp/discussions/643#discussioncomment-5533894)
+
+**Bindings:**
+
+- Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+- Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
+
+**UI:**
+
+- [nat/openplayground](https://github.com/nat/openplayground)
+- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)

---
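The Python binding listed in the hunk above can be driven with only a few lines. The following is a minimal sketch, not part of this diff: it assumes the high-level `Llama` class exposed by abetlen/llama-cpp-python and a hypothetical path to a locally converted ggml model file.

```python
# Minimal sketch (assumed llama-cpp-python API, hypothetical model path).
from llama_cpp import Llama

# Load a locally converted and quantized ggml model file.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

# The Llama object is callable and returns an OpenAI-style completion dict.
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)

print(output["choices"][0]["text"])
```

Because the call returns an OpenAI-style completion dictionary, the generated text is read from `output["choices"][0]["text"]`.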
@@ -374,3 +384,6 @@ docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models
- Clean-up any trailing whitespaces, use 4 spaces indentation, brackets on same line, `void * ptr`, `int & a`
- See [good first issues](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions

+### Docs
+
+- [GGML tips & tricks](https://github.com/ggerganov/llama.cpp/wiki/GGML-Tips-&-Tricks)