@@ -2,7 +2,6 @@
-[](https://github.com/ggerganov/llama.cpp/actions)
[](https://opensource.org/licenses/MIT)
[Roadmap](https://github.com/users/ggerganov/projects/7) / [Project status](https://github.com/ggerganov/llama.cpp/discussions/3471) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205) / [ggml](https://github.com/ggerganov/ggml)
@@ -11,8 +10,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
### Hot topics
-- LLaVA support: https://github.com/ggerganov/llama.cpp/pull/3436
-- ‼️ BPE tokenizer update: existing Falcon and Starcoder `.gguf` models will need to be reconverted: [#3252](https://github.com/ggerganov/llama.cpp/pull/3252)
+- ⚠️ **Upcoming change that might break functionality. Help with testing is needed:** https://github.com/ggerganov/llama.cpp/pull/3912
----