@@ -18,12 +18,12 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)

### Hot topics

+- Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404 (usage sketch after this list)
- Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
- Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
- Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
- Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
-- Support loading sharded model, using `gguf-split` CLI https://github.com/ggerganov/llama.cpp/pull/6187
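
The linked discussion has the full sharding instructions; below is a minimal sketch of the split/load/merge workflow. The flags (`--split`, `--merge`, `--split-max-tensors`) and the shard naming scheme follow PR #6187, and all file names are placeholders; verify against `gguf-split --help` in your build.

```bash
# Split a large GGUF file into shards of at most 128 tensors each.
# Second argument is the output prefix (paths here are hypothetical).
./gguf-split --split --split-max-tensors 128 ggml-model-f16.gguf ggml-model-f16

# This writes shards named ggml-model-f16-00001-of-00005.gguf, etc.
# llama.cpp loads a sharded model when pointed at the first shard:
./main -m ggml-model-f16-00001-of-00005.gguf -p "Hello"

# Merge the shards back into a single GGUF file:
./gguf-split --merge ggml-model-f16-00001-of-00005.gguf ggml-model-f16-merged.gguf
```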

----