@@ -21,7 +21,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
- Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
- Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
-- Multi-GPU pipeline parallelizm support https://github.com/ggerganov/llama.cpp/pull/6017
+- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
- Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
- Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
- Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
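
For the `gguf-split` sharding item above, a minimal usage sketch. The flag names, the default shard size, and the shard naming scheme are assumptions drawn from the linked discussion rather than from this exact revision; check `gguf-split --help` in your build for the actual interface.

```bash
# Split a large GGUF file into shards of at most 128 tensors each.
# The last argument is an output prefix; shards are assumed to come out
# as model-00001-of-0000N.gguf, model-00002-of-0000N.gguf, ...
./gguf-split --split --split-max-tensors 128 model.gguf model

# Merge the shards back into a single GGUF file, pointing at the first shard:
./gguf-split --merge model-00001-of-00003.gguf model-merged.gguf
```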