README: add "Supported platforms" + update hot topics

Georgi Gerganov 2 years ago
parent commit 7d86e25bf6
1 changed file with 8 additions and 1 deletion

+ 8 - 1
README.md

@@ -5,10 +5,11 @@ Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model
 **Hot topics**
 
 - Running on Windows: https://github.com/ggerganov/llama.cpp/issues/22
+- Fix Tokenizer / Unicode support: https://github.com/ggerganov/llama.cpp/issues/11
 
 ## Description
 
-The main goal is to run the model using 4-bit quantization on a MacBook.
+The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
 - Apple silicon first-class citizen - optimized via Arm Neon and Accelerate framework
@@ -22,6 +23,12 @@ Please do not make conclusions about the models based on the results from this i
 For all I know, it can be completely wrong. This project is for educational purposes and is not going to be maintained properly.
 New features will probably be added mostly through community contributions, if any.
 
+Supported platforms:
+
+- [X] Mac OS
+- [X] Linux
+- [ ] Windows (soon)
+
 ---
 
 Here is a typical run using LLaMA-7B: