
readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
Srinivas Billa, 2 years ago
parent commit 9dda13e5e1
1 file changed, 4 insertions, 0 deletions

+ 4 - 0
examples/server/README.md

@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option enabled:
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
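Taken together, the two commands added by this commit form the full build-and-run sequence; a sketch assuming a Makefile build from the llama.cpp repository root and a model already downloaded to the path shown:

```shell
# Build llama.cpp with the HTTP server target enabled (Makefile build)
LLAMA_BUILD_SERVER=1 make

# Start the server, pointing at your model file; adjust the path to your setup
./server -m models/7B/ggml-model.bin --ctx_size 2048
```

Note that `LLAMA_BUILD_SERVER=1` must be set at build time; running `make` without it produces the other binaries but not `./server`.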