
readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
Srinivas Billa 2 years ago
parent
commit
9dda13e5e1
1 file changed with 4 additions and 0 deletions

+ 4 - 0
examples/server/README.md

@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048