readme : server compile flag (#1874)

Explicitly include the server make instructions for C++ noobs like me ;)
Srinivas Billa 2 years ago
parent
commit
9dda13e5e1
1 changed file with 4 additions and 0 deletions
  1. +4 -0
      examples/server/README.md

+ 4 - 0
examples/server/README.md

@@ -16,6 +16,10 @@ This example allows you to have a llama.cpp http server to interact from a web pa
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option enabled:
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048