
server : add `LOG_INFO` when model is successfully loaded (#4881)

* added /health endpoint to the server

* added comments on the additional /health endpoint

* Better handling of server state

While the model is being loaded, the server state is `LOADING_MODEL`. If model loading fails, the state becomes `ERROR`; otherwise it becomes `READY`. The `/health` endpoint now returns more granular messages based on the server_state value (see the sketch after this list).

* initialized server_state

* fixed a typo

* starting http server before initializing the model

* Update server.cpp

* Update server.cpp

* fixes

* fixes

* fixes

* made ServerState atomic and collapsed double blank lines into single ones

* updated `server` readme to document the `/health` endpoint too

* used LOG_INFO after successful model loading
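The state machine described above is small; the sketch below shows one way the `/health` endpoint could map the shared `server_state` to an HTTP status and JSON body. This is a minimal illustration only: the enum values, the `health_response` helper, and the exact JSON bodies are assumptions, not the actual code in `examples/server/server.cpp`.

```cpp
#include <atomic>
#include <string>
#include <utility>

// Hypothetical names for illustration; the real definitions live in
// examples/server/server.cpp.
enum ServerState {
    SERVER_STATE_LOADING_MODEL,  // model weights are still being read
    SERVER_STATE_READY,          // model loaded, server can answer requests
    SERVER_STATE_ERROR           // model loading failed
};

// Shared between the HTTP thread and the loader thread, hence std::atomic
// (see the "made ServerState atomic" item above).
static std::atomic<ServerState> state{SERVER_STATE_LOADING_MODEL};

// Map the current state to an HTTP status code and a JSON body,
// roughly what a granular /health endpoint would return.
static std::pair<int, std::string> health_response() {
    switch (state.load()) {
        case SERVER_STATE_READY:
            return {200, R"({"status": "ok"})"};
        case SERVER_STATE_LOADING_MODEL:
            return {503, R"({"status": "loading model"})"};
        default:
            return {500, R"({"status": "error", "error": "model failed to load"})"};
    }
}
```

On the real server, the loader thread stores `SERVER_STATE_READY` once initialization succeeds (followed by the `LOG_INFO("model loaded", {})` call added in this commit), and the HTTP handler would then answer each `/health` request from the current state.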
Behnam M, 2 years ago
commit eab6795006
1 changed file with 1 addition and 0 deletions:
  examples/server/server.cpp

examples/server/server.cpp (+1 −0)

@@ -2906,6 +2906,7 @@ int main(int argc, char **argv)
     } else {
         llama.initialize();
         state.store(SERVER_STATE_READY);
+        LOG_INFO("model loaded", {});
     }
 
     // Middleware for API key validation