This document provides an in-depth technical overview of llama-server, intended for maintainers and contributors.
If you are an end user consuming llama-server as a product, please refer to the main README instead.
The server supports two primary operating modes:

- **Inference mode**: the server loads a model and serves inference requests directly via `server_context`.
- **Router mode**: the server acts as a front-end that routes requests to multiple backend instances, managed by `server_models`.
The core architecture consists of the following components:

- `server_context`: Holds the primary inference state, including the main `llama_context` and all active slots.
- `server_slot`: An abstraction over a single "sequence" in llama.cpp, responsible for managing an individual parallel inference request.
- `server_routes`: Middleware layer between `server_context` and the HTTP interface; handles JSON parsing/formatting and request routing logic.
- `server_http_context`: Implements the HTTP server using cpp-httplib.
- `server_queue`: Thread-safe queue used by HTTP workers to submit new tasks to `server_context`.
- `server_response`: Thread-safe queue used by `server_context` to return results to HTTP workers.
- `server_response_reader`: Higher-level wrapper around the two queues above for cleaner code.
- `server_task`: Unit of work pushed into `server_queue`.
- `server_task_result`: Unit of result pushed into `server_response`.
- `server_tokens`: Unified representation of token sequences (supports both text and multimodal tokens); used by `server_task` and `server_slot`.
- `server_prompt_checkpoint`: For recurrent (e.g., RWKV) and SWA models, stores snapshots of the KV cache state. Enables reuse when subsequent requests share the same prompt prefix, saving redundant computation.
- `server_models`: Standalone component for managing multiple backend instances (used in router mode). It is completely independent of `server_context`.
```mermaid
graph TD
    API_User <--> server_http_context
    server_http_context <-- router mode --> server_models
    server_http_context <-- inference mode --> server_routes
    server_routes -- server_task --> server_queue
    subgraph server_context
        server_queue --> server_slot
        server_slot -- server_task_result --> server_response
        server_slot[multiple server_slot]
    end
    server_response --> server_routes
```
The server context maintains a single batch shared across all slots. When `update_slots()` is invoked, the system iterates through all active slots to populate this batch. For each slot, either a generated token from the previous decoding step or available prompt tokens are added to the batch.

Batching constraints apply: slots can only be batched together if they share compatible configurations. For instance, slots using a specific LoRA adapter can be batched with each other, but not with slots using a different LoRA adapter or no adapter at all.

Once the batch reaches capacity or all slots have been processed, `llama_decode` is called to execute the inference. This operation is the primary computational bottleneck in `update_slots()`.

Following decoding, the system either retrieves embeddings or samples the next token using `common_sampler_sample`. If a slot has remaining prompt tokens to process, it yields until the next `update_slots()` iteration.

`server_context` runs on a dedicated single thread. Because it is single-threaded, heavy post-processing (especially after token generation) should be avoided, as it directly impacts multi-sequence throughput.
Each incoming HTTP request is handled by its own thread managed by the HTTP library. The following operations are performed in HTTP worker threads:
- parsing the incoming JSON request body
- formatting the `server_task_result` into the final JSON response

Best practices to follow:

- Do not pass raw JSON objects deeper than the HTTP layer (i.e., into `server_context` or `server_slot`). Instead, parse everything into native C++ types as early as possible.

Here is an example trace of an API request for text completion:
1. The HTTP request is routed by `server_routes`; in this case, `handle_completions_impl` is invoked.
2. The handler parses the request into a `server_task` and passes it to `server_res_generator`.
3. `server_res_generator` creates a new `task_result_state` for each task. The `task_result_state` stays in the HTTP layer and is responsible for keeping track of the current state of the response (e.g., parsing tool calls or thinking messages).
4. The `server_task` is moved into the `server_queue` inside `server_context`.
5. `server_context` launches the task by moving it into an available slot (see `launch_slot_with_task()`).
6. `update_slots()` processes the task as described in the "Batching" section above.
7. When output is ready, the slot calls `send_partial_response` or `send_final_response`, which creates a new `server_task_result` and pushes it to the response queue.
8. `server_res_generator` listens to the response queue and retrieves this response.
9. `server_res_generator` calls `response->update()` to update the response with the current state.
10. `server_res_generator` then calls `response->to_json()` and passes the result to the HTTP layer.

llama-server includes an automated test suite based on pytest.
The framework automatically starts a llama-server instance, sends requests, and validates responses.
For detailed instructions, see the test documentation.
Related PRs:

- `server_queue` and `server_response`: https://github.com/ggml-org/llama.cpp/pull/5065
- Multimodal support (`libmtmd`): https://github.com/ggml-org/llama.cpp/pull/12898

The project includes a web-based user interface for interacting with llama-server. It supports both single-model (MODEL mode) and multi-model (ROUTER mode) operation.
The SvelteKit-based Web UI is introduced in this PR: https://github.com/ggml-org/llama.cpp/pull/14839
The WebUI follows a layered architecture:
Routes → Components → Hooks → Stores → Services → Storage/API
- Stores (`chatStore`, `conversationsStore`, `modelsStore`, `serverStore`, `settingsStore`)
- Services (`ChatService`, `ModelsService`, `PropsService`, `DatabaseService`)
- Hooks (`useModelChangeValidation`, `useProcessingState`)

For detailed architecture diagrams, see `tools/server/webui/docs/`:
- `high-level-architecture.mmd` - full architecture with all modules
- `high-level-architecture-simplified.mmd` - simplified overview
- `data-flow-simplified-model-mode.mmd` - data flow for single-model mode
- `data-flow-simplified-router-mode.mmd` - data flow for multi-model mode
- `flows/*.mmd` - detailed per-domain flows (chat, conversations, models, etc.)

```sh
# make sure you have Node.js installed
cd tools/server/webui
npm i

# run dev server (with hot reload)
npm run dev

# run tests
npm run test

# build production bundle
npm run build
```
After public/index.html.gz has been generated, rebuild llama-server as described in the build section to include the updated UI.
Note: The Vite dev server automatically proxies API requests to http://localhost:8080. Make sure llama-server is running on that port during development.