Server tests

Python-based server test scenarios using pytest.

Tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on GitHub runners, parallel scenarios may fail randomly. To mitigate this, increase the values of n_predict and kv_size.

Install dependencies

pip install -r requirements.txt
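Optionally, install into an isolated virtual environment first (a common Python convention, not a requirement of this suite; the directory name `venv` is just an example):

```shell
# Optional: create and activate an isolated virtual environment,
# then install the test dependencies into it
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
```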

Run tests

  1. Build the server

    cd ../../..
    cmake -B build -DLLAMA_CURL=ON
    cmake --build build --target llama-server
    
  2. Start the tests: ./tests.sh

Some scenario values can be overridden with environment variables:

| variable | description |
| --- | --- |
| `PORT` | `context.server_port`, the listening port of the server during the scenario (default: `8080`) |
| `LLAMA_SERVER_BIN_PATH` | path to the server binary (default: `../../../build/bin/llama-server`) |
| `DEBUG` | enable verbose mode for steps and the server (`--verbose`) |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`) |
| `LLAMA_CACHE` | by default, server tests re-download models to the `tmp` subfolder. Set this to your cache (e.g. `$HOME/Library/Caches/llama.cpp` on Mac or `$HOME/.cache/llama.cpp` on Unix) to avoid this |
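The overrides can be combined in a single invocation; the port and layer count below are illustrative values, not recommendations:

```shell
# Illustrative: run on port 8081, offload 99 layers, reuse the local model cache
PORT=8081 N_GPU_LAYERS=99 LLAMA_CACHE="$HOME/.cache/llama.cpp" ./tests.sh
```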

To run slow tests (these download many models, so consider setting LLAMA_CACHE):

SLOW_TESTS=1 ./tests.sh

To run with stdout/stderr displayed in real time (verbose, but useful for debugging):

DEBUG=1 ./tests.sh -s -v -x

To run all the tests in a file:

./tests.sh unit/test_chat_completion.py -v -x

To run a single test:

./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
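Since tests.sh forwards its extra arguments to pytest, you can also select tests by keyword expression using pytest's standard `-k` option (the keyword below is an example):

```shell
# Run only the tests in this file whose names match the keyword "invalid"
./tests.sh unit/test_chat_completion.py -k "invalid" -v
```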

Hint: You can compile and run the tests in a single command, which is useful for local development:

cmake --build build -j --target llama-server && ./examples/server/tests/tests.sh

To see all available arguments, please refer to the pytest documentation.