Server tests

Python-based server test scenarios using pytest.

Tests target GitHub workflow job runners with 4 vCPUs.

Note: if the host's inference speed is faster than that of the GitHub runners, parallel scenarios may randomly fail. To mitigate this, you can increase the values of n_predict and kv_size.

Install dependencies

pip install -r requirements.txt

Run tests

  1. Build the server

    cd ../../..
    cmake -B build -DLLAMA_CURL=ON
    cmake --build build --target llama-server
    
  2. Start the tests: ./tests.sh

Some scenario step values can be overridden with environment variables:

| variable | description |
| --- | --- |
| `PORT` | `context.server_port`, the listening port of the server during the scenario; default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | override the server binary path; default: `../../../build/bin/llama-server` |
| `DEBUG` | enable step and server verbose mode (`--verbose`) |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`) |
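These overrides are plain environment variables. As an illustration only (this is a sketch, not the actual code in `conftest.py` or `utils.py`; the helper name is made up), reading them with their documented defaults might look like:

```python
import os


def read_server_test_config() -> dict:
    """Illustrative sketch: collect the documented overrides with defaults.

    The PORT and LLAMA_SERVER_BIN_PATH defaults come from the table above;
    the function name and the N_GPU_LAYERS default of 0 are assumptions.
    """
    return {
        "port": int(os.environ.get("PORT", "8080")),
        "server_path": os.environ.get(
            "LLAMA_SERVER_BIN_PATH", "../../../build/bin/llama-server"
        ),
        # any non-empty, non-"0" value enables verbose mode
        "debug": os.environ.get("DEBUG", "") not in ("", "0"),
        "n_gpu_layers": int(os.environ.get("N_GPU_LAYERS", "0")),
    }
```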

To run slow tests:

SLOW_TESTS=1 ./tests.sh

To run with stdout/stderr display in real time (verbose output, but useful for debugging):

DEBUG=1 ./tests.sh -s -v -x

To see all available arguments, refer to the pytest documentation.
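The examples above suggest that `tests.sh` forwards its arguments to pytest, so standard pytest flags apply. A runnable sketch of the `-k` keyword filter against a throwaway test file (the test names here are invented for illustration, not files from this suite):

```shell
# Create a disposable directory with two dummy tests.
tmp=$(mktemp -d)
cat > "$tmp/test_demo.py" <<'EOF'
def test_completion():
    assert True

def test_embedding():
    assert True
EOF
# Only tests whose names match the keyword run; the rest are deselected.
pytest "$tmp" -k "completion" -q
```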