Python-based server test scenarios using BDD and behave.

Tests target GitHub workflows job runners with 4 vCPU.

Requests are made with aiohttp, an asyncio-based HTTP client.

Note: if the host architecture's inference speed is faster than that of the GitHub runners, parallel scenarios may fail at random. To mitigate this, you can increase the values of `n_predict` and `kv_size`.
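As a minimal sketch of the kind of request the test steps issue (the `/completion` endpoint and the payload fields below are assumptions about the llama.cpp server API, not something this README specifies):

```python
# Minimal sketch of an aiohttp request similar to what the test steps issue.
# The /completion endpoint and payload fields are assumptions, not quoted from the suite.
import asyncio

import aiohttp


async def main() -> None:
    payload = {
        "prompt": "Once upon a time",
        "n_predict": 32,  # raising this can help if parallel scenarios are flaky
    }
    async with aiohttp.ClientSession() as session:
        async with session.post("http://localhost:8080/completion", json=payload) as resp:
            resp.raise_for_status()
            print(await resp.json())


asyncio.run(main())
```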
Install the test dependencies:

```shell
pip install -r requirements.txt
```
Run the tests:

1. Build the server:

   ```shell
   cd ../../..
   mkdir build
   cd build
   cmake ../
   cmake --build . --target server
   ```

2. Download the required models:

   ```shell
   ../../../scripts/hf.sh --repo ggml-org/models --file tinyllamas/stories260K.gguf
   ```

3. Start the tests:

   ```shell
   ./tests.sh
   ```
It's possible to override some scenario step values with environment variables:

- `PORT` -> `context.server_port` to set the listening port of the server during the scenario, default: `8080`
- `LLAMA_SERVER_BIN_PATH` -> to change the server binary path, default: `../../../build/bin/server`
- `DEBUG` -> `"ON"` to enable verbose mode for the steps and the server (`--verbose`)
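The README does not show how these overrides are wired; as a hypothetical sketch, they could be read in the suite's behave hooks roughly like this (`before_scenario` is behave's hook name, the rest is illustrative):

```python
# Hypothetical sketch: reading environment-variable overrides in a behave hook
# (in behave, this hook would live in environment.py). The actual wiring in
# this suite may differ.
import os


def before_scenario(context, scenario):
    # behave calls this hook before every scenario runs
    context.server_port = int(os.environ.get("PORT", "8080"))
    context.server_bin_path = os.environ.get(
        "LLAMA_SERVER_BIN_PATH", "../../../build/bin/server"
    )
    context.debug = os.environ.get("DEBUG", "") == "ON"
```

For example, `PORT=8081 DEBUG=ON ./tests.sh` would run the suite against port `8081` with verbose output.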
A Feature or Scenario must be annotated with `@llama.cpp` to be included in the default scope:

- `@bug` links a scenario to a GitHub issue.
- `@wrong_usage` marks scenarios that demonstrate a user issue which is actually expected behavior.
- `@wip` focuses on a scenario that is a work in progress.

To run a scenario annotated with `@bug`, start:
```shell
DEBUG=ON ./tests.sh --no-skipped --tags bug
```
After changing logic in `steps.py`, ensure that the `@bug` and `@wrong_usage` scenarios are updated.
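For orientation, a behave step definition in `steps.py` generally has the following shape; the step text and context attributes here are hypothetical, not copied from the suite:

```python
# Hypothetical sketch of a behave step definition in the style used by steps.py.
# The step text and context attributes are illustrative only.
from behave import step


@step("a server listening on {host}:{port}")
def step_server_listening(context, host, port):
    # behave injects the captured {host} and {port} as string arguments
    context.server_host = host
    context.server_port = int(port)
```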