
# llama.cpp/example/parallel

Simplified simulation of serving incoming requests in parallel
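A typical invocation might look like the following sketch. This is an assumption, not taken from this README: the binary name (`llama-parallel` in recent builds, `parallel` in older ones), the model path, and the exact flag set may differ across llama.cpp versions.

```shell
# Hypothetical example: simulate 8 client requests served across
# 4 parallel sequences, with continuous batching enabled.
# model.gguf is a placeholder path; adjust to your local model file.
./llama-parallel -m model.gguf \
    -np 4 \        # number of parallel sequences (simulated clients)
    -ns 8 \        # total number of requests to simulate
    -c 4096 \      # shared context size across sequences
    -cb            # enable continuous batching
```

Run the binary with `--help` to see the flags supported by your build.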