# llama.cpp/example/parallel

Simplified simulation of serving incoming requests in parallel
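
A typical invocation might look like the sketch below. The model path is a placeholder, and the exact flag set depends on your build of llama.cpp; `-np` (number of parallel sequences), `-ns` (total sequences to decode), and `-cb` (continuous batching) are the options commonly used with this example.

```sh
# Sketch: simulate 64 client requests decoded across 8 parallel sequences,
# with continuous batching enabled. model.gguf is a placeholder path.
./parallel -m model.gguf -np 8 -ns 64 -cb -c 4096
```

With `-cb`, finished sequences free their slots immediately so new requests can be batched in without waiting for the whole group to complete.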