# ctx_shift.feature

@llama.cpp
@ctx_shift
Feature: llama.cpp server

  Background: Server startup
    Given a server listening on localhost:8080
    And   a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
    And   a model file test-model.gguf
    And   a model alias tinyllama-2
    And   BOS token is 1
    And   42 as server seed
    And   256 KV cache size
    And   32 as batch size
    And   2 slots

  Scenario: Inference with context shift
    And   64 server max tokens to predict
    Then  the server is starting
    Then  the server is healthy
    Given a prompt:
    """
    Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
    """
    And   a completion request with no api error
    Then  64 tokens are predicted matching fun|Annaks|popcorns|pictry|bowl
    And   the completion is truncated
    And   109 prompt tokens are processed

  Scenario Outline: Inference without context shift
    And   <n_predict> server max tokens to predict
    And   disable context shifting
    Then  the server is starting
    Then  the server is healthy
    Given a prompt:
    """
    Hi how are you
    """
    And   a completion request with no api error
    Then  <n_token_output> tokens are predicted matching twind|Anna
    And   the completion is <truncated> truncated
    And   8 prompt tokens are processed
    Examples:
      | n_predict | n_token_output | truncated |
      | 64        | 64             | not       |
      | -1        | 120            |           |

  Scenario: Inference without context shift (expected error: prompt too long)
    And   disable context shifting
    Then  the server is starting
    Then  the server is healthy
    Given a prompt:
    """
    Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
    """
    And   a completion request with 400 api error