011-bug-results.yml

name: Bug (model use)
description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
title: "Eval bug: "
labels: ["bug-unconfirmed", "model evaluation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the model evaluation results
        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
        The `llama-cli` binary can be used for simple and reproducible model inference.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Operating systems
      description: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Metal, Musa, RPC, SYCL, Vulkan, OpenCL, zDNN]
      multiple: true
    validations:
      required: true
  - type: textarea
    id: hardware
    attributes:
      label: Hardware
      description: Which CPUs/GPUs are you using?
      placeholder: >
        e.g. Ryzen 5950X + 2x RTX 4090
    validations:
      required: true
  - type: textarea
    id: model
    attributes:
      label: Models
      description: >
        Which model(s) at which quantization were you using when encountering the bug?
        If you downloaded a GGUF file off of Huggingface, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
    validations:
      required: false
  - type: textarea
    id: info
    attributes:
      label: Problem description & steps to reproduce
      description: >
        Please give us a summary of the problem and tell us how to reproduce it.
        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
        that information would be very much appreciated by us.
      placeholder: >
        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
        When I use -ngl 0 it works correctly.
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
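
# The "First Bad Commit" field above asks reporters to run a git bisect. A minimal
# sketch of how that could look for llama.cpp; the good/bad revisions and the
# reproduction command are placeholders to be replaced with the reporter's own values:
#
#   git bisect start
#   git bisect bad HEAD                     # the current revision shows the bug
#   git bisect good <last-known-good-sha>   # a revision where it still worked
#   # at each step git checks out a candidate commit; rebuild, re-run the failing
#   # command, and mark the result until git prints the first bad commit:
#   cmake -B build && cmake --build build -j
#   ./build/bin/llama-cli --version         # followed by the reproduction command
#   git bisect good                         # or: git bisect bad
#   git bisect reset                        # finish and report the printed commit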