modelcard.template 190 B

---
base_model:
- {base_model}
---
# {model_name} GGUF
Recommended way to run this model:
```sh
llama-server -hf {namespace}/{model_name}-GGUF -c 0
```
Then, access http://localhost:8080