llama-run : fix context size (#11094)

Set `n_ctx` equal to `n_batch` in `Opt` class. Now context size is
a more reasonable 2048.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
Eric Curtin, 1 year ago
parent commit dc7cef9f37

1 changed file with 1 addition and 0 deletions

examples/run/run.cpp (+1, −0)

@@ -83,6 +83,7 @@ class Opt {
             }
         }
 
         ctx_params.n_batch        = context_size >= 0 ? context_size : context_size_default;
+        ctx_params.n_ctx          = ctx_params.n_batch;
         model_params.n_gpu_layers = ngl >= 0 ? ngl : ngl_default;
         temperature               = temperature >= 0 ? temperature : temperature_default;