
finetune: fix typo in README.md (#4733)

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Daniel Bevenius 2 years ago
commit 775ac8712a
1 changed file with 1 addition and 1 deletion

+ 1 - 1
examples/finetune/README.md

@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
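As a quick illustration of the checkpointing trade-off described in the context above, a finetune run with checkpointing disabled could look like the sketch below. Only `--no-checkpointing` is the flag this passage refers to; the other flags and file names are placeholders in the style of the examples earlier in this README and may need adjusting for your setup.

```
# Sketch: finetune with gradient checkpointing disabled.
# Uses more RAM but runs somewhat faster; file names are illustrative.
./bin/finetune \
  --model-base open-llama-3b-v2-q8_0.gguf \
  --train-data shakespeare.txt \
  --lora-out lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin \
  --no-checkpointing
```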