
finetune: fix typo in README.md (#4733)

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Daniel Bevenius, 2 years ago
parent commit 775ac8712a
1 changed file with 1 addition and 1 deletion

+ 1 - 1
examples/finetune/README.md

@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
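The README text in the diff describes toggling gradient checkpointing to trade memory for speed. A minimal sketch of how that flag might be passed, purely for illustration: the binary name is an assumption, and only `--no-checkpointing` is taken from the README text above.

```shell
# Sketch, not a verified command line: "finetune" is a placeholder for the
# actual binary. Only --no-checkpointing comes from the README text in the
# diff above; it disables gradient checkpointing, using ~50% more memory
# in exchange for a faster run.
FINETUNE_ARGS="--no-checkpointing"

# To keep checkpointing on (the default, lower-memory mode), drop the flag:
# FINETUNE_ARGS=""

# Print the command that would be run, rather than invoking a real binary.
echo "finetune $FINETUNE_ARGS"
```

Running the sketch simply prints `finetune --no-checkpointing`; in a real session the echoed command would be executed directly.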