llama.cpp/examples/training

This directory contains examples related to language model training using llama.cpp/GGML. So far finetuning is technically functional (for FP32 models and limited hardware setups) but the code is very much WIP. Finetuning of Stories 260K and LLaMA 3.2 1B seems to work with 24 GB of memory. For CPU training, compile llama.cpp without any additional backends such as CUDA. For CUDA training, offload the maximum number of GPU layers (e.g. -ngl 999, as in the commands below).
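
As a rough sketch, the two build configurations could look like the standard llama.cpp CMake invocations below (flags other than GGML_CUDA are left at their defaults; adjust to your environment):

# CPU training: build without additional backends
cmake -B build
cmake --build build --config Release -j

# CUDA training: enable the CUDA backend, then offload all layers at runtime with -ngl 999
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j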

Proof of concept:

export model_name=llama_3.2-1b && export quantization=f32
./build/bin/llama-finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
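
The commands above assume wikitext-2-raw/wiki.test.raw and an FP32 GGUF of the model already exist on disk. One way to prepare them, assuming the scripts/get-wikitext-2.sh helper from the repository and a local Hugging Face checkpoint (the checkpoint path below is a placeholder):

# Fetch and unpack the WikiText-2 raw test set (any copy of wiki.test.raw works)
./scripts/get-wikitext-2.sh

# Convert a Hugging Face checkpoint to an FP32 GGUF for finetuning
python convert_hf_to_gguf.py path/to/Llama-3.2-1B --outtype f32 --outfile models/llama_3.2-1b-f32.gguf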

After training on the test set for 2 epochs, the perplexity of the finetuned model should be lower than that of the original model.
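
For an explicit comparison, the base model's perplexity can be measured with the same flags before finetuning:

# Baseline perplexity of the original FP32 model, for comparison with finetuned-model.gguf
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf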