Author | Commit | Message | Date
Daniel Bevenius | cb1e2818e0 | train : fix typo in overlapping-samples help msg (#4758) | 2 years ago
slaren | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago
Jiří Podivín | ba4cf5c0bf | train : move number of gpu layers argument parsing to common/train.cpp (#4074) | 2 years ago
Georgi Gerganov | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago
Andrew Godfrey | 73bdcb395e | finetune : add -ngl parameter (#3762) | 2 years ago
Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago
Herman Semenov | f439e506e8 | ggml : fix rope + llama minor optimizations (#3560) | 2 years ago
staviq | 1a159553f9 | tokenizer : special token handling (#3538) | 2 years ago
slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago
xaedes | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago
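
A few of the API changes in this history are worth illustrating. Commit 16bc66d947 (#3301) split the old monolithic llama_context_params into separate model and context parameter structs, so several contexts can share one loaded model. Below is a minimal sketch of the resulting two-step setup, assuming a llama.cpp checkout from roughly that era; several of these symbols (llama_backend_init, llama_load_model_from_file, llama_new_context_with_model) were renamed or re-signatured in later releases.

```c
#include "llama.h"
#include <stdio.h>

int main(int argc, char ** argv) {
    const char * path = argc > 1 ? argv[1] : "model.gguf";

    llama_backend_init(false);  // era signature; later versions take no args

    // Model-level parameters: how the weights are loaded and placed.
    struct llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 35;  // offload layers to the GPU (cf. -ngl, #3762)

    struct llama_model * model = llama_load_model_from_file(path, mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model: %s\n", path);
        return 1;
    }

    // Context-level parameters: per-session inference state.
    struct llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;  // context window for this session only

    struct llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize, decode, sample ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

The split is what lets per-session settings such as n_ctx live apart from load-time settings such as n_gpu_layers.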
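
Commit 5be6c803fa (#3720) moved the vocabulary query functions from the context to the model, since the vocabulary belongs to the weights rather than the session. Given the model handle from the sketch above, the post-change calls look like this (again a sketch against that era of the API):

```c
// Special-token queries now take the model handle;
// before #3720 these same functions took a llama_context.
llama_token bos = llama_token_bos(model);  // was: llama_token_bos(ctx)
llama_token eos = llama_token_eos(model);  // was: llama_token_eos(ctx)
llama_token nl  = llama_token_nl(model);   // was: llama_token_nl(ctx)
```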
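
Finally, commit cafcd4f895 (#4469) dropped the cached n_dims field from ggml_tensor; the rank is instead derived on demand from the ne extents via the in-tree ggml_n_dims helper. A sketch of that derivation follows, where n_dims_of is an illustrative stand-in for the real helper:

```c
#include "ggml.h"

// Rank of a tensor without a stored n_dims: trailing dimensions of
// extent 1 do not count, and a scalar still reports one dimension.
static int n_dims_of(const struct ggml_tensor * t) {
    for (int i = GGML_MAX_DIMS - 1; i >= 1; --i) {
        if (t->ne[i] > 1) {
            return i + 1;
        }
    }
    return 1;
}
```

Deriving the rank this way removes a field that had to be kept consistent with ne on every reshape and view.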