Author | Commit | Message | Age
--- | --- | --- | ---
slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago
slaren | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago
LostRuins | 20a68a7030 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2 years ago
Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago
slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago
Taikono-Himazin | 41a11aaf99 | ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424) | 2 years ago
Georgi Gerganov | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago
Georgi Gerganov | ef47ec18da | ggml : add ggml_soft_max_ext (#4256) | 2 years ago
Jared Van Bortel | 64e64aa255 | ggml : restore abort() in GGML_ASSERT (#4242) | 2 years ago
slaren | e85bb1a8e7 | llama : add functions to get the model's metadata (#4013) | 2 years ago
Georgi Gerganov | 3d68f364f1 | ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060) | 2 years ago
Georgi Gerganov | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago
xaedes | e9c1cecb9d | ggml : fix backward rope after YaRN (#3974) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Georgi Gerganov | 71e3718abd | llama : refactor graph build code (#3837) | 2 years ago
Georgi Gerganov | d69d777c02 | ggml : quantization refactoring (#3833) | 2 years ago
Georgi Gerganov | b2f7e04bd3 | sync : ggml (conv ops + cuda MSVC fixes) (#3765) | 2 years ago
Qin Yue Chen | 8cf19d60dc | gguf : support big endian platform (#3552) | 2 years ago
slaren | 424b6381c4 | ggml : add context enumeration functions (#3605) | 2 years ago
Georgi Gerganov | db3abcc114 | sync : ggml (ggml-backend) (#3548) | 2 years ago
Georgi Gerganov | f93af02488 | sync : ggml (conv 1d + 2d updates, UB fixes) (#3468) | 2 years ago
Cebtenzzre | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago
Hua Jiang | 0ccfc62a96 | ggml_tensor: update the structure comments. (#3283) | 2 years ago
xaedes | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago
Cebtenzzre | 2db94d98ed | gguf : basic type checking in gguf_get_* (#3346) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
Rickard Hallerbäck | dc6897404e | metal : reusing llama.cpp logging (#3152) | 2 years ago
Georgi Gerganov | 8c00b7a6ff | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | 2 years ago
Eric Sommerlade | b52b29ab9d | arm64 support for windows (#3007) | 2 years ago
slaren | 06abf8eeba | ggml : add view_src and view_offs to ggml_tensor for views (#2874) | 2 years ago