Georgi Gerganov | 4ff1046d75 | gguf : print error for GGUFv1 files (#3908) | 2 years ago
Georgi Gerganov | 2756c4fbff | gguf : remove special-case code for GGUFv1 (#3901) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Andrew Godfrey | 73bdcb395e | finetune : add -ngl parameter (#3762) | 2 years ago
Georgi Gerganov | 207b51900e | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2 years ago
Georgi Gerganov | d69d777c02 | ggml : quantization refactoring (#3833) | 2 years ago
Georgi Gerganov | b2f7e04bd3 | sync : ggml (conv ops + cuda MSVC fixes) (#3765) | 2 years ago
Georgi Gerganov | 2b4ea35e56 | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2 years ago
Qin Yue Chen | 8cf19d60dc | gguf : support big endian platform (#3552) | 2 years ago
Herman Semenov | f439e506e8 | ggml : fix rope + llama minor optimizations (#3560) | 2 years ago
slaren | 424b6381c4 | ggml : add context enumeration functions (#3605) | 2 years ago
M. Yusuf Sarıgöz | 370359e5ba | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2 years ago
Jan Ploski | f5f9121de1 | llm : add MPT support (#3417) | 2 years ago
Georgi Gerganov | fcca0a7004 | refact : fix convert script + zero out KV cache to avoid nans (#3523) | 2 years ago
Georgi Gerganov | db3abcc114 | sync : ggml (ggml-backend) (#3548) | 2 years ago
Georgi Gerganov | 0d152b37fe | ggml : fix build after #3329 | 2 years ago
ds5t5 | f8c90cdbaa | llm : add Refact model (#3329) | 2 years ago
Georgi Gerganov | f93af02488 | sync : ggml (conv 1d + 2d updates, UB fixes) (#3468) | 2 years ago
Tameem | 79f34abddb | ggml : add RISC-V Vector Support for K-Quants and improved the existing intrinsics (#3453) | 2 years ago
shibe2 | 665018c749 | CLBlast: Add broadcast support for matrix multiplication (#3402) | 2 years ago
Cebtenzzre | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago
Qu Zongfu | 7f1a0fe709 | ggml : release the requested thread pool resource (#3292) | 2 years ago
xaedes | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago
Cebtenzzre | 2db94d98ed | gguf : basic type checking in gguf_get_* (#3346) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
Georgi Gerganov | 8c00b7a6ff | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | 2 years ago
Georgi Gerganov | a51b687657 | metal : relax conditions on fast matrix multiplication kernel (#3168) | 2 years ago
Eric Sommerlade | b52b29ab9d | arm64 support for windows (#3007) | 2 years ago
Georgi Gerganov | b3e9852e47 | sync : ggml (CUDA GLM RoPE + POSIX) (#3082) | 2 years ago
Przemysław Pawełczyk | cb6c44c5e0 | build : do not use _GNU_SOURCE gratuitously (#2035) | 2 years ago