Author | Commit | Message | Date
Kerfuffle | 2923f17f6f | Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124) | 2 years ago
Andrew Godfrey | b83e149ec6 | cuda : get_row_rounding F32 (#4095) | 2 years ago
Georgi Gerganov | 4f447a4833 | llama : fix data units (#4101) | 2 years ago
slaren | 1cf2850d52 | ggml-cuda : increase max graph size (#4084) | 2 years ago
Georgi Gerganov | 3d68f364f1 | ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060) | 2 years ago
Georgi Gerganov | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago
Kerfuffle | bb50a792ec | Add ReLU and SQR CUDA ops to (partially) fix Persimmon offloading (#4041) | 2 years ago
Meng Zhang | 46876d2a2c | cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946) | 2 years ago
slaren | 2833a6f63c | ggml-cuda : fix f16 mul mat (#3961) | 2 years ago
Jared Van Bortel | 132d25b8a6 | cuda : fix disabling device with --tensor-split 1,0 (#3951) | 2 years ago
slaren | 48ade94538 | cuda : revert CUDA pool stuff (#3944) | 2 years ago
slaren | abb77e7319 | ggml-cuda : move row numbers to x grid dim in mmv kernels (#3921) | 2 years ago
Kerfuffle | 629f917cd6 | cuda : add ROCM aliases for CUDA pool stuff (#3918) | 2 years ago
Georgi Gerganov | c7743fe1c1 | cuda : fix const ptrs warning causing ROCm build issues (#3913) | 2 years ago
Oleksii Maryshchenko | d6069051de | cuda : use CUDA memory pool with async memory allocation/deallocation when available (#3903) | 2 years ago
Georgi Gerganov | 4d719a6d4e | cuda : check if this fixes Pascal card regression (#3882) | 2 years ago
cebtenzzre | 2fffa0d61f | cuda : fix RoPE after #2268 (#3897) | 2 years ago
slaren | d02e98cde0 | ggml-cuda : compute ptrs for cublasGemmBatchedEx in a kernel (#3891) | 2 years ago
cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago
Andrew Godfrey | 73bdcb395e | finetune : add -ngl parameter (#3762) | 2 years ago
Georgi Gerganov | 2f9ec7e271 | cuda : improve text-generation and batched decoding performance (#3776) | 2 years ago
Georgi Gerganov | 6961c4bd0b | batched-bench : print params at start | 2 years ago
Georgi Gerganov | b2f7e04bd3 | sync : ggml (conv ops + cuda MSVC fixes) (#3765) | 2 years ago
Georgi Gerganov | 2b4ea35e56 | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2 years ago
Jan Ploski | f5f9121de1 | llm : add MPT support (#3417) | 2 years ago
Georgi Gerganov | db3abcc114 | sync : ggml (ggml-backend) (#3548) | 2 years ago
slaren | f5ef5cfb18 | ggml-cuda : perform cublas mat mul of quantized types as f16 (#3412) | 2 years ago
Georgi Gerganov | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
slaren | da0400344b | ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370) | 2 years ago