Johannes Gäßler | 1cd06fa25e | CUDA: launch_bounds, small q4_K, q5_K mmq refactor (#2596) | 2 years ago
Johannes Gäßler | f64d44a9b9 | CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590) | 2 years ago
Johannes Gäßler | 25d43e0eb5 | CUDA: tuned mul_mat_q kernels (#2546) | 2 years ago
Johannes Gäßler | f514d1b306 | CUDA: faster k-quant mul_mat_q kernels (#2525) | 2 years ago
Cebtenzzre | 4329d1acb0 | CUDA: use min compute capability of GPUs actually used (#2506) | 2 years ago
Cebtenzzre | 02f9d96a86 | CUDA: check if event is NULL before cudaStreamWaitEvent (#2505) | 2 years ago
Johannes Gäßler | 468ea24fb4 | CUDA: faster non k-quant mul_mat_q kernels (#2483) | 2 years ago
Johannes Gäßler | 4f6b60c776 | CUDA: Fix models with output size != 32000 (#2480) | 2 years ago
Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago
Johannes Gäßler | 1215ed7d5c | CUDA: Implemented row flattening for non-glm RoPE (#2468) | 2 years ago
Johannes Gäßler | 2dbf518911 | CUDA: fewer memory bank conflicts for mul_mat_q (#2458) | 2 years ago
Johannes Gäßler | 11f3ca06b8 | CUDA: Quantized matrix matrix multiplication (#2160) | 2 years ago
Johannes Gäßler | 9baf9ef304 | CUDA: faster multi GPU synchronization (#2448) | 2 years ago
Kawrakow | 129d844c87 | Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359) | 2 years ago
slaren | 41c674161f | make rms_norm_eps a parameter (#2374) | 2 years ago
Georgi Gerganov | 5b2b2dc6ae | ggml : sync (unary ops refactor, static-correctness) (#2370) | 2 years ago
Kawrakow | 2f9cf974a0 | Some more Q4_K and Q5_K speedup on CUDA (#2346) | 2 years ago
slaren | 95a6c595e7 | ggml: move op parameters from tensors to ggml_tensor::op_params (#2333) | 2 years ago
Georgi Gerganov | e76d630df1 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | 2 years ago
Kawrakow | d2a43664f9 | Speed up Q4_K (#2322) | 2 years ago
Johannes Gäßler | b9b7d94fc1 | CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313) | 2 years ago
Kawrakow | d924522a46 | Custom RoPE + bettter memory management for CUDA (#2295) | 2 years ago
Georgi Gerganov | ae178ab46b | llama : make tensor_split ptr instead of array (#2272) | 2 years ago
Jiahao Li | 7568d1a2b2 | Support dup & cont ops on CUDA (#2242) | 2 years ago
Bach Le | 7cdd30bf1f | cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220) | 2 years ago
Jiahao Li | 206e01de11 | cuda : support broadcast add & mul (#2192) | 2 years ago
Johannes Gäßler | 4304bd3cde | CUDA: mul_mat_vec_q kernels for k-quants (#2203) | 2 years ago
Georgi Gerganov | 697966680b | ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope) | 2 years ago
Howard Su | ff5d58faec | Fix compile error on Windows CUDA (#2207) | 2 years ago
Georgi Gerganov | 680e6f9177 | cuda : add gelu support | 2 years ago