Georgi Gerganov e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago
acc.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
acc.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
arange.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
arange.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
argsort.cu 08a0c02060 ggml : mul_mat_id use the same tensor for all the experts (#6387) 1 year ago
argsort.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
binbcast.cu 0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505) 1 year ago
binbcast.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
clamp.cu bc4bba364f Introduction of CUDA Graphs to LLama.cpp (#6766) 1 year ago
clamp.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
common.cuh 133d99c599 CUDA: deduplicate FlashAttention code (#7352) 1 year ago
concat.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
concat.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
convert.cu e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago
convert.cuh 5dc9dd7152 llama : add Command R Plus support (#6491) 1 year ago
cpy.cu bc4bba364f Introduction of CUDA Graphs to LLama.cpp (#6766) 1 year ago
cpy.cuh bc4bba364f Introduction of CUDA Graphs to LLama.cpp (#6766) 1 year ago
dequantize.cuh 5dc9dd7152 llama : add Command R Plus support (#6491) 1 year ago
diagmask.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
diagmask.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
dmmv.cu e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago
dmmv.cuh d48ccf3ad4 sync : ggml (#6351) 1 year ago
fattn-common.cuh 133d99c599 CUDA: deduplicate FlashAttention code (#7352) 1 year ago
fattn-tile-f16.cu cd93a28cb1 CUDA: fix FA out-of-bounds reads (#7479) 1 year ago
fattn-tile-f16.cuh 0fc1e820a9 CUDA: faster large batch FA without tensor cores (#7314) 1 year ago
fattn-tile-f32.cu cd93a28cb1 CUDA: fix FA out-of-bounds reads (#7479) 1 year ago
fattn-tile-f32.cuh 0fc1e820a9 CUDA: faster large batch FA without tensor cores (#7314) 1 year ago
fattn-vec-f16.cu cd93a28cb1 CUDA: fix FA out-of-bounds reads (#7479) 1 year ago
fattn-vec-f16.cuh dc685be466 CUDA: add FP32 FlashAttention vector kernel (#7188) 1 year ago
fattn-vec-f32.cu cd93a28cb1 CUDA: fix FA out-of-bounds reads (#7479) 1 year ago
fattn-vec-f32.cuh dc685be466 CUDA: add FP32 FlashAttention vector kernel (#7188) 1 year ago
fattn.cu 133d99c599 CUDA: deduplicate FlashAttention code (#7352) 1 year ago
fattn.cuh 9c67c2773d ggml : add Flash Attention (#5021) 1 year ago
getrows.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
getrows.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
im2col.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
im2col.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
mmq.cu e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago
mmq.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
mmvq.cu bc4bba364f Introduction of CUDA Graphs to LLama.cpp (#6766) 1 year ago
mmvq.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
norm.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
norm.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
pad.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
pad.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
pool2d.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
pool2d.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
quantize.cu 5dc9dd7152 llama : add Command R Plus support (#6491) 1 year ago
quantize.cuh 5dc9dd7152 llama : add Command R Plus support (#6491) 1 year ago
rope.cu 3e5faa8503 cuda : fix rope + add tests (#7452) 1 year ago
rope.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
scale.cu bc4bba364f Introduction of CUDA Graphs to LLama.cpp (#6766) 1 year ago
scale.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
softmax.cu 133d99c599 CUDA: deduplicate FlashAttention code (#7352) 1 year ago
softmax.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
sumrows.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
sumrows.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
tsembd.cu ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
tsembd.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
unary.cu f5ef34e428 feat: implemented sigmoid function (ggml/806) 1 year ago
unary.cuh f5ef34e428 feat: implemented sigmoid function (ggml/806) 1 year ago
upscale.cu 48aa8fd1f2 ggml : add `ggml_upscale_ext` (ggml/814) 1 year ago
upscale.cuh ae1f211ce2 cuda : refactor into multiple files (#6269) 1 year ago
vecdotq.cuh e84b71c2c6 ggml : drop support for QK_K=64 (#7473) 1 year ago