Kawrakow | a14679cc30 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | 1 year ago
slaren | 40c3a6c1e1 | cuda : ignore peer access already enabled errors (#5597) | 1 year ago
Georgi Gerganov | d0e3ce51f4 | ci : enable -Werror for CUDA builds (#5579) | 1 year ago
slaren | 3a9cb4ca64 | cuda, metal : fix nans in soft_max (#5574) | 1 year ago
Kawrakow | bd2d4e393b | 1.5 bit quantization (#5453) | 1 year ago
Georgi Gerganov | 8f1be0d42f | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | 1 year ago
slaren | 9060a1e9df | cuda : print message when initialization fails (#5512) | 1 year ago
Johannes Gäßler | 3bdc4cd0f5 | CUDA: mul_mat_vec_q tiling, refactor mul mat logic (#5434) | 1 year ago
Johannes Gäßler | 8e6a9d2de0 | CUDA: more warps for mmvq on NVIDIA (#5394) | 1 year ago
Johannes Gäßler | aa7ab99be2 | CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (#5386) | 2 years ago
Johannes Gäßler | 17c97fb062 | CUDA: mul_mat_vec_q max. batch size 8 -> 4 (#5370) | 2 years ago
Johannes Gäßler | 2c516611f1 | CUDA: mul_mat_vec_q for batch sizes > 1 (#5351) | 2 years ago
slaren | 8ca511cade | cuda : fix LLAMA_CUDA_F16 (#5262) | 2 years ago
JidongZhang-THU | 15606309a0 | llava : add MobileVLM support (#5132) | 2 years ago
Georgi Gerganov | 8f8ddfcfad | sync : ggml (#0) | 2 years ago
John Balis | 625a699b54 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | 2 years ago
Kawrakow | f4d7e54974 | SOTA 3-bit quants (#5196) | 2 years ago
0cc4m | 2307523d32 | ggml : add Vulkan backend (#2059) | 2 years ago
slaren | 62fead3ea0 | cuda : fix tensor size calculation for non-split buffer (#5145) | 2 years ago
Engininja2 | cd4fddb29f | cuda : fix 2-bit quants on amd hip (#5105) | 2 years ago
Johannes Gäßler | 9ecdd12e95 | CUDA: more info when no device code (#5088) | 2 years ago
Kylin | cca894f16a | cuda : fix compile error in jetson platform (#4975) | 2 years ago
Georgi Gerganov | 38566680cd | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | 2 years ago
Justine Tunney | a0b3ac8c48 | ggml : introduce GGML_CALL function annotation (#4850) | 2 years ago
Georgi Gerganov | ddb008d845 | cuda : fix dequantize kernel names (#4938) | 2 years ago
Kawrakow | 4a3156de2f | CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938) | 2 years ago
Johannes Gäßler | 3fe81781e3 | CUDA: faster q8_0 -> f16 dequantization (#4895) | 2 years ago
slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago
Johannes Gäßler | 1b280c9fff | CUDA: fix softmax compile for old CUDA versions (#4862) | 2 years ago
Kawrakow | 49662cbed3 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | 2 years ago