Kawrakow | 0becb22ac0 | IQ4_XS: a 4.25 bpw quantization (#5747) | 1 year ago
Kawrakow | a33e6a0d2a | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) | 1 year ago
Georgi Gerganov | ab336a9d5e | code : normalize enum names (#5697) | 1 year ago
Kawrakow | 4c4cb30736 | IQ3_S: a much better alternative to Q3_K (#5676) | 1 year ago
Kawrakow | a14679cc30 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | 1 year ago
Kawrakow | bd2d4e393b | 1.5 bit quantization (#5453) | 1 year ago
Georgi Gerganov | 8f1be0d42f | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | 1 year ago
Georgi Gerganov | 99b8b43d7b | tests : disable moe test (#5473) | 1 year ago
JidongZhang-THU | 15606309a0 | llava : add MobileVLM support (#5132) | 1 year ago
John Balis | 625a699b54 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | 1 year ago
Kawrakow | f4d7e54974 | SOTA 3-bit quants (#5196) | 1 year ago
Jared Van Bortel | fbf1ddec69 | Nomic Vulkan backend (#4456) | 1 year ago
Abhilash Majumder | 0f648573dd | ggml : add unified SYCL backend for Intel GPUs (#2690) | 2 years ago
Michael Klimenko | 35a2ee9143 | Remove unused data and add fixes (#5154) | 2 years ago
Georgi Gerganov | 38566680cd | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | 2 years ago
Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago
slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago
Johannes Gäßler | 8f900abfc0 | CUDA: faster softmax via shared memory + fp16 math (#4742) | 2 years ago
Johannes Gäßler | a91928014f | Print backend name on test-backend-ops failure (#4751) | 2 years ago
Guillaume Wenzek | 5f66ebca9c | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | 2 years ago
Georgi Gerganov | 58ba655af0 | metal : enable shader debugging (cmake option) (#4705) | 2 years ago
bssrdf | afc8c19291 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | 2 years ago
Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago
Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago
slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago
Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago
slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago
Georgi Gerganov | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago