| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago |
| slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago |
| Johannes Gäßler | 8f900abfc0 | CUDA: faster softmax via shared memory + fp16 math (#4742) | 2 years ago |
| Johannes Gäßler | a91928014f | Print backend name on test-backend-ops failure (#4751) | 2 years ago |
| Guillaume Wenzek | 5f66ebca9c | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | 2 years ago |
| Georgi Gerganov | 58ba655af0 | metal : enable shader debugging (cmake option) (#4705) | 2 years ago |
| bssrdf | afc8c19291 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | 2 years ago |
| Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago |
| Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago |
| slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago |
| Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago |
| slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago |
| Georgi Gerganov | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago |