Author | Commit | Message | Date
--- | --- | --- | ---
snadampal | 7032f4f634 | ggml : update softmax n_task calculation (#5126) | 2 years ago
Georgi Gerganov | 89758723c7 | minor : clean-up some warnings and style (#5094) | 2 years ago
Reinforce-II | 780e24a22e | ggml : parallelize FP32 conversion when using BLAS (#5045) | 2 years ago
XiaotaoChen | 3ce7e8f8e7 | llava : MobileVLM support (#4954) | 2 years ago
Georgi Gerganov | 38566680cd | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | 2 years ago
Georgi Gerganov | ba69bbc84c | imatrix : offload to GPU support (#4957) | 2 years ago
Kawrakow | 334a835a1c | ggml : importance matrix support for legacy quants (#4969) | 2 years ago
Justine Tunney | a0b3ac8c48 | ggml : introduce GGML_CALL function annotation (#4850) | 2 years ago
Kawrakow | 467a882fd2 | Add ability to use importance matrix for all k-quants (#4930) | 2 years ago
Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago
Johannes Gäßler | c71d608ce7 | ggml : cache sin/cos for RoPE (#4908) | 2 years ago
texmex76 | c30b1ef39a | gguf : fix potential infinite for-loop (#4600) | 2 years ago
slaren | e7e4df031b | llama : ggml-backend integration (#4766) | 2 years ago
Kawrakow | 326b418b59 | Importance Matrix calculation (#4861) | 2 years ago
Kawrakow | 49662cbed3 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | 2 years ago
Timothy Cronin | f85a973aa1 | ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693) | 2 years ago
Halalaluyafail3 | c910e3c28a | Fix execlp call (ggml/689) | 2 years ago
Kawrakow | dd5ae06405 | SOTA 2-bit quants (#4773) | 2 years ago
Georgi Gerganov | c1d7cb28d3 | ggml : do not sched_yield when calling BLAS (#4761) | 2 years ago
Guillaume Wenzek | 5f66ebca9c | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | 2 years ago
automaticcat | 24a447e20a | ggml : add ggml_cpu_has_avx_vnni() (#4589) | 2 years ago
bssrdf | afc8c19291 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | 2 years ago
slaren | dc68f0054c | cuda : fix vmm pool with multi GPU (#4620) | 2 years ago
WillCorticesAI | de8e496437 | Update comment for AdamW implementation reference. (#4604) | 2 years ago
slaren | 5bf3953d7e | cuda : improve cuda pool efficiency using virtual memory (#4606) | 2 years ago
slaren | 48b7ff193e | llama : fix platforms without mmap (#4578) | 2 years ago
Herman Semenov | 48b24b170e | ggml : add comment about backward GGML_OP_DIAG_MASK_INF (#4203) | 2 years ago
Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago
slaren | d232aca5a7 | llama : initial ggml-backend integration (#4520) | 2 years ago
Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago