Author | Commit | Message | Date
Kawrakow | 49662cbed3 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | 2 years ago
Erik Scholz | f34432ca1e | fix : cuda order of synchronization when setting a buffer (ggml/679) | 2 years ago
Johannes Gäßler | 8f900abfc0 | CUDA: faster softmax via shared memory + fp16 math (#4742) | 2 years ago
Kawrakow | dd5ae06405 | SOTA 2-bit quants (#4773) | 2 years ago
Johannes Gäßler | d5a410e855 | CUDA: fixed redundant value dequantization (#4809) | 2 years ago
Konstantin Zhuravlyov | 63ee677efd | ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787) | 2 years ago
Finn Voorhees | 1bf681f90e | ggml : add error handling to graph_compute (whisper/1714) | 2 years ago
Georgi Gerganov | 7bed7eba35 | cuda : simplify expression | 2 years ago
Georgi Gerganov | d55356d3ba | cuda : mark I16 and I32 ops as unsupported | 2 years ago
Johannes Gäßler | 39d8bc71ed | CUDA: fixed tensor cores not being used on RDNA3 (#4697) | 2 years ago
Johannes Gäßler | a20f3c7465 | CUDA: fix tensor core logic for Pascal and HIP (#4682) | 2 years ago
hydai | 91bb39cec7 | cuda: fix vmm oom issue on NVIDIA AGX Orin (#4687) | 2 years ago
bssrdf | afc8c19291 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | 2 years ago
slaren | dc68f0054c | cuda : fix vmm pool with multi GPU (#4620) | 2 years ago
FantasyGmm | 77465dad48 | Fix new CUDA10 compilation errors (#4635) | 2 years ago
slaren | 5bf3953d7e | cuda : improve cuda pool efficiency using virtual memory (#4606) | 2 years ago
slaren | 708e179e85 | fallback to CPU buffer if host buffer alloc fails (#4610) | 2 years ago
Johannes Gäßler | e0a4002273 | CUDA: fixed row rounding for 0 tensor splits (#4594) | 2 years ago
Georgi Gerganov | ba66175132 | sync : ggml (fix im2col) (#4591) | 2 years ago
FantasyGmm | a55876955b | cuda : fix jetson compile error (#4560) | 2 years ago
Henrik Forstén | 6724ef1657 | Fix CudaMemcpy direction (#4599) | 2 years ago
slaren | 48b7ff193e | llama : fix platforms without mmap (#4578) | 2 years ago
Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago
slaren | d232aca5a7 | llama : initial ggml-backend integration (#4520) | 2 years ago
Erik Garrison | 0f630fbc92 | cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449) | 2 years ago
arlo-phoenix | 562cf222b5 | ggml-cuda: Fix HIP build by adding define for __trap (#4569) | 2 years ago
Johannes Gäßler | 9154494808 | CUDA: mul_mat_id always on GPU for batches >= 32 (#4553) | 2 years ago
bobqianic | 66f35a2f48 | cuda : better error message for ggml_get_rows (#4561) | 2 years ago
slaren | 1398823922 | cuda : replace asserts in wrong architecture checks with __trap (#4556) | 2 years ago
LoganDark | 1d7a1912ce | Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554) | 2 years ago