da0400344b  ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370)  (slaren, 2 years ago)
ee66942d7e  CUDA: fix peer access logic (#3231)  (Johannes Gäßler, 2 years ago)
111163e246  CUDA: enable peer access between devices (#2470)  (Johannes Gäßler, 2 years ago)
578d8c8f5c  CUDA: fix scratch malloced on non-main device (#3220)  (Johannes Gäßler, 2 years ago)
5dbc2b3213  Enable build with CUDA 11.0 (make) (#3132)  (Vlad, 2 years ago)
0a5eebb45d  CUDA: mul_mat_q RDNA2 tunings (#2910)  (Johannes Gäßler, 2 years ago)
4f7cd6ba9c  CUDA: fix LoRAs (#3130)  (Johannes Gäßler, 2 years ago)
89e89599fd  CUDA: fix mul_mat_q not used for output tensor (#3127)  (Johannes Gäßler, 2 years ago)
d54a4027a6  CUDA: lower GPU latency + fix Windows performance (#3110)  (Johannes Gäßler, 2 years ago)
8a4ca9af56  CUDA: add device number to error messages (#3112)  (Johannes Gäßler, 2 years ago)
b3e9852e47  sync : ggml (CUDA GLM RoPE + POSIX) (#3082)  (Georgi Gerganov, 2 years ago)
35195689cd  2x faster (rms) norm cuda kernels (3.7% e2e improvement) (#2985)  (Jiahao Li, 2 years ago)
f04d002844  cuda : vsubss4 for older versions of ROCm/clang (#2942)  (Engininja2, 2 years ago)
92b1bbd2ec  CUDA: fix RoPE asserts, block sizes (#2833)  (Johannes Gäßler, 2 years ago)
eaa13a48ff  falcon : fix CUDA inference by making K and Q contiguous (#2830)  (Georgi Gerganov, 2 years ago)
a6d1189fdd  k_quants tuning for Falcon-7b (#2816)  (Kawrakow, 2 years ago)
6bbc598a63  ROCm Port (#1087)  (Henri Vasserman, 2 years ago)
3f460a2b72  cuda : add RoPE kernel for mode == 2 (NeoX) (#2760)  (Georgi Gerganov, 2 years ago)
cf658adc83  llm : add Falcon support (#2717)  (Georgi Gerganov, 2 years ago)
c63bb1d16a  CUDA: use mul_mat_q kernels by default (#2683)  (Johannes Gäßler, 2 years ago)
800c9635b4  Fix CUDA softmax by subtracting max value before exp (#2665)  (Jiahao Li, 2 years ago)
1123f7fbdf  ggml-cuda : use graph allocator (#2684)  (slaren, 2 years ago)
ef3f333d37  ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709)  (Georgi Gerganov, 2 years ago)
097e121e2f  llama : add benchmark example (#2626)  (slaren, 2 years ago)
1cd06fa25e  CUDA: launch_bounds, small q4_K, q5_K mmq refactor (#2596)  (Johannes Gäßler, 2 years ago)
f64d44a9b9  CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590)  (Johannes Gäßler, 2 years ago)
25d43e0eb5  CUDA: tuned mul_mat_q kernels (#2546)  (Johannes Gäßler, 2 years ago)
f514d1b306  CUDA: faster k-quant mul_mat_q kernels (#2525)  (Johannes Gäßler, 2 years ago)
4329d1acb0  CUDA: use min compute capability of GPUs actually used (#2506)  (Cebtenzzre, 2 years ago)
02f9d96a86  CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)  (Cebtenzzre, 2 years ago)