bssrdf | afc8c19291 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | 2 years ago
slaren | dc68f0054c | cuda : fix vmm pool with multi GPU (#4620) | 2 years ago
FantasyGmm | 77465dad48 | Fix new CUDA10 compilation errors (#4635) | 2 years ago
slaren | 5bf3953d7e | cuda : improve cuda pool efficiency using virtual memory (#4606) | 2 years ago
slaren | 708e179e85 | fallback to CPU buffer if host buffer alloc fails (#4610) | 2 years ago
Johannes Gäßler | e0a4002273 | CUDA: fixed row rounding for 0 tensor splits (#4594) | 2 years ago
Georgi Gerganov | ba66175132 | sync : ggml (fix im2col) (#4591) | 2 years ago
FantasyGmm | a55876955b | cuda : fix jetson compile error (#4560) | 2 years ago
Henrik Forstén | 6724ef1657 | Fix CudaMemcpy direction (#4599) | 2 years ago
slaren | 48b7ff193e | llama : fix platforms without mmap (#4578) | 2 years ago
Georgi Gerganov | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago
slaren | d232aca5a7 | llama : initial ggml-backend integration (#4520) | 2 years ago
Erik Garrison | 0f630fbc92 | cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449) | 2 years ago
arlo-phoenix | 562cf222b5 | ggml-cuda: Fix HIP build by adding define for __trap (#4569) | 2 years ago
Johannes Gäßler | 9154494808 | CUDA: mul_mat_id always on GPU for batches >= 32 (#4553) | 2 years ago
bobqianic | 66f35a2f48 | cuda : better error message for ggml_get_rows (#4561) | 2 years ago
slaren | 1398823922 | cuda : replace asserts in wrong architecture checks with __trap (#4556) | 2 years ago
LoganDark | 1d7a1912ce | Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554) | 2 years ago
Johannes Gäßler | 799fc22689 | CUDA: Faster Mixtral prompt processing (#4538) | 2 years ago
arlo-phoenix | a7aee47b98 | ggml-cuda: Fix HIP build (#4528) | 2 years ago
Ebey Abraham | b9e74f9bca | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | 2 years ago
slaren | 6744dbe924 | ggml : use ggml_row_size where possible (#4472) | 2 years ago
Georgi Gerganov | 4d98d9a656 | sync : ggml (SD ops, tests, kernels) (#4444) | 2 years ago
slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago
Georgi Gerganov | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago
Georgi Gerganov | bcc0eb4591 | llama : per-layer KV cache + quantum K cache (#4309) | 2 years ago
Georgi Gerganov | ef47ec18da | ggml : add ggml_soft_max_ext (#4256) | 2 years ago
slaren | 8a052c131e | ggml-cuda : support stablelm rope (#4156) | 2 years ago
Haohui Mai | 55978ce09b | Fix incorrect format strings and uninitialized variables. (#4133) | 2 years ago
Kerfuffle | 2923f17f6f | Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124) | 2 years ago