| Name | Commit | Commit message | Last updated |
|---|---|---|---|
| ggml-cann | 1bdd8ae19f | [CANN] Add Ascend NPU backend (#6035) | 1 year ago |
| ggml-cuda | 46e47417aa | Allow all RDNA2 archs to use sdot4 intrinsic (#8629) | 1 year ago |
| ggml-sycl | ed67bcb24f | [SYCL] fix multi-gpu issue on sycl (#8554) | 1 year ago |
| kompute @ 4565194ed7 | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| kompute-shaders | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| llamafile | 6b2a849d1f | ggml : move sgemm sources to llamafile subfolder (#8394) | 1 year ago |
| vulkan-shaders | 751fcfc6c3 | Vulkan IQ4_NL Support (#8613) | 1 year ago |
| CMakeLists.txt | 79167d9e49 | Re-add erroneously removed -fsycl from GGML_EXTRA_LIBS (#8667) | 1 year ago |
| ggml-aarch64.c | bf5a81df37 | ggml : fix build on Windows with Snapdragon X (#8531) | 1 year ago |
| ggml-aarch64.h | 370b1f7e7a | ggml : minor naming changes (#8433) | 1 year ago |
| ggml-alloc.c | a15ef8f8a0 | CUDA: fix partial offloading for ne0 % 256 != 0 (#8572) | 1 year ago |
| ggml-backend-impl.h | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| ggml-backend.c | a15ef8f8a0 | CUDA: fix partial offloading for ne0 % 256 != 0 (#8572) | 1 year ago |
| ggml-blas.cpp | 368645698a | ggml : add NVPL BLAS support (#8329) (#8425) | 1 year ago |
| ggml-cann.cpp | 1bdd8ae19f | [CANN] Add Ascend NPU backend (#6035) | 1 year ago |
| ggml-common.h | 0f1a39f343 | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) | 1 year ago |
| ggml-cuda.cu | a15ef8f8a0 | CUDA: fix partial offloading for ne0 % 256 != 0 (#8572) | 1 year ago |
| ggml-impl.h | 0f1a39f343 | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) | 1 year ago |
| ggml-kompute.cpp | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| ggml-metal.m | 87e397d00b | ggml : fix quant dot product with odd number of blocks (#8549) | 1 year ago |
| ggml-metal.metal | 87e397d00b | ggml : fix quant dot product with odd number of blocks (#8549) | 1 year ago |
| ggml-quants.c | 04bab6b7da | ggml : fix compile error for RISC-V (#8623) | 1 year ago |
| ggml-quants.h | 370b1f7e7a | ggml : minor naming changes (#8433) | 1 year ago |
| ggml-rpc.cpp | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| ggml-sycl.cpp | 16bdfa42ac | [SYCL] add concat through dim 1/2 (#8483) | 1 year ago |
| ggml-vulkan.cpp | 751fcfc6c3 | Vulkan IQ4_NL Support (#8613) | 1 year ago |
| ggml.c | eddcb5238b | ggml : add and use ggml_cpu_has_llamafile() (#8664) | 1 year ago |