compilade a1631e53f6 llama : simplify Mamba with advanced batch splits (#8526) 1 year ago
ggml-alloc.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml-backend.h a15ef8f8a0 CUDA: fix partial offloading for ne0 % 256 != 0 (#8572) 1 year ago
ggml-blas.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml-cann.h 1bdd8ae19f [CANN] Add Ascend NPU backend (#6035) 1 year ago
ggml-cuda.h e54c35e4fb feat: Support Moore Threads GPU (#8383) 1 year ago
ggml-kompute.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml-metal.h 85fca8deb6 metal : add abort callback (ggml/905) 1 year ago
ggml-rpc.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml-sycl.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml-vulkan.h f3f65429c4 llama : reorganize source code + improve CMake (#8006) 1 year ago
ggml.h a1631e53f6 llama : simplify Mamba with advanced batch splits (#8526) 1 year ago