| Path | Commit | Message | Last updated |
|------|--------|---------|--------------|
| include | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| src | 1244cdcf14 | ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211) | 1 year ago |
| .gitignore | 17eb6aa8a9 | vulkan : cmake integration (#8119) | 1 year ago |
| CMakeLists.txt | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |