
Vulkan Improvements (#5835)

* Improve dequant shaders, add fast q4_0 dequant

* Optimize dmmv non-kquants for GCN

Remove unnecessary SPIR-V shader duplication

* Fix q4_0 dequant dispatch sizes

Fix backend free bug

* Optimize dequant shaders for q4_1, q5_0, q5_1 and q8_0

* Add unary and binary op shader templates

* Fix Vulkan check results

* Enable non-contiguous support for simple ops

* Add argsort

Basic q4_0 mmq shader and unit test

* Speed up q4_0 dequant code, enable mmq for q4_0

* Rework matmul pipeline selection

* Add soft_max alibi support

* Add q4_1, q5_0, q5_1 and q8_0 dequant mat mat mul shaders

* Add environment variable GGML_VK_FORCE_MAX_ALLOCATION_SIZE to limit max buffer size

Rename GGML_VULKAN_DISABLE_F16 to GGML_VK_DISABLE_F16 for consistency
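The new GGML_VK_FORCE_MAX_ALLOCATION_SIZE variable is read from the environment before launching a llama.cpp binary. A minimal sketch, assuming the variable takes a raw byte count (the exact unit, and the binary invocation shown in the comment, are assumptions not confirmed by this commit):

```shell
# Cap Vulkan buffer allocations at 2 GiB (assumption: the value is a byte count).
export GGML_VK_FORCE_MAX_ALLOCATION_SIZE=$((2 * 1024 * 1024 * 1024))
echo "$GGML_VK_FORCE_MAX_ALLOCATION_SIZE"
# ./main -m model.gguf -ngl 33   # hypothetical invocation; the Vulkan backend reads the variable at startup
```

This can help on devices whose drivers advertise a larger maximum allocation than they can reliably serve.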
0cc4m committed 1 year ago
parent
commit
61d1c88e15
5 files changed, 2920 additions and 1538 deletions
  1. +1974 −944
      ggml-vulkan-shaders.hpp
  2. +500 −320
      ggml-vulkan.cpp
  3. +1 −0
      ggml-vulkan.h
  4. +443 −272
      ggml_vk_generate_shaders.py
  5. +2 −2
      llama.cpp

File diff suppressed because it is too large
+ 1974 - 944
ggml-vulkan-shaders.hpp


File diff suppressed because it is too large
+ 500 - 320
ggml-vulkan.cpp


+ 1 - 0
ggml-vulkan.h

@@ -10,6 +10,7 @@ extern "C" {
 #define GGML_VK_NAME "Vulkan"
 #define GGML_VK_MAX_DEVICES 16
 
+GGML_API void ggml_vk_instance_init(void);
 GGML_API void ggml_vk_init_cpu_assist(void);
 
 GGML_API void ggml_vk_preallocate_buffers_graph_cpu_assist(struct ggml_tensor * node);

File diff suppressed because it is too large
+ 443 - 272
ggml_vk_generate_shaders.py


+ 2 - 2
llama.cpp

@@ -5014,8 +5014,8 @@ static struct ggml_tensor * llm_build_kqv(
         ggml_mul_mat_set_prec(kq, GGML_PREC_F32);
     }
 
-#if defined(GGML_USE_VULKAN) || defined(GGML_USE_KOMPUTE)
-#pragma message("TODO: ALiBi support in ggml_soft_max_ext is not implemented for Vulkan, and Kompute")
+#if defined(GGML_USE_KOMPUTE)
+#pragma message("TODO: ALiBi support in ggml_soft_max_ext is not implemented for Kompute")
 #pragma message("      Falling back to ggml_alibi(). Will become an error in Mar 2024")
 #pragma message("ref:  https://github.com/ggerganov/llama.cpp/pull/5488")
     if (hparams.f_max_alibi_bias > 0.0f) {

Some files were not shown because too many files changed in this diff