Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Kawrakow | 4d76a5f49b | Faster Q3_K implementation on Metal (#2307) | 2 years ago |
| Kawrakow | e68c96f7fe | Faster Q2_K on Metal (#2297) | 2 years ago |
| Kawrakow | e782c9e735 | Faster Q5_K and Q6_K on Metal (#2294) | 2 years ago |
| Kawrakow | 785829dfe8 | Faster Q4_K on Metal (#2290) | 2 years ago |
| Shouzheng Liu | 417a85a001 | metal: minor q4 optimization and reduce code size (#2248) | 2 years ago |
| Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago |
| Kawrakow | 27ad57a69b | Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212) | 2 years ago |
| Shouzheng Liu | 1cbf561466 | metal : new q4_0 matrix-vector kernel (#2188) | 2 years ago |
| Kawrakow | 6769e944c7 | k-quants : support for super-block size of 64 (#2001) | 2 years ago |
| Aaron Miller | 0711a5f6dc | metal : add norm, cpy f16->f16, alibi kernels (#1823) | 2 years ago |
| Kawrakow | 74a6d922f1 | Metal implementation for all k_quants (#1807) | 2 years ago |
| Kawrakow | e9b66ee982 | metal : add Q4_1 implementation (#1785) | 2 years ago |
| Georgi Gerganov | b33dee282f | metal : fix build "tanhf" -> "tanh" | 2 years ago |
| AT | 92f44ff7f7 | metal : add GELU implementation (#1770) | 2 years ago |
| Kawrakow | 245fc3c37d | metal : faster q4_0 (#1775) | 2 years ago |
| Kawrakow | 72ff5282bf | metal : add Q2_K implementation (#1762) | 2 years ago |
| Kawrakow | 0f291e1f65 | metal : Q6_K implementation (#1752) | 2 years ago |
| Kawrakow | 4161bdc04d | metal : add Q4_K implementation (#1733) | 2 years ago |
| Georgi Gerganov | 44f906e853 | metal : add f16 support | 2 years ago |
| Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago |