Commit History

Author SHA1 Message Date
Georgi Gerganov ae178ab46b llama : make tensor_split ptr instead of array (#2272) 2 years ago
Jiří Podivín 54e3bc76fe make : add new target for test binaries (#2244) 2 years ago
Kawrakow e68c96f7fe Faster Q2_K on Metal (#2297) 2 years ago
Przemysław Pawełczyk 9cf022a188 make : fix embdinput library and server examples building on MSYS2 (#2235) 2 years ago
Kawrakow e782c9e735 Faster Q5_K and Q6_K on Metal (#2294) 2 years ago
Kawrakow 785829dfe8 Faster Q4_K on Metal (#2290) 2 years ago
Georgi Gerganov fff0e0eafe llama : fix regression from #2000 - could not load no-mmap models 2 years ago
Shouzheng Liu 417a85a001 metal: minor q4 optimization and reduce code size (#2248) 2 years ago
Rinne 294f424554 llama : extend API to get max devices at runtime (#2253) 2 years ago
wzy 45a1b07e9b flake : update flake.nix (#2270) 2 years ago
wzy b1f4290953 cmake : install targets (#2256) 2 years ago
Georgi Gerganov d01bccde9f ci : integrate with ggml-org/ci (#2250) 2 years ago
Georgi Gerganov 6cbf9dfb32 llama : shorten quantization descriptions 2 years ago
Jiahao Li 7568d1a2b2 Support dup & cont ops on CUDA (#2242) 2 years ago
Alex Klinkhamer b7647436cc llama : fix t_start_sample_us initialization warning (#2238) 2 years ago
Qingyou Meng 672dda10e4 ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG (#2219) 2 years ago
Jiří Podivín 27ab66e437 py : turn verify-checksum-models.py into executable (#2245) 2 years ago
Xiao-Yong Jin 6e7cca4047 llama : add custom RoPE (#2054) 2 years ago
Dave Della Costa a6803cab94 flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224) 2 years ago
wzy 7dabc66f3c make : use pkg-config for OpenBLAS (#2222) 2 years ago
Bach Le 7cdd30bf1f cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220) 2 years ago
Evan Miller e8035f141e ggml : fix static_assert with older compilers #2024 (#2218) 2 years ago
Bach Le 7513b7b0a1 llama : add functions that work directly on model (#2197) 2 years ago
Ali Chraghi de8342423d build.zig : install config header (#2216) 2 years ago
Shangning Xu c48c525f87 examples : fixed path typos in embd-input (#2214) 2 years ago
Jiahao Li 206e01de11 cuda : support broadcast add & mul (#2192) 2 years ago
Johannes Gäßler 4304bd3cde CUDA: mul_mat_vec_q kernels for k-quants (#2203) 2 years ago
James Reynolds 229aab351c make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208) 2 years ago
Georgi Gerganov 697966680b ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope) 2 years ago