Commit history

Author SHA1 Message Date
  蕭澧邦 cddae4884c Correct typo run_llama2.sh > run-llama2.sh (#9149) 1 year ago
  tc-mb 7ea8d80d53 llava : the function "clip" should be int (#9237) 1 year ago
  Faisal Zaghloul 42c76d1358 Threadpool: take 2 (#8672) 1 year ago
  Jan Boon 9f7d4bcf5c server : fix crash when error handler dumps invalid utf-8 json (#9195) 1 year ago
  Georgi Gerganov 1d1ccce676 flake.lock: Update (#9162) 1 year ago
  slaren 9fe94ccac9 docker : build images only once (#9225) 1 year ago
  slaren 66b039a501 docker : update CUDA images (#9213) 1 year ago
  Georgi Gerganov 20f1789dfb vulkan : fix build (#0) 1 year ago
  Georgi Gerganov 231cff5f6f sync : ggml 1 year ago
  Xie Yanbo 3246fe84d7 Fix minicpm example directory (#9111) 1 year ago
  compilade 78eb487bb0 llama : fix qs.n_attention_wv for DeepSeek-V2 (#9156) 1 year ago
  Xuan Son Nguyen a77feb5d71 server : add some missing env variables (#9116) 1 year ago
  CausalLM 2e59d61c1b llama : fix ChatGLM4 wrong shape (#9194) 1 year ago
  Carsten Kragelund Jørgensen 75e1dbbaab llama : fix llama3.1 rope_freqs not respecting custom head_dim (#9141) 1 year ago
  arch-btw ad76569f8e common : Update stb_image.h to latest version (#9161) 1 year ago
  slaren 7d787ed96c ggml : do not crash when quantizing q4_x_x with an imatrix (#9192) 1 year ago
  Georgi Gerganov 06658ad7c3 metal : separate scale and mask from QKT in FA kernel (#9189) 1 year ago
  Georgi Gerganov fc18425b6a ggml : add SSM Metal kernels (#8546) 1 year ago
  Georgi Gerganov 879275ac98 tests : fix compile warnings for unreachable code (#9185) 1 year ago
  Georgi Gerganov 7a3df798fc ci : add VULKAN support to ggml-ci (#9055) 1 year ago
  Georgi Gerganov e5edb210cd server : update deps (#9183) 1 year ago
  slaren 0c41e03ceb metal : gemma2 flash attention support (#9159) 1 year ago
  slaren f12ceaca0c ggml-ci : try to improve build time (#9160) 1 year ago
  Justine Tunney 436787f170 llama : fix time complexity of string replacement (#9163) 1 year ago
  Herman Semenov 93bc3839f9 common: fixed not working find argument --n-gpu-layers-draft (#9175) 1 year ago
  Johannes Gäßler f91fc5639b CUDA: fix Gemma 2 numerical issues for FA (#9166) 1 year ago
  Johannes Gäßler e11bd856d5 CPU/CUDA: Gemma 2 FlashAttention support (#8542) 1 year ago
  João Dinis Ferreira 8f824ffe8e quantize : fix typo in usage help of `quantize.cpp` (#9145) 1 year ago
  Xuan Son Nguyen 3ba780e2a8 lora : fix llama conversion script with ROPE_FREQS (#9117) 1 year ago
  piDack a07c32ea54 llama : use F32 precision in GLM4 attention and no FA (#9130) 1 year ago