Commit History

Author SHA1 Message Date
  R0CKSTAR bb115d2bf7 musa: override warp_size of musa device to 32 (#12445) 10 months ago
  Xuan-Son Nguyen 29fff308c7 llama : support converting Mistral Small text-only (#12450) 10 months ago
  Georgi Gerganov c6af2161b2 speculative : fix seg fault in certain cases (#12454) 10 months ago
  Xuan-Son Nguyen 99aa304fb9 llama : add support for EXAONE tied word embeddings (#12451) 10 months ago
  Georgi Gerganov 8551c44d84 context : always use non-causal attention for encoder graphs (#12447) 10 months ago
  Łukasz Ślusarczyk 35cae5ba05 SYCL: using graphs is configurable by environment variable and compile option (#12371) 10 months ago
  Georgi Gerganov 810e0af3f5 server : fix warmup draft cache type (#12446) 10 months ago
  Prajwal B Mehendarkar eba92d64c3 cmake : fix PowerPC build (#12241) 10 months ago
  fj-y-saito d9a14523bb ggml : add SVE support for q6_K_q8_K (#12361) 10 months ago
  0cc4m fd123cfead Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434) 10 months ago
  Łukasz Ślusarczyk a53f7f7b88 fixed compilation warnings in ggml-sycl (#12424) 10 months ago
  Molly Sophia 7dfad387e3 llama: Add support for RWKV v7 architecture (#12412) 10 months ago
  Sigbjørn Skjæret 60c902926c docs : bring llama-cli conversation/template docs up-to-date (#12426) 10 months ago
  Gaurav Garg b1b132efcb cuda : enable CUDA Graph on CUDA Toolkit < 12.x (#12394) 10 months ago
  Guus Waals 01e8f2138b ggml-vulkan: remove unused find_program(glslc) (#12416) 10 months ago
  Jeff Bolz 484a8ab513 vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader (#12312) 10 months ago
  Daniele cf2270e4d3 vulkan: subgroup size tuning (#12087) 10 months ago
  Jeff Bolz f07690c930 vulkan: use fp32 in coopmat2 q4_k dequant function (#12309) 10 months ago
  Jeff Bolz 891c63956d vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (#12273) 10 months ago
  Jeff Bolz 2f21123c1d vulkan: Adjust coopmat2 tile sizes and selection heuristic (#12258) 10 months ago
  Christian Kastner 374101fd74 cmake : enable building llama.cpp using system libggml (#12321) 10 months ago
  Akarshan Biswas b3c9a65673 SYCL: set extras only on GGML_TYPE_Q4_0 (#12366) 10 months ago
  Sigbjørn Skjæret 8ba95dca20 llama : fix OLMo-2-0325-32B-Instruct K-norm size (#12400) 10 months ago
  Georgi Gerganov dc079cfdff context : fix init of n_outputs (#12397) 10 months ago
  Daniel Bevenius 7b61bcc87c ci : add --symlinks to xcframework zip command (#12409) 10 months ago
  marcoStocchi f4c3dd5daa llama-tts : add '-o' option (#12398) 10 months ago
  aubreyli 3d35d87b41 SYCL: Delete redundant plus sign and space (#12391) 10 months ago
  fairydreaming b19bd064c0 SYCL : support non-contiguous tensors in binary ops (add, sub, etc) (#12399) 10 months ago
  Chenguang Li 92a391327e [CANN]MUL_MAT optimization (#12382) 10 months ago
  Eric Curtin 9f2250ba72 Add CLI arg to llama-run to adjust the number of threads used (#12370) 10 months ago