compilade | 0a2f5496be | imatrix : fix 3d activation handling for hybrid and recurrent models (#14994) | 5 months ago
compilade | 11a3811164 | memory : handle kv_unified for hybrid models (#15050) | 5 months ago
Csaba Kecskemeti | 97366dc6ab | vocab : JetBrains Mellum pre-tokenizer (#15045) | 5 months ago
Gabriel Larson | 83bc2f288c | model : add text-only support for Kimi-VL (and find special tokens in text_config) (#15051) | 5 months ago
Jeff Bolz | 6c7a441161 | vulkan: Use coopmat2 for conv2d (#14982) | 5 months ago
lhez | 5c0eb5ef54 | opencl: fix adreno compiler detection logic (#15029) | 5 months ago
Johannes Gäßler | 03d4698218 | CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (#15035) | 5 months ago
leejet | 3303c19b16 | cuda: make im2col a little faster (#15025) | 5 months ago
Daniel Bevenius | 4fdea540bd | kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] (#15040) | 5 months ago
Georgi Gerganov | a4569c41fd | llama : enable LLAMA_SET_ROWS=1 by default (#14959) | 5 months ago
Georgi Gerganov | 15e92fd337 | cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (#15038) | 5 months ago
Sigbjørn Skjæret | 2bf3fbf0b5 | ci : check that pre-tokenizer hashes are up-to-date (#15032) | 5 months ago
Douglas Hanley | 711d5e6fe6 | convert : fix Qwen3-Embedding pre-tokenizer hash (#15030) | 5 months ago
Jhen-Jie Hong | f738989dcb | chat : fix multiple tool_calls on hermes-2-pro (#14962) | 5 months ago
Jeff Bolz | 4cb208c93c | vulkan: coopmat2 mul_mat optimizations (#14934) | 5 months ago
R0CKSTAR | 3025b621d1 | llama-bench: rename DB table name from test to llama_bench (#15003) | 5 months ago
Jeff Bolz | ec0b18802c | vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (#15015) | 5 months ago
Douglas Hanley | 339bd0268c | model : support Qwen3-Embedding (#15023) | 5 months ago
Johannes Gäßler | f906275537 | server: enable token array inputs for OAI API (#15001) | 5 months ago
Jeff Bolz | a9f7541ec2 | vulkan: optimizations for direct convolution (#14933) | 5 months ago
Johannes Gäßler | 9c35706b98 | CUDA: fix MMQ nwarps for AMD with warp_size==32 (#15014) | 5 months ago
l-austenfeld | c76b420e4c | vendor : update vendored copy of google/minja (#15011) | 5 months ago
stevenkuang | 0f5ccd6fd1 | model : add hunyuan dense (#14878) | 5 months ago
lhez | 1c872f71fb | opencl: add f16 for `add`, `sub`, `mul`, `div` (#14984) | 5 months ago
Srihari-mcw | baad94885d | ggml : Q2k interleaving implementation - x86/x64 SIMD (#14373) | 5 months ago
Georgi Gerganov | ba42794c9e | graph : fix equal_seq() check (#14986) | 5 months ago
diannao | 2860d479b4 | docker : add cann build pipline (#14591) | 5 months ago
R0CKSTAR | 484b2091ce | compare-commits.sh: support both llama-bench and test-backend-ops (#14392) | 5 months ago
Ed Addario | daf2dd7880 | quantize : skip tensor override when in fallback mode (#14995) | 5 months ago
Diego Devesa | a06ed5feae | llama : add simple option to enable CPU for MoE weights (--cpu-moe) (#14992) | 5 months ago