Christian Kastner | 21fcc21ad5 | cmake: Factor out CPU architecture detection (#13883) | 7 months ago
Vineel Abhinav | dd8ba93416 | ggml: aarch64: Implement SVE F32 kernels for Mamba Sequential Scan Algorithm (#13882) | 7 months ago
Georgi Gerganov | 66c92061f5 | tests : remove json.hpp from a test (#13880) | 7 months ago
Sigbjørn Skjæret | 5ca82fc1d7 | convert : workaround for AutoConfig dummy labels (#13881) | 7 months ago
Sigbjørn Skjæret | 6385b843a8 | llama : add RobertaForSequenceClassification reranker support (#13875) | 7 months ago
Vineel Abhinav | 1b8fb8152d | ggml: aarch64: Implement SVE F32 kernels for vector functions (#13843) | 7 months ago
Beinsezii | 53ae30640e | gguf-py : fix SafetensorRemote return on undefined size (< 0) (#13841) | 7 months ago
Xuan-Son Nguyen | 763d06edb7 | llama : fix KV shift for qwen2vl (#13870) | 7 months ago
Xuan-Son Nguyen | 10961339b2 | mtmd : move helpers to dedicated library (⚠️ breaking change) (#13866) | 7 months ago
bandoti | d98f2a35fc | ci: disable LLAMA_CURL for Linux cross-builds (#13871) | 7 months ago
Đinh Trọng Huy | e0e3aa231d | llama : add support for BertForSequenceClassification reranker (#13858) | 7 months ago
Đinh Trọng Huy | aa6dff05be | convert: small addition to support LlamaModel (#13838) | 7 months ago
Sky | c962ae3382 | server: fix remove 'image_url'/'input_audio' json-object effectlly for 'llama_params' in multimodal-model-mode (#13853) | 7 months ago
Xuan-Son Nguyen | a3938fb53d | convert : fix qwen omni conversion (#13859) | 7 months ago
Alex Fanthome | f7873fc698 | tests : change umlaut test (#11600) | 7 months ago
Johannes Gäßler | a68247439b | CUDA: fix FA tg at long context for CC >= 8.9 (#13852) | 7 months ago
Xuan-Son Nguyen | 26b79b6cb3 | convert : fix tensor naming conflict for llama 4 vision (#13836) | 7 months ago
leo-pony | 1e8659e65a | CANN: Add SOC TYPE printing in cmake configuration (#13837) | 7 months ago
lhez | a3c30846e4 | opencl: add new ops - `argsort`, `div`, `sub`, `addrows`, `sigmoid`, `group_norm` (#13787) | 7 months ago
lhez | 1701d4c54f | opencl: mark `mul_mat` `f32f32` as supporting non-contiguous tensors (#13790) | 7 months ago
Jeff Bolz | bef8176387 | vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817) | 7 months ago
Georgi Gerganov | 34b7c0439e | cmake : add llama-cparams.cpp to build (#13832) | 7 months ago
Akarshan Biswas | f3101a8cc6 | SYCL: add gelu_erf kernel (#13749) | 7 months ago
Georgi Gerganov | 1c49c70d07 | sync : ggml | 7 months ago
Xuan-Son Nguyen | a8ea03d8ad | ggml : add ggml_repeat_4d (#13824) | 7 months ago
xctan | 05f6ac6283 | ggml : riscv: add xtheadvector support (#13720) | 7 months ago
Xuan-Son Nguyen | bc583e3c63 | mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784) | 7 months ago
bandoti | 72b090da2c | docs: remove link for llama-cli function calling (#13810) | 7 months ago
Christian Kastner | 7fe03e7446 | ggml-cpu: x86 feature detection is specific to x86 (#13811) | 7 months ago
Diego Devesa | 952f3953c1 | ggml : allow CUDA graphs when using pipeline parallelism (#13814) | 7 months ago