437e05f714 | server : (UI) Support for RTL text as models input or output (#11208) | ebraminio | 1 year ago
ca001f6656 | contrib : add naming guidelines (cont) (#11177) | Georgi Gerganov | 1 year ago
00b4c3da62 | common : support tag-based --hf-repo like on ollama (#11195) | Xuan Son Nguyen | 1 year ago
7426a26b24 | contrib : add naming guidelines (#11177) | Georgi Gerganov | 1 year ago
8f70fc3d1b | llama : remove 'd' from bad special token log (#11212) | Daniel Bevenius | 1 year ago
1244cdcf14 | ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211) | Radoslav Gerganov | 1 year ago
924518e2e5 | Reset color before we exit (#11205) | Eric Curtin | 1 year ago
9a483999a6 | llama : fix chat template gguf key (#11201) | Xuan Son Nguyen | 1 year ago
08f10f69c3 | llama : remove notion of CLS token (#11064) | Georgi Gerganov | 1 year ago
afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | Georgi Gerganov | 1 year ago
c05e8c9934 | gguf-py: fixed local detection of gguf package (#11180) | Vinesh Janarthanan | 1 year ago
2739a71e4b | convert : sort print supported models [no ci] (#11179) | Daniel Bevenius | 1 year ago
ba8a1f9c5b | examples : add README.md to tts example [no ci] (#11155) | Daniel Bevenius | 1 year ago
ff3fcabc72 | convert : add --print-supported-models option (#11172) | Daniel Bevenius | 1 year ago
c3f9d25706 | Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161) | 0cc4m | 1 year ago
ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | Molly Sophia | 1 year ago
c6860cc734 | SYCL: Refactor ggml_sycl_compute_forward (#11121) | Akarshan Biswas | 1 year ago
1204f97270 | doc: add cuda guide for fedora (#11135) | Tei Home | 1 year ago
8eceb888d7 | server : add tooltips to settings and themes btn (#11154) | Daniel Bevenius | 1 year ago
f8feb4b01a | model: Add support for PhiMoE arch (#11003) | Pierrick Hymbert | 1 year ago
be0e950c91 | media : remove old img [no ci] | Georgi Gerganov | 1 year ago
d9feae1c06 | llama-chat : add phi 4 template (#11148) | Xuan Son Nguyen | 1 year ago
8d59d91171 | fix: add missing msg in static_assert (#11143) | hydai | 1 year ago
8a1d9c25fa | gguf-py : move scripts directory (#11116) | Vinesh Janarthanan | 1 year ago
1bf839b1e8 | Enhance user input handling for llama-run (#11138) | Eric Curtin | 1 year ago
f7cd13301c | ci : use actions from ggml-org (#11140) | Xuan Son Nguyen | 1 year ago
4d2b3d8804 | lora : improve compat with `mergekit-extract-lora` (#11131) | Xuan Son Nguyen | 1 year ago
c07d437bbd | llama : avoid hardcoded QK_K (#11061) | Georgi Gerganov | 1 year ago
99a3755a3c | sync : ggml | Georgi Gerganov | 1 year ago
c792dcf488 | ggml : allow loading backend with env variable (ggml/1059) | Radoslav Gerganov | 1 year ago