| File | Last commit | Commit message | Last updated |
| --- | --- | --- | --- |
| .gitignore | 2c4f566c88 | tests : gitignore ggml-common.h | 1 year ago |
| CMakeLists.txt | bd3f59f812 | cmake : enable curl by default (#12761) | 9 months ago |
| get-model.cpp | 413e7b0559 | ci : add model tests + script wrapper (#4586) | 2 years ago |
| get-model.h | 413e7b0559 | ci : add model tests + script wrapper (#4586) | 2 years ago |
| run-json-schema-to-grammar.mjs | a71d81cf8c | server : revamp chat UI with vuejs and daisyui (#10175) | 1 year ago |
| test-arg-parser.cpp | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| test-autorelease.cpp | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| test-backend-ops.cpp | fe92821ea9 | ggml : add bilinear upscale support (ggml/1185) | 9 months ago |
| test-barrier.cpp | 9f40989351 | ggml : move CPU backend to a separate file (#10144) | 1 year ago |
| test-c.c | fbf1ddec69 | Nomic Vulkan backend (#4456) | 1 year ago |
| test-chat-template.cpp | 381603a775 | ci: detach common from the library (#12827) | 9 months ago |
| test-chat.cpp | 4e39a3c332 | `server`: extract `<think>` tags from qwq outputs (#12297) | 10 months ago |
| test-double-float.cpp | 370b1f7e7a | ggml : minor naming changes (#8433) | 1 year ago |
| test-gguf.cpp | fef0cbeadf | cleanup: fix compile warnings associated with gnu_printf (#11811) | 11 months ago |
| test-grammar-integration.cpp | ff227703d6 | sampling : support for llguidance grammars (#10224) | 11 months ago |
| test-grammar-llguidance.cpp | 2447ad8a98 | upgrade to llguidance 0.7.10 (#12576) | 9 months ago |
| test-grammar-parser.cpp | df270ef745 | llama : refactor sampling v2 (#9294) | 1 year ago |
| test-json-schema-to-grammar.cpp | 669912d9a5 | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 10 months ago |
| test-llama-grammar.cpp | 5cab3e4aaa | llama : minor grammar refactor (#10897) | 1 year ago |
| test-log.cpp | 7eee341bee | common : use common_ prefix for common library functions (#9805) | 1 year ago |
| test-lora-conversion-inference.sh | f11cfdfd7f | ci : use -no-cnv in gguf-split tests (#11254) | 1 year ago |
| test-model-load-cancel.cpp | 47182dd03f | llama : update llama_model API names (#11063) | 1 year ago |
| test-opt.cpp | 24203e9dd7 | ggml : inttypes.h -> cinttypes (#0) | 1 year ago |
| test-quantize-fns.cpp | e128a1bf5b | tests : fix test-quantize-fns to init the CPU backend (#12306) | 10 months ago |
| test-quantize-perf.cpp | 24203e9dd7 | ggml : inttypes.h -> cinttypes (#0) | 1 year ago |
| test-rope.cpp | ba1cb19cdd | llama : add Qwen2VL support + multimodal RoPE (#10361) | 1 year ago |
| test-sampling.cpp | 27e8a23300 | sampling: add Top-nσ sampler (#11223) | 11 months ago |
| test-tokenizer-0.cpp | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| test-tokenizer-0.py | 6fbd432211 | py : logging and flake8 suppression refactoring (#7081) | 1 year ago |
| test-tokenizer-0.sh | edc29433fa | tests : fix test-tokenizer-0.sh | 1 year ago |
| test-tokenizer-1-bpe.cpp | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| test-tokenizer-1-spm.cpp | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| test-tokenizer-random.py | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |