|
| Name | Last commit | Commit message | Last updated |
| --- | --- | --- | --- |
| batched | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| batched-bench | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| batched.swift | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| convert-llama2c-to-ggml | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| cvector-generator | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| deprecation-warning | f112d198cd | Update deprecation-warning.cpp (#10619) | 1 year ago |
| embedding | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| eval-callback | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| export-lora | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| gen-docs | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gguf-hash | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gguf-split | 23106f94ea | gguf-split : --merge now respects --dry-run option (#12681) | 9 months ago |
| gritlm | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| imatrix | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| infill | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-bench | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| llama.android | bd3f59f812 | cmake : enable curl by default (#12761) | 9 months ago |
| llama.swiftui | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| llava | 7c727fbe39 | arg : add --no-mmproj-offload (#13093) | 8 months ago |
| lookahead | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| lookup | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| main | 6408210082 | main : Fix Ctrl+D/newline handling (#12951) | 9 months ago |
| parallel | a10b36c91a | llama : refactor kv cache guard (#12695) | 9 months ago |
| passkey | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| perplexity | 4ccea213bc | hellaswag: display estimated score confidence interval (#12797) | 9 months ago |
| quantize | 71e90e8813 | quantize: Handle user-defined quantization levels for additional tensors (#12511) | 9 months ago |
| retrieval | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| rpc | 2cca6c01e4 | rpc : add command line option for number of threads for the CPU backend (#13060) | 8 months ago |
| run | b2034c2b55 | contrib: support modelscope community (#12664) | 9 months ago |
| save-load-state | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| server | 35370ba945 | server : use std::move whenever possible (#12936) | 9 months ago |
| simple | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| simple-chat | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| simple-cmake-pkg | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| speculative | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| speculative-simple | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| sycl | 81c7e64fc2 | dsiable curl lib check, this action is missed by commit bd3f59f81289b920bcc597a208c14f55e39ed37e (#12761) (#12937) | 9 months ago |
| tokenize | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| tts | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| CMakeLists.txt | 13b4548877 | cmake : do not include ./src as public for libllama (#13062) | 8 months ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 8fc393f246 | scripts : fix pattern and get n_tokens in one go (#10221) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert_legacy_llama.py | a0ec17b32e | metadata: Detailed Dataset Authorship Metadata (#8875) | 1 year ago |
| json_schema_pydantic_example.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| json_schema_to_grammar.py | 669912d9a5 | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 10 months ago |
| llama.vim | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2 years ago |
| pydantic_models_to_grammar.py | 090fca7a07 | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 1 year ago |
| pydantic_models_to_grammar_examples.py | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex_to_grammar.py | e235b267a2 | py : switch to snake_case (#8305) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server_embd.py | a19b5cef16 | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 9 months ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |