| Name | Last commit | Commit message | Last updated |
| --- | --- | --- | --- |
| batched | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| batched-bench | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| batched.swift | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| convert-llama2c-to-ggml | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| cvector-generator | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| deprecation-warning | f112d198cd | Update deprecation-warning.cpp (#10619) | 1 year ago |
| embedding | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| eval-callback | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| export-lora | e28245f35f | export-lora : fix tok_embd tensor (#11330) | 1 year ago |
| gbnf-validator | 5cab3e4aaa | llama : minor grammar refactor (#10897) | 1 year ago |
| gen-docs | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gguf-hash | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gguf-split | f11cfdfd7f | ci : use -no-cnv in gguf-split tests (#11254) | 1 year ago |
| gritlm | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| imatrix | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| infill | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-bench | 667d72846c | rpc : early register backend devices (#11262) | 1 year ago |
| llama.android | 3edfa7d375 | llama.android: add field formatChat to control whether to parse special tokens when send message (#11270) | 1 year ago |
| llama.swiftui | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| llava | 3e3357fd77 | llava : support Minicpm-omni (#11289) | 1 year ago |
| lookahead | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| lookup | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| main | 6152129d05 | main : update README documentation for batch size (#11353) | 11 months ago |
| parallel | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| passkey | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| perplexity | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| quantize | f11cfdfd7f | ci : use -no-cnv in gguf-split tests (#11254) | 1 year ago |
| quantize-stats | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| retrieval | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| rpc | 86bf31cfe6 | rpc-server : add support for the SYCL backend (#10934) | 1 year ago |
| run | 7fee2889e6 | Add github protocol pulling and http:// (#11465) | 11 months ago |
| save-load-state | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| server | 49b0e3cec4 | server : fix cleaning up stream task (#11418) | 11 months ago |
| simple | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| simple-chat | 6171c9d258 | Add Jinja template support (#11016) | 1 year ago |
| simple-cmake-pkg | 19f65187cb | cmake: add ggml find package (#11369) | 11 months ago |
| speculative | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| speculative-simple | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| sycl | faf67b3de4 | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 1 year ago |
| tokenize | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| tts | 6390a998bf | tts : add guide tokens support (#11186) | 1 year ago |
| CMakeLists.txt | 0bf2d10c55 | tts : add OuteTTS support (#10784) | 1 year ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 8fc393f246 | scripts : fix pattern and get n_tokens in one go (#10221) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert_legacy_llama.py | a0ec17b32e | metadata: Detailed Dataset Authorship Metadata (#8875) | 1 year ago |
| json_schema_pydantic_example.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| json_schema_to_grammar.py | 66c2c93082 | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 1 year ago |
| llama.vim | 2d3aba9ee8 | llama.vim : bump generation time limit to 3s [no ci] | 1 year ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to `<F2>` (#2879) | 2 years ago |
| pydantic_models_to_grammar.py | 090fca7a07 | pydantic : replace uses of `__annotations__` with get_type_hints (#8474) | 1 year ago |
| pydantic_models_to_grammar_examples.py | 22f281aa16 | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 1 year ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex_to_grammar.py | e235b267a2 | py : switch to snake_case (#8305) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server_embd.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |