Georgi Gerganov f11cfdfd7f ci : use -no-cnv in gguf-split tests (#11254) 1 year ago
batched afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
batched-bench afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
batched.swift afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
convert-llama2c-to-ggml afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
cvector-generator afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
deprecation-warning f112d198cd Update deprecation-warning.cpp (#10619) 1 year ago
embedding afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
eval-callback afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
export-lora afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
gbnf-validator 5cab3e4aaa llama : minor grammar refactor (#10897) 1 year ago
gen-docs 7cc2d2c889 ggml : move AMX to the CPU backend (#10570) 1 year ago
gguf 53ff6b9b9f GGUF: C++ refactor, backend support, misc fixes (#11030) 1 year ago
gguf-hash 53ff6b9b9f GGUF: C++ refactor, backend support, misc fixes (#11030) 1 year ago
gguf-split f11cfdfd7f ci : use -no-cnv in gguf-split tests (#11254) 1 year ago
gritlm afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
imatrix afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
infill afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
jeopardy 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
llama-bench afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
llama.android afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
llama.swiftui afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
llava afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
lookahead afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
lookup afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
main 84a44815f7 cli : auto activate conversation mode if chat template is available (#11214) 1 year ago
main-cmake-pkg 7cc2d2c889 ggml : move AMX to the CPU backend (#10570) 1 year ago
parallel afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
passkey afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
perplexity afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
quantize f11cfdfd7f ci : use -no-cnv in gguf-split tests (#11254) 1 year ago
quantize-stats afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
retrieval afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
rpc 86bf31cfe6 rpc-server : add support for the SYCL backend (#10934) 1 year ago
run 924518e2e5 Reset color before we exit (#11205) 1 year ago
save-load-state afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
server c5bf0d1bd7 server : Improve code snippets direction between RTL text (#11221) 1 year ago
simple afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
simple-chat afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
speculative afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
speculative-simple afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
sycl faf67b3de4 [SYCL] set context default value to avoid memory issue, update guide (#9476) 1 year ago
tokenize afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
tts 0ccd7f3eb2 examples : add embd_to_audio to tts-outetts.py [no ci] (#11235) 1 year ago
CMakeLists.txt 0bf2d10c55 tts : add OuteTTS support (#10784) 1 year ago
Miku.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
chat-13B.bat d9ad104440 Create chat-13B.bat (#592) 2 years ago
chat-13B.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
chat-persistent.sh 8fc393f246 scripts : fix pattern and get n_tokens in one go (#10221) 1 year ago
chat-vicuna.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
chat.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
convert_legacy_llama.py a0ec17b32e metadata: Detailed Dataset Authorship Metadata (#8875) 1 year ago
json_schema_pydantic_example.py 3fd62a6b1c py : type-check all Python scripts with Pyright (#8341) 1 year ago
json_schema_to_grammar.py 66c2c93082 grammar : fix JSON Schema for string regex with top-level alt. (#9903) 1 year ago
llama.vim 2d3aba9ee8 llama.vim : bump generation time limit to 3s [no ci] 1 year ago
llm.vim ad9ddcff6e llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2 years ago
pydantic_models_to_grammar.py 090fca7a07 pydantic : replace uses of __annotations__ with get_type_hints (#8474) 1 year ago
pydantic_models_to_grammar_examples.py 22f281aa16 examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) 1 year ago
reason-act.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
regex_to_grammar.py e235b267a2 py : switch to snake_case (#8305) 1 year ago
server-llama2-13B.sh 1c641e6aac `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 1 year ago
server_embd.py 3fd62a6b1c py : type-check all Python scripts with Pyright (#8341) 1 year ago
ts-type-to-grammar.sh ab9a3240a9 JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) 1 year ago