Latest commit: 2f0ee84b9b by Pierrick Hymbert, server: bench: minor fixes (#10765), 1 year ago
| Name | Last commit | Commit message | Last updated |
|---|---|---|---|
| batched | 644fd71b44 | sampling : refactor + optimize penalties sampler (#10803) | 1 year ago |
| batched-bench | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| batched.swift | 0abc6a2c25 | llama : llama_perf + option to disable timings during decode (#9355) | 1 year ago |
| convert-llama2c-to-ggml | 8648c52101 | make : deprecate (#10514) | 1 year ago |
| cvector-generator | d283d02bf2 | examples, ggml : fix GCC compiler warnings (#10983) | 1 year ago |
| deprecation-warning | f112d198cd | Update deprecation-warning.cpp (#10619) | 1 year ago |
| embedding | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| eval-callback | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| export-lora | d283d02bf2 | examples, ggml : fix GCC compiler warnings (#10983) | 1 year ago |
| gbnf-validator | 5cab3e4aaa | llama : minor grammar refactor (#10897) | 1 year ago |
| gen-docs | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf-hash | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf-split | cb13ef85a4 | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 1 year ago |
| gritlm | 152610eda9 | server : output embeddings for all tokens when pooling = none (#10861) | 1 year ago |
| imatrix | 8648c52101 | make : deprecate (#10514) | 1 year ago |
| infill | 82bca2257b | readme : add option, update default value, fix formatting (#10271) | 1 year ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-bench | cb13ef85a4 | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 1 year ago |
| llama.android | c250ecb315 | android : fix llama_batch free (#11014) | 1 year ago |
| llama.swiftui | 43ed389a3f | llama : use cmake for swift build (#10525) | 1 year ago |
| llava | d408bb9268 | clip : disable GPU support (#10896) | 1 year ago |
| lookahead | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| lookup | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| main | 644fd71b44 | sampling : refactor + optimize penalties sampler (#10803) | 1 year ago |
| main-cmake-pkg | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| parallel | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| passkey | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| perplexity | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| quantize | 1a31d0dc00 | Update README.md (#10772) | 1 year ago |
| quantize-stats | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| retrieval | 152610eda9 | server : output embeddings for all tokens when pooling = none (#10861) | 1 year ago |
| rpc | 86bf31cfe6 | rpc-server : add support for the SYCL backend (#10934) | 1 year ago |
| run | 6e1531aca5 | common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013) | 1 year ago |
| save-load-state | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| server | 2f0ee84b9b | server: bench: minor fixes (#10765) | 1 year ago |
| simple | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| simple-chat | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| speculative | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| speculative-simple | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| sycl | faf67b3de4 | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 1 year ago |
| tokenize | cb13ef85a4 | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 1 year ago |
| tts | 0bf2d10c55 | tts : add OuteTTS support (#10784) | 1 year ago |
| CMakeLists.txt | 0bf2d10c55 | tts : add OuteTTS support (#10784) | 1 year ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 8fc393f246 | scripts : fix pattern and get n_tokens in one go (#10221) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert_legacy_llama.py | a0ec17b32e | metadata: Detailed Dataset Authorship Metadata (#8875) | 1 year ago |
| json_schema_pydantic_example.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| json_schema_to_grammar.py | 66c2c93082 | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 1 year ago |
| llama.vim | 2d3aba9ee8 | llama.vim : bump generation time limit to 3s [no ci] | 1 year ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to `<F2>` (#2879) | 2 years ago |
| pydantic_models_to_grammar.py | 090fca7a07 | pydantic : replace uses of `__annotations__` with get_type_hints (#8474) | 1 year ago |
| pydantic_models_to_grammar_examples.py | 22f281aa16 | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 1 year ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex_to_grammar.py | e235b267a2 | py : switch to snake_case (#8305) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server_embd.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |