Latest commit: Daniel Bevenius · e6bf007744 · llama : return nullptr from llama_grammar_init (#8093) · 1 year ago
| Name | Commit | Message | Last updated |
| --- | --- | --- | --- |
| baby-llama | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched-bench | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched.swift | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| benchmark | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert-llama2c-to-ggml | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| cvector-generator | 49c03c79cd | cvector: better prompt handling, add "mean vector" method (#8069) | 1 year ago |
| embedding | 646ef4a9cf | embedding : more cli arguments (#7458) | 1 year ago |
| eval-callback | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| export-lora | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| finetune | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gbnf-validator | e6bf007744 | llama : return nullptr from llama_grammar_init (#8093) | 1 year ago |
| gguf | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gguf-split | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gritlm | 80ea089d77 | llama : allow pooled embeddings on any model (#7477) | 1 year ago |
| imatrix | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| infill | 91c188d6c2 | Only use FIM middle token if it exists (#7648) | 1 year ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-bench | e65bbf606c | llama-bench : fix RPC indication (#7936) | 1 year ago |
| llama.android | 9791f40258 | android : module (#7502) | 1 year ago |
| llama.swiftui | 0e64591e82 | swiftui : enable stream updating (#7754) | 1 year ago |
| llava | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| lookahead | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| lookup | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| main | 48e6b92cc3 | Add chat template support for llama-cli (#8068) | 1 year ago |
| main-cmake-pkg | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| parallel | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| passkey | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| perplexity | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| quantize | 5b48cd53a8 | Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) | 1 year ago |
| quantize-stats | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| retrieval | 80ea089d77 | llama : allow pooled embeddings on any model (#7477) | 1 year ago |
| rpc | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| save-load-state | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| simple | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| speculative | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| sycl | de391e4c80 | [SYCL] Fix windows build and inference (#8003) | 1 year ago |
| tokenize | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| train-text-from-scratch | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| CMakeLists.txt | 0c7b3595b9 | Add `cvector-generator` example (#7514) | 1 year ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| base-translate.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert-legacy-llama.py | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago |
| json-schema-pydantic-example.py | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| json_schema_to_grammar.py | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| llama.vim | 125d03a503 | llama.vim : added api key support (#5090) | 2 years ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to \<F2\> (#2879) | 2 years ago |
| pydantic-models-to-grammar-examples.py | d292f4f204 | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2 years ago |
| pydantic_models_to_grammar.py | 55b2d0849d | grammars: x{min,max} repetition operator (#6640) | 1 year ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex-to-grammar.py | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |
| server-embd.py | 2002bc96bf | server : refactor (#5882) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |