Latest commit: e6bf007744 by Daniel Bevenius · llama : return nullptr from llama_grammar_init (#8093) · 1 year ago
| Name | Commit | Last commit message | Last commit date |
|------|--------|---------------------|------------------|
| baby-llama | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched-bench | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| batched.swift | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| benchmark | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert-llama2c-to-ggml | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| cvector-generator | 49c03c79cd | cvector: better prompt handling, add "mean vector" method (#8069) | 1 year ago |
| embedding | 646ef4a9cf | embedding : more cli arguments (#7458) | 1 year ago |
| eval-callback | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| export-lora | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| finetune | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gbnf-validator | e6bf007744 | llama : return nullptr from llama_grammar_init (#8093) | 1 year ago |
| gguf | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gguf-split | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| gritlm | 80ea089d77 | llama : allow pooled embeddings on any model (#7477) | 1 year ago |
| imatrix | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| infill | 91c188d6c2 | Only use FIM middle token if it exists (#7648) | 1 year ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama-bench | e65bbf606c | llama-bench : fix RPC indication (#7936) | 1 year ago |
| llama.android | 9791f40258 | android : module (#7502) | 1 year ago |
| llama.swiftui | 0e64591e82 | swiftui : enable stream updating (#7754) | 1 year ago |
| llava | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| lookahead | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| lookup | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| main | 48e6b92cc3 | Add chat template support for llama-cli (#8068) | 1 year ago |
| main-cmake-pkg | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| parallel | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| passkey | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| perplexity | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| quantize | 5b48cd53a8 | Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) | 1 year ago |
| quantize-stats | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| retrieval | 80ea089d77 | llama : allow pooled embeddings on any model (#7477) | 1 year ago |
| rpc | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| save-load-state | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| simple | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| speculative | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| sycl | de391e4c80 | [SYCL] Fix windows build and inference (#8003) | 1 year ago |
| tokenize | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| train-text-from-scratch | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| CMakeLists.txt | 0c7b3595b9 | Add `cvector-generator` example (#7514) | 1 year ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| base-translate.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert-legacy-llama.py | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago |
| json-schema-pydantic-example.py | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| json_schema_to_grammar.py | 84631fe150 | `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) | 1 year ago |
| llama.vim | 125d03a503 | llama.vim : added api key support (#5090) | 2 years ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2 years ago |
| pydantic-models-to-grammar-examples.py | d292f4f204 | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2 years ago |
| pydantic_models_to_grammar.py | 55b2d0849d | grammars: x{min,max} repetition operator (#6640) | 1 year ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex-to-grammar.py | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |
| server-embd.py | 2002bc96bf | server : refactor (#5882) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |