Latest commit: `10151bee2e` by staviq — server : support for saving templates in browser LocalStorage (#2486), 2 years ago

| Name | Commit | Message | Age |
| --- | --- | --- | --- |
| baby-llama | `eb542d3932` | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2 years ago |
| benchmark | `b1f4290953` | cmake : install targets (#2256) | 2 years ago |
| convert-llama2c-to-ggml | `b19edd54d5` | Adding support for llama2.c models (#2559) | 2 years ago |
| embd-input | `ff966e7ca6` | build : fix several cast and printf warnings (#2499) | 2 years ago |
| embedding | `b1f4290953` | cmake : install targets (#2256) | 2 years ago |
| jeopardy | `5ddf7ea1fb` | hooks : setting up flake8 and pre-commit hooks (#1681) | 2 years ago |
| main | `e59fcb2bc1` | Add --n-predict -2 for stopping generation on full context (#2565) | 2 years ago |
| metal | `b1f4290953` | cmake : install targets (#2256) | 2 years ago |
| perplexity | `ff966e7ca6` | build : fix several cast and printf warnings (#2499) | 2 years ago |
| quantize | `b1f4290953` | cmake : install targets (#2256) | 2 years ago |
| quantize-stats | `b1f4290953` | cmake : install targets (#2256) | 2 years ago |
| save-load-state | `65cdf34bdc` | llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433) | 2 years ago |
| server | `10151bee2e` | server : support for saving templates in browser LocalStorage (#2486) | 2 years ago |
| simple | `ff966e7ca6` | build : fix several cast and printf warnings (#2499) | 2 years ago |
| train-text-from-scratch | `eb542d3932` | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2 years ago |
| CMakeLists.txt | `b19edd54d5` | Adding support for llama2.c models (#2559) | 2 years ago |
| Miku.sh | `019fe257bb` | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2 years ago |
| alpaca.sh | `a17a2683d8` | alpaca.sh : update model file name (#2074) | 2 years ago |
| chat-13B.bat | `d9ad104440` | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | `6daa09d879` | examples : read chat prompts from a template file (#1196) | 2 years ago |
| chat-persistent.sh | `1359b6aba5` | chat-persistent.sh : use bracket expressions in grep (#1564) | 2 years ago |
| chat-vicuna.sh | `c36e81da62` | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| chat.sh | `79b2b266db` | If n_predict == -1, generate forever | 2 years ago |
| common.cpp | `8dae7ce684` | Add --cfg-negative-prompt-file option for examples (#2591) | 2 years ago |
| common.h | `3498588e0f` | Add --simple-io option for subprocesses and break out console.h and cpp (#1558) | 2 years ago |
| console.cpp | `9ca4abed89` | Handle `ENABLE_VIRTUAL_TERMINAL_PROCESSING` more gracefully on earlier versions of Windows. | 2 years ago |
| console.h | `3498588e0f` | Add --simple-io option for subprocesses and break out console.h and cpp (#1558) | 2 years ago |
| gpt4all.sh | `107980d970` | examples : add -n to alpaca and gpt4all scripts (#706) | 2 years ago |
| grammar-parser.cpp | `ff966e7ca6` | build : fix several cast and printf warnings (#2499) | 2 years ago |
| grammar-parser.h | `84e09a7d8b` | llama : add grammar-based sampling (#1773) | 2 years ago |
| json-schema-to-grammar.py | `8183159cf3` | examples : generate JSON according to schema (#1887) | 2 years ago |
| llama.vim | `2d7baaf50f` | vim : streaming and more (#2495) | 2 years ago |
| llama2-13b.sh | `73643f5fb1` | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llama2.sh | `73643f5fb1` | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llm.vim | `7ed8d1fe7f` | llm.vim : multiline autocompletion, get rid of "^@" (#2543) | 2 years ago |
| make-ggml.py | `7d5f18468c` | examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311) | 2 years ago |
| reason-act.sh | `a6956b25a1` | add example of re-act pattern (#583) | 2 years ago |
| server-llama2-13B.sh | `d73b8d48b4` | examples : fix whitespace | 2 years ago |