Latest commit: c63bb1d16a "CUDA: use mul_mat_q kernels by default" (#2683) by Johannes Gäßler, 2 years ago
| Name | Last commit | Commit message | Last updated |
|------|-------------|----------------|--------------|
| baby-llama | eb542d3932 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2 years ago |
| benchmark | b1f4290953 | cmake : install targets (#2256) | 2 years ago |
| convert-llama2c-to-ggml | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| embd-input | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| embedding | 519c981f8b | embedding : evaluate prompt in batches (#2713) | 2 years ago |
| gguf | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| gptneox-wip | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| jeopardy | 5ddf7ea1fb | hooks : setting up flake8 and pre-commit hooks (#1681) | 2 years ago |
| llama-bench | 8e4364f2af | llama-bench : minor fixes (#2695) | 2 years ago |
| main | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| metal | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| perplexity | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| quantize | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| quantize-stats | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| save-load-state | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| server | c63bb1d16a | CUDA: use mul_mat_q kernels by default (#2683) | 2 years ago |
| simple | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| train-text-from-scratch | ef3f333d37 | ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709) | 2 years ago |
| CMakeLists.txt | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| Miku.sh | 019fe257bb | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2 years ago |
| alpaca.sh | a17a2683d8 | alpaca.sh : update model file name (#2074) | 2 years ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 6daa09d879 | examples : read chat prompts from a template file (#1196) | 2 years ago |
| chat-persistent.sh | 1359b6aba5 | chat-persistent.sh : use bracket expressions in grep (#1564) | 2 years ago |
| chat-vicuna.sh | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| chat.sh | 79b2b266db | If n_predict == -1, generate forever | 2 years ago |
| gpt4all.sh | 107980d970 | examples : add -n to alpaca and gpt4all scripts (#706) | 2 years ago |
| json-schema-to-grammar.py | 8183159cf3 | examples : generate JSON according to schema (#1887) | 2 years ago |
| llama.vim | 2d7baaf50f | vim : streaming and more (#2495) | 2 years ago |
| llama2-13b.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llama2.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llm.vim | 7ed8d1fe7f | llm.vim : multiline autocompletion, get rid of "^@" (#2543) | 2 years ago |
| make-ggml.py | 7d5f18468c | examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311) | 2 years ago |
| reason-act.sh | a6956b25a1 | add example of re-act pattern (#583) | 2 years ago |
| server-llama2-13B.sh | d73b8d48b4 | examples : fix whitespace | 2 years ago |