Latest commit: 10f19c1121 — llama : have n_batch default to 512 (#1091), by eiery, 2 years ago
| Name | Commit | Last commit message | Updated |
| --- | --- | --- | --- |
| benchmark | c12b14b77f | benchmark : fix result validation in benchmark-q4_0-matmult (#987) | 2 years ago |
| embedding | 489537e6cf | examples: add missing <ctime> include for time() (#1011) | 2 years ago |
| main | 9411288271 | main : evaluate tokens in batches after swapping context (#1014) | 2 years ago |
| perplexity | 3d59769c3b | Show perplexity ETA in hours and minutes (#1096) | 2 years ago |
| quantize | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago |
| quantize-stats | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago |
| CMakeLists.txt | 62cfc54f77 | Add quantize-stats command for testing quantization (#728) | 2 years ago |
| Miku.sh | 8b679987cd | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 2 years ago |
| alpaca.sh | e9a9cb0c54 | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107) | 2 years ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 55ad42af84 | Move chat scripts into "./examples" | 2 years ago |
| chat.sh | 79b2b266db | If n_predict == -1, generate forever | 2 years ago |
| common.cpp | 315a95a4d3 | Add LoRA support (#820) | 2 years ago |
| common.h | 10f19c1121 | llama : have n_batch default to 512 (#1091) | 2 years ago |
| gpt4all.sh | 107980d970 | examples : add -n to alpaca and gpt4all scripts (#706) | 2 years ago |
| reason-act.sh | a6956b25a1 | add example of re-act pattern (#583) | 2 years ago |