8341a25957  main : log file (#2748)  (staviq, 2 years ago)
6b73ef1201  YAML result logging + preset script (#2657)  (Johannes Gäßler, 2 years ago)
edd4c14817  llama : more tokenizer fixes (#2810)  (Georgi Gerganov, 2 years ago)
72f895c923  main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (#1528)  (Dr. Tom Murphy VII Ph.D, 2 years ago)
2ba83c8685  Fix spm whitespaces (#2806)  (klosax, 2 years ago)
7694adda8d  Fix for main example getting stuck when -n -2 and --interactive (#2767)  (Kerfuffle, 2 years ago)
cf658adc83  llm : add Falcon support (#2717)  (Georgi Gerganov, 2 years ago)
5290c38e6e  main : insert bos if no tokens (#2727)  (klosax, 2 years ago)
6381d4e110  gguf : new file format with flexible meta data (beta) (#2398)  (Georgi Gerganov, 2 years ago)
e59fcb2bc1  Add --n-predict -2 for stopping generation on full context (#2565)  (Christian Demsar, 2 years ago)
3498588e0f  Add --simple-io option for subprocesses and break out console.h and cpp (#1558)  (DannyDaemonic, 2 years ago)
0c06204fb3  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  (Xiao-Yong Jin, 2 years ago)
84e09a7d8b  llama : add grammar-based sampling (#1773)  (Evan Jones, 2 years ago)
e76d630df1  llama : grouped-query attention + LLaMAv2 70B support (#2276)  (Georgi Gerganov, 2 years ago)
b47b8a9cfe  llama : optimize memory buffers (#2325)  (Georgi Gerganov, 2 years ago)
ab0e26bdfb  llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280)  (Guillaume "Vermeille" Sanchez, 2 years ago)
6e7cca4047  llama : add custom RoPE (#2054)  (Xiao-Yong Jin, 2 years ago)
c9c74b4e3f  llama : add classifier-free guidance (#2135)  (Bach Le, 2 years ago)
5656d10599  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller, 2 years ago)
36680f6e40  convert : update for baichuan (#2081)  (Judd, 2 years ago)
b8c8dda75f  Use unsigned for random seed (#2006)  (Howard Su, 2 years ago)
b853d45601  ggml : add NUMA support (#1556)  (zrm, 2 years ago)
527b6fba1d  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko, 2 years ago)
4f9c43e3bd  minor : warning fixes  (Georgi Gerganov, 2 years ago)
5b9ccaf104  Fixed possible macro redefinition (#1892)  (FrankHB, 2 years ago)
9cbf50c041  build : fix and ignore MSVC warnings (#1889)  (Borislav Stanimirov, 2 years ago)
2347e45e7b  llama : do a warm-up eval at start for better timings (#1824)  (Georgi Gerganov, 2 years ago)
fa84c4b3e8  Fix issue where interactive mode crashes when input exceeds ctx size (#1789)  (Kerfuffle, 2 years ago)
35a84916fb  main: add the possibility to open the prompt cache read-only (#1640)  (Willy Tarreau, 2 years ago)
ecb217db4f  llama : Metal inference (#1642)  (Georgi Gerganov, 2 years ago)