Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago
xaedes | e32089b2c2 | train : improved training-from-scratch example (#1652) | 2 years ago
Kerfuffle | 4f0154b0ba | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago
Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago
Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago
Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago
Kerfuffle | 1b78ed2081 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago
Juuso Alasuutari | 29cf5596fe | llama : define magic numbers as integer constants (#1518) (#1520) | 2 years ago
Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago
Georgi Gerganov | 8a203f9fa1 | llama : fix compile warnings in llama_set_state_data() | 2 years ago
Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago
Stephan Walter | dc271c52ed | Remove unused n_parts parameter (#1509) | 2 years ago
Johannes Gäßler | 905d87b70a | ggml : GPU-accelerated token generation (#1412) | 2 years ago
Georgi Gerganov | 738ace394a | llama : free ggml context in set / copy state data (close #1425) | 2 years ago
Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago
Jed Fox | 3924088512 | Remove default arguments from sampling functions (#1343) | 2 years ago
Evan Jones | e216aa0463 | llama : only copy used KV cache in get / set state (#1272) | 2 years ago
Georgi Gerganov | 0e6cbff1b7 | llama : fix compile warnings | 2 years ago
Robert Brisita | 2bb992f034 | llama : allow 0 as a seed number. (#1275) | 2 years ago
Georgi Gerganov | 70269cae37 | llama : fix session load / save (#1263) | 2 years ago
Alex Klinkhamer | 90b19bd6ee | llama : let context be const when accessing const data (#1261) | 2 years ago
Ivan Stepanov | dd7eff57d8 | llama : new sampling algorithms (#1126) | 2 years ago
Stephan Walter | 36d19a603b | Remove Q4_3 which is no better than Q5 (#1218) | 2 years ago
Evan Jones | 1481a9cf25 | llama : add session file format and saved sessions in main (#1169) | 2 years ago
Georgi Gerganov | 574406dc7e | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago
Ásgeir Bjarni Ingvarsson | 87a6f846d3 | Allow setting the rng seed after initialization. (#1184) | 2 years ago
Georgi Gerganov | 7a32fcb3b2 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago
Georgi Gerganov | c4fe84fb0d | llama : refactor get / set state + remove redundant kv cache API (#1143) | 2 years ago
xaedes | b6e7f9b09e | llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105) | 2 years ago
Kawrakow | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago