| Commit | Author | Message | Date |
|---|---|---|---|
| `ab0e26bdfb` | Guillaume "Vermeille" Sanchez | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | 2 years ago |
| `ae178ab46b` | Georgi Gerganov | llama : make tensor_split ptr instead of array (#2272) | 2 years ago |
| `294f424554` | Rinne | llama : extend API to get max devices at runtime (#2253) | 2 years ago |
| `6e7cca4047` | Xiao-Yong Jin | llama : add custom RoPE (#2054) | 2 years ago |
| `7513b7b0a1` | Bach Le | llama : add functions that work directly on model (#2197) | 2 years ago |
| `c9c74b4e3f` | Bach Le | llama : add classifier-free guidance (#2135) | 2 years ago |
| `5656d10599` | Evan Miller | mpi : add support for distributed inference via MPI (#2099) | 2 years ago |
| `31cfbb1013` | Tobias Lütke | Expose generation timings from server & update completions.js (#2116) | 2 years ago |
| `b8c8dda75f` | Howard Su | Use unsigned for random seed (#2006) | 2 years ago |
| `cfa0750bc9` | ningshanwutuobang | llama : support input embeddings directly (#1910) | 2 years ago |
| `b853d45601` | zrm | ggml : add NUMA support (#1556) | 2 years ago |
| `527b6fba1d` | Didzis Gosko | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| `aacdbd4056` | Ettore Di Giacinto | llama : fix params struct slignment (#1936) | 2 years ago |
| `c36e81da62` | yangli2 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| `254a7a7a5f` | Johannes Gäßler | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| `e32089b2c2` | xaedes | train : improved training-from-scratch example (#1652) | 2 years ago |
| `4f0154b0ba` | Kerfuffle | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago |
| `17366df842` | Johannes Gäßler | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| `99009e72f8` | Kawrakow | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago |
| `ecb217db4f` | Georgi Gerganov | llama : Metal inference (#1642) | 2 years ago |
| `1b78ed2081` | Kerfuffle | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | 2 years ago |
| `29cf5596fe` | Juuso Alasuutari | llama : define magic numbers as integer constants (#1518) (#1520) | 2 years ago |
| `ec2e10c444` | Georgi Gerganov | llama : add llama_init_backend() API (close #1527) | 2 years ago |
| `8a203f9fa1` | Georgi Gerganov | llama : fix compile warnings in llama_set_state_data() | 2 years ago |
| `2d5db48371` | Georgi Gerganov | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago |
| `dc271c52ed` | Stephan Walter | Remove unused n_parts parameter (#1509) | 2 years ago |
| `905d87b70a` | Johannes Gäßler | ggml : GPU-accelerated token generation (#1412) | 2 years ago |
| `738ace394a` | Georgi Gerganov | llama : free ggml context in set / copy state data (close #1425) | 2 years ago |
| `b9fd7eee57` | Georgi Gerganov | ggml : remove bit shuffling (#1405) | 2 years ago |
| `3924088512` | Jed Fox | Remove default arguments from sampling functions (#1343) | 2 years ago |