| Author | Commit | Message | Date |
|---|---|---|---|
| Georgi Gerganov | 513f861953 | ggml : fix rope args order + assert (#2054) | 2 years ago |
| Xiao-Yong Jin | 6e7cca4047 | llama : add custom RoPE (#2054) | 2 years ago |
| Georgi Gerganov | 4523d10d0c | ggml : add ggml_pool_1d and ggml_pool_2d | 2 years ago |
| Georgi Gerganov | 20d7740a9b | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183) | 2 years ago |
| Spencer Sutton | 5bf2a27718 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178) | 2 years ago |
| Qingyou Meng | 1d656d6360 | ggml : change ggml_graph_compute() API to not require context (#1999) | 2 years ago |
| Georgi Gerganov | dfd9fce6d6 | ggml : fix restrict usage | 2 years ago |
| Stephan Walter | 1b107b8550 | ggml : generalize `quantize_fns` for simpler FP16 handling (#1237) | 2 years ago |
| Georgi Gerganov | ed9a54e512 | ggml : sync latest (new ops, macros, refactoring) (#2106) | 2 years ago |
| Qingyou Meng | b1ca8f36a9 | ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995) | 2 years ago |
| Georgi Gerganov | d9779021bd | ggml : add support for ChatGLM RoPE | 2 years ago |
| David Yang | eaa6ca5a61 | ggml : increase max tensor name + clean up compiler warnings in train-text (#1988) | 2 years ago |
| zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago |
| Georgi Gerganov | bd34cdde38 | ggml : sync latest ggml (custom operators) | 2 years ago |
| slaren | f2c754e1c3 | ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) | 2 years ago |
| Georgi Gerganov | b97ca431db | ggml : sync latest ggml repo (#1924) | 2 years ago |
| Georgi Gerganov | ce2c7d72e2 | metal : handle buffers larger than device's maxBufferLength (#1826) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| xaedes | e32089b2c2 | train : improved training-from-scratch example (#1652) | 2 years ago |
| Johannes Gäßler | 17366df842 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | 2 years ago |
| Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago |
| Georgi Gerganov | ecb217db4f | llama : Metal inference (#1642) | 2 years ago |
| Georgi Gerganov | 7552ac5863 | ggml : sync cgraph import / export API | 2 years ago |
| Georgi Gerganov | 93618031c7 | ggml : add ggml_tensor_overhead() | 2 years ago |
| Georgi Gerganov | bdbda1b17a | ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name()) | 2 years ago |
| 0cc4m | 2e6cd4b025 | OpenCL Token Generation Acceleration (#1459) | 2 years ago |
| Georgi Gerganov | 3de84b2606 | ggml : add ggml_clamp() (#1539) | 2 years ago |
| Georgi Gerganov | 2d5db48371 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | 2 years ago |
| Georgi Gerganov | 13c351ad72 | ggml : various fixes (#1450) | 2 years ago |
| Georgi Gerganov | 601a033475 | ggml : add GGML_QNT_VERSION to track quantization format changes | 2 years ago |