475df1d6cf  llama : allow for user specified embedding pooling type (#5849)  (Douglas Hanley, 1 year ago)
4a6e2d6142  llama : add abort_callback to interrupt computation (#5409)  (Michael Podvitskiy, 1 year ago)
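The abort_callback added in #5409 lets the caller interrupt a long-running computation between evaluation steps. A minimal sketch of that pattern (illustrative names only, not the actual llama.cpp C API):

```python
# Sketch of an abort-callback pattern: the compute loop polls a
# user-supplied callback between steps and stops early when it
# returns True. All names here are hypothetical.

def run_computation(n_steps, abort_callback=None):
    """Run n_steps of work, checking abort_callback between steps."""
    completed = 0
    for _ in range(n_steps):
        if abort_callback is not None and abort_callback():
            break  # caller asked us to stop; keep partial progress
        completed += 1  # stand-in for one graph-evaluation step
    return completed

# Abort after 3 completed steps by closing over a counter.
calls = {"n": 0}
def stop_after_three():
    calls["n"] += 1
    return calls["n"] > 3

print(run_computation(10, stop_after_three))  # prints 3
```

Polling between steps (rather than killing a thread) keeps internal state consistent, which is why this shape is common in long-running native compute loops.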
3ab8b3a92e  llama : cleanup unused mmq flags (#5772)  (Pierrick Hymbert, 1 year ago)
d5ab29757e  llama : constified `llama_set_state_data`'s `src` (#5774)  (Marcus Dunn, 1 year ago)
08c5ee87e4  llama : remove deprecated API (#5770)  (Georgi Gerganov, 1 year ago)
0becb22ac0  IQ4_XS: a 4.25 bpw quantization (#5747)  (Kawrakow, 1 year ago)
9d533a77d0  llama : fix defrag bugs + add parameter (#5735)  (Georgi Gerganov, 1 year ago)
a33e6a0d2a  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721)  (Kawrakow, 1 year ago)
bf08e00643  llama : refactor k-shift implementation + KV defragmentation (#5691)  (Georgi Gerganov, 1 year ago)
ab336a9d5e  code : normalize enum names (#5697)  (Georgi Gerganov, 1 year ago)
4c4cb30736  IQ3_S: a much better alternative to Q3_K (#5676)  (Kawrakow, 1 year ago)
7c8bcc11dc  Add docs for llama_chat_apply_template (#5645)  (Xuan Son Nguyen, 1 year ago)
a14679cc30  IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)  (Kawrakow, 1 year ago)
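Several of the entries above (IQ4_XS, IQ2_S/IQ2_M, IQ3_S, IQ4_NL) add block-wise quantization formats. A minimal sketch of the general idea, symmetric 4-bit quantization over blocks of 32 values with one scale per block; this illustrates the concept only, not the non-linear codebooks the actual IQ formats use:

```python
def quantize_blocks(values, block_size=32):
    """Symmetric 4-bit block quantization: one float scale per block,
    each value stored as a small integer in [-8, 7]."""
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        amax = max(abs(v) for v in block) or 1.0
        scale = amax / 7.0  # map the largest magnitude onto +/-7
        q = [max(-8, min(7, round(v / scale))) for v in block]
        blocks.append((scale, q))
    return blocks

def dequantize_blocks(blocks):
    """Recover approximate floats: quantized value times block scale."""
    return [scale * qv for scale, q in blocks for qv in q]

data = [0.1 * i for i in range(64)]
restored = dequantize_blocks(quantize_blocks(data))
max_err = max(abs(a - b) for a, b in zip(data, restored))
```

Storing one scale per small block is what bounds the error locally; the real formats refine this with non-uniform quantization grids and packed sub-block scales to reach fractional bits-per-weight figures like 4.25 bpw.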
11b12de39b  llama : add llama_chat_apply_template() (#5538)  (Xuan Son Nguyen, 1 year ago)
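llama_chat_apply_template() (#5538) renders a list of role/content messages into a model's expected prompt format. A minimal sketch of what such rendering produces for ChatML, one of the common chat formats (this is an illustration of the idea, not llama.cpp's implementation):

```python
def apply_chatml_template(messages, add_assistant_prompt=True):
    """Render {role, content} messages in the ChatML style.
    add_assistant_prompt mimics the common option of appending an
    opening assistant turn so the model continues from there."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_assistant_prompt:
        out += "<|im_start|>assistant\n"
    return out

prompt = apply_chatml_template([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello"},
])
```

Centralizing this in the library avoids every client re-implementing per-model prompt formats by hand.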
bd2d4e393b  1.5 bit quantization (#5453)  (Kawrakow, 1 year ago)
f486f6e1e5  ggml : add numa options (#5377)  (bmwl, 1 year ago)
4524290e87  Use correct type of pooling for embedding models (#5500)  (Douglas Hanley, 1 year ago)
03bf161eb6  llama : support batched embeddings (#5466)  (Douglas Hanley, 1 year ago)
2891c8aa9a  Add support for BERT embedding models (#5423)  (Douglas Hanley, 1 year ago)
1ec3332ade  YaRN : store rope scaling type as int32_t in memory (#5285)  (Jared Van Bortel, 1 year ago)
5cb04dbc16  llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)  (Georgi Gerganov, 2 years ago)
f4d7e54974  SOTA 3-bit quants (#5196)  (Kawrakow, 2 years ago)
fbf1ddec69  Nomic Vulkan backend (#4456)  (Jared Van Bortel, 2 years ago)
2307523d32  ggml : add Vulkan backend (#2059)  (0cc4m, 2 years ago)
0f648573dd  ggml : add unified SYCL backend for Intel GPUs (#2690)  (Abhilash Majumder, 2 years ago)
5eaf9964fc  llama : dynamic temperature sampling (#4972)  (l3utterfly, 2 years ago)
66d575c45c  llama : add Q3_K_XS (#5060)  (Kawrakow, 2 years ago)
44a1a4a41a  backend : add eval callback (#4935)  (Georgi Gerganov, 2 years ago)
4483396751  llama : apply classifier-free guidance to logits directly (#4951)  (David Friehs, 2 years ago)
147b17ac94  2-bit quantizations (#4897)  (Kawrakow, 2 years ago)