cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: 72114edf06
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit history
Author              SHA1        Message                                            Date
slaren              2bf8d0f7c4  backend : offload large batches to GPU (#6083)     1 year ago
slaren              f30ea47a87  llama : add pipeline parallelism support (#6017)   1 year ago
Michael Podvitskiy  9fa2627347  ggml : introduce ggml_status (ggml/750)            1 year ago
UEXTM.com           5f70671856  Introduce backend GUIDs (ggml/743)                 1 year ago
Jared Van Bortel    fbf1ddec69  Nomic Vulkan backend (#4456)                       2 years ago