cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: 160687b3ed
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tag: b6814
Commit history
| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Diego Devesa | 9f40989351 | ggml : move CPU backend to a separate file (#10144) | 1 year ago |
| Faisal Zaghloul | 42c76d1358 | Threadpool: take 2 (#8672) | 1 year ago |
| Clint Herron | 07a3fc0608 | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 1 year ago |
| Georgi Gerganov | 2b3389677a | ggml : refactor rope norm/neox (#7634) | 1 year ago |
| Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago |