cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: e2b065071c
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit history
Author            SHA1        Message                                                  Date
Johannes Gäßler   133d99c599  CUDA: deduplicate FlashAttention code (#7352)            1 year ago
Georgi Gerganov   9cb317f77e  ggml : full ALiBi support (#7192)                        1 year ago
Georgi Gerganov   9c67c2773d  ggml : add Flash Attention (#5021)                       1 year ago
DAN™              e00b4a8f81  Fix more int overflow during quant (PPL/CUDA). (#6563)   1 year ago
slaren            ae1f211ce2  cuda : refactor into multiple files (#6269)              1 year ago