cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: 16639ba217
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit history

Author | SHA1 | Message | Date
Reese Levine | a89002f07b | ggml webgpu: support for backend sampling (#18880) | 2 weeks ago
nwyin | e443fbcfa5 | ggml webgpu: add CEIL operation support (#18605) | 4 weeks ago
Reese Levine | fd57b24c0f | ggml webgpu: unary op suppport, code refactoring, ops support (#17764) | 1 month ago