cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: ce3bf9b1a4
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit history
Author | SHA1 | Message | Date
Daniel Bevenius | d3dce4e0a5 | sampling : add support for backend sampling (#17004) | 1 month ago
Sigbjørn Skjæret | 43dfd741a5 | llguidance : set tokenizer slices to default (#13424) | 9 months ago
Michał Moskal | 2447ad8a98 | upgrade to llguidance 0.7.10 (#12576) | 10 months ago
Christian Fillion | 7ee953a64a | llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727) | 1 year ago
Michał Moskal | ff227703d6 | sampling : support for llguidance grammars (#10224) | 1 year ago
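Several of the commits above touch llama.cpp's sampling API (#17004, #11727, #10224). For orientation only, below is a minimal sketch of the sampler-chain lifecycle using the public llama.h constructors; it is not taken from these specific commits, and the exact sampler parameters chosen here are illustrative.

```cpp
// Minimal sketch of the llama.cpp sampler-chain lifecycle (llama.h C API).
// Building the chain and freeing it works standalone; actually drawing a
// token additionally requires a loaded model and llama_context.
#include "llama.h"

int main() {
    llama_sampler_chain_params params = llama_sampler_chain_default_params();
    llama_sampler * chain = llama_sampler_chain_init(params);

    // Stock samplers are created with llama_sampler_init_* constructors
    // and are owned by the chain once added.
    llama_sampler_chain_add(chain, llama_sampler_init_top_k(40));
    llama_sampler_chain_add(chain, llama_sampler_init_temp(0.8f));
    llama_sampler_chain_add(chain, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));

    // With a llama_context * ctx from a loaded model, a token would be drawn via:
    //   llama_token tok = llama_sampler_sample(chain, ctx, -1);

    // Freeing the chain also frees the samplers it owns.
    llama_sampler_free(chain);
    return 0;
}
```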