cturan/llama.cpp
Mirror of https://github.com/cturan/llama.cpp
Tree: 963552903f
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit History
Author           SHA1        Message                                                 Date
Georgi Gerganov  554c247caf  ggml : remove OpenCL (#7735)                            1 year ago
slaren           d359f30921  llama : remove MPI backend (#7395)                      1 year ago
slaren           280345968d  cuda : rename build flag to LLAMA_CUDA (#6299)          1 year ago
bandoti          095231dfd3  cmake : fix transient definitions in find pkg (#3411)   2 years ago
bandoti          990a5e226a  cmake : add relocatable Llama package (#2960)           2 years ago