cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: e1ab084803
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit History
Author          | SHA1       | Message                                                           | Date
Xuan-Son Nguyen | 0dd58b6877 | ggml : refactor forward_dup for cpu backend (#16062)             | 4 months ago
Aaron Teo       | 60ef23d6c1 | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)             | 7 months ago
xctan           | f470bc36be | ggml-cpu : split arch-specific implementations (#13892)          | 8 months ago
cmdr2           | a62d7fa7a9 | cpu: de-duplicate some of the operators and refactor (ggml/1144) | 10 months ago