cturan/llama.cpp (mirror of https://github.com/cturan/llama.cpp)
Tree: 5220a991a5
Branches: k2v2, master, minimax, qwen3_next, qwen3_next_optimized, toolinjection, test
Tags: b6814
Commit History

Author              | SHA1       | Message                                                                  | Date
--------------------|------------|--------------------------------------------------------------------------|------------
0cc4m               | dcb2ed4826 | OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653) | 2 years ago
Howard Su           | bb051d9723 | opencl : no need to allocate cl_mem on heap (#1612)                      | 2 years ago
Howard Su           | ca74884f66 | opencl : use strstr to check if fp16 supported (#1611)                   | 2 years ago
Maarten ter Huurne  | 7d873811f3 | Fix handling of "invalid property" when creating OpenCL command queue (#1565) | 2 years ago
0cc4m               | 2e6cd4b025 | OpenCL Token Generation Acceleration (#1459)                             | 2 years ago