| File | Commit | Commit message | Last updated |
|------|--------|----------------|--------------|
| CMakeLists.txt | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-adapter.cpp | 4d2b3d8804 | lora : improve compat with `mergekit-extract-lora` (#11131) | 1 year ago |
| llama-adapter.h | 4d2b3d8804 | lora : improve compat with `mergekit-extract-lora` (#11131) | 1 year ago |
| llama-arch.cpp | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-arch.h | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-batch.cpp | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-batch.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-chat.cpp | d9feae1c06 | llama-chat : add phi 4 template (#11148) | 1 year ago |
| llama-chat.h | d9feae1c06 | llama-chat : add phi 4 template (#11148) | 1 year ago |
| llama-context.cpp | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-context.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-cparams.cpp | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-cparams.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-grammar.cpp | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-grammar.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-hparams.cpp | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-hparams.h | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-impl.cpp | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| llama-impl.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-kv-cache.cpp | 6369f867a4 | llama : rename missed batch params/vars to ubatch (#10059) | 1 year ago |
| llama-kv-cache.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-mmap.cpp | ae2f606bb5 | mmap : fix fileno macro clash (#11076) | 1 year ago |
| llama-mmap.h | ae2f606bb5 | mmap : fix fileno macro clash (#11076) | 1 year ago |
| llama-model-loader.cpp | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| llama-model-loader.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-model.cpp | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-model.h | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-quant.cpp | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| llama-quant.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-sampling.cpp | 727368c60f | llama : use LLAMA_TOKEN_NULL (#11062) | 1 year ago |
| llama-sampling.h | ff252ea48e | llama : add DRY sampler (#9702) | 1 year ago |
| llama-vocab.cpp | 727368c60f | llama : use LLAMA_TOKEN_NULL (#11062) | 1 year ago |
| llama-vocab.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama.cpp | ee7136c6d1 | llama: add support for QRWKV6 model architecture (#11001) | 1 year ago |
| unicode-data.cpp | 458367a906 | server : better security control for public deployments (#9776) | 1 year ago |
| unicode-data.h | a39ab216aa | llama : reduce compile time and binary size (#9712) | 1 year ago |
| unicode.cpp | 9394bbd484 | llama : Add support for DeepSeek V3 (#11049) | 1 year ago |
| unicode.h | 08ea539df2 | unicode : improve naming style (#10838) | 1 year ago |