| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| models/ | 23bc779a6e | model : detect GigaChat3-10-A1.8B as deepseek lite (#17420) | 1 month ago |
| CMakeLists.txt | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| llama-adapter.cpp | fd621880f3 | aLoRA Support (#15327) | 4 months ago |
| llama-adapter.h | fd621880f3 | aLoRA Support (#15327) | 4 months ago |
| llama-arch.cpp | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| llama-arch.h | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| llama-batch.cpp | 8da3c0e200 | batch : fix consistency checks for the input positions (#16890) | 2 months ago |
| llama-batch.h | e3af5563bd | llama: store mrope data in KV cell (#16825) | 2 months ago |
| llama-chat.cpp | 9f052478c2 | model : add openPangu-Embedded (#16941) | 2 months ago |
| llama-chat.h | 9f052478c2 | model : add openPangu-Embedded (#16941) | 2 months ago |
| llama-context.cpp | 9008027aa3 | hparams : add n_embd_inp() to support extended embed (#16928) | 2 months ago |
| llama-context.h | cd5e3b5754 | server : support unified cache across slots (#16736) | 2 months ago |
| llama-cparams.cpp | c311ac664d | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 7 months ago |
| llama-cparams.h | cd5e3b5754 | server : support unified cache across slots (#16736) | 2 months ago |
| llama-grammar.cpp | 054a45c3d3 | grammar: fix regression caused by #17381 (#17412) | 1 month ago |
| llama-grammar.h | 669912d9a5 | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 10 months ago |
| llama-graph.cpp | a90eb94ca9 | CUDA: fuse rope + set_rows (#16884) | 2 months ago |
| llama-graph.h | e38b7c6e9e | graph : support cacheless embeddings with FA and iSWA (#16528) | 3 months ago |
| llama-hparams.cpp | 9008027aa3 | hparams : add n_embd_inp() to support extended embed (#16928) | 2 months ago |
| llama-hparams.h | 9008027aa3 | hparams : add n_embd_inp() to support extended embed (#16928) | 2 months ago |
| llama-impl.cpp | 196f5083ef | common : more accurate sampling timing (#17382) | 1 month ago |
| llama-impl.h | e81b8e4b7f | llama: use FA + max. GPU layers by default (#15434) | 4 months ago |
| llama-io.cpp | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| llama-io.h | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| llama-kv-cache-iswa.cpp | 16bcc1259d | kv-cache : pad the cache size to 256 for performance (#17046) | 2 months ago |
| llama-kv-cache-iswa.h | e789095502 | llama: print memory breakdown on exit (#15860) | 3 months ago |
| llama-kv-cache.cpp | d261223d24 | model: add support for qwen3vl series (#16780) | 2 months ago |
| llama-kv-cache.h | 85a7d8677b | memory : remove KV cache size padding (#16812) | 2 months ago |
| llama-kv-cells.h | e3af5563bd | llama: store mrope data in KV cell (#16825) | 2 months ago |
| llama-memory-hybrid.cpp | 0123ff38f5 | memory : use sequential equal splits for recurrent modules (#16442) | 3 months ago |
| llama-memory-hybrid.h | e789095502 | llama: print memory breakdown on exit (#15860) | 3 months ago |
| llama-memory-recurrent.cpp | 0c74f32632 | memory: Hybrid context shift (#17009) | 2 months ago |
| llama-memory-recurrent.h | 7a0e900e36 | llama: consistent ctx <-> buf order for KV cache (#16746) | 2 months ago |
| llama-memory.cpp | 745f11fed0 | memory : correctly handle failure in apply() (#14438) | 6 months ago |
| llama-memory.h | e789095502 | llama: print memory breakdown on exit (#15860) | 3 months ago |
| llama-mmap.cpp | 3a077146a4 | llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) | 7 months ago |
| llama-mmap.h | 19b392d58d | llama-mmap: fix missing include (#11796) | 11 months ago |
| llama-model-loader.cpp | 34fcc5a4ac | model : Apertus model implementation (#15852) | 3 months ago |
| llama-model-loader.h | ef0144c087 | model: support GLM 4.5 family of models (#14939) | 5 months ago |
| llama-model-saver.cpp | 88fc854b4b | llama : improve sep token handling (#14272) | 7 months ago |
| llama-model-saver.h | 10d2af0eaa | llama/ggml: add LLM training support (#10544) | 8 months ago |
| llama-model.cpp | 23bc779a6e | model : detect GigaChat3-10-A1.8B as deepseek lite (#17420) | 1 month ago |
| llama-model.h | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| llama-quant.cpp | d7395115ba | llama : use std::abs instead of abs (#16853) | 2 months ago |
| llama-quant.h | f66f582927 | llama : refactor `src/llama.cpp` (#10902) | 1 year ago |
| llama-sampling.cpp | 196f5083ef | common : more accurate sampling timing (#17382) | 1 month ago |
| llama-sampling.h | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| llama-vocab.cpp | a045492088 | vocab : call reserve() for building plamo-2-translate suffix (#17343) | 1 month ago |
| llama-vocab.h | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| llama.cpp | 3e3cb19f64 | llama-quant: add support for mmproj (#16592) | 3 months ago |
| unicode-data.cpp | 458367a906 | server : better security control for public deployments (#9776) | 1 year ago |
| unicode-data.h | a39ab216aa | llama : reduce compile time and binary size (#9712) | 1 year ago |
| unicode.cpp | e1fcf8b09b | model : add AfmoeForCausalLM support (#16477) | 2 months ago |
| unicode.h | 624207e676 | devops: add s390x & ppc64le CI (#15925) | 3 months ago |
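A per-file "last commit" table like the one above can be regenerated from a local checkout rather than the web UI. Below is a minimal sketch, not part of the project itself; it assumes `git` is on `PATH`, that it runs from the repository root, and that the listed files live under `src/` (as the file names suggest). Relative dates come from git's `%cr` placeholder, so they may differ slightly from GitHub's wording.

```python
#!/usr/bin/env python3
"""Sketch: rebuild a per-file last-commit table for the src/ directory."""
import subprocess

def last_commit(path: str) -> tuple[str, str, str]:
    # %h = abbreviated hash, %s = subject line, %cr = relative date;
    # %x09 emits a tab so the three fields can be split reliably.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%h%x09%s%x09%cr", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    commit_hash, subject, age = out.split("\t", 2)
    return commit_hash, subject, age

# All tracked files under src/, one path per line.
tracked = subprocess.run(
    ["git", "ls-files", "src"],
    capture_output=True, text=True, check=True,
).stdout.split()

for path in tracked:
    commit_hash, subject, age = last_commit(path)
    print(f"| {path} | {commit_hash} | {subject} | {age} |")
```

Note that `git ls-files` recurses into subdirectories such as `models/`, so unlike the GitHub view the output lists their contents individually.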