Johannes Gäßler 57c1e05643 llama: offload output layer to GPU first (#18148) 1 month ago
models a5251ca11d Optimization: Qwen3 next autoregressive pass (#17996) 1 month ago
CMakeLists.txt 63908b631a cmake: fix Mach-O current version number (#17877) 1 month ago
llama-adapter.cpp fd621880f3 aLoRA Support (#15327) 4 months ago
llama-adapter.h fd621880f3 aLoRA Support (#15327) 4 months ago
llama-arch.cpp 982060fadc model: fix LFM2_MOE missing tensors (#18132) 1 month ago
llama-arch.h 7f2b2f3c77 arch: refactor LLM_TENSOR_NAMES (#18051) 1 month ago
llama-batch.cpp d9f8f60618 batch : fix sequence id ownership (#17915) 1 month ago
llama-batch.h d9f8f60618 batch : fix sequence id ownership (#17915) 1 month ago
llama-chat.cpp 9f052478c2 model : add openPangu-Embedded (#16941) 2 months ago
llama-chat.h 9f052478c2 model : add openPangu-Embedded (#16941) 2 months ago
llama-context.cpp b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
llama-context.h b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
llama-cparams.cpp c311ac664d cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 7 months ago
llama-cparams.h cd5e3b5754 server : support unified cache across slots (#16736) 2 months ago
llama-grammar.cpp e39502e74b llama : add token matching support to llama-grammar (#17816) 1 month ago
llama-grammar.h e39502e74b llama : add token matching support to llama-grammar (#17816) 1 month ago
llama-graph.cpp c560316440 graph : reuse SSM graphs (#16490) 1 month ago
llama-graph.h c560316440 graph : reuse SSM graphs (#16490) 1 month ago
llama-hparams.cpp 3d86c6c2b5 model: support GLM4V vision encoder (#18042) 1 month ago
llama-hparams.h 3d86c6c2b5 model: support GLM4V vision encoder (#18042) 1 month ago
llama-impl.cpp b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
llama-impl.h 37adc9c6ba ggml, llama : use defaulted constructors/destructors (#17649) 1 month ago
llama-io.cpp e0dbec0bc6 llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 10 months ago
llama-io.h e0dbec0bc6 llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 10 months ago
llama-kv-cache-iswa.cpp 16bcc1259d kv-cache : pad the cache size to 256 for performance (#17046) 2 months ago
llama-kv-cache-iswa.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-kv-cache.cpp 4529c660c8 kv-cache: Fix state restore fragmented cache (#17982) 1 month ago
llama-kv-cache.h 4529c660c8 kv-cache: Fix state restore fragmented cache (#17982) 1 month ago
llama-kv-cells.h e3af5563bd llama: store mrope data in KV cell (#16825) 2 months ago
llama-memory-hybrid.cpp c560316440 graph : reuse SSM graphs (#16490) 1 month ago
llama-memory-hybrid.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-memory-recurrent.cpp 0c74f32632 memory: Hybrid context shift (#17009) 2 months ago
llama-memory-recurrent.h 7a0e900e36 llama: consistent ctx <-> buf order for KV cache (#16746) 2 months ago
llama-memory.cpp 745f11fed0 memory : correctly handle failure in apply() (#14438) 6 months ago
llama-memory.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-mmap.cpp 4d4f4cacd1 llama : Async DirectIO model loading on Linux (#18012) 1 month ago
llama-mmap.h 4d4f4cacd1 llama : Async DirectIO model loading on Linux (#18012) 1 month ago
llama-model-loader.cpp 4d4f4cacd1 llama : Async DirectIO model loading on Linux (#18012) 1 month ago
llama-model-loader.h b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
llama-model-saver.cpp 88fc854b4b llama : improve sep token handling (#14272) 7 months ago
llama-model-saver.h 10d2af0eaa llama/ggml: add LLM training support (#10544) 8 months ago
llama-model.cpp 57c1e05643 llama: offload output layer to GPU first (#18148) 1 month ago
llama-model.h 2995341730 llama : add support for NVIDIA Nemotron 3 Nano (#18058) 1 month ago
llama-quant.cpp b1f3a6e5db llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 1 month ago
llama-quant.h f66f582927 llama : refactor `src/llama.cpp` (#10902) 1 year ago
llama-sampling.cpp 4301e27319 common : restore grammar-based rejection sampling (#18137) 1 month ago
llama-sampling.h afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
llama-vocab.cpp 9d52f17ae3 model : add KORMo model (#18032) 1 month ago
llama-vocab.h e1fcf8b09b model : add AfmoeForCausalLM support (#16477) 2 months ago
llama.cpp 57c1e05643 llama: offload output layer to GPU first (#18148) 1 month ago
unicode-data.cpp 458367a906 server : better security control for public deployments (#9776) 1 year ago
unicode-data.h a39ab216aa llama : reduce compile time and binary size (#9712) 1 year ago
unicode.cpp 1be97831e4 fix: prevent segfault in tokenizer on highly repetitive input (#17786) 1 month ago
unicode.h 624207e676 devops: add s390x & ppc64le CI (#15925) 3 months ago