Latest commit: d261223d24 model: add support for qwen3vl series (#16780), by JJJYmmm, 2 months ago
CMakeLists.txt 715a6db02c kv-cache : drop the "unified" prefix (#15467) 4 months ago
llama-adapter.cpp fd621880f3 aLoRA Support (#15327) 4 months ago
llama-adapter.h fd621880f3 aLoRA Support (#15327) 4 months ago
llama-arch.cpp d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-arch.h d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-batch.cpp 3464bdac37 llama: fix ASAN error with M-RoPE (#16848) 2 months ago
llama-batch.h e3af5563bd llama: store mrope data in KV cell (#16825) 2 months ago
llama-chat.cpp 84bf3c6778 model : add BailingMoeV2 support (#16063) 2 months ago
llama-chat.h 84bf3c6778 model : add BailingMoeV2 support (#16063) 2 months ago
llama-context.cpp 5a4ff43e7d llama : disable pipeline parallelism if compute buffer allocation fails (#16748) 2 months ago
llama-context.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-cparams.cpp c311ac664d cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 7 months ago
llama-cparams.h e58174cecb llama : bump max seq limit from 64 to 256 (#15916) 4 months ago
llama-grammar.cpp f5cd27b71d `server`: streaming of tool calls and thoughts when `--jinja` is on (#12379) 7 months ago
llama-grammar.h 669912d9a5 `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) 10 months ago
llama-graph.cpp d7395115ba llama : use std::abs instead of abs (#16853) 2 months ago
llama-graph.h e38b7c6e9e graph : support cacheless embeddings with FA and iSWA (#16528) 3 months ago
llama-hparams.cpp d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-hparams.h d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-impl.cpp 53ff6b9b9f GGUF: C++ refactor, backend support, misc fixes (#11030) 1 year ago
llama-impl.h e81b8e4b7f llama: use FA + max. GPU layers by default (#15434) 4 months ago
llama-io.cpp e0dbec0bc6 llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 10 months ago
llama-io.h e0dbec0bc6 llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 10 months ago
llama-kv-cache-iswa.cpp f6dcda3900 server : context checkpointing for hybrid and recurrent models (#16382) 3 months ago
llama-kv-cache-iswa.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-kv-cache.cpp d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-kv-cache.h 85a7d8677b memory : remove KV cache size padding (#16812) 2 months ago
llama-kv-cells.h e3af5563bd llama: store mrope data in KV cell (#16825) 2 months ago
llama-memory-hybrid.cpp 0123ff38f5 memory : use sequential equal splits for recurrent modules (#16442) 3 months ago
llama-memory-hybrid.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-memory-recurrent.cpp 7a0e900e36 llama: consistent ctx <-> buf order for KV cache (#16746) 2 months ago
llama-memory-recurrent.h 7a0e900e36 llama: consistent ctx <-> buf order for KV cache (#16746) 2 months ago
llama-memory.cpp 745f11fed0 memory : correctly handle failure in apply() (#14438) 6 months ago
llama-memory.h e789095502 llama: print memory breakdown on exit (#15860) 3 months ago
llama-mmap.cpp 3a077146a4 llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) 7 months ago
llama-mmap.h 19b392d58d llama-mmap: fix missing include (#11796) 11 months ago
llama-model-loader.cpp 34fcc5a4ac model : Apertus model implementation (#15852) 3 months ago
llama-model-loader.h ef0144c087 model: support GLM 4.5 family of models (#14939) 5 months ago
llama-model-saver.cpp 88fc854b4b llama : improve sep token handling (#14272) 7 months ago
llama-model-saver.h 10d2af0eaa llama/ggml: add LLM training support (#10544) 8 months ago
llama-model.cpp d261223d24 model: add support for qwen3vl series (#16780) 2 months ago
llama-model.h bacddc049a model: Add support for CogVLM model (#15002) 2 months ago
llama-quant.cpp d7395115ba llama : use std::abs instead of abs (#16853) 2 months ago
llama-quant.h f66f582927 llama : refactor `src/llama.cpp` (#10902) 1 year ago
llama-sampling.cpp 81086cd6a3 vocab : mark EOT token for Granite models (#16499) 3 months ago
llama-sampling.h afa8a9ec9b llama : add `llama_vocab`, functions -> methods, naming (#11110) 1 year ago
llama-vocab.cpp 84bf3c6778 model : add BailingMoeV2 support (#16063) 2 months ago
llama-vocab.h ca71fb9b36 model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206) 3 months ago
llama.cpp 3e3cb19f64 llama-quant: add support for mmproj (#16592) 3 months ago
unicode-data.cpp 458367a906 server : better security control for public deployments (#9776) 1 year ago
unicode-data.h a39ab216aa llama : reduce compile time and binary size (#9712) 1 year ago
unicode.cpp 4a4f426944 model : add Kimi-K2 support (#14654) 6 months ago
unicode.h 624207e676 devops: add s390x & ppc64le CI (#15925) 3 months ago