| File | Commit | Last commit message | Last updated |
|------|--------|---------------------|--------------|
| CMakeLists.txt | 70c29da118 | common : fix mirostat state when using multiple sequences (#3543) | 2 years ago |
| common.cpp | 6e08281e58 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | 2 years ago |
| common.h | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago |
| console.cpp | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago |
| console.h | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| grammar-parser.cpp | f439e506e8 | ggml : fix rope + llama minor optimizations (#3560) | 2 years ago |
| grammar-parser.h | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| log.h | cc44877486 | log : disable pid in log filenames | 2 years ago |
| sampling.cpp | ee1a0ec9cb | llama : add option for greedy sampling with probs (#3813) | 2 years ago |
| sampling.h | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago |
| stb_image.h | 370359e5ba | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2 years ago |
| train.cpp | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| train.h | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago |