The project differentiates between 3 levels of contributors: contributors, collaborators and maintainers.

# Pull requests (for contributors)

- Test your changes:
  - Verify that the perplexity and the performance are not affected negatively by your changes (use `llama-perplexity` and `llama-bench`)
  - If you modified the ggml source, run the `test-backend-ops` tool to check whether different backend implementations of the ggml operators produce consistent results (this requires access to at least two different ggml backends)
  - If you modified a ggml operator or added a new one, add the corresponding test cases to `test-backend-ops`

# Pull requests (for collaborators)

- Use the following format for the squashed commit title: `<module> : <commit title> (#<issue_number>)`. For example: `utils : fix typo in utils.py (#1234)`
- Optionally pick a `<module>` from here: https://github.com/ggml-org/llama.cpp/wiki/Modules

# Coding guidelines

- Avoid fancy-looking modern STL constructs, use basic `for` loops, avoid templates, keep it simple
- Clean-up any trailing whitespaces, use 4 spaces for indentation, brackets on the same line, `void * ptr`, `int & a`
- Use sized integer types such as `int32_t` in the public API, e.g. `size_t` may also be appropriate for allocation sizes or byte offsets
- Declare structs with `struct foo {}` instead of `typedef struct foo {} foo` (a sketch follows the example below)
  - In C++ code omit optional `struct` and `enum` keyword whenever they are not necessary
```cpp
// OK
llama_context * ctx;
const llama_rope_type rope_type;

// not OK
struct llama_context * ctx;
const enum llama_rope_type rope_type;
```
(NOTE: this guideline is yet to be applied to the llama.cpp codebase. New code should follow this guideline.)
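For concreteness, a minimal sketch of the struct-declaration, sized-integer and spacing rules above; `foo`, `bar` and their members are hypothetical names used only for illustration:

```cpp
#include <cstdint>

// OK: declare the struct directly and use sized integer types in public interfaces
struct foo {
    int32_t  n_items;
    void   * data;
};

// pointer/reference spacing as above: `void * ptr`, `int & a`
void foo_set_count(foo & f, int32_t n_items);

// not OK: typedef'd struct declaration
typedef struct bar {
    int32_t n_items;
} bar;
```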
- Try to follow the existing patterns in the code (indentation, spaces, etc.). In case of doubt use `clang-format` (from clang-tools v15+) to format the added code
- For anything not covered in the current guidelines, refer to the [C++ Core Guidelines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines)
- Tensors store data in row-major order. We refer to dimension 0 as columns, 1 as rows, 2 as matrices
- Matrix multiplication is unconventional: `C = ggml_mul_mat(ctx, A, B)` means $C^T = A B^T \Leftrightarrow C = B A^T$ (see the sketch below)
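To illustrate the dimension convention, here is a minimal sketch using the public ggml API (`ggml_init`, `ggml_new_tensor_2d`, `ggml_mul_mat`); the tensor sizes are arbitrary and nothing is actually computed, only the shapes matter:

```cpp
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // ne[0] is the number of columns (dim 0), ne[1] the number of rows (dim 1)
    struct ggml_tensor * A = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3); // 3 rows, 4 columns
    struct ggml_tensor * B = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2); // 2 rows, 4 columns

    // dim 0 of both operands is the shared (contracted) dimension: C = B A^T
    struct ggml_tensor * C = ggml_mul_mat(ctx, A, B);
    // -> C->ne[0] == 3 (columns), C->ne[1] == 2 (rows)

    ggml_free(ctx);
    return 0;
}
```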
# Naming guidelines

- Use `snake_case` for function, variable and type names
- Naming usually optimizes for longest common prefix (see https://github.com/ggml-org/ggml/pull/302#discussion_r1243240963)
```cpp
// not OK
int small_number;
int big_number;

// OK
int number_small;
int number_big;
```
- Enum values are always in upper case and prefixed with the enum name
```cpp
enum llama_vocab_type {
    LLAMA_VOCAB_TYPE_NONE = 0,
    LLAMA_VOCAB_TYPE_SPM  = 1,
    LLAMA_VOCAB_TYPE_BPE  = 2,
    LLAMA_VOCAB_TYPE_WPM  = 3,
    LLAMA_VOCAB_TYPE_UGM  = 4,
    LLAMA_VOCAB_TYPE_RWKV = 5,
};
```
- The general naming pattern is `<class>_<method>`, with `<method>` being `<action>_<noun>`
```cpp
llama_model_init();           // class: "llama_model",         method: "init"
llama_sampler_chain_remove(); // class: "llama_sampler_chain", method: "remove"
llama_sampler_get_seed();     // class: "llama_sampler",       method: "get_seed"
llama_set_embeddings();       // class: "llama_context",       method: "set_embeddings"
llama_n_threads();            // class: "llama_context",       method: "n_threads"
llama_adapter_lora_free();    // class: "llama_adapter_lora",  method: "free"
```
  - The `get` `<action>` can be omitted
  - The `<noun>` can be omitted if not necessary
  - The `_context` suffix of the `<class>` is optional. Use it to disambiguate symbols when needed
  - Use `init`/`free` for constructor/destructor `<action>`
- Use the `_t` suffix when a type is supposed to be opaque to the user - it's not relevant to them if it is a struct or anything else
```cpp
typedef struct llama_context * llama_context_t;

enum llama_pooling_type llama_pooling_type(const llama_context_t ctx);
```
(NOTE: this guideline is yet to be applied to the llama.cpp codebase. New code should follow this guideline)
- C/C++ filenames are all lowercase with dashes (e.g. `llama-vocab.cpp`). Headers use the `.h` extension. Source files use the `.c` or `.cpp` extension
- Python filenames are all lowercase with underscores (e.g. `gguf_reader.py`)
- (TODO: abbreviations usage)
# Preprocessor directives

- (TODO: add guidelines with examples and apply them to the codebase)
```cpp
#ifdef FOO
#endif // FOO
```
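Pending those guidelines, the example above already shows one convention worth keeping: annotate each `#endif` with the condition it closes, so that nested blocks stay readable. A hedged sketch of how that reads (the macro names are made up for illustration):

```cpp
// hypothetical macros, shown only to illustrate the closing-comment style
#ifdef GGML_USE_FOO
#ifdef GGML_FOO_FAST_PATH
// ... fast-path implementation ...
#endif // GGML_FOO_FAST_PATH
#endif // GGML_USE_FOO
```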
- Existing code should have designated collaborators and/or maintainers, specified in the CODEOWNERS file, responsible for reviewing related changes and maintaining the code
- When adding or modifying a large piece of code:
  - New code should follow the guidelines (coding, naming, etc.) outlined in this document. Exceptions are allowed in isolated, backend-specific parts of the code that do not interface directly with the ggml interfaces.
    (NOTE: for legacy reasons, existing code is not required to follow this guideline)
# Resources

- The Github issues, PRs and discussions contain a lot of information that can be useful to get familiar with the codebase. For convenience, some of the more important information is referenced from Github projects:
  - https://github.com/ggml-org/llama.cpp/projects