| File | Commit | Commit message | Last updated |
| --- | --- | --- | --- |
| CMakeLists.txt | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago |
| test-backend-ops.cpp | fe680e3d10 | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2 years ago |
| test-c.c | 849408957c | tests : add a C compliance test (#2848) | 2 years ago |
| test-double-float.cpp | 207b51900e | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2 years ago |
| test-grad0.cpp | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago |
| test-grammar-parser.cpp | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| test-llama-grammar.cpp | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| test-opt.cpp | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago |
| test-quantize-fns.cpp | 207b51900e | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2 years ago |
| test-quantize-perf.cpp | f93af02488 | sync : ggml (conv 1d + 2d updates, UB fixes) (#3468) | 2 years ago |
| test-rope.cpp | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago |
| test-sampling.cpp | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago |
| test-tokenizer-0-falcon.cpp | 233fc1c69f | Minor improvements in GPT2 tokenizer (#3567) | 2 years ago |
| test-tokenizer-0-falcon.py | f23c0359a3 | ci : add flake8 to github actions (python linting) (#4129) | 2 years ago |
| test-tokenizer-0-llama.cpp | 233fc1c69f | Minor improvements in GPT2 tokenizer (#3567) | 2 years ago |
| test-tokenizer-0-llama.py | f23c0359a3 | ci : add flake8 to github actions (python linting) (#4129) | 2 years ago |
| test-tokenizer-1-bpe.cpp | daab3d7f45 | Add more tokenizer tests (#3742) | 2 years ago |
| test-tokenizer-1-llama.cpp | ff5a3f0c09 | Work on the BPE tokenizer (#3252) | 2 years ago |