Georgi Gerganov | da7455d046 | readme : fix headings | 2 years ago
Georgi Gerganov | 25423e9185 | scripts : helper convert script | 2 years ago
Kawrakow | a6d1189fdd | k_quants tuning for Falcon-7b (#2816) | 2 years ago
Georgi Gerganov | c48c5bb0b0 | readme : update hot topics | 2 years ago
Georgi Gerganov | d0cee0d36d | gguf : add 64-bit support (GGUF v2) (#2821) | 2 years ago
Georgi Gerganov | edd4c14817 | llama : more tokenizer fixes (#2810) | 2 years ago
Przemysław Pawełczyk | 1591e2e590 | ggml : detect SSSE3 (#2825) | 2 years ago
slaren | 789c8c945a | ci : add LoRA test to CI (#2650) | 2 years ago
Bruce MacDonald | c1ac54b77a | server : add `/detokenize` endpoint (#2802) | 2 years ago
Kerfuffle | 730d9c681e | convert.py : advanced option (#2753) | 2 years ago
Tim Miller | c7d92e6dfe | llama : use Unicode Escape Sequence to replace encoded characters (#2814) | 2 years ago
Tungsten842 | 61d1a2895e | flake.nix : add rocm support and cleanup (#2808) | 2 years ago
Cebtenzzre | 741ca7dd1c | llama : move #includes out of _GNU_SOURCE conditional (#2817) | 2 years ago
Dr. Tom Murphy VII Ph.D | 72f895c923 | main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (#1528) | 2 years ago
Cebtenzzre | 50526f37eb | llama : use std::abs in llama_sample_tail_free (#2800) | 2 years ago
Georgi Gerganov | 04f4b1eb10 | k-quants : remove unnecessary tensor shape restrictions (#2811) | 2 years ago
Kawrakow | 7592375403 | Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (#2807) | 2 years ago
Kawrakow | 771551a793 | Fix HellaSwag (#2805) | 2 years ago
Volodymyr Vitvitskyi | f305bad11e | flake : build llama.cpp on Intel with nix (#2795) | 2 years ago
Nigel Bosch | a2ca4e9de9 | Handle null rope scaling value (#2793) | 2 years ago
klosax | 2ba83c8685 | Fix spm whitespaces (#2806) | 2 years ago
lon | bae5c5f679 | examples : skip unnecessary external lib in server README.md how-to (#2804) | 2 years ago
Marcus Dunn | 232caf3c15 | llama : fix struct decl (#2790) | 2 years ago
Kawrakow | d046dcee08 | Faster perplexity computation (#2786) | 2 years ago
Matt Pulver | c82742ac9c | llama : add llama_beam_search() (#2267) | 2 years ago
Nigel Bosch | 28b2c996ca | convert.py : Get rope scale from HuggingFace models (#2772) | 2 years ago
slaren | 154725c543 | llama-bench : add model sizes (#2771) | 2 years ago
slaren | 12e2e33a97 | convert.py : export rope freq_base when converting CodeLlama from an HF model (#2773) | 2 years ago
Jhen-Jie Hong | 29674ab4e8 | server : display token probabilities in the UI (#2489) | 2 years ago
Georgi Gerganov | 5439a0ab57 | ci : pip install gguf in editable mode (#2782) | 2 years ago