Author | Commit | Message | Date
Cebtenzzre | 8781013ef6 | make : restore build-info.h dependency for several targets (#3205) | 2 years ago
Cebtenzzre | e6616cf0db | examples : add compiler version and target to build info (#2998) | 2 years ago
Cebtenzzre | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago
Cebtenzzre | 00d62adb79 | fix some warnings from gcc and clang-tidy (#3038) | 2 years ago
Kerfuffle | 5d6f19f16b | Allow quantize to only copy tensors, some other improvements (#2931) | 2 years ago
Cebtenzzre | ebcee207b6 | quantize : make output filename optional again (#2823) | 2 years ago
Kawrakow | 8207214b6a | Fix values shown in the quantize tool help (#2735) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Georgi Gerganov | 6cbf9dfb32 | llama : shorten quantization descriptions | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
Kerfuffle | 74d4cfa343 | Allow "quantizing" to f16 and f32 (#1787) | 2 years ago
Kerfuffle | 4f0154b0ba | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago
Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago
Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago
Georgi Gerganov | b9fd7eee57 | ggml : remove bit shuffling (#1405) | 2 years ago
slaren | 94c5652fc0 | quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301) | 2 years ago
DannyDaemonic | f4cef87edf | Add git-based build information for better issue tracking (#1232) | 2 years ago
Stephan Walter | 36d19a603b | Remove Q4_3 which is no better than Q5 (#1218) | 2 years ago
Georgi Gerganov | 574406dc7e | ggml : add Q5_0 and Q5_1 quantization (#1187) | 2 years ago
Pavol Rusnak | 859fee6dfb | quantize : use `map` to assign quantization type from `string` (#1191) | 2 years ago
Georgi Gerganov | 7a32fcb3b2 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | 2 years ago
Kawrakow | 38de86a711 | llama : multi-threaded quantization (#1075) | 2 years ago
Georgi Gerganov | e0305ead3a | ggml : add Q4_3 quantization (#1082) | 2 years ago
Georgi Gerganov | 77a73403ca | ggml : add new Q4_2 quantization (ARM only) (#1046) | 2 years ago
Stephan Walter | 3e6e70d8e8 | Add enum llama_ftype, sync ggml_type to model files (#709) | 2 years ago
Slaren | 64bde3ffd4 | Fix ggml_init_params in quantize | 2 years ago
Stephan Walter | 436e561931 | all : be more strict about converting float to double (#458) | 2 years ago
Stephan Walter | c1f885067c | ggml : introduce structs for the q4 data blocks (#356) | 2 years ago
Georgi Gerganov | a316a425d0 | Overhaul the examples structure | 2 years ago