Author | Commit | Message | Date
Kawrakow | a33e6a0d2a | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) | 1 year ago
Kawrakow | 4c4cb30736 | IQ3_S: a much better alternative to Q3_K (#5676) | 1 year ago
Kawrakow | a14679cc30 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | 1 year ago
Kawrakow | bd2d4e393b | 1.5 bit quantization (#5453) | 1 year ago
bmwl | f486f6e1e5 | ggml : add numa options (#5377) | 1 year ago
Michael Klimenko | 52bb63c708 | refactor : switch to emplace_back to avoid extra object (#5291) | 1 year ago
Kawrakow | f4d7e54974 | SOTA 3-bit quants (#5196) | 1 year ago
Vladimir Malyutin | 7359016c7c | quantize : fix typo (#5211) | 1 year ago
Kawrakow | 66d575c45c | llama : add Q3_K_XS (#5060) | 2 years ago
Kawrakow | 467a882fd2 | Add ability to use importance matrix for all k-quants (#4930) | 2 years ago
Kawrakow | 147b17ac94 | 2-bit quantizations (#4897) | 2 years ago
Kawrakow | 469e75d0a3 | llama : restore intended k-quants mixes for MoE models (#4872) | 2 years ago
cebtenzzre | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago
Georgi Gerganov | d69d777c02 | ggml : quantization refactoring (#3833) | 2 years ago
Cebtenzzre | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago
Cebtenzzre | 8781013ef6 | make : restore build-info.h dependency for several targets (#3205) | 2 years ago
Cebtenzzre | e6616cf0db | examples : add compiler version and target to build info (#2998) | 2 years ago
Cebtenzzre | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago
Cebtenzzre | 00d62adb79 | fix some warnings from gcc and clang-tidy (#3038) | 2 years ago
Kerfuffle | 5d6f19f16b | Allow quantize to only copy tensors, some other improvements (#2931) | 2 years ago
Cebtenzzre | ebcee207b6 | quantize : make output filename optional again (#2823) | 2 years ago
Kawrakow | 8207214b6a | Fix values shown in the quantize tool help (#2735) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Georgi Gerganov | 6cbf9dfb32 | llama : shorten quantization descriptions | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
Kerfuffle | 74d4cfa343 | Allow "quantizing" to f16 and f32 (#1787) | 2 years ago
Kerfuffle | 4f0154b0ba | llama : support requantizing models instead of only allowing quantization from 16/32bit (#1691) | 2 years ago
Kawrakow | 99009e72f8 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | 2 years ago
Georgi Gerganov | ec2e10c444 | llama : add llama_init_backend() API (close #1527) | 2 years ago