pculliton e57dc62057 llama: Add support for Gemma2ForCausalLM (#8156) 1 year ago
__init__.py ee52225067 convert-hf : support direct Q8_0 conversion (#7234) 1 year ago
constants.py e57dc62057 llama: Add support for Gemma2ForCausalLM (#8156) 1 year ago
gguf.py 34b0a08207 gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) 2 years ago
gguf_reader.py c8ad35955a Gguf dump start data offset via --data-offset and some extra refactor (#8054) 1 year ago
gguf_writer.py 52fc8705a0 Option to split during conversion (#6942) 1 year ago
lazy.py ee52225067 convert-hf : support direct Q8_0 conversion (#7234) 1 year ago
py.typed dc07dc492e convert : various script cleanups/fixes + merges and special token handling (#2842) 2 years ago
quants.py b83bab15a5 gguf-py : fix and simplify quantized shape round-trip (#7483) 1 year ago
tensor_mapping.py e57dc62057 llama: Add support for Gemma2ForCausalLM (#8156) 1 year ago
vocab.py 9c4c9cc83f Move convert.py to examples/convert-legacy-llama.py (#7430) 1 year ago