kustaaya | f675b20a3b | Added support for Viking pre-tokenizer (#8135) | 1 year ago
Christian Zhou-Zheng | 52fc8705a0 | Option to split during conversion (#6942) | 1 year ago
fairydreaming | de0d6a68ac | gguf-py, convert-hf : model conversion support for T5 and FLAN-T5 model variants (#5763) | 1 year ago
Eddie-Wang | e112b610a1 | llama : add support for BitnetForCausalLM (#7931) | 1 year ago
Clint Herron | b5a5f34efa | Removing extra blank lines that were breaking Lint. (#8067) | 1 year ago
0xspringtime | 3aa184a8c7 | convert-hf : change assert to exception (#8015) | 1 year ago
Ștefan-Gabriel Muscalu | a94e6ff877 | update: support Qwen2-57B-A14B (#7835) | 1 year ago
Elaine | 41b9260f18 | convert : add Poro-34B-chat tokenizer support (#7713) | 1 year ago
sasha0552 | 2decf57bc6 | convert-hf : set the model name based on cli arg, if present (#7693) | 1 year ago
compilade | 5795b94182 | convert-hf : match model part name prefix and suffix (#7687) | 1 year ago
compilade | ed9f252118 | gguf-py : decouple adding metadata from writing in GGUFWriter (#7827) | 1 year ago
Joan Fontanals | f5d7b268ec | llama : add jina v2 base code (#7596) | 1 year ago
Galunid | 7672adeec7 | Fix encoding in python scripts (#7733) | 1 year ago
Galunid | 0515ad93f4 | convert-hf : Handle NotImplementedError in convert-hf-to-gguf (#7660) | 1 year ago
Galunid | 9c4c9cc83f | Move convert.py to examples/convert-legacy-llama.py (#7430) | 1 year ago
Giuseppe Scrivano | 5442939fcc | llama : support small Granite models (#7481) | 1 year ago
fairydreaming | ee3dff6b8e | Add support for DeepseekV2ForCausalLM (#7519) | 1 year ago
Galunid | 32a28217f4 | Fix aya-23 conversion scripts (#7539) | 1 year ago
Bartowski | c429b33beb | llama : add Smaug 70B support (#7402) | 1 year ago
compilade | b83bab15a5 | gguf-py : fix and simplify quantized shape round-trip (#7483) | 1 year ago
fairydreaming | fbca2f27fc | Add support for ArcticForCausalLM (#7020) | 1 year ago
fairydreaming | 9b82476ee9 | Add missing inference support for GPTNeoXForCausalLM (Pythia and GPT-NeoX base models) (#7461) | 1 year ago
liuwei-git | 201cc11afa | llama : add phi3 128K model support (#7225) | 1 year ago
Georgi Gerganov | c3f8d58356 | tests : test-tokenizer-0.sh print more info (#7402) | 1 year ago
jaime-m-p | d7e852c1bc | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | 1 year ago
jaime-m-p | 917dc8cfa6 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375) | 1 year ago
Georgi Gerganov | fabf30b4c4 | llama : remove Persimmon (#7408) | 1 year ago
Anas Ahouzi | 6aade19ee7 | Add StableLM2 pre-tokenizer (#7349) | 1 year ago
Georgi Gerganov | b49a13dd2f | convert : fix set_vocab_sentencepiece (#6866) | 1 year ago
Aarni Koskela | d273c1402b | py : convert-hf-to-gguf-update improvements (#7340) | 1 year ago