Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| John | aa23412989 | llava : support v1.6 (#5267) | 1 year ago |
| Sang-Kil Park | f68664ac24 | convert : fix TypeError on GPT-2 vocab.json (#5288) | 1 year ago |
| Georgi Gerganov | 906cff55c2 | py : handle byte tokens in `get_token_type` (#5341) | 1 year ago |
| Georgi Gerganov | 14fef85e2d | py : fix except (#5194) | 1 year ago |
| Sang-Kil Park | e76627bcce | py : improve BPE tokenizer support (#5189) | 1 year ago |
| Jared Van Bortel | b43ebde3b0 | convert : partially revert PR #4818 (#5041) | 2 years ago |
| David Sommers | b46757735d | convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#5019) | 2 years ago |
| Georgi Gerganov | 0f83e727af | py : fix whitespace | 2 years ago |
| Georgi Gerganov | 4f4bf35f46 | py : fix missing added_tokens_dict for SPM and BPE vocabs (#4971) | 2 years ago |
| Austin | 6efb8eb30e | convert.py : fix vanilla LLaMA model conversion (#4818) | 2 years ago |
| Nam D. Tran | f6793491b5 | llama : add AWQ for llama, llama2, mpt, and mistral models (#4593) | 2 years ago |
| wonjun Jang | f56d6077d0 | Add byte token type when tokenizer.model is not exists (#4641) | 2 years ago |
| wonjun Jang | 873637afc7 | convert : support loading vocab from fast tokenizer config (#3633) | 2 years ago |
| slaren | 799a1cb13b | llama : add Mixtral support (#4406) | 2 years ago |
| Richard Kiss | 9494d7c477 | english : use `typos` to fix comments and logs (#4354) | 2 years ago |
| slaren | f4d973cecb | convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258) | 2 years ago |
| crasm | 3014b5415d | Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189) | 2 years ago |
| Galunid | f23c0359a3 | ci : add flake8 to github actions (python linting) (#4129) | 2 years ago |
| Don Mahurin | 2ab0707acb | convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089) | 2 years ago |
| afrideva | b46d12f86d | convert.py: also look for plain model.safetensors (#4043) | 2 years ago |
| Kerfuffle | 34b0a08207 | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2 years ago |
| Galunid | a75fa576ab | scripts: Generalize convert scripts (#3838) | 2 years ago |
| cebtenzzre | 898aeca90a | llama : implement YaRN RoPE scaling (#2268) | 2 years ago |
| Georgi Gerganov | 8a2f2fea29 | convert : ignore tokens if their IDs are within [0, vocab_size) (#3831) | 2 years ago |
| Kerfuffle | a5e7dbd614 | llama : validate special token ids are in range when loading GGUF model (#3635) | 2 years ago |
| Qin Yue Chen | 8cf19d60dc | gguf : support big endian platform (#3552) | 2 years ago |
| goerch | ff5a3f0c09 | Work on the BPE tokenizer (#3252) | 2 years ago |
| cebtenzzre | 0fe321031a | gguf : general usability improvements (#3409) | 2 years ago |
| Zhang Peiyuan | e519621010 | convert : remove bug in convert.py permute function (#3364) | 2 years ago |
| Erik Scholz | 6eeb4d9083 | convert: remove most of the n_mult usage in convert.py (#3098) | 2 years ago |