Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Georgi Gerganov | f5a77a629b | Introduce C-style API (#370) | 2 years ago |
| Georgi Gerganov | 3bfa3b43b7 | Fix convert script, warnings alpaca instructions, default params | 2 years ago |
| Mack Straight | c98ae02668 | fix typo in comment (#318) | 2 years ago |
| Georgi Gerganov | eb34620aec | Add tokenizer test + revert to C++11 (#355) | 2 years ago |
| Qingyou Meng | 6b6d5b5024 | Fixed tokenizer.model not found error when model dir is symlink (#325) | 2 years ago |
| Mack Straight | 074bea2eb1 | sentencepiece bpe compatible tokenizer (#252) | 2 years ago |
| Georgi Gerganov | c1c7026b47 | Fix python stuff (#109) | 2 years ago |
| qunash | 467b149761 | Refactoring `convert-pth-to-ggml.py`: more concise and readable (#109) | 2 years ago |
| Bernat Vadell | 2af23d3043 | 🚀 Dockerize llamacpp (#132) | 2 years ago |
| Ronsor | 956dfda8ad | Use `tokenizer.vocab_size()` instead of hardcoding 32000 in convert-pth-to-ggml.py (#142) | 2 years ago |
| Val Kharitonov | 2a20f48efa | Fix UTF-8 handling (including colors) (#79) | 2 years ago |
| Georgi Gerganov | 7c9e54e55e | Revert "weights_only" arg - this causing more trouble than help | 2 years ago |
| Oleksandr Nikitin | b9bd1d0141 | python/pytorch compat notes (#44) | 2 years ago |
| deepdiffuser | a93120236f | use weights_only in conversion script (#32) | 2 years ago |
| Georgi Gerganov | 007a8f6f45 | Support all LLaMA models + change Q4_0 quantization storage | 2 years ago |
| Georgi Gerganov | 70bc0b8b15 | Fix a bug in the rope calculation | 2 years ago |
| Georgi Gerganov | 26c0846629 | Initial release | 2 years ago |