Commit history

Author SHA1 Message Date
Galunid daab3d7f45 Add more tokenizer tests (#3742) 2 years ago
Georgi Gerganov 469c9addef metal : handle ggml_scale for n%4 != 0 (close #3754) 2 years ago
Georgi Gerganov e3932593d4 Revert "make : add optional CUDA_NATIVE_ARCH (#2482)" 2 years ago
M. Yusuf Sarıgöz 9d02956443 issues : separate bug and enhancement template + no default title (#3748) 2 years ago
Galunid 69a6735087 Update special token handling in conversion scripts for gpt2 derived tokenizers (#3746) 2 years ago
Marcus Dunn 5be6c803fa llama : remove token functions with `context` args in favor of `model` (#3720) 2 years ago
Galunid 6336701c93 Fix baichuan convert script not detecing model (#3739) 2 years ago
Alex 96981f37b1 make : add optional CUDA_NATIVE_ARCH (#2482) 2 years ago
Georgi Gerganov 438c2ca830 server : parallel decoding and multimodal (#3677) 2 years ago
goerch 9e70cc0322 Add test for MPT tokenization (#3728) 2 years ago
Ian Scrivener 5a42a5f8e8 readme : remove unsupported node.js library (#3703) 2 years ago
Kerfuffle a5e7dbd614 llama : validate special token ids are in range when loading GGUF model (#3635) 2 years ago
vvhg1 d3956aea53 main : escape prompt for cfg_negative_prompt and consecutive inputs in main with interactive (#3623) 2 years ago
Georgi Gerganov 22c69a2794 batched : add len CLI argument 2 years ago
shibe2 465219b914 CLBlast: Add outer loops over src0 for broadcasting in mulmat 2 years ago
Georgi Gerganov d1031cf49c sampling : refactor init to use llama_sampling_params (#3696) 2 years ago
Qin Yue Chen 8cf19d60dc gguf : support big endian platform (#3552) 2 years ago
Georgi Gerganov a0edf73bda server : fix uninitialized sampling context (close #3685) 2 years ago
Herman Semenov f439e506e8 ggml : fix rope + llama minor optimizations (#3560) 2 years ago
cebtenzzre e78f3ef24a convert : restore compat with old Falcon models (#3680) 2 years ago
M. Yusuf Sarıgöz f3b25e4043 multimodal : add BakLLaVA conversion support (#3682) 2 years ago
M. Yusuf Sarıgöz 60abea9798 llava : avoid segfault in case of non-existent mmproj file (#3674) 2 years ago
Georgi Gerganov 004797f6ac readme : update hot topics 2 years ago
Georgi Gerganov 4e82b2ea3f speculative : bug fixes 2 years ago
Georgi Gerganov 0e89203b51 speculative : add tree-based sampling example (#3624) 2 years ago
Jhen-Jie Hong c67fe68e41 metal : implement q5_0 and q5_1 kernels (#3648) 2 years ago
shibe2 1117d06607 opencl : fix element-wise multiplication (#3656) 2 years ago
slaren cb33f43a2a fix embeddings when using CUDA (#3657) 2 years ago
Georgi Gerganov e1675d133c llama : avoid fprintf in favor of LLAMA_LOG (#3538) 2 years ago
BarfingLemurs 8402566a7c readme : update hot-topics & models, detail windows release in usage (#3615) 2 years ago