Revision history

Author SHA1 Message Date
Branden Butler 40a34fe8d0 speculative : fix prompt tokenization in speculative example (#4025) 2 years ago
Georgi Gerganov dae06c06e5 Revert "finetune : add --n-gpu-layers flag info to --help (#4128)" 2 years ago
Clark Saben 05e8301e45 finetune : add --n-gpu-layers flag info to --help (#4128) 2 years ago
SoftwareRenderer 936c79b227 server : relay error messages (#4131) 2 years ago
kchro3 262005ad9d common : comma should be semicolon (#4137) 2 years ago
Georgi Gerganov 35985acffa gitignore : tokenize 2 years ago
slaren e937066420 gguf-py : export chat templates (#4125) 2 years ago
Kerfuffle 28a2e6e7d4 tokenize example: Respect normal add BOS token behavior (#4126) 2 years ago
Galunid 0b5c3b0457 scripts : Remove missed baichuan convert script (#4127) 2 years ago
Kerfuffle 2923f17f6f Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124) 2 years ago
slaren bbecf3f415 llama : increase max nodes (#4115) 2 years ago
Roger Meier 8e9361089d build : support ppc64le build for make and CMake (#3963) 2 years ago
Georgi Gerganov 5ad387e994 tokenize : fix trailing whitespace 2 years ago
zakkor 2fa02b4b3d examples : add tokenize (#4039) 2 years ago
Don Mahurin 2ab0707acb convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089) 2 years ago
John 11173c92d6 py : Falcon HF compatibility (#4104) 2 years ago
Jannis Schönleber 9e87ef60e1 common : improve yaml log escaping (#4080) 2 years ago
Huawei Lin c7cce1246e llava : fix compilation warning that fread return value is not used (#4069) 2 years ago
Jiří Podivín f7d5e97542 py : remove superfluous import statements (#4076) 2 years ago
Jiří Podivín ba4cf5c0bf train : move number of gpu layers argument parsing to common/train.cpp (#4074) 2 years ago
slaren e85bb1a8e7 llama : add functions to get the model's metadata (#4013) 2 years ago
gwjr 3e916a07ac finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079) 2 years ago
Andrew Godfrey 947f64f163 finetune : zero the loraB initial vectors (#4082) 2 years ago
Andrew Godfrey b83e149ec6 cuda : get_row_rounding F32 (#4095) 2 years ago
Georgi Gerganov 4f447a4833 llama : fix data units (#4101) 2 years ago
Kerfuffle 91f6499393 Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) 2 years ago
texmex76 8da46278e1 gguf : fix potential infinite loops while parsing (#4100) 2 years ago
Jared Van Bortel a6fc554e26 llama : restore prefix space in llama tokenizer (#4081) 2 years ago
slaren 1cf2850d52 ggml-cuda : increase max graph size (#4084) 2 years ago
Michael Potter 6bb4908a17 Fix MacOS Sonoma model quantization (#4052) 2 years ago