Commit History

Author  SHA1  Message  Date
  Howard Su cc45a7feb8 Fix crash of test-tokenizer-0 under Debug build (#2064) 2 years ago
  Howard Su 55dbb915cc [llama] No need to check file version when loading vocab score (#2079) 2 years ago
  WangHaoranRobin d7d2e6a0f0 server: add option to output probabilities for completion (#1962) 2 years ago
  Georgi Gerganov 46088f7231 ggml : fix build with OpenBLAS (close #2066) 2 years ago
  Johannes Gäßler 0bc2cdfc87 Better CUDA synchronization logic (#2057) 2 years ago
  Johannes Gäßler befb3a3562 Test-based VRAM scratch size + context adjustment (#2056) 2 years ago
  Daniel Drake b213227067 cmake : don't force -mcpu=native on aarch64 (#2063) 2 years ago
  Aaron Miller 2f8cd979ec metal : release buffers when freeing metal context (#2062) 2 years ago
  Judd 471aab6e4c convert : add support of baichuan-7b (#2055) 2 years ago
  Georgi Gerganov 463f2f4c4f llama : fix return value of llama_load_session_file_internal (#2022) 2 years ago
  Rand Xie cb44dbc7de llama : catch llama_load_session_file_internal exceptions (#2022) 2 years ago
  Georgi Gerganov 79f634a19d embd-input : fix returning ptr to temporary 2 years ago
  Georgi Gerganov 04606a1599 train : fix compile warning 2 years ago
  Qingyou Meng b1ca8f36a9 ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995) 2 years ago
  Howard Su b8c8dda75f Use unsigned for random seed (#2006) 2 years ago
  LostRuins 96a712ca1b Porting the improved K-Quant CUDA kernels to OpenCL (#1966) 2 years ago
  m3ndax d3494bb86b llama : replacing auto &kv with const auto &kv (#2041) 2 years ago
  Salvador E. Tropea 5b351e94d0 cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) 2 years ago
  Salvador E. Tropea 6432aabb6d cuda : fix missing const qualifier in casts (#2027) 2 years ago
  Howard Su b922bc351b llama : remove shards weight file support (#2000) 2 years ago
  Johannes Gäßler 7f9753fa12 CUDA GPU acceleration for LoRAs + f16 models (#1970) 2 years ago
  ningshanwutuobang cfa0750bc9 llama : support input embeddings directly (#1910) 2 years ago
  Erik Scholz 9d23589d63 fix pthreads setaffinity usage on android (#2020) 2 years ago
  Howard Su 0be54f75a6 baby-llama : fix build after ggml_rope change (#2016) 2 years ago
  Georgi Gerganov 181e8d9755 llama : fix rope usage after ChatGLM change 2 years ago
  Georgi Gerganov d9779021bd ggml : add support for ChatGLM RoPE 2 years ago
  Roman Parykin d38e451578 readme : add Scala 3 bindings repo (#2010) 2 years ago
  David Yang eaa6ca5a61 ggml : increase max tensor name + clean up compiler warnings in train-text (#1988) 2 years ago
  Gustavo Rocha Dias aa777abbb7 readme : LD_LIBRARY_PATH complement for some Android devices when building with CLBlast inside Termux (#2007) 2 years ago
  Georgi Gerganov c824d2e368 ggml : avoid conv 2d kernel round up 2 years ago