Commit History

Author             SHA         Message                                                                          Date
Georgi Gerganov    ab336a9d5e  code : normalize enum names (#5697)                                              1 year ago
NawafAlansari      4480542b22  baby-llama : allocate graphs in ggml_context (#5573)                             1 year ago
Georgi Gerganov    afefa319f1  ggml : change ggml_scale to take a float instead of tensor (#4573)               2 years ago
slaren             cafcd4f895  ggml : remove n_dims from ggml_tensor (#4469)                                    2 years ago
Cebtenzzre         bc39553c90  build : enable more non-default compiler warnings (#3200)                        2 years ago
xaedes             0e76a8992c  train : finetune LORA (#2632)                                                    2 years ago
Georgi Gerganov    ec893798b7  llama : custom attention mask + parallel decoding + no context swaps (#3228)    2 years ago
Cebtenzzre         3aefaab9e5  check C++ code with -Wmissing-declarations (#3184)                               2 years ago
Cebtenzzre         ef15649972  build : fix most gcc and clang warnings (#2861)                                  2 years ago
Kawrakow           eb542d3932  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)                   2 years ago
slaren             41c674161f  make rms_norm_eps a parameter (#2374)                                            2 years ago
Qingyou Meng       1d656d6360  ggml : change ggml_graph_compute() API to not require context (#1999)            2 years ago
Howard Su          0be54f75a6  baby-llama : fix build after ggml_rope change (#2016)                            2 years ago
Borislav Stanimirov  9cbf50c041  build : fix and ignore MSVC warnings (#1889)                                   2 years ago
0xspringtime       9254920265  baby-llama : fix operator!= (#1821)                                              2 years ago
xaedes             e32089b2c2  train : improved training-from-scratch example (#1652)                           2 years ago
xaedes             f954edda93  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)  2 years ago