Commit History

Author | SHA1 | Message | Date
ldwang | fce48caf9a | convert.py : support bpe tokenizer (#2228) | 2 years ago
Jiahao Li | 875086bdb9 | ggml : relax contiguous constraints in activation function (#2371) | 2 years ago
slaren | da1889834a | ggml : improve graph build time via hash table lookup (#2329) | 2 years ago
Hesen Peng | 82552b7f54 | build : fix line breaking error in build-info.sh (#2349) | 2 years ago
Xiao-Yong Jin | 0c06204fb3 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304) | 2 years ago
Eve | 1fed755b1f | ci : add non-AVX scalar build/test (#2356) | 2 years ago
katsu560 | be2301bcda | k_quants : add AVX support to dot functions with QK_K as 64 (#2339) | 2 years ago
Shouzheng Liu | 1aa18ef994 | metal : concurrently dispatch commands (#2358) | 2 years ago
Kawrakow | 9a08eaf3c4 | Another speed gain for Q4_0 and Q4_1 on Metal (#2375) | 2 years ago
Kawrakow | 129d844c87 | Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359) | 2 years ago
slaren | d5512b782b | server: add rms_norm_eps parameter (#2380) | 2 years ago
Henri Vasserman | c798308e3a | [Server] Escape HTML in webchat (#2368) | 2 years ago
slaren | 41c674161f | make rms_norm_eps a parameter (#2374) | 2 years ago
Aarni Koskela | b3f138d058 | Chat UI extras (#2366) | 2 years ago
Georgi Gerganov | 5b2b2dc6ae | ggml : sync (unary ops refactor, static-correctness) (#2370) | 2 years ago
Kawrakow | 42f70cb2f6 | Fix scalar version of Q5_K when QK_K = 64 (#2362) | 2 years ago
Evan Jones | 84e09a7d8b | llama : add grammar-based sampling (#1773) | 2 years ago
Kawrakow | 2f9cf974a0 | Some more Q4_K and Q5_K speedup on CUDA (#2346) | 2 years ago
IgnacioFDM | 4f06592cc6 | Add gqa parameter support to the server (#2351) | 2 years ago
Johannes Gäßler | 70d26ac388 | Fix __dp4a documentation (#2348) | 2 years ago
wzy | 57921ca6db | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347) | 2 years ago
slaren | 3602ac4255 | fix n_tasks (#2342) | 2 years ago
slaren | 95a6c595e7 | ggml : move op parameters from tensors to ggml_tensor::op_params (#2333) | 2 years ago
Georgi Gerganov | e76d630df1 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | 2 years ago
maddes8cht | 1d0824b247 | llama : print help to stdout (#2338) | 2 years ago
wzy | bc3ec2cdc9 | flake : support `nix build '.#opencl'` (#2337) | 2 years ago
Christian Demsar | a940458e48 | llama : print max tensor size to stderr (#2336) | 2 years ago
Jose Maldonado | 91171b8072 | make : fix CLBLAST compile support in FreeBSD (#2331) | 2 years ago
AustinMroz | 355c80f49e | examples : simplify vim plugin (#2327) | 2 years ago
Jiahao Li | 83a00ce69b | metal : support bcast add & dup & cont op (#2323) | 2 years ago