Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Bono Lv | c574bddb36 | fix a typo in examples/server/README.md (#2478) | 2 years ago |
| ebraminio | 86aeb27734 | server : Support dark mode (#2414) | 2 years ago |
| Matteo Boschini | 1873ff586b | metal : add gqa8 kernel to allow llama-2-70B on metal (#2459) | 2 years ago |
| Johannes Gäßler | 49e7cb5bb1 | CUDA: fixed LLAMA_FAST compilation option (#2473) | 2 years ago |
| Johannes Gäßler | b772bba42e | CUDA: fixed cmake F16 option (#2471) | 2 years ago |
| Johannes Gäßler | 0728c5a8b9 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | 2 years ago |
| Johannes Gäßler | 1215ed7d5c | CUDA: Implemented row flattening for non-glm RoPE (#2468) | 2 years ago |
| Johannes Gäßler | 2dbf518911 | CUDA: fewer memory bank conflicts for mul_mat_q (#2458) | 2 years ago |
| slaren | 9d2382b3e4 | Fix Metal backend broken from the allocator changes (#2455) | 2 years ago |
| slaren | a113689571 | ggml : add graph tensor allocator (#2411) | 2 years ago |
| Johannes Gäßler | 11f3ca06b8 | CUDA: Quantized matrix matrix multiplication (#2160) | 2 years ago |
| Johannes Gäßler | 9baf9ef304 | CUDA: faster multi GPU synchronization (#2448) | 2 years ago |
| klosax | 8a88e5855c | perplexity : add Hellaswag calculation (#2389) | 2 years ago |
| Lee | a9559bf77b | ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405) | 2 years ago |
| eric8607242 | ee1b497c98 | llama : support more diverse tokenizers? (#2420) | 2 years ago |
| Georgi Gerganov | d73b8d48b4 | examples : fix whitespace | 2 years ago |
| nhamanasu | 34ae1caf7f | examples : server chat mode with llama2 (#2400) | 2 years ago |
| Weird Constructor | d91f3f0c55 | readme : fix the description of the Tail free sampling (TFS) method (#2431) | 2 years ago |
| Rand Xie | 65cdf34bdc | llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433) | 2 years ago |
| niansa/tuxifan | edcc7ae7d2 | Obtaining LLaMA 2 instructions (#2308) | 2 years ago |
| mj-shifu | 7c529cede6 | convert.py : Update to support 70B HF format model files (#2427) | 2 years ago |
| Georgi Gerganov | 1a941869cb | metal : disable graph concurrency optimization due to bug (#2413) | 2 years ago |
| slaren | b5472ea0ad | ggml : fix assert in ggml_set_unary_op (#2410) | 2 years ago |
| Cebtenzzre | 6df1f5940f | make : build with -Wmissing-prototypes (#2394) | 2 years ago |
| slaren | 5488fb789e | ggml : allocate graphs in a context (#2392) | 2 years ago |
| Kawrakow | eb542d3932 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2 years ago |
| slaren | 07aaa0f63f | ggml : fix ggml_flash_attn to use op_params (#2387) | 2 years ago |
| ldwang | fce48caf9a | convert.py : support bpe tokenizer (#2228) | 2 years ago |
| Jiahao Li | 875086bdb9 | ggml : relax contiguous constraints in activation function (#2371) | 2 years ago |
| slaren | da1889834a | ggml : improve graph build time via hash table lookup (#2329) | 2 years ago |