Commit History

Author SHA1 Message Date
Cebtenzzre 182af739c4 server: regenerate completion.js.hpp (#2515) 2 years ago
Cebtenzzre 4329d1acb0 CUDA: use min compute capability of GPUs actually used (#2506) 2 years ago
Cebtenzzre 02f9d96a86 CUDA: check if event is NULL before cudaStreamWaitEvent (#2505) 2 years ago
DannyDaemonic 3498588e0f Add --simple-io option for subprocesses and break out console.h and cpp (#1558) 2 years ago
Stephen Nichols 5f631c2679 Fixing race condition in server and partial stream handling in frontend. (#2391) 2 years ago
l3utterfly 415e99fec2 Stream save llama context data to file instead of allocating entire buffer upfront (#2488) 2 years ago
Borislav Stanimirov ff966e7ca6 build : fix several cast and printf warnings (#2499) 2 years ago
Evan Jones 8183159cf3 examples : generate JSON according to schema (#1887) 2 years ago
Johannes Gäßler 468ea24fb4 CUDA: faster non k-quant mul_mat_q kernels (#2483) 2 years ago
Johannes Gäßler 4f6b60c776 CUDA: Fix models with output size != 32000 (#2480) 2 years ago
ldwang 220d931864 readme : add Aquila-7B model series to supported models (#2487) 2 years ago
Eve 81844fbcfd tests : Fix compilation warnings (Linux/GCC) (#2451) 2 years ago
Yiming Cui a312193e18 readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475) 2 years ago
Bono Lv c574bddb36 fix a typo in examples/server/README.md (#2478) 2 years ago
ebraminio 86aeb27734 server : Support dark mode (#2414) 2 years ago
Matteo Boschini 1873ff586b metal : add gqa8 kernel to allow llama-2-70B on metal (#2459) 2 years ago
Johannes Gäßler 49e7cb5bb1 CUDA: fixed LLAMA_FAST compilation option (#2473) 2 years ago
Johannes Gäßler b772bba42e CUDA: fixed cmake F16 option (#2471) 2 years ago
Johannes Gäßler 0728c5a8b9 CUDA: mmq CLI option, fixed mmq build issues (#2453) 2 years ago
Johannes Gäßler 1215ed7d5c CUDA: Implemented row flattening for non-glm RoPE (#2468) 2 years ago
Johannes Gäßler 2dbf518911 CUDA: fewer memory bank conflicts for mul_mat_q (#2458) 2 years ago
slaren 9d2382b3e4 Fix Metal backend broken from the allocator changes (#2455) 2 years ago
slaren a113689571 ggml : add graph tensor allocator (#2411) 2 years ago
Johannes Gäßler 11f3ca06b8 CUDA: Quantized matrix matrix multiplication (#2160) 2 years ago
Johannes Gäßler 9baf9ef304 CUDA: faster multi GPU synchronization (#2448) 2 years ago
klosax 8a88e5855c perplexity : add Hellaswag calculation (#2389) 2 years ago
Lee a9559bf77b ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405) 2 years ago
eric8607242 ee1b497c98 llama : support more diverse tokenizers? (#2420) 2 years ago
Georgi Gerganov d73b8d48b4 examples : fix whitespace 2 years ago
nhamanasu 34ae1caf7f examples : server chat mode with llama2 (#2400) 2 years ago