Change history

Author SHA1 Message Date
Douglas Hanley 2891c8aa9a Add support for BERT embedding models (#5423) 2 years ago
github-actions[bot] 97a336507e flake.lock: Update 2 years ago
Sergio López c88c74f967 vulkan: only use M-sized matmul on Apple GPUs (#5412) 2 years ago
Alexey Parfenov a803333a4e common : use enums for sampler types (#5418) 2 years ago
Alexey Parfenov 684780141a server : allow to specify tokens as strings in logit_bias (#5003) 2 years ago
Georgi Gerganov 85910c5b30 main : ctrl+C print timing in non-interactive mode (#3873) 2 years ago
Georgi Gerganov 139b62a839 common : fix compile warning 2 years ago
Georgi Gerganov 0f2411f154 ggml : fix compile warnings (unused vars) (#4966) 2 years ago
snadampal a07d0fee1f ggml : add mmla kernels for quantized GEMM (#4966) 2 years ago
Johannes Gäßler e4640d8fdf lookup: add print for drafting performance (#5450) 2 years ago
Xuan Son Nguyen 907e08c110 server : add llama2 chat template (#5425) 2 years ago
Ian Bull f026f8120f metal : use autoreleasepool to avoid memory leaks (#5437) 2 years ago
Georgi Gerganov cd9aea63b5 scripts : update sync scripts with new backends 2 years ago
Georgi Gerganov 43b65f5eb8 sync : ggml 2 years ago
Michael Podvitskiy 4633d93af0 ggml : add abort_callback for cpu backend (ggml/725) 2 years ago
Neuman Vong 4b7b38bef5 vulkan: Set limit for task concurrency (#5427) 2 years ago
Daniel Bevenius e00d2a62dd llava : add requirements.txt and update README.md (#5428) 2 years ago
Riley Stewart 7c777fcd5d server : fix prompt caching for repeated prompts (#5420) 2 years ago
Paul Tsochantaris e5ca3937c6 llama : do not cap thread count when MoE on CPU (#5419) 2 years ago
Marko Tasic e4124c2477 readme : add JavaScript/Wasm repo (#5415) 2 years ago
Michael Podvitskiy b2f87cb64d ggml : fix `error C2078: too many initializers` for MSVC ARM64 (#5404) 2 years ago
0cc4m 44fbe34360 Fix Vulkan crash on APUs with very little device memory (#5424) 2 years ago
Johannes Gäßler 8e6a9d2de0 CUDA: more warps for mmvq on NVIDIA (#5394) 2 years ago
slaren 41f308f58e llama : do not print "offloading layers" message in CPU-only builds (#5416) 2 years ago
Abhilash Majumder 6e99f2a04f Fix f16_sycl cpy call from Arc (#5411) 2 years ago
Daniel Bevenius ff4ff05c5f llava : add missing .py, and fix paths in README.md (#5414) 2 years ago
Johannes Gäßler b7b74cef36 fix trailing whitespace (#5407) 2 years ago
runfuture 4aa43fab56 llama : fix MiniCPM (#5392) 2 years ago
Daniel Bevenius a6e514a85f llava: fix typo/formatting in README.md (#5405) 2 years ago
Johannes Gäßler 26d4efd11e sampling: fix top_k <= 0 (#5388) 2 years ago