Commit History

Author SHA1 Message Date
slaren f30ea47a87 llama : add pipeline parallelism support (#6017) 1 year ago
Georgi Gerganov 05b06210c9 llama : more consistent names of count variables (#5994) 1 year ago
SeungWon Jeong fb215c3832 server : normalize embeddings (#5956) 1 year ago
compilade c2101a2e90 llama : support Mamba Selective State Space Models (#5328) 1 year ago
Georgi Gerganov 29ae62d2ae llama : fix embeddings (#5796) 1 year ago
Minsoo Cheong 6d341ab6c5 speculative : implement stochastic speculative sampling (#5625) 1 year ago
Douglas Hanley 475df1d6cf llama : allow for user specified embedding pooling type (#5849) 1 year ago
Neo Zhang Jianyu 715641391d Support multiple GPUs (split mode) on SYCL backend (#5806) 1 year ago
Miwa / Ensan f49a535686 common : fix flag `--logits-all` to `--all-logits` (#5805) 1 year ago
Pierrick Hymbert 3ab8b3a92e llama : cleanup unused mmq flags (#5772) 1 year ago
Georgi Gerganov 9d533a77d0 llama : fix defrag bugs + add parameter (#5735) 1 year ago
Georgi Gerganov ab336a9d5e code : normalize enum names (#5697) 1 year ago
Robey Holderith 5ee99c32f5 common, server : surface min_keep as its own parameter (#5567) 1 year ago
Georgi Gerganov 1dcc3fde00 common : fix ub (#5530) 1 year ago
Herman Semenov 5d3de51f97 ggml, common, examples, tests : fixed type arguments in printf (#5528) 1 year ago
Alexey Parfenov 6dcc02d244 server : add "samplers" param to control the samplers order (#5494) 1 year ago
bmwl f486f6e1e5 ggml : add numa options (#5377) 1 year ago
Alexey Parfenov a803333a4e common : use enums for sampler types (#5418) 1 year ago
snadampal a07d0fee1f ggml : add mmla kernels for quantized GEMM (#4966) 1 year ago
0cc4m ee1628bdfe Basic Vulkan Multi-GPU implementation (#5321) 1 year ago
l3utterfly e6f8177532 common : add dynamic temperature parameters to main example cli (#5295) 1 year ago
Michael Klimenko 52bb63c708 refactor : switch to emplace_back to avoid extra object (#5291) 1 year ago
Georgi Gerganov 5cb04dbc16 llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) 1 year ago
0cc4m f8e9140cb4 Vulkan Fixes (#5223) 1 year ago
Jared Van Bortel e8dc55d006 kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) 1 year ago
Abhilash Majumder 0f648573dd ggml : add unified SYCL backend for Intel GPUs (#2690) 2 years ago
Georgi Gerganov 89758723c7 minor : clean-up some warnings and style (#5094) 2 years ago
Kawrakow 6f9939d119 KL-divergence (#5076) 2 years ago
Kawrakow 7dcbe39d36 Add ability to evauate multiple choice tasks (#5047) 2 years ago
Kawrakow 682986a08e Add Winogrande evaluation (#5015) 2 years ago