Commit History

Author  SHA1  Message  Date
Nigel Bosch eb7cf15a80 server : add /apply-template endpoint for additional use cases of Minja functionality (#11489) 11 months ago
Rémy Oudompheng 66ee4f297c vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360) 11 months ago
Daniel Bevenius e51c47b401 server : update auto gen files comments [no ci] (#11484) 11 months ago
Jeff Bolz 2711d0215f vulkan: Catch pipeline creation failure and print an error message (#11436) 11 months ago
Eric Curtin f0d4b29edf Parse https://ollama.com/library/ syntax (#11480) 11 months ago
Georgi Gerganov 815857791d sync : ggml 11 months ago
William Tambellini 1a0e87d291 ggml : add option to not print stack on abort (ggml/1081) 1 year ago
issixx d2e518e9b4 ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065) 1 year ago
Daniel Bevenius b636228c0a embedding : enable --no-warmup option (#11475) 11 months ago
Molly Sophia 325afb370a llama: fix missing k_cache store for rwkv6qwen2 (#11445) 11 months ago
Emreerdog 794fe23f29 cmake: add hints for locating ggml on Windows using Llama find-package (#11466) 11 months ago
peidaqi cf8cc856d7 server : Fixed wrong function name in llamacpp server unit test (#11473) 11 months ago
Xuan-Son Nguyen d0c08040b6 ci : fix build CPU arm64 (#11472) 11 months ago
uvos be5ef7963f HIP: Supress transformation warning in softmax.cu 11 months ago
Nikita Sarychev cae9fb4361 HIP: Only call rocblas_initialize on rocblas versions with the multiple instantation bug (#11080) 11 months ago
Eric Curtin 7fee2889e6 Add github protocol pulling and http:// (#11465) 11 months ago
Nuno d7d1eccacc docker: allow installing pip packages system-wide (#11437) 11 months ago
someone13574 4bf3119d61 cmake : don't fail on `GGML_CPU=OFF` (#11457) 11 months ago
Nuno f643120bad docker: add perplexity and bench commands to full image (#11438) 11 months ago
Akarshan Biswas 6e84b0ab8e SYCL : SOFTMAX F16 mask support and other fixes (#11261) 11 months ago
Michael Engel 2b8525d5c8 Handle missing model in CLI parameters for llama-run (#11399) 11 months ago
Eric Curtin a4417ddda9 Add new hf protocol for ollama (#11449) 11 months ago
Haus1 d6d24cd9ed AMD: parse the architecture as supplied by gcnArchName (#11244) 11 months ago
lexasub a5203b4465 llama : minor fixes for up llama load model speed (#11448) 11 months ago
Johannes Gäßler df984e0147 llama: refactor llama_decode_impl (#11381) 11 months ago
Ihar Hrachyshka acd38efee3 metal: Handle null returned from MTLCreateSystemDefaultDevice() (#11441) 11 months ago
Xuan Son Nguyen caf773f249 docker : fix ARM build and Vulkan build (#11434) 11 months ago
Georgi Gerganov 178a7eb952 metal : use residency sets (#11427) 11 months ago
Nuno 6f53d8a6b4 docker: add missing vulkan library to base layer and update to 24.04 (#11422) 11 months ago
bandoti 19f65187cb cmake: add ggml find package (#11369) 11 months ago