Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| slaren | ab33f7a338 | cuda : clear error after buffer allocation failure (#7376) | 1 year ago |
| fraxy-v | f5bf761747 | Capture CUDA logging output (#7298) | 1 year ago |
| agray3 | dc020985b8 | Avoid unnecessarily disabling CUDA graphs (#7302) | 1 year ago |
| Johannes Gäßler | dc685be466 | CUDA: add FP32 FlashAttention vector kernel (#7188) | 1 year ago |
| Justina Cho | f5ef34e428 | feat: implemented sigmoid function (ggml/806) | 1 year ago |
| Georgi Gerganov | 9cb317f77e | ggml : full ALiBi support (#7192) | 1 year ago |
| agray3 | bc4bba364f | Introduction of CUDA Graphs to LLama.cpp (#6766) | 1 year ago |
| William Tambellini | 858f6b73f6 | Add an option to build without CUDA VMM (#7067) | 1 year ago |
| Georgi Gerganov | 9c67c2773d | ggml : add Flash Attention (#5021) | 1 year ago |
| slaren | 0d56246f4b | ggml : group all experts in a single ggml_mul_mat_id (#6505) | 1 year ago |
| Johannes Gäßler | b5e7285baf | CUDA: fix matrix multiplication logic for tests (#6667) | 1 year ago |
| Carolinabanana | 5dc9dd7152 | llama : add Command R Plus support (#6491) | 1 year ago |
| Slava Primenko | f77261a7c5 | ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020) | 1 year ago |
| slaren | 08a0c02060 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | 1 year ago |
| compilade | 557410b8f0 | llama : greatly reduce output buffer memory usage (#6122) | 1 year ago |
| Kawrakow | 55c1b2a3bb | IQ1_M: 1.75 bpw quantization (#6302) | 1 year ago |
| slaren | ae1f211ce2 | cuda : refactor into multiple files (#6269) | 1 year ago |
| slaren | 2f0e81e053 | cuda : add LLAMA_CUDA_NO_PEER_COPY to workaround broken ROCm p2p copy (#6208) | 1 year ago |
| slaren | d0a71233fb | cuda : disable host register by default (#6206) | 1 year ago |
| slaren | 03a8f8fafe | cuda : fix LLAMA_CUDA_F16 build (#6197) | 1 year ago |
| Kawrakow | 76aa30a263 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183) | 1 year ago |
| slaren | 42e21c6882 | cuda : fix conflict with std::swap (#6186) | 1 year ago |
| slaren | 1c51f98adc | cuda : print the returned error when CUDA initialization fails (#6185) | 1 year ago |
| slaren | ccf58aa3ec | cuda : refactor to remove global resources (#6170) | 1 year ago |
| slaren | 2bf8d0f7c4 | backend : offload large batches to GPU (#6083) | 1 year ago |
| slaren | 3020327f6c | cuda : disable unused cudaLaunchHostFunc code (#6078) | 1 year ago |
| slaren | f30ea47a87 | llama : add pipeline parallelism support (#6017) | 1 year ago |
| Georgi Gerganov | 8030da7afe | ggml : reuse quantum structs across backends (#5943) | 1 year ago |
| Kawrakow | 44ca159faf | 1.5 bit: we can do even better (#5999) | 1 year ago |
| Kawrakow | be858f6205 | Better 1.5 bit quantization (#5971) | 1 year ago |