Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| SuperUserNameMan | b41b4cad6f | examples : add "simple" (#1840) | 2 years ago |
| Zenix | 13fe9d2d84 | cmake : add auto detection of BLAS_INCLUDE_DIRS (#1886) | 2 years ago |
| Johannes Gäßler | ac3b886953 | llama : fix embd when offloading non-repeating layers (#1891) | 2 years ago |
| FrankHB | 5b9ccaf104 | Fixed possible macro redefinition (#1892) | 2 years ago |
| Borislav Stanimirov | 9cbf50c041 | build : fix and ignore MSVC warnings (#1889) | 2 years ago |
| Kawrakow | 3d01122610 | CUDA : faster k-quant dot kernels (#1862) | 2 years ago |
| Borislav Stanimirov | 602c748863 | gitignore : add several entries specific to Visual Studio (#1888) | 2 years ago |
| Johannes Gäßler | a09f9195be | Fixed CUDA runtime version check (#1879) | 2 years ago |
| Georgi Gerganov | bed9275617 | cmake : remove whitespaces | 2 years ago |
| yangli2 | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| Igor Okulist | 3559433fec | cmake : set include path for OpenBlas (#1830) | 2 years ago |
| Frederik Vogel | 69b34a0e80 | swift : Package compile breaks due to ggml-metal.metal (#1831) | 2 years ago |
| daboe01 | cf267d1c71 | make : add train-text-from-scratch (#1850) | 2 years ago |
| Srinivas Billa | 9dda13e5e1 | readme : server compile flag (#1874) | 2 years ago |
| sandyiscool | 37e257c48e | make : clean *.so files (#1857) | 2 years ago |
| Howard Su | 64cc19b4fe | Fix the validation of main device (#1872) | 2 years ago |
| Georgi Gerganov | 4bfcc855ab | metal : parallel command buffer encoding (#1860) | 2 years ago |
| Johannes Gäßler | 6b8312e797 | Better error when using both LoRA + GPU layers (#1861) | 2 years ago |
| Johannes Gäßler | 254a7a7a5f | CUDA full GPU acceleration, KV cache in VRAM (#1827) | 2 years ago |
| 0xspringtime | 9254920265 | baby-llama : fix operator!= (#1821) | 2 years ago |
| xaedes | e32089b2c2 | train : improved training-from-scratch example (#1652) | 2 years ago |
| Georgi Gerganov | 2347e45e7b | llama : do a warm-up eval at start for better timings (#1824) | 2 years ago |
| Kerfuffle | 74d4cfa343 | Allow "quantizing" to f16 and f32 (#1787) | 2 years ago |
| Kawrakow | 74a6d922f1 | Metal implementation for all k_quants (#1807) | 2 years ago |
| slaren | e4caa8da59 | ci : run when changing only the CUDA sources (#1800) | 2 years ago |
| Howard Su | 58970a4c39 | Leverage mmap for offloading tensors to GPU (#1597) | 2 years ago |
| Kawrakow | 8c0a10e64d | metal : fix failure to load model (#1817) | 2 years ago |
| Kerfuffle | fa84c4b3e8 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789) | 2 years ago |
| Kyle Liang | 12b063f0ec | Fixed WSL cuda's OOM error (#1594) | 2 years ago |
| Ryan Landay | 31d2b5f4a4 | Update SHA256SUMS with current hashes for models quantized using q4_0 (#1798) | 2 years ago |