Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Georgi Gerganov | 447ccbe8c3 | readme : add new roadmap + manifesto | 2 years ago |
| Georgi Gerganov | bd34cdde38 | ggml : sync latest ggml (custom operators) | 2 years ago |
| anon998 | c2a08f87b8 | fix server sampling: top k sampler first (#1977) | 2 years ago |
| Georgi Gerganov | 66a2555ba6 | readme : add Azure CI discussion link | 2 years ago |
| sjinzh | e65ca7e14a | zig : upgrade build system support (#1981) | 2 years ago |
| Robyn | 5ec8dd5a3c | #1869 Fix null reference errors when training from scratch with CUDA (#1907) | 2 years ago |
| Georgi Gerganov | 65bdd52a86 | tests : sync test-grad0 from ggml | 2 years ago |
| Rowan Hart | fdd1860911 | flake : fix ggml-metal.metal path and run nixfmt (#1974) | 2 years ago |
| AN Long | c943d823c1 | convert : fix invalid params in write_vocab_only (#1975) | 2 years ago |
| slaren | f2c754e1c3 | ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) | 2 years ago |
| Georgi Gerganov | 11da1a85cd | readme : fix whitespaces | 2 years ago |
| Alberto | 235b610d65 | readme : fixed termux instructions (#1973) | 2 years ago |
| Alex Renda | b061ba9e2a | llama : fix top-p sampling to match the canonical definition (#1953) | 2 years ago |
| Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago |
| eiery | d7b7484f74 | Add OpenLLaMA instructions to the README (#1954) | 2 years ago |
| Erik Scholz | 7487137227 | rework convert.py to read hyper-parameters from config.json (#1958) | 2 years ago |
| Johannes Gäßler | bbca06e269 | cmake: revert CUDA arch default to 52, 61 if f16 (#1959) | 2 years ago |
| Rahul Vivek Nair | fb98254f99 | Fix typo in README.md (#1961) | 2 years ago |
| Georgi Gerganov | 049aa16b8c | readme : add link to p1 | 2 years ago |
| Xiake Sun | 2322ec223a | Fix typo (#1949) | 2 years ago |
| Ettore Di Giacinto | aacdbd4056 | llama : fix params struct slignment (#1936) | 2 years ago |
| Henri Vasserman | 20568fe60f | [Fix] Reenable server embedding endpoint (#1937) | 2 years ago |
| Georgi Gerganov | 18b35625c3 | ggml : fix bug in LBFGS optimizer (found by ggml tests) | 2 years ago |
| l3utterfly | ba4e85a833 | llama : use aligned memory during ggml_init call from loading saved sessions (#1934) | 2 years ago |
| Georgi Gerganov | 23fc5c219a | cmake : fix trailing whitespaces | 2 years ago |
| Kawrakow | cb40dfca69 | llama : only use Q6_K for output weights if tensor size is multiple of 256 (#1932) | 2 years ago |
| Kawrakow | ca7c3f4da5 | cuda : faster k-quants on older GPUs (#1930) | 2 years ago |
| Georgi Gerganov | b97ca431db | ggml : sync latest ggml repo (#1924) | 2 years ago |
| Howard Su | 1e3abfcef0 | cmake : fix build shared ggml when CUDA is enabled (#1929) | 2 years ago |
| Johannes Gäßler | 16b9cd1939 | Convert vector to f16 for dequantize mul mat vec (#1913) | 2 years ago |