Commit history

Author  SHA1  Message  Date
bhubbb  698f7b5d63  make : add libllama.so target for llama-cpp-python (#797)  2 years ago
iacore  c1950c3431  zig : don't link examples/common.cpp for non-example (#814)  2 years ago
Ivan Stepanov  4953e9007f  llama : always sort logits before nucleus sampling (#812)  2 years ago
Sergey Alirzaev  cc9cee8e9e  Do not crash when it has nothing to say. (#796)  2 years ago
Pavol Rusnak  d2beca95dc  Make docker instructions more explicit (#785)  2 years ago
Georgi Gerganov  eeaa7b0492  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)  2 years ago
Georgi Gerganov  986b6ce9f9  ggml, llama : avoid heavy V transpose + improvements (#775)  2 years ago
Georgi Gerganov  3416298929  Update README.md  2 years ago
Ivan Stepanov  5a8c4f6240  llama : define non-positive top_k; top_k range check (#779)  2 years ago
at8u  ff05d05c96  miku.sh : add executable bit (#780)  2 years ago
Georgi Gerganov  62b3e81aae  media : add logos and banners  2 years ago
Georgi Gerganov  8d10406d6e  readme : change logo + add bindings + add uis + add wiki  2 years ago
iacore  ed1c214e66  zig : add build.zig (#773)  2 years ago
Ivan Stepanov  0c44427df1  make : missing host optimizations in CXXFLAGS (#763)  2 years ago
Adithya Balaji  594cc95fab  readme : update with CMake and windows example (#748)  2 years ago
at8u  88ed5761b8  examples : add Miku.sh (#724)  2 years ago
Andrew Duffy  58c438cf7d  Add Accelerate/BLAS when using Swift (#765)  2 years ago
mgroeber9110  53dbba7695  Windows: reactive sigint handler after each Ctrl-C (#736)  2 years ago
SebastianApel  437e77855a  10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)  2 years ago
Ivan Stepanov  cd7fa95690  Define non-positive temperature behavior (#720)  2 years ago
bsilvereagle  a0c0516416  Remove torch GPU dependencies from the Docker.full image (#665)  2 years ago
Thatcher Chamberlin  d8d4e865cd  Add a missing step to the gpt4all instructions (#690)  2 years ago
Christian Falch  e986f94829  Added api for getting/setting the kv_cache (#685)  2 years ago
Marian Cepok  c0bb1d3ce2  ggml : change ne to int64_t (#626)  2 years ago
Leonardo Neumann  6e7801d08d  examples : add gpt4all script (#658)  2 years ago
Stephan Walter  81040f10aa  llama : do not allocate KV cache for "vocab_only == true" (#682)  2 years ago
Fabian  c4f89d8d73  make : use -march=native -mtune=native on x86 (#609)  2 years ago
Murilo Santana  5b70e7de4c  fix default params for examples/main (#697)  2 years ago
Ikko Eltociear Ashimine  a717cba844  py: huggingface -> Hugging Face (#686)  2 years ago
rimoliga  d0a7f742e7  readme: replace termux links with homepage, play store is deprecated (#680)  2 years ago