| Author | Commit | Message | Date |
|---|---|---|---|
| Georgi Gerganov | f3f65429c4 | llama : reorganize source code + improve CMake (#8006) | 1 year ago |
| Isaac McFadyen | 8854044561 | Clarify default MMQ for CUDA and LLAMA_CUDA_FORCE_MMQ flag (#8115) | 1 year ago |
| Johannes Gäßler | a818f3028d | CUDA: use MMQ instead of cuBLAS by default (#8075) | 1 year ago |
| Abheek Gulati | 1193778105 | readme : update UI list (#7943) | 1 year ago |
| Bryan Honof | b473e95084 | Add Nix and Flox install instructions (#7899) | 1 year ago |
| hopkins385 | 6fe1c62741 | readme : update UI list [no ci] (#7958) | 1 year ago |
| Galunid | a55eb1bf0f | readme : Remove outdated instructions from README.md (#7914) [no ci] | 1 year ago |
| Olivier Chafik | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| Patrice Ferlet | f2b5764beb | Fix a typo and add Fedora 40 pacakge to install for Vulkan (#7794) [no ci] | 1 year ago |
| Georgi Gerganov | c28a83902c | examples : remove --instruct remnants (#7846) | 1 year ago |
| Mattheus Chediak | a143c04375 | README minor fixes (#7798) [no ci] | 1 year ago |
| Georgi Gerganov | 554c247caf | ggml : remove OpenCL (#7735) | 1 year ago |
| Georgi Gerganov | 5ca0944a15 | readme : remove obsolete Zig instructions (#7471) | 1 year ago |
| HanishKVC | 2ac95c9d56 | SimpleChat: Simple histogram/repeatMatching driven garbageTrimming, Settings UI, Streaming mode, OpenAi Compat (Model, Authorization Bearer), Save/Restore session, Auto Settings UI (#7548) | 1 year ago |
| Johannes Gäßler | 9b596417af | CUDA: quantized KV support for FA vec (#7527) | 1 year ago |
| Georgi Gerganov | 16926dff92 | readme : link homebrew discussion | 1 year ago |
| Galunid | 2e32f874e6 | Somehow '**' got lost (#7663) | 1 year ago |
| Galunid | 1af511fc22 | Add convert.py removal to hot topics (#7662) | 1 year ago |
| Sertaç Özercan | 0541f06296 | [no ci] docs: add aikit to readme (#7650) | 1 year ago |
| Martin Delille | 5dcdf94676 | Fix conan badge display [no ci] (#7645) | 1 year ago |
| Manuel | 2e2340de17 | Add brew installation instruction to README [no ci] (#7616) | 1 year ago |
| Martin Delille | 7846540bd2 | readme : add Conan badge (#7638) | 1 year ago |
| Galunid | 9c4c9cc83f | Move convert.py to examples/convert-legacy-llama.py (#7430) | 1 year ago |
| Johannes Gäßler | 972b555ab9 | README: explain parallel build [no ci] (#7618) | 1 year ago |
| Meng, Hengyu | b864b50ce5 | [SYCL] Align GEMM dispatch (#7566) | 1 year ago |
| Aarni Koskela | 9146d36fe7 | Readme: add akx/ggify to tools (#1484) | 1 year ago |
| Georgi Gerganov | 74f33adf5f | readme : remove trailing space (#7469) | 1 year ago |
| Raj Hammeer Singh Hada | 8b94e799df | readme : add Bunny in supported models [no ci] (#7469) | 1 year ago |
| Victor Nogueira | dacfcebd60 | readme : add GPT-NeoX + Pythia to the list of supported models (#7491) | 1 year ago |
| Georgi Gerganov | fabf30b4c4 | llama : remove Persimmon (#7408) | 1 year ago |