| Author | Commit | Message | Date |
|---|---|---|---|
| DannyDaemonic | ef955fbd23 | Tag release with build number (#2732) | 2 years ago |
| Georgi Gerganov | d67777c202 | metal : add Q8_0 support (#2763) | 2 years ago |
| Georgi Gerganov | c3e53b421a | llama : escape all U+2581 in a string (#2750) | 2 years ago |
| Evan Jones | 6e91a1b070 | llama : fix grammar sometimes generating null char (#2756) | 2 years ago |
| Georgi Gerganov | 44d5462b5c | readme : fix link | 2 years ago |
| Georgi Gerganov | c7868b0753 | minor : fix trailing whitespace | 2 years ago |
| Georgi Gerganov | 79da24b58c | readme : update hot topics | 2 years ago |
| Georgi Gerganov | cf658adc83 | llm : add Falcon support (#2717) | 2 years ago |
| Georgi Gerganov | a192860cfe | minor : fix trailing whitespace | 2 years ago |
| Olivier Chafik | 95385241a9 | examples : restore the functionality to import llama2.c models (#2685) | 2 years ago |
| slaren | 335acd2ffd | fix convert-lora-to-ggml.py (#2738) | 2 years ago |
| klosax | 5290c38e6e | main : insert bos if no tokens (#2727) | 2 years ago |
| akawrykow | cc34dbda96 | gitignore : fix for windows (#2729) | 2 years ago |
| Cebtenzzre | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |
| JohnnyB | f19dca04ea | devops : RPM Specs (#2723) | 2 years ago |
| Kawrakow | 8207214b6a | Fix values shown in the quantize tool help (#2735) | 2 years ago |
| Kawrakow | 62959e740e | Strided perplexity (#2714) | 2 years ago |
| IgnacioFDM | 7f7ddd5002 | Fix ggml to gguf conversion on Windows (#2733) | 2 years ago |
| Xiao-Yong Jin | b8ad1b66b2 | server : allow json array in prompt or content for direct token input (#2306) | 2 years ago |
| Evan Jones | f5fe98d11b | docs : add grammar docs (#2701) | 2 years ago |
| Kerfuffle | 777f42ba18 | Improve handling of special tokens in GGML to GGUF converter (#2725) | 2 years ago |
| goerch | 46ef5b5fcf | llama : fix whitespace escaping in tokenizer (#2724) | 2 years ago |
| Johannes Gäßler | c63bb1d16a | CUDA: use mul_mat_q kernels by default (#2683) | 2 years ago |
| Alex Petenchea | 3b6cfe7c92 | convert.py : clarifying error message (#2718) | 2 years ago |
| Jiahao Li | 800c9635b4 | Fix CUDA softmax by subtracting max value before exp (#2665) | 2 years ago |
| Georgi Gerganov | deb7dfca4b | gguf : add ftype meta info to the model (#2710) | 2 years ago |
| Kawrakow | bac66994cf | Quantization imrovements for k_quants (#2707) | 2 years ago |
| slaren | 519c981f8b | embedding : evaluate prompt in batches (#2713) | 2 years ago |
| slaren | 1123f7fbdf | ggml-cuda : use graph allocator (#2684) | 2 years ago |
| Georgi Gerganov | ef3f333d37 | ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709) | 2 years ago |