| Name | Last commit | Commit message | Age |
|---|---|---|---|
| baby-llama | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago |
| batched | b0034d93ce | examples : add passkey test (#3856) | 2 years ago |
| batched-bench | ef47ec18da | ggml : add ggml_soft_max_ext (#4256) | 2 years ago |
| batched.swift | 5c9f90cba1 | swift : fix prompt tokenization logic (#4321) | 2 years ago |
| beam-search | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| benchmark | 20a68a7030 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2 years ago |
| convert-llama2c-to-ggml | cafcd4f895 | ggml : remove n_dims from ggml_tensor (#4469) | 2 years ago |
| embedding | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago |
| export-lora | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago |
| finetune | b3a7c20b5c | finetune : remove unused includes (#4756) | 2 years ago |
| gguf | 32259b2dad | gguf : simplify example dependencies | 2 years ago |
| infill | 881800d1f0 | main : Add ChatML functionality to main example (#4046) | 2 years ago |
| jeopardy | a8777ad84e | parallel : add option to load external prompt file (#3416) | 2 years ago |
| llama-bench | 226460cc0d | llama-bench : add no-kv-offload parameter (#4812) | 2 years ago |
| llama.swiftui | 42ea63c5a3 | llama.swiftui : update readme | 2 years ago |
| llava | 36e5a08b20 | llava-cli : don't crash if --image flag is invalid (#4835) | 2 years ago |
| lookahead | 9494d7c477 | english : use `typos` to fix comments and logs (#4354) | 2 years ago |
| lookup | 7082d24cec | lookup : add prompt lookup decoding example (#4484) | 2 years ago |
| main | 52531fdff8 | main : add self-extend support (#4815) | 2 years ago |
| main-cmake-pkg | 82d6eab224 | main-cmake-pkg : fix build issue (#4665) | 2 years ago |
| metal | 4760e7cc0b | sync : ggml (backend v2) (#3912) | 2 years ago |
| parallel | 6b0a7420d0 | llama : KV cache view API + better KV cache management (#4170) | 2 years ago |
| passkey | b0034d93ce | examples : add passkey test (#3856) | 2 years ago |
| perplexity | 91f6499393 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2 years ago |
| quantize | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago |
| quantize-stats | bcc0eb4591 | llama : per-layer KV cache + quantum K cache (#4309) | 2 years ago |
| save-load-state | b12fa0d1c1 | build : link against build info instead of compiling against it (#3879) | 2 years ago |
| server | 128de3585b | server : update readme about token probs (#4777) | 2 years ago |
| simple | 23b5e12eb5 | simple : update error message for KV cache check (#4324) | 2 years ago |
| speculative | 9494d7c477 | english : use `typos` to fix comments and logs (#4354) | 2 years ago |
| tokenize | 28a2e6e7d4 | tokenize example: Respect normal add BOS token behavior (#4126) | 2 years ago |
| train-text-from-scratch | afefa319f1 | ggml : change ggml_scale to take a float instead of tensor (#4573) | 2 years ago |
| CMakeLists.txt | b0034d93ce | examples : add passkey test (#3856) | 2 years ago |
| Miku.sh | 019fe257bb | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2 years ago |
| alpaca.sh | a17a2683d8 | alpaca.sh : update model file name (#2074) | 2 years ago |
| base-translate.sh | 96e80dabc6 | examples : improve base-translate.sh script (#4783) | 2 years ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 6daa09d879 | examples : read chat prompts from a template file (#1196) | 2 years ago |
| chat-persistent.sh | ac2219fef3 | llama : fix session saving/loading (#3400) | 2 years ago |
| chat-vicuna.sh | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| chat.sh | 8341a25957 | main : log file (#2748) | 2 years ago |
| gpt4all.sh | 107980d970 | examples : add -n to alpaca and gpt4all scripts (#706) | 2 years ago |
| json-schema-to-grammar.py | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |
| llama.vim | 2d7baaf50f | vim : streaming and more (#2495) | 2 years ago |
| llama2-13b.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llama2.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2 years ago |
| make-ggml.py | ac43576124 | make-ggml.py : compatibility with more models and GGUF (#3290) | 2 years ago |
| reason-act.sh | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |
| server-llama2-13B.sh | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |