| Name | Last commit | Commit message | Updated |
| --- | --- | --- | --- |
| baby-llama | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago |
| batched | 2b4ea35e56 | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2 years ago |
| batched-bench | 6961c4bd0b | batched-bench : print params at start | 2 years ago |
| batched.swift | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago |
| beam-search | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| benchmark | 65c2c1c5ab | benchmark-matmult : do not use integer abs() on a float (#3277) | 2 years ago |
| convert-llama2c-to-ggml | 8cf19d60dc | gguf : support big endian platform (#3552) | 2 years ago |
| embedding | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago |
| export-lora | 0e76a8992c | train : finetune LORA (#2632) | 2 years ago |
| finetune | 424b6381c4 | ggml : add context enumeration functions (#3605) | 2 years ago |
| gguf | 3aefaab9e5 | check C++ code with -Wmissing-declarations (#3184) | 2 years ago |
| infill | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| jeopardy | a8777ad84e | parallel : add option to load external prompt file (#3416) | 2 years ago |
| llama-bench | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| llava | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| main | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| main-cmake-pkg | abd21fc99f | cmake : add missed dependencies (#3763) | 2 years ago |
| metal | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago |
| parallel | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| perplexity | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago |
| quantize | bc39553c90 | build : enable more non-default compiler warnings (#3200) | 2 years ago |
| quantize-stats | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago |
| save-load-state | 1142013da4 | save-load-state : fix example + add ci test (#3655) | 2 years ago |
| server | 34b2a5e1ee | server : do not release slot on image input (#3798) | 2 years ago |
| simple | c8d6a1f34a | simple : fix batch handling (#3803) | 2 years ago |
| speculative | ee1a0ec9cb | llama : add option for greedy sampling with probs (#3813) | 2 years ago |
| train-text-from-scratch | a5e8c1d8c7 | train-text-from-scratch : fix assert failure in ggml-alloc (#3618) | 2 years ago |
| CMakeLists.txt | d1031cf49c | sampling : refactor init to use llama_sampling_params (#3696) | 2 years ago |
| Miku.sh | 019fe257bb | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2 years ago |
| alpaca.sh | a17a2683d8 | alpaca.sh : update model file name (#2074) | 2 years ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 6daa09d879 | examples : read chat prompts from a template file (#1196) | 2 years ago |
| chat-persistent.sh | ac2219fef3 | llama : fix session saving/loading (#3400) | 2 years ago |
| chat-vicuna.sh | c36e81da62 | examples : add chat-vicuna.sh (#1854) | 2 years ago |
| chat.sh | 8341a25957 | main : log file (#2748) | 2 years ago |
| gpt4all.sh | 107980d970 | examples : add -n to alpaca and gpt4all scripts (#706) | 2 years ago |
| json-schema-to-grammar.py | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |
| llama.vim | 2d7baaf50f | vim : streaming and more (#2495) | 2 years ago |
| llama2-13b.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llama2.sh | 73643f5fb1 | gitignore : changes for Poetry users + chat examples (#2284) | 2 years ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2 years ago |
| make-ggml.py | ac43576124 | make-ggml.py : compatibility with more models and GGUF (#3290) | 2 years ago |
| reason-act.sh | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |
| server-llama2-13B.sh | 7c2227a197 | chmod : make scripts executable (#2675) | 2 years ago |