| Name | Commit | Last commit message | Last updated |
|------|--------|---------------------|--------------|
| batched | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| batched.swift | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| convert-llama2c-to-ggml | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| deprecation-warning | f112d198cd | Update deprecation-warning.cpp (#10619) | 1 year ago |
| embedding | 79c137f776 | examples : allow extracting embeddings from decoder contexts (#13797) | 7 months ago |
| eval-callback | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago |
| gen-docs | 7cc2d2c889 | ggml : move AMX to the CPU backend (#10570) | 1 year ago |
| gguf | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gguf-hash | 53ff6b9b9f | GGUF: C++ refactor, backend support, misc fixes (#11030) | 1 year ago |
| gritlm | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| jeopardy | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| llama.android | bd3f59f812 | cmake : enable curl by default (#12761) | 9 months ago |
| llama.swiftui | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| lookahead | a4090d1174 | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 8 months ago |
| lookup | a4090d1174 | llama : remove llama_kv_cache_view API + remove deprecated (#13653) | 8 months ago |
| parallel | c04621711a | parallel : fix n_junk == 0 (#13952) | 7 months ago |
| passkey | 803f8baf4f | llama : deprecate explicit kv_self defrag/update calls (#13921) | 7 months ago |
| retrieval | 79c137f776 | examples : allow extracting embeddings from decoder contexts (#13797) | 7 months ago |
| save-load-state | e0dbec0bc6 | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 10 months ago |
| simple | 9c55e5c5c2 | fix: check model pointer validity before use (#13631) | 8 months ago |
| simple-chat | 797f2ac062 | kv-cache : simplify the interface (#13660) | 8 months ago |
| simple-cmake-pkg | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| speculative | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| speculative-simple | 267c1399f1 | common : refactor downloading system, handle mmproj with -hf option (#12694) | 9 months ago |
| sycl | 725f23f1f3 | sycl : backend documentation review (#13544) | 8 months ago |
| training | 88c125f2ac | examples/training: Fix file name in README (#13803) | 7 months ago |
| CMakeLists.txt | 10d2af0eaa | llama/ggml: add LLM training support (#10544) | 8 months ago |
| Miku.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-13B.bat | d9ad104440 | Create chat-13B.bat (#592) | 2 years ago |
| chat-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat-persistent.sh | 8fc393f246 | scripts : fix pattern and get n_tokens in one go (#10221) | 1 year ago |
| chat-vicuna.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| chat.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| convert_legacy_llama.py | a0ec17b32e | metadata: Detailed Dataset Authorship Metadata (#8875) | 1 year ago |
| json_schema_pydantic_example.py | 3fd62a6b1c | py : type-check all Python scripts with Pyright (#8341) | 1 year ago |
| json_schema_to_grammar.py | d5fe4e81bd | grammar : handle maxItems == 0 in JSON schema (#13117) | 8 months ago |
| llama.vim | 68ff663a04 | repo : update links to new url (#11886) | 11 months ago |
| llm.vim | ad9ddcff6e | llm.vim : stop generation at multiple linebreaks, bind to \<F2\> (#2879) | 2 years ago |
| pydantic_models_to_grammar.py | 090fca7a07 | pydantic : replace uses of `__annotations__` with get_type_hints (#8474) | 1 year ago |
| pydantic_models_to_grammar_examples.py | 1d36b3670b | llama : move end-user examples to tools directory (#13249) | 8 months ago |
| reason-act.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| regex_to_grammar.py | e235b267a2 | py : switch to snake_case (#8305) | 1 year ago |
| server-llama2-13B.sh | 1c641e6aac | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 1 year ago |
| server_embd.py | a19b5cef16 | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 9 months ago |
| ts-type-to-grammar.sh | ab9a3240a9 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 1 year ago |