Author | Commit | Message | Date
Jie Fu (傅杰) | 1cbd80f8cf | examples : support encoder-decoder models in the simple example (#16002) | 4 months ago
Nick | 9c55e5c5c2 | fix: check model pointer validity before use (#13631) | 8 months ago
Georgi Gerganov | afa8a9ec9b | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 1 year ago
Georgi Gerganov | 47182dd03f | llama : update llama_model API names (#11063) | 1 year ago
Diego Devesa | 5931c1f233 | ggml : add support for dynamic loading of backends (#10469) | 1 year ago
Xuan Son Nguyen | cda0e4b648 | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 1 year ago
Diego Devesa | c7499c557c | examples : do not use common library in simple example (#9803) | 1 year ago
Georgi Gerganov | 6262d13e0b | common : reimplement logging (#9418) | 1 year ago
Georgi Gerganov | 0abc6a2c25 | llama : llama_perf + option to disable timings during decode (#9355) | 1 year ago
Xuan Son Nguyen | bfe76d4a17 | common : move arg parser code to `arg.cpp` (#9388) | 1 year ago
slaren | 5fb5e24811 | llama : minor sampling refactor (2) (#9386) | 1 year ago
Xuan Son Nguyen | 1b9ae5189c | common : refactor arg parser (#9308) | 1 year ago
Georgi Gerganov | df270ef745 | llama : refactor sampling v2 (#9294) | 1 year ago
Georgi Gerganov | 1442677f92 | common : refactor cli arg parsing (#7675) | 1 year ago
Pedro Cuenca | b97bc3966e | llama : support Llama 3 HF conversion (#6745) | 1 year ago
bmwl | f486f6e1e5 | ggml : add numa options (#5377) | 1 year ago
Daniel Bevenius | 23b5e12eb5 | simple : update error message for KV cache check (#4324) | 2 years ago
Thibault Terrasson | c8d6a1f34a | simple : fix batch handling (#3803) | 2 years ago
Marcus Dunn | 5be6c803fa | llama : remove token functions with `context` args in favor of `model` (#3720) | 2 years ago
Georgi Gerganov | 0e89203b51 | speculative : add tree-based sampling example (#3624) | 2 years ago
slaren | 16bc66d947 | llama.cpp : split llama_context_params into model and context params (#3301) | 2 years ago
Georgi Gerganov | ec893798b7 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2 years ago
Cebtenzzre | e6616cf0db | examples : add compiler version and target to build info (#2998) | 2 years ago
Przemysław Pawełczyk | cb6c44c5e0 | build : do not use _GNU_SOURCE gratuitously (#2035) | 2 years ago
Georgi Gerganov | edd4c14817 | llama : more tokenizer fixes (#2810) | 2 years ago
Georgi Gerganov | 6381d4e110 | gguf : new file format with flexible meta data (beta) (#2398) | 2 years ago
Borislav Stanimirov | ff966e7ca6 | build : fix several cast and printf warnings (#2499) | 2 years ago
Evan Miller | 5656d10599 | mpi : add support for distributed inference via MPI (#2099) | 2 years ago
zrm | b853d45601 | ggml : add NUMA support (#1556) | 2 years ago
Didzis Gosko | 527b6fba1d | llama : make model stateless and context stateful (llama_state) (#1797) | 2 years ago