# llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding

The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum size of the n-grams to search for in the prompt when looking for a match. The third specifies how many subsequent tokens to draft if a match is found.
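
To make the roles of these parameters concrete, here is a minimal, self-contained C++ sketch of the general idea. It is not the code from `lookup.cpp`; the function name `draft_from_prompt` and the local `llama_token` alias are illustrative stand-ins:

```cpp
#include <cstdint>
#include <vector>

using llama_token = int32_t; // stand-in for the token type in llama.h

// Search the context (prompt + tokens generated so far) for the longest
// n-gram between ngram_min and ngram_max tokens that ends at the current
// position and also occurs earlier. If found, propose up to n_draft of the
// tokens that followed the earlier occurrence as draft tokens.
static std::vector<llama_token> draft_from_prompt(
        const std::vector<llama_token> & ctx,
        int ngram_min, int ngram_max, int n_draft) {
    const int n = (int) ctx.size();

    for (int len = ngram_max; len >= ngram_min; --len) {
        if (len > n) {
            continue;
        }
        // the n-gram formed by the last `len` tokens
        const llama_token * tail = ctx.data() + n - len;

        // scan earlier positions for the same n-gram, most recent match first
        for (int i = n - len - 1; i >= 0; --i) {
            bool match = true;
            for (int j = 0; j < len; ++j) {
                if (ctx[i + j] != tail[j]) {
                    match = false;
                    break;
                }
            }
            if (match) {
                // draft the tokens that followed the earlier occurrence
                std::vector<llama_token> draft;
                for (int k = i + len; k < n && (int) draft.size() < n_draft; ++k) {
                    draft.push_back(ctx[k]);
                }
                if (!draft.empty()) {
                    return draft;
                }
            }
        }
    }

    return {}; // no suitable n-gram found; fall back to normal decoding
}
```

Drafted tokens are then verified by the target model and accepted only up to the first mismatch, so a bad draft costs just the cheap string search, never output quality.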

More info:

https://github.com/ggml-org/llama.cpp/pull/4484
https://github.com/ggml-org/llama.cpp/issues/4226