
llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding

The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum size of the n-grams to search for in the prompt; `n_draft` specifies how many subsequent tokens to draft when a match is found.
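The core idea can be sketched as follows. This is a hypothetical, simplified illustration of prompt lookup decoding (not the actual llama.cpp implementation): try n-gram sizes from `ngram_max` down to `ngram_min`, look for an earlier occurrence of the current suffix in the token history, and draft up to `n_draft` of the tokens that followed that earlier occurrence.

```cpp
#include <cstdint>
#include <vector>

// Sketch only: search the token history for an earlier occurrence of the
// current ngram-sized suffix and, on a match, draft the tokens that came
// after it. Function name and signature are illustrative, not from llama.cpp.
std::vector<int32_t> draft_from_lookup(const std::vector<int32_t> & tokens,
                                       int ngram_min, int ngram_max, int n_draft) {
    const int n = (int) tokens.size();
    // prefer longer n-grams: a longer match is more likely to predict well
    for (int ng = ngram_max; ng >= ngram_min; --ng) {
        if (n < ng) {
            continue;
        }
        const int suf = n - ng; // start of the suffix we try to match
        for (int i = 0; i < suf; ++i) {
            bool match = true;
            for (int j = 0; j < ng; ++j) {
                if (tokens[i + j] != tokens[suf + j]) {
                    match = false;
                    break;
                }
            }
            if (!match) {
                continue;
            }
            // draft up to n_draft tokens that followed the earlier occurrence
            std::vector<int32_t> draft;
            for (int k = i + ng; k < n && (int) draft.size() < n_draft; ++k) {
                draft.push_back(tokens[k]);
            }
            if (!draft.empty()) {
                return draft;
            }
        }
    }
    return {}; // no match: fall back to normal decoding
}
```

The drafted tokens are then verified in a single batched forward pass of the target model; tokens are accepted until the first disagreement, which is what makes the speculation free of quality loss.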

More info:

https://github.com/ggml-org/llama.cpp/pull/4484

https://github.com/ggml-org/llama.cpp/issues/4226