
llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding

The key parameters for lookup decoding are ngram_min, ngram_max, and n_draft. The first two set the minimum and maximum n-gram lengths to search for in the prompt; the latter specifies how many subsequent tokens to draft when a match is found.
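The matching step can be sketched as follows (a minimal Python illustration of the prompt-lookup idea, not the repository's actual C++ implementation; the function and parameter names mirror the description above but are otherwise hypothetical):

```python
def lookup_draft(tokens, ngram_min=1, ngram_max=3, n_draft=8):
    """Search the tokens generated so far for an earlier occurrence of the
    most recent n-gram; if one is found, draft the tokens that followed it."""
    # Prefer longer n-grams: a longer match is more likely to predict
    # the continuation correctly.
    for n in range(ngram_max, ngram_min - 1, -1):
        if len(tokens) < n + 1:
            continue
        ngram = tokens[-n:]
        # Scan earlier positions (most recent first) for the same n-gram,
        # excluding the trailing occurrence itself.
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == ngram:
                # Draft up to n_draft tokens that followed the match.
                return tokens[i + n:i + n + n_draft]
    return []  # no match: fall back to normal decoding
```

The drafted tokens are then verified in a single batch by the target model, as in other forms of speculative decoding; only verification failures cost extra forward passes.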

More info:

https://github.com/ggml-org/llama.cpp/pull/4484

https://github.com/ggml-org/llama.cpp/issues/4226