
common : ensure llama_batch size does not exceed max size (#9668)

A crash was observed when the number of tokens added to a batch exceeds the llama_batch size. An assertion was added in llama_batch_add to protect against llama_batch size overflow.
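
For context, here is a minimal sketch of how the overflow can be triggered. The capacity of 4 and the token values are made up for illustration; llama_batch_init, llama_batch_free, and the llama_batch_add helper shown in the diff below are the real llama.cpp APIs at the time of this commit:

```cpp
#include <vector>

#include "llama.h"
#include "common.h"   // declared llama_batch_add at the time of this commit

int main() {
    // Allocate room for at most 4 tokens (capacity chosen arbitrarily here).
    llama_batch batch = llama_batch_init(/*n_tokens*/ 4, /*embd*/ 0, /*n_seq_max*/ 1);

    // Adding a 5th token used to write past the allocated arrays (a crash or
    // silent corruption); with this commit it trips the GGML_ASSERT instead.
    for (llama_token id = 0; id < 5; ++id) {
        llama_batch_add(batch, id, /*pos*/ id, /*seq_ids*/ { 0 }, /*logits*/ false);
    }

    llama_batch_free(batch);
    return 0;
}
```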
matiaslin, 1 year ago
Commit faac0bae26
1 changed file with 2 additions and 0 deletions
      common/common.cpp

common/common.cpp  (+2, -0)

@@ -1437,6 +1437,8 @@ void llama_batch_add(
                           llama_pos   pos,
     const std::vector<llama_seq_id> & seq_ids,
                                bool   logits) {
+    GGML_ASSERT(batch.seq_id[batch.n_tokens] && "llama_batch size exceeded");
+
     batch.token   [batch.n_tokens] = id;
     batch.pos     [batch.n_tokens] = pos;
     batch.n_seq_id[batch.n_tokens] = seq_ids.size();
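
Why checking batch.seq_id[batch.n_tokens] is enough: llama_batch_init over-allocates the seq_id pointer array by one entry and null-terminates it, so the first slot past the batch's capacity is always nullptr and the new assertion fires exactly when the batch is full. A simplified sketch of the relevant allocation inside llama_batch_init (error handling omitted):

```cpp
// n_tokens_alloc + 1 pointers are allocated; the extra slot acts as a
// nullptr sentinel that the new GGML_ASSERT in llama_batch_add detects.
batch.seq_id = (llama_seq_id **) malloc(sizeof(llama_seq_id *) * (n_tokens_alloc + 1));
for (int i = 0; i < n_tokens_alloc; ++i) {
    batch.seq_id[i] = (llama_seq_id *) malloc(sizeof(llama_seq_id) * n_seq_max);
}
batch.seq_id[n_tokens_alloc] = nullptr; // sentinel: marks the end of capacity
```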