llama : fix embd when offloading non-repeating layers (#1891)

Johannes Gäßler, 2 years ago
commit ac3b886953
1 changed file with 1 addition and 1 deletion

+ 1 - 1
llama.cpp

@@ -1658,7 +1658,7 @@ static bool llama_eval_internal(
 
         // cur = cur*norm(broadcasted)
         cur = ggml_mul(ctx0, cur, model.norm);
-        offload_func_nr(cur);
+        // offload_func_nr(cur); // TODO CPU + GPU mirrored backend
         ggml_set_name(cur, "result_norm");
 
         embeddings = cur;