Commit 6021282: "edit warning"
LostRuins committed Feb 24, 2024 (1 parent: 359a14d)
Showing 1 changed file with 1 addition and 1 deletion.
llama.cpp (1 addition, 1 deletion)
@@ -4970,7 +4970,7 @@ static struct ggml_tensor * llm_build_kqv(

 #if defined(GGML_USE_VULKAN) || defined(GGML_USE_KOMPUTE) || defined(GGML_USE_SYCL)
 #pragma message("TODO: ALiBi support in ggml_soft_max_ext is not implemented for Vulkan, Kompute, and SYCL")
-#pragma message("      Falling back to ggml_alibi(). Will become an error in Mar 2024")
+#pragma message("      Falling back to ggml_alibi(). Will become an error in Mar 2024. But koboldcpp will deal with it.")
 #pragma message("ref: https://github.com/ggerganov/llama.cpp/pull/5488")
 if (hparams.f_max_alibi_bias > 0.0f) {
     kq = ggml_scale(ctx, kq, kq_scale);

1 comment on commit 6021282

@Spacellary commented:
😅
