hparams : add n_embd_inp() to support extended embed #16928
base: master
Conversation
Tested this. It works correctly with a 1d cvector of size 5120, and for basic MTMD use cases. Thanks!
Hmm, please ignore what I said earlier. Indeed, I think there is currently a misunderstanding here. The […] I would suggest calling it […]
```cpp
const int n_embd_inp = hparams.n_embd_inp();
ggml_tensor * cur = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd_inp);
ggml_tensor * layer_dir = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd_inp);
```
I'm not 100% sure that n_embd_inp is correct here.
As far as I can understand, cur is the output of the current layer, which means it should use the internal embedding length n_embd. CC @65a in case you know this part.
I only know what I learned from trying to make cvectors for Qwen3VL, so with that in mind:
- The Transformers hidden state (and output embedding, i.e. last_hidden_state) size of that model is 5120 (n_embd), not 20480 (n_embd_inp). But I didn't use multimodal inputs, so I'm not sure whether that changes things.
- I think the DeepStack layers are only used for multimodal.
This is at least called from llama-adapter.cpp for cvectors, but the check there is hardcoded against n_embd, so it will still pass regardless (which is why my cvector worked with this code change). Since the cvector is for the language model, I assume it doesn't matter whether the other slots are empty, as long as they are zeroed.
It seems like the answer might be to use n_embd here, unless this is used somewhere multimodal, multimodal cvectors are a thing, or it is called from other contexts that need the "rest" of it.
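For context, here is a minimal illustrative sketch (not the PR's or llama.cpp's actual graph code) of how a per-layer control vector is combined with the hidden state; both operands live in the layer's internal embedding space, which is why n_embd is the natural size for cur and layer_dir:

```cpp
// Illustrative sketch only, not the actual llama.cpp code.
// `cur` is the current layer's hidden state [n_embd, n_tokens];
// `layer_dir` is the control direction for this layer [n_embd],
// broadcast over the token dimension by ggml_add.
#include "ggml.h"

static ggml_tensor * apply_layer_dir_sketch(
        ggml_context * ctx,
        ggml_tensor  * cur,
        ggml_tensor  * layer_dir) {
    if (layer_dir == nullptr) {
        return cur; // no control vector for this layer
    }
    return ggml_add(ctx, cur, layer_dir); // both use the internal length n_embd
}
```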
```diff
-        return embd + j*model.hparams.n_embd;
+        return embd + j*model.hparams.n_embd_inp();
```
Technically this should be n_embd_out (aka currently n_embd), right?
Could be, I wasn't entirely sure.
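To illustrate the point about the row stride (a simplified sketch, not the actual accessor in llama.cpp): embd is a flat row-major buffer of per-token output embeddings, so the stride between rows has to match the length that was written per row, i.e. the output embedding length (currently n_embd):

```cpp
// Simplified sketch, not the actual llama.cpp accessor: return row j of a flat
// [n_rows, n_embd_out] output-embedding buffer. The stride is the output
// embedding length, which is what "n_embd_out (aka currently n_embd)" refers to.
#include <cstddef>

static const float * embd_row_sketch(const float * embd, size_t j, size_t n_embd_out) {
    return embd + j * n_embd_out;
}
```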
```diff
-    const int n_embd = hparams.n_embd;
-    ggml_tensor * b = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, n_embd, w->ne[1], 1, 1);
+    const int n_embd_inp = hparams.n_embd_inp();
+    ggml_tensor * b = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, n_embd_inp, w->ne[1], 1, 1);
```
I'm unsure if this is correct as well...
I think whether it is n_embd_inp or n_embd should not matter very much; at least for now, models that use these ops will have n_embd_inp == n_embd.
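One way to make that assumption explicit would be a guard like the following (a hypothetical sketch, not part of the PR; the include of llama-hparams.h for the llama_hparams type is assumed):

```cpp
// Hypothetical sketch, not part of the PR: assert the equality the comment
// above relies on, so a future model with n_embd_inp() != n_embd fails loudly
// instead of silently allocating the wrong size.
#include "ggml.h"
#include "llama-hparams.h" // assumed to provide llama_hparams and n_embd_inp()

static ggml_tensor * new_bias_sketch(ggml_context * ctx, const llama_hparams & hparams, const ggml_tensor * w) {
    GGML_ASSERT(hparams.n_embd_inp() == hparams.n_embd);
    return ggml_new_tensor_4d(ctx, GGML_TYPE_F32, hparams.n_embd_inp(), w->ne[1], 1, 1);
}
```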
Required for proper handling of Qwen3-VL DeepStack embeds. This may change more than is currently necessary, with future use in mind (e.g. in llama-context), or maybe even not enough; please review carefully!
Fixes #16908
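For reference, a minimal sketch of the idea behind n_embd_inp(), assuming (consistent with the 5120 vs 20480 numbers quoted in the review) that the extended input embedding is the base n_embd plus one n_embd-sized slot per DeepStack layer; the field name n_deepstack_layers is illustrative and not necessarily what the PR uses:

```cpp
// Illustrative sketch only, not the PR's exact code. Assumes the extended
// input embedding concatenates the base embedding with one n_embd-sized slot
// per DeepStack layer (the field name n_deepstack_layers is hypothetical).
#include <cstdint>

struct hparams_sketch {
    uint32_t n_embd             = 0; // internal (per-layer) embedding length
    uint32_t n_deepstack_layers = 0; // 0 for models without DeepStack

    uint32_t n_embd_inp() const {
        return n_embd * (1 + n_deepstack_layers);
    }
};

// With the Qwen3-VL numbers from the thread: n_embd = 5120 and 3 DeepStack
// slots give n_embd_inp() == 20480, while plain LLMs keep n_embd_inp() == n_embd.
```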