In this line, what is the best way to think about this matmul? I see that it is computing dot products between final_feat and each embedding in item_embs. If item_embs were normalized, I could see this as essentially evaluating the cosine similarity (up to a scaling factor) of the item_embs with respect to final_feat, but because the item_embs can vary in magnitude by ~30% or so, it is not quite the same. Can you give any insight into this?
Thanks!
From my view, the final matmul is roughly evaluating cosine similarity, inherited from the classical matrix-factorization approach to recommendation (https://developers.google.com/machine-learning/recommendation/collaborative/matrix). There is no theoretical guarantee behind it; people just wanted some way to compare item distances, so an established component was reused.
It may not be the best choice, and working through the details may help find a better replacement.
So instead of trying to interpret the final matmul precisely, I would personally suggest trying something new beyond the current approach.
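To make the distinction concrete, here is a small NumPy sketch (the names final_feat and item_embs come from this thread; the shapes and random data are hypothetical) showing that the matmul's dot-product scores are just cosine similarities rescaled by the per-item embedding norms, which is why varying item magnitudes can change the ranking:

```python
import numpy as np

# Hypothetical shapes: final_feat is a (d,) sequence/user vector,
# item_embs is an (n_items, d) matrix of item embeddings.
rng = np.random.default_rng(0)
d, n_items = 8, 5
final_feat = rng.normal(size=d)
item_embs = rng.normal(size=(n_items, d))

# The matmul in question: raw, unnormalized dot-product scores.
dot_scores = item_embs @ final_feat  # shape (n_items,)

# Cosine similarity: normalize both sides so item magnitude
# no longer influences the score.
item_norms = np.linalg.norm(item_embs, axis=1, keepdims=True)
cos_scores = (item_embs / item_norms) @ (final_feat / np.linalg.norm(final_feat))

# The two differ exactly by a per-item scale ||item_i|| * ||final_feat||,
# so rankings can diverge when item norms vary (e.g. by ~30%).
scale = item_norms.squeeze() * np.linalg.norm(final_feat)
assert np.allclose(dot_scores, cos_scores * scale)
```

If the magnitude variation is unwanted, one option to experiment with is L2-normalizing item_embs (and final_feat) before the matmul, which turns the score into a pure cosine similarity.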