This repository has been archived by the owner on Oct 17, 2023. It is now read-only.
When training the model with mini-batches, the L2 regularization term is not computed over all model parameters, but only over the embeddings of the users and items that appear in the current batch. Is this a deliberate trick in the experiment?
Since LightGCN has no feature transformation matrices, the only learnable parameters are the user and item embeddings at the 0th layer; that's why only those embeddings are used in the regularization term.
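For concreteness, here is a minimal PyTorch-style sketch of the pattern being discussed (this is an illustration, not the repository's actual code): a BPR loss where the L2 term covers only the 0th-layer embeddings sampled into the current batch, rather than the full embedding tables.

```python
import torch
import torch.nn.functional as F

def bpr_loss_with_batch_l2(user_emb, pos_emb, neg_emb, reg_lambda=1e-4):
    """BPR loss with L2 regularization over only the embeddings in the
    current mini-batch. All argument names are illustrative: each tensor
    holds the 0th-layer embeddings of the sampled users / positive items /
    negative items, shape (batch_size, dim)."""
    # Preference scores for positive and negative interactions.
    pos_scores = (user_emb * pos_emb).sum(dim=-1)
    neg_scores = (user_emb * neg_emb).sum(dim=-1)

    # BPR: push positive scores above negative ones.
    bpr = -F.logsigmoid(pos_scores - neg_scores).mean()

    # L2 term over batch embeddings only -- NOT the full embedding tables.
    batch_size = user_emb.shape[0]
    reg = (user_emb.norm(2).pow(2)
           + pos_emb.norm(2).pow(2)
           + neg_emb.norm(2).pow(2)) / (2 * batch_size)

    return bpr + reg_lambda * reg
```

One practical consequence of this choice: embeddings of users and items not sampled into the batch receive no regularization gradient in that step, which is also what you would get from optimizer-level weight decay applied only to parameters with nonzero gradients.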