
The L2 regularization #39

Open

gzy02 opened this issue May 12, 2023 · 1 comment

Comments

gzy02 commented May 12, 2023

When the model is trained with mini-batches, the L2 regularization term is not computed over all model parameters; it only uses the user and item embeddings that appear in the current batch. Is this a deliberate trick in the experiments?

    def bpr_loss(self, users, pos, neg):
        (users_emb, pos_emb, neg_emb,
         userEmb0, posEmb0, negEmb0) = self.getEmbedding(users.long(), pos.long(), neg.long())
        # L2 term computed only on the layer-0 embeddings of the sampled batch
        reg_loss = (1/2) * (userEmb0.norm(2).pow(2) +
                            posEmb0.norm(2).pow(2) +
                            negEmb0.norm(2).pow(2)) / float(len(users))
        # BPR scores: inner products of the propagated user/item embeddings
        pos_scores = torch.mul(users_emb, pos_emb)
        pos_scores = torch.sum(pos_scores, dim=1)
        neg_scores = torch.mul(users_emb, neg_emb)
        neg_scores = torch.sum(neg_scores, dim=1)

        loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))

        return loss, reg_loss
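
For reference, here is a minimal sketch of what getEmbedding typically looks like in LightGCN-style PyTorch implementations (an assumption about the surrounding code, not a quote from this repository): the *Emb0 tensors are the layer-0 (ego) embeddings of just the users and items sampled in the batch, which is why reg_loss above only ever touches those rows.

    # Hedged sketch, assuming a LightGCN-style model with embedding_user / embedding_item
    # tables and a propagation helper (here called self.computer(), a hypothetical name).
    def getEmbedding(self, users, pos_items, neg_items):
        # embeddings after graph propagation (layer outputs averaged)
        all_users, all_items = self.computer()
        users_emb = all_users[users]
        pos_emb = all_items[pos_items]
        neg_emb = all_items[neg_items]
        # layer-0 (ego) embeddings: the actual nn.Embedding weights, indexed by this batch only
        users_emb_ego = self.embedding_user(users)
        pos_emb_ego = self.embedding_item(pos_items)
        neg_emb_ego = self.embedding_item(neg_items)
        return users_emb, pos_emb, neg_emb, users_emb_ego, pos_emb_ego, neg_emb_ego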
@kashif-flask

Since LightGCN has no feature-transformation matrices, the only learnable parameters are the user and item embeddings at layer 0; that is why only those embeddings are used in the regularization term.
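
To make that concrete, here is a minimal sketch (illustrative, not this repository's code): in a LightGCN-style model the trainable parameters are exactly the two layer-0 embedding tables, so L2 on the batch's ego embeddings is just a stochastic, per-batch form of weight decay on those tables.

    import torch.nn as nn

    class TinyLightGCN(nn.Module):
        def __init__(self, n_users, n_items, dim):
            super().__init__()
            # no feature-transformation matrices or biases:
            # the embedding tables are the only weights
            self.embedding_user = nn.Embedding(n_users, dim)
            self.embedding_item = nn.Embedding(n_items, dim)

    model = TinyLightGCN(n_users=100, n_items=200, dim=64)
    print([name for name, _ in model.named_parameters()])
    # ['embedding_user.weight', 'embedding_item.weight']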
