
An academic question about AMF: how to understand a large-step backward followed by a small-step forward to improve the robustness of the model #11

Open
BinFuPKU opened this issue Apr 19, 2019 · 0 comments
Dear Prof. He:
I am a junior PhD student at Peking University. I have read your paper and code, but I still have trouble understanding the essence of the whole idea.

In recent months, I have been thinking about using GANs in recommender systems, but it is hard: ratings are discrete, so perturbing the ratings directly is not appropriate.

We know GANs are used to enhance the robustness of a model by creating more fine-grained negative samples from noise. Your paper instead perturbs the parameters (latent factors) of BPR-MF after it has converged.

However, the parameters of BPR are stable and almost invariant after convergence. The loss of the adversarial part, added to the BPR loss, then amounts in the optimization to a small-size gradient ascent plus a large-size gradient descent (the value of delta follows the gradient of the BPR loss, so it can be seen as gradient ascent that enlarges the adversarial loss). This is very much like a large step backward followed by a small step forward to adjust the parameter values (a trade-off).
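To make the question concrete, here is a minimal sketch of the update I have in mind, for a single BPR triple (u, i, j): the perturbation delta is built by gradient ascent on the BPR loss (normalized to length eps), and the parameters are then updated by gradient descent on the combined loss L_BPR(Theta) + lam * L_BPR(Theta + Delta). All names (`bpr_grads`, `apr_step`, `lr`, `eps`, `lam`) and the per-triple formulation are my own illustrative assumptions, not the paper's code.

```python
import numpy as np

def bpr_grads(pu, qi, qj):
    """Gradients of the BPR loss L = -ln sigma(pu . (qi - qj))
    with respect to the three embeddings of one (u, i, j) triple."""
    x = pu @ (qi - qj)
    c = -1.0 / (1.0 + np.exp(x))        # dL/dx = -(1 - sigma(x))
    return c * (qi - qj), c * pu, -c * pu

def apr_step(pu, qi, qj, lr=0.05, eps=0.5, lam=1.0):
    # Step 1 (the "backward" step): build the worst-case perturbation by
    # gradient ASCENT on the BPR loss, i.e. delta points along the gradient,
    # rescaled to norm eps (fast-gradient style).
    g = bpr_grads(pu, qi, qj)
    deltas = [eps * gi / (np.linalg.norm(gi) + 1e-12) for gi in g]

    # Step 2 (the "forward" step): gradient DESCENT on the combined
    # objective L_BPR(Theta) + lam * L_BPR(Theta + Delta),
    # with Delta treated as a constant.
    g_adv = bpr_grads(pu + deltas[0], qi + deltas[1], qj + deltas[2])
    pu_new = pu - lr * (g[0] + lam * g_adv[0])
    qi_new = qi - lr * (g[1] + lam * g_adv[1])
    qj_new = qj - lr * (g[2] + lam * g_adv[2])
    return pu_new, qi_new, qj_new
```

Under this reading, the adversary's ascent and the model's descent act on the same BPR gradient direction, which is why the whole procedure looks to me like a step backward followed by a step forward on the same loss surface.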

So in essence, can the whole idea be interpreted as adjusting the gradient optimization? The GAN is just the outer wrapping.

But the improvement from BPR-MF to AMF is amazing. How is it obtained without more noisy input, fewer parameters, or an adjusted regularizer? It is very puzzling to me, and I cannot find a good explanation.

Hope for your reply!

@BinFuPKU BinFuPKU reopened this May 26, 2019