A question about POG paper #1
Hi Xixi,
Thank you for your interest.
The parameters in POG's Gen network are initialized from the pre-trained FOM, as are the transition layers.
They can also be fine-tuned during the training process if your dataset is large enough.
Best Regards
Wen Chen (陈雯)
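To make the initialization scheme described above concrete, here is a minimal, hypothetical sketch: the Gen network's transition-layer parameters start as copies of the pre-trained FOM parameters and remain trainable afterwards. The layer names, shapes, and update step are illustrative assumptions, not taken from the POG codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-trained FOM parameters (names/shapes are hypothetical).
fom_params = {
    "transition.weight": rng.standard_normal((4, 4)),
    "transition.bias": np.zeros(4),
}

# Initialize the Gen network's matching layers by copying the FOM weights.
gen_params = {name: w.copy() for name, w in fom_params.items()}

# Fine-tune: one illustrative gradient step on the Gen copy only.
lr = 0.1
grad = np.ones_like(gen_params["transition.weight"])  # placeholder gradient
gen_params["transition.weight"] -= lr * grad

# The FOM weights are untouched; the Gen transition weights have moved.
print(np.allclose(fom_params["transition.weight"],
                  gen_params["transition.weight"]))  # prints False
```

The key point is that the copy is an initialization, not weight tying: after the copy, the Gen parameters are free to diverge from the FOM during training.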
…------------------ Original Message ------------------
From: "Xixi Wu" <notifications@github.com>
Date: Sunday, December 8, 2019, 3:40 PM
To: "wenyuer/POG" <POG@noreply.github.com>
Cc: "Subscribed" <subscribed@noreply.github.com>
Subject: [wenyuer/POG] A question about POG paper (#1)
Hi,
I have just read your paper from KDD '19. It is really nice work!
I am confused about the training process. In the POG framework, are the parameters in the transition layers (both in Per and Gen) shared with the FOM model's transition-layer parameters?
Or are they all learned during the training process with the proposed loss function?
Thanks!
I got it.
Hi, is there public code for the POG paper?