Ganea model parameters #18
I wasn't able to replicate their paper at first, but then reading their source code https://github.com/dalab/deep-ed was super helpful.
Thank you. I've read their code and also tried to reimplement it (https://github.com/lej-la/deep-ed-pytorch), but I eventually gave up. I thought your model was a generalized version of their model and that it could produce the same results using only 1 relation (number of relations = 1). So were you eventually able to replicate their results with this code?
You are right: if the number of relations is set to 1, we get their model. Yes, I successfully replicated their results (I even got slightly higher scores, but not significantly so).
Please, just for clarification. My assumption is that if I run your code with the following parameters:
I should be able to get a Ganea-like model with a performance of ~92 micro F1 on AIDA-B. I've tried that several times, but the results were only 83.71 micro F1 on average. Thanks
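(For concreteness, a "rel-norm" run with a single relation, as described elsewhere in this thread, would look roughly like the sketch below; the `--mulrel_type` and `--n_rels` flag names are assumptions and may not match the exact command used.)

```bash
# Sketch of a Ganea-like run: a single latent relation with "rel-norm" scoring.
# Flag names are assumed; check nel/main.py or the README for the exact options.
export PYTHONPATH=$PYTHONPATH:../
python -u -m nel.main --mode train --mulrel_type rel-norm --n_rels 1
```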
Unfortunately, I don't have the facilities to run the code at the moment. When you ran the command line in the README,
did you get the reported number? For a Ganea-like model, could you try "rel-norm"?
The reason for trying "rel-norm" instead of "ment-norm" is that with ment-norm the model uses mention padding, which Ganea's model doesn't have.
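(For comparison, the full model referred to in the README, "ment-norm" with 3 relations as discussed later in this thread, would be invoked along these lines; again only a sketch with assumed flag names:)

```bash
# Sketch of the full ment-norm model with 3 relations.
# Flag names are assumed; the README has the exact invocation.
export PYTHONPATH=$PYTHONPATH:../
python -u -m nel.main --mode train --mulrel_type ment-norm --n_rels 3
```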
The result I was able to get for your best model (by running the first command) was 91.62 micro F1 on average.
Hmm, could you please send me the log files (or what you get when running the cmd)?
I don't have them, but I'll re-run the training and send the logs to you. Can I use your email address from your latest paper (https://arxiv.org/pdf/1906.01250.pdf)?
Yes, please send them to my Gmail address (I no longer use my UoEdin email address). Thanks!
Thank you :)
I have the same issue: I'm not able to achieve the claimed 93.07 score. Did you manage to find the cause?
Yes, I found the issue, but we had a private discussion which is not shown here. Long story short, lej-la commented out an important line. Could you show your log file?
Thanks for the fast response. I ran it once, got her result, and assumed there was some issue. Let me run it at least five times and get back to you with the log file, or with a message that I've reproduced the result.
Hey, I got a bit confused by the part in section 3.2 about rel-norm. The true parameters to replicate Ganea's global model actually use ment-norm with K=1. That way, the normalization factor becomes the same as in equation 3. With rel-norm, the normalization becomes just 1 instead of 1/(n-1). I got as close as 91.6 micro F1 on AIDA-B.
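(Spelling out the difference in the paper's notation, as I read it: the pairwise score is

$$\Phi(e_i, e_j, D) = \sum_{k=1}^{K} \alpha_{ijk}\, e_i^{\top} R_k\, e_j,$$

where rel-norm normalizes the weights over relations, $\sum_{k} \alpha_{ijk} = 1$, while ment-norm normalizes them over the other mentions, $\sum_{j \neq i} \alpha_{ijk} = 1$. So with $K = 1$, rel-norm forces $\alpha_{ij1} = 1$ for every pair, i.e. a plain sum over mentions, whereas ment-norm keeps the weights over mentions summing to 1, which plays the role of the $\tfrac{1}{n-1}$ averaging in Ganea & Hofmann's pairwise term.)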
Okay, so they differ in the normalization factor. Thanks for pointing that out.
But then the ment-norm model actually seems to have the same performance with K=1 and K=3.
All is good: I managed to reproduce the results. Very simple steps, with no issues; good job @lephong. Maybe you could also close this thread, as it seems to be resolved. Best.
I was trying to run your model with the "rel-norm" type and 1 relation.
As you suggest in the paper, it should be equivalent to the Ganea & Hofmann (2017) model, but the result I got on the AIDA-B dataset was not the same as they reported.
Their reported number was 92.22 micro F1, while I got only 83.71 micro F1 on average (the highest was 86.24).
Did you actually manage to replicate their results? Am I missing some parameter settings?
Thanks.