
sota configs with different version pytorch #238

Open
xlk369293141 opened this issue Oct 27, 2021 · 3 comments
Comments

@xlk369293141

Thanks for sharing this work, but I have some problems reproducing the results.
I used your setup.py, except with PyTorch 1.9.0 + CUDA 11.1, because of our limited GPU device support.
I tried the ComplEx and ConvE configs you provided with three random seeds, and report the best mean_reciprocal_rank_filtered_with_test:
ComplEx FB15k-237: 27.0 (34.8) WNRR: 44.8 (47.5)
ConvE FB15k-237: 30.7 (33.9) WNRR: 42.5 (44.2)
Your reported results are in brackets. What could cause this difference?

And another question: how can I get all tail entity indices that satisfy a particular query (sp_) from the Dataset class?

@AdrianKs
Collaborator

Hi,
can you run one of the experiments without any random seed as a sanity check? I want to make sure that there are no issues with seeding that influence the final quality.

Regarding your second question: to find the most probable tail entities, you need to score your sp-query against all objects and sort the resulting scores in descending order. You can get all scores with the function self.model.score_sp.
The most probable objects are the ones ranked highest.

def score_sp(self, s: Tensor, p: Tensor, o: Tensor = None) -> Tensor:
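For illustration, the ranking step can be sketched in plain Python. Note this is a stand-alone toy, not LibKGE code: in LibKGE, self.model.score_sp(s, p) returns a torch tensor of shape (num_queries, num_entities), and you would sort it with torch.argsort(scores, descending=True) instead of the list-based helper below.

```python
# Toy sketch of ranking tail entities for a single (s, p) query.
# `scores` stands in for one row of the tensor returned by score_sp:
# one score per candidate object entity.

def rank_objects(scores):
    """Return entity indices sorted by score, most probable first."""
    return sorted(range(len(scores)), key=lambda o: scores[o], reverse=True)

# Toy scores for 5 candidate objects of one sp-query:
scores = [0.1, 2.3, -0.5, 1.7, 0.9]
print(rank_objects(scores))  # entity 1 scores highest: [1, 3, 4, 0, 2]
```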

In case you are only looking for tail entities that answer the query with the triples given in the train set, you can use the index self.dataset.index("train_sp_to_o"). We have indexes like this for all splits (also valid and test).
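Conceptually, such an sp-to-o index is just a map from (subject, predicate) pairs to the object ids observed in that split's triples. Here is a minimal stand-alone sketch of how one could be built; the actual LibKGE index is precomputed and accessed through dataset.index(...), so this is only to illustrate the data structure:

```python
from collections import defaultdict

def build_sp_to_o(triples):
    """Map each (s, p) pair to the list of tail entities o seen in `triples`."""
    index = defaultdict(list)
    for s, p, o in triples:
        index[(s, p)].append(o)
    return index

# Toy training triples (s, p, o) with integer entity/relation ids:
train = [(0, 1, 2), (0, 1, 3), (4, 1, 2)]
index = build_sp_to_o(train)
print(index[(0, 1)])  # all tail entities answering the query (0, 1, ?): [2, 3]
```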

@rgemulla
Member

rgemulla commented Nov 25, 2021

As a data point: I reran ComplEx on FB15k-237 using the versions listed in setup.py and PyTorch 1.10. In the paper, we reported 34.8. The rerun produced 35.1 (without a seed) and 35.2 (with --random_seed.default 1). I cannot reproduce this issue.

@rgemulla
Member

@xlk369293141 Are you still experiencing this problem?
