problem of training both models of classifier_model and Randomwalk_Word2vec #12

intellectwood opened this issue Sep 20, 2023 · 3 comments



intellectwood commented Sep 20, 2023

Hello, I really admire your work. I notice that the last line of main.py can train both models (classifier_model and Randomwalk_Word2vec), but setting loss=(loss, 1.0), (loss2, 0.0) only trains the classifier_model, right?

My question is: when setting loss=(loss, 1.0), (loss2, 1.0), the code raises an error:
[screenshots: error traceback]

I would also like to know which performs better: the classifier_model on its own, or training both models?
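
For reference, here is a minimal, hypothetical sketch of how a weighted loss pair such as (loss, 1.0), (loss2, 0.0) is typically combined into a single objective; the function and variable names are illustrative, not the repository's actual code. A weight of 0.0 means that loss contributes no gradient, so only the classifier part is effectively trained:

```python
import torch

def combine_losses(weighted_losses):
    # Sum each loss term scaled by its weight; a weight of 0.0 means that
    # term contributes no gradient, so the corresponding model is not updated.
    total = 0.0
    for term, weight in weighted_losses:
        total = total + weight * term
    return total

# With (loss2, 0.0) only the classifier loss drives the update.
classifier_loss = torch.tensor(0.7, requires_grad=True)
randomwalk_loss = torch.tensor(1.3, requires_grad=True)
total = combine_losses([(classifier_loss, 1.0), (randomwalk_loss, 0.0)])
total.backward()
print(classifier_loss.grad, randomwalk_loss.grad)  # tensor(1.) tensor(0.)
```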


ruochiz (Collaborator) commented Sep 21, 2023

Hi, thank you for your interest. Yeah, with the newer versions of TensorFlow, I stopped maintaining the code for the random walk part. The answer is that if you use the model in the adj mode, then just the classifier_model works well. If it's random walk based, then that part of the loss would also be preferred. A quick fix might just be changing line 132 of main.py from

example_emb = model.forward_u(examples)

to

example_emb, _ = model.forward_u(examples)

In addition, main_torch.py gets rid of the TensorFlow 1.0 dependencies and is slightly more up to date (while losing support for the random walk part of the model).
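
To illustrate why the unpacking matters, here is a minimal, hypothetical sketch (a toy stand-in, not the repository's actual forward_u): when a forward method returns a tuple, assigning the whole tuple to example_emb breaks any later tensor operation on it, while unpacking keeps only the embedding:

```python
import torch

class ToyModel(torch.nn.Module):
    # Hypothetical stand-in for the real model; forward_u here returns a tuple
    # (embedding, auxiliary value), which is what makes the unpacking necessary.
    def __init__(self, num_nodes=10, dim=4):
        super().__init__()
        self.emb = torch.nn.Embedding(num_nodes, dim)

    def forward_u(self, examples):
        embedding = self.emb(examples)
        aux = embedding.norm(dim=-1)  # placeholder for whatever else is returned
        return embedding, aux

model = ToyModel()
examples = torch.tensor([1, 2, 3])

# example_emb = model.forward_u(examples)   # a tuple; downstream tensor ops fail
example_emb, _ = model.forward_u(examples)  # keep only the embedding tensor
print(example_emb.shape)                    # torch.Size([3, 4])
```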

intellectwood (Author) commented:

It still can't run; I got this error. Have you tried it?
[screenshot: error traceback]
I am also interested in your new version that removes the random walk. Is that because the adj mode gives the best results, or because training both models (classifier_model and Randomwalk_Word2vec) improves things only slightly?


ruochiz (Collaborator) commented Sep 21, 2023

The latter. Training both models did offer some advantages when we did the benchmarking in the paper, but in some later applications of the model to other datasets, I found that training one model is good enough on its own.
