
Evaluate Code #9

Open · Gary-code opened this issue Apr 14, 2022 · 5 comments

@Gary-code commented Apr 14, 2022

It seems that after training the model on our own dataset with the code from the master branch, I cannot evaluate it on the test set unless I switch to the ori- branch. However, I don't want to waste two days retraining the model. What can I do to evaluate on the test data in the master branch?

@Gary-code (Author)

I compared the code between the ori-code branch and the master branch. The saved checkpoints differ significantly: one saves the encoder and the generator separately, while the other simply saves the whole model.
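For illustration, here is a minimal sketch of the two checkpoint conventions being described; the module and key names (`encoder`, `generator`) are assumptions, not necessarily the repo's actual ones. In principle, a master-branch checkpoint can be split back into the separate-component style by filtering state-dict keys on the submodule prefix:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model; only the submodule names matter here.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)    # stand-in for the real encoder
        self.generator = nn.Linear(8, 8)  # stand-in for the real generator

model = Model()

# ori- branch style: encoder and generator saved as separate state dicts.
torch.save(
    {"encoder": model.encoder.state_dict(),
     "generator": model.generator.state_dict()},
    "checkpoint_ori.pt",
)

# master branch style: one state dict for the whole model.
torch.save(model.state_dict(), "checkpoint_master.pt")

# Splitting a master-branch checkpoint back into the ori- style by
# filtering keys on the (assumed) submodule prefixes:
state = torch.load("checkpoint_master.pt", map_location="cpu")
encoder_state = {k[len("encoder."):]: v
                 for k, v in state.items() if k.startswith("encoder.")}
generator_state = {k[len("generator."):]: v
                   for k, v in state.items() if k.startswith("generator.")}
```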

@dev-chauhan (Owner)

In the master branch you can use the evaluate_scores function to evaluate on your own data. If you have saved your model, you just have to generate paraphrases, then pass the generated paraphrases and the ground truth to evaluate_scores, which will return the scores of the generated paraphrases with respect to the ground truth in your data.
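A rough usage sketch of what that might look like; the import path, argument order, and return format of evaluate_scores are assumptions here, so check the master-branch source for the real ones:

```python
# Hypothetical import path; locate the actual module defining evaluate_scores
# in the master branch.
from eval import evaluate_scores

# Ground-truth paraphrases from the test set and the model's generated
# paraphrases, aligned one-to-one.
ground_truth = ["how do i learn python quickly", "what is machine learning"]
generated = ["how can i learn python fast", "what does machine learning mean"]

# Assumed argument order: generated first, references second.
scores = evaluate_scores(generated, ground_truth)
print(scores)  # e.g. BLEU and related metrics w.r.t. the ground truth
```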

@Gary-code (Author)

Thanks a lot.

@Gary-code (Author)

Which model in the paper is the best? EDLPGS?

@dev-chauhan (Owner)

It's EDLPS in terms of BLEU scores.
