
Calculate test loss and perplexity for language model #1

Open
NirantK opened this issue Mar 6, 2018 · 3 comments

NirantK (Owner) commented Mar 6, 2018

No description provided.

Shashi456 commented Oct 4, 2018

@NirantK could you tell me how you calculated the perplexity for your model?
Is it e^loss?
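
For reference, if the reported loss is the mean per-token cross-entropy in nats (the PyTorch/fastai default), perplexity is indeed its exponential. A minimal sketch with a hypothetical loss value:

```python
import math

test_loss = 4.2  # hypothetical mean cross-entropy per token, in nats
perplexity = math.exp(test_loss)  # perplexity = e^loss
print(f"perplexity = {perplexity:.2f}")  # -> 66.69
```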

NirantK (Owner, Author) commented Oct 4, 2018

The training (and validation) loss and perplexity are calculated during model training, as shown in the notebook. This is handled by fastai's implementation of metrics via hooks.

We can repeat the same on the test data; that is yet to be done, as this issue indicates.
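
Until that step is added to the notebook, here is a minimal sketch of what it could look like in plain PyTorch (this is not the repo's fastai code; `model` and `test_loader` are hypothetical names for the trained language model and a test-set DataLoader yielding `(inputs, targets)` batches of token ids):

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate(model, test_loader):
    """Return mean per-token cross-entropy (in nats) and perplexity on a test set."""
    model.eval()
    total_loss, total_tokens = 0.0, 0
    for inputs, targets in test_loader:
        logits = model(inputs)                    # (batch, seq_len, vocab_size)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # flatten to (n_tokens, vocab_size)
            targets.reshape(-1),
            reduction="sum",                      # sum now, average per token below
        )
        total_loss += loss.item()
        total_tokens += targets.numel()
    mean_loss = total_loss / total_tokens
    return mean_loss, math.exp(mean_loss)         # perplexity = e^loss

# Hypothetical usage, assuming `model` and `test_loader` exist:
# test_loss, test_ppl = evaluate(model, test_loader)
# print(f"test loss {test_loss:.4f} | test perplexity {test_ppl:.2f}")
```

Summing the loss and dividing by the total token count gives the per-token average even when batches differ in size, which is the quantity perplexity should be exponentiating.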

Shashi456 commented

You aren't printing perplexity anywhere; you've only mentioned it in a comment, which is why I had the doubt. Thank you for your answer.
