
How to evaluate models? #96

Open
liang-hou opened this issue Jun 29, 2023 · 3 comments
@liang-hou
Contributor

liang-hou commented Jun 29, 2023

Hi, thanks for the great work. I would like to know how to evaluate the generative performance of trained models. Specifically, how can I compute FID and other metrics such as IS, and have you implemented any automated evaluation pipeline?

@clementchadebec
Owner

Hi @liang-hou,
Thank you for opening this issue and sorry for the late reply. There is no automated evaluation pipeline for the moment. This is a feature that can be added in the future if you think it would be helpful. To evaluate your models you can use the samplers to generate new images and store them in a dedicated folder. Then, you can use the official implementations (see for instance the repo for the FID) or use torchmetrics which also includes easy metric computation.

I hope this helps,

Best,

Clément
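As a reference for the approach described above, here is a minimal sketch of the FID formula itself, independent of any particular library. It assumes you have already extracted feature vectors (in practice, InceptionV3 pool_3 activations) for the real and generated image folders; the random arrays below are stand-in feature matrices, not real activations.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats1, feats2):
    """Fréchet distance between Gaussians fitted to two feature sets.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
    """
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    sigma1 = np.cov(feats1, rowvar=False)
    sigma2 = np.cov(feats2, rowvar=False)
    # Matrix square root of the covariance product; small imaginary
    # parts can appear from numerical error, so keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Stand-in features: 256 samples of dimension 8 per "dataset".
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(256, 8))
feats_fake = rng.normal(loc=0.5, size=(256, 8))

print(frechet_distance(feats_real, feats_fake))  # positive distance
```

With real images you would replace the random arrays by Inception features of the stored samples; libraries such as torchmetrics wrap both the feature extraction and this computation.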

@MiguelCosta94

Hi @clementchadebec,
Which network did you use as the feature extractor to calculate the FID values shown in the paper?

Best regards,
Miguel

@clementchadebec
Owner

Hi @MiguelCosta94,

I used this piece of code to compute the FID values:
https://github.com/bioinf-jku/TTUR/blob/master/fid.py#L271

Best,

Clément
