Hello,

I'm interested in using pretrained models on this dataset and wondered: is it safe to assume that networks pretrained on iNaturalist 2018 have not seen any images from the iNaturalist 2021 validation split?

For context, I'm running into an issue where many of the pretrained models I've found for iNat-2021 were likely selected based on the validation split, leaving no labelled data with which to run additional experiments, unless the test-set labels for iNat-2021 have been published somewhere now that the competition has ended?

Thanks! I appreciate the work you've put in to create and maintain the dataset.
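Rather than relying on the assumption, one way to check it empirically is to intersect the image lists of the two releases. Below is a minimal sketch, assuming both releases ship COCO-style JSON annotation files with a `file_name` per image; the helper names (`image_names`, `split_overlap`) are mine, not part of any dataset tooling:

```python
import json

def image_names(annotation_path):
    """Return the set of bare image file names from a COCO-style annotation file.

    The directory layout differs between iNat releases, so only base names
    are compared here.
    """
    with open(annotation_path) as f:
        data = json.load(f)
    return {img["file_name"].rsplit("/", 1)[-1] for img in data["images"]}

def split_overlap(json_a, json_b):
    """File names present in both annotation files (possible leakage)."""
    return image_names(json_a) & image_names(json_b)
```

Caveat: this is only a heuristic. If a release renames its images, filename intersection will miss true duplicates, and hashing the pixel content of each file would be the reliable check.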