Fails when batch is an odd number #1
Hey, @golunovas. Could you please provide a use case for an odd batch_size? It seems the best way to handle this problem is to add an explicit check to AutoAlbument that ensures batch_size is an even number before running the search phase. Otherwise, problems may arise from having different numbers of augmented and non-augmented images.
Well, I ran into that issue on the last batch of the epoch: I ran a search on the generated config search.yaml, where drop_last wasn't set to true for the dataloader, and I had an odd number of samples in the dataset. An odd batch size leads to exactly the same issue. IMO, the easiest solution is to set drop_last to true by default and require an even batch size in the config.
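The suggested fix can be sketched as follows. This is a minimal illustration only: `make_batches` is a hypothetical helper that mirrors the behavior of PyTorch DataLoader's `drop_last` flag and adds the proposed even-batch-size check; it is not AutoAlbument's actual code.

```python
def make_batches(num_samples, batch_size, drop_last=True):
    """Group sample indices into batches of size batch_size.

    Hypothetical sketch of the proposed defaults: batch_size must be even
    (the search phase splits each batch into augmented / non-augmented
    halves), and a short final batch is dropped, like DataLoader's
    drop_last=True.
    """
    if batch_size % 2 != 0:
        raise ValueError("batch_size must be even for the search phase")
    indices = list(range(num_samples))
    batches = [indices[i:i + batch_size] for i in range(0, num_samples, batch_size)]
    # Drop the last batch if it is smaller than batch_size,
    # so every batch the search phase sees can be split evenly.
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches
```

With 10 samples and batch_size=4, this yields two full batches and silently drops the trailing batch of 2, avoiding the failure on the last batch of the epoch.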
I have added the
It seems like the issue is coming from here: autoalbument/autoalbument/faster_autoaugment/search.py, line 254 (commit dbeffa7).
As far as I understand, it requires an even batch size; otherwise it fails here: autoalbument/autoalbument/faster_autoaugment/search.py, line 201 (commit dbeffa7).
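The failure mode can be illustrated with a small sketch. `split_batch` below is a hypothetical stand-in for the half-and-half split performed during the search phase (where one half of the batch is augmented and the other is not), not the actual code in search.py:

```python
def split_batch(batch):
    """Split a batch into two equal halves: one to augment, one to leave as-is.

    Hypothetical stand-in for the split in the search phase. With an odd
    batch, the two halves have different lengths, and any downstream
    operation that pairs them element-wise fails.
    """
    half = len(batch) // 2
    to_augment, to_keep = batch[:half], batch[half:]
    if len(to_augment) != len(to_keep):
        raise ValueError(
            f"odd batch of size {len(batch)} cannot be split into equal halves"
        )
    return to_augment, to_keep
```

An even batch splits cleanly; an odd batch (such as the short last batch of an epoch when drop_last is false) raises, which is the same shape mismatch the traceback points at.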