[MNT] Avoid CI fail on deep test by generating at least 2 classes in random data #485
Conversation
It might be worth just printing a message if catching the exception, then skipping the print from CI? I get a bit nervous doing nothing on a catch; I drum into my students not to do this :)
Yeah, you're right, makes sense :p
Just to confirm, this test is expected to work as intended the majority of the time, but in the case of weird IO magic we don't want to raise a failure? The pytest functionality for expected failures may be a better fit, i.e. xfail.
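For reference, a minimal sketch of the pytest expected-failure mechanism mentioned above; `run_saving_test` and the exception type are hypothetical placeholders for the real test body, not aeon's actual code.

```python
import pytest


def test_deep_learner_saving():
    try:
        run_saving_test()  # hypothetical helper standing in for the real test body
    except ValueError as e:
        # Mark the test as an expected failure instead of a hard error.
        # Note: pytest.xfail() raises, so anything after this line will not run,
        # which is the caveat raised in the next comment.
        pytest.xfail(f"Known flaky behaviour on CI: {e}")
```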
This would end the test though, so the code after it would not be run.
@MatthewMiddlehurst I don't think we should fail anything, given that it's a CI problem and not ours. The idea is to skip the CI magic failing, to avoid people getting random issues in their PRs even when they don't have anything to do with DL.
xfail does not cause the full test set to fail. Ideally we would just fix the cause, as just skipping may cause actual issues from changes to also be skipped. Based on the error in #473 it doesn't seem to be IO related? If you just want to skip for now, I won't block. |
I agree @MatthewMiddlehurst that the issue should be fixed instead of ignored, but I'm not sure it is fixable, given that it won't fail locally but only on CI; I've tried it multiple times. I think it's coming from saving to file on CI, which it may not like.
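For context, a minimal sketch of the skip-on-CI idea being debated above, assuming the `CI=true` environment variable that GitHub Actions sets and a hypothetical test body; this is an illustration of the approach, not the change actually merged here.

```python
import os

import pytest


def test_deep_learner_saving():
    try:
        run_saving_test()  # hypothetical helper standing in for the real test body
    except ValueError as e:
        if os.environ.get("CI") == "true":
            # Skip rather than silently pass, so the catch is visible in the test report.
            pytest.skip(f"Skipping known CI-only failure: {e}")
        raise
```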
Error log is here: https://github.com/aeon-toolkit/aeon/actions/runs/5186055397/jobs/9346945374?pr=473 Function call to
results in
so it is not even reaching the file saving part. Could be related to the random data used? I'm really not sure. The numpy random seed used was 379863931.
Hmm, probably better to fix a seed for the test, you think?
I'm not sure. It would probably help with stopping random failures, but the fact that this happened with any data is a little concerning. No idea if something happened in all the tensorflow stuff. The way the data is randomly generated, there's a small chance it becomes a 1-class problem maybe? For other tests we use
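A minimal sketch of the fix the PR title describes, i.e. generating random labels that are guaranteed to contain at least two classes; the function name, shapes, and RandomState usage are illustrative assumptions rather than aeon's actual data-generation helper, and the seed is the one reported in the failing run above.

```python
import numpy as np


def make_min_two_class_data(n_cases=10, n_channels=1, n_timepoints=20, seed=379863931):
    """Generate random 3D collection data with at least two classes in y."""
    rng = np.random.RandomState(seed)
    X = rng.normal(size=(n_cases, n_channels, n_timepoints))
    y = rng.randint(0, 2, size=n_cases)
    # Purely random labels can, with small probability, all end up in one class,
    # which breaks classifier fitting. Force at least one case of each class.
    y[0], y[1] = 0, 1
    return X, y
```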
Great spot @MatthewMiddlehurst
LGTM
Complement to #473