Adds figure test capability #38
Conversation
That's odd; it seems to be working fine on the Azure Pipelines.
There seems to be some issue with the tests passing under pytest but failing under tox.
So I think we just need to either reduce the tolerance of the
@nabobalis @dstansby Would it be better to test each mesh separately, or to test them all plotted on a map?
In general it would be good to keep the number of figure tests to a minimum, as that reduces the number of test images we have to store (and potentially change in the future). We can test multiple features by adding them to the same test and just making sure they're in view of the camera (i.e. visible in the saved image). At the moment, I think we should have two tests:
Also, since I've just merged #41, you'll need to rebase this to make sure the tests don't error due to any new warnings.
Alright, I shall update them asap.
SQUASH MERGE THIS |
The figure tests can be performed using the `verify_cache_image()` method in `test_plotting`; 3 command-line arguments can be passed. The comparison happens through `pyvista.compare_images()`, and a threshold is set for the allowed error (the test fails at `IMAGE_REGRESSION_ERROR` >= 500 and warns at >= 200).
High-variance tests allow for a higher threshold of `IMAGE_REGRESSION_ERROR` >= 1000, but we haven't used this, as all of our tests fall in the normal range (probably because we're not changing lighting conditions). I've left it in, as some of our tests might fall in this range in the future.
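To make the mechanism concrete, here is a minimal sketch of how a `verify_cache_image()`-style helper could be built on top of `pyvista.compare_images()`. The cache directory name, the use of the calling test's name for the filename, and the warning behaviour are assumptions for illustration; the actual method in `test_plotting` may differ in detail.

```python
import inspect
import os
import warnings

import pyvista

IMAGE_REGRESSION_ERROR = 500    # error above this fails the test
IMAGE_REGRESSION_WARNING = 200  # error above this only warns
CACHE_DIR = "image_cache"       # hypothetical location of the cached images


def verify_cache_image(plotter):
    """Compare a plotter's render against its cached image.

    The cached filename is derived from the calling test's name, so each
    figure test maps to exactly one image in the cache (an assumption
    made for this sketch).
    """
    test_name = inspect.stack()[1].function
    image_path = os.path.join(CACHE_DIR, test_name + ".png")

    # First run (or after the cache has been reset): store the image
    # and treat the test as passing.
    if not os.path.isfile(image_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        plotter.screenshot(image_path)
        return

    # pyvista.compare_images accepts file paths or plotters and returns
    # a scalar error metric between the two images.
    error = pyvista.compare_images(image_path, plotter)
    if error > IMAGE_REGRESSION_ERROR:
        raise RuntimeError(
            f"{test_name} exceeded the image regression error of "
            f"{IMAGE_REGRESSION_ERROR} with an error of {error}"
        )
    if error > IMAGE_REGRESSION_WARNING:
        warnings.warn(
            f"{test_name} is within the image regression warning "
            f"threshold: error of {error}"
        )
```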
Usage:
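As a hypothetical example of the pattern (the test name and mesh here are illustrative, not taken from the actual test suite):

```python
import pyvista


def test_plot_sphere():
    # Build the scene off-screen, exactly as a normal plot would.
    plotter = pyvista.Plotter(off_screen=True)
    plotter.add_mesh(pyvista.Sphere())
    # Compare this render against the cached "test_plot_sphere.png";
    # on the first run the image is simply saved to the cache.
    verify_cache_image(plotter)
```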