Use unittest as test runner instead of nose #475
Conversation
CI is failing with:

Thank you. Done.
kiva/tests/test_graphics_context.py (Outdated)
# No-op stand-ins for the hypothesis decorators, used when hypothesis
# is not installed:
def given(_):
    def func(_):
        pass
    return func

def sampled_from(_):
    pass
This was a bit awkward, but because `given` and `sampled_from` are used in decorators, even if I try to skip those tests when `hypothesis_available == False`, the tests still fail when unittest tries to load them, with something like:
======================================================================
ERROR: kiva.tests.test_graphics_context (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: kiva.tests.test_graphics_context
Traceback (most recent call last):
File "/Users/aayres/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/Users/aayres/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/Users/aayres/Desktop/enable/kiva/tests/test_graphics_context.py", line 27, in <module>
class TestAlphaBlackImage(unittest.TestCase):
File "/Users/aayres/Desktop/enable/kiva/tests/test_graphics_context.py", line 33, in TestAlphaBlackImage
@given(sampled_from([1.0, 0.0, 0.5]))
NameError: name 'given' is not defined
There is probably a cleaner, more standard way to do this sort of thing? I am not sure.
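(For reference, one common pattern — not necessarily what this PR should adopt — is to raise `unittest.SkipTest` at import time; on Python 3.5+ the discovery loader records the whole module as skipped instead of reporting a failed import. A minimal sketch:)

```python
import unittest

try:
    from hypothesis import given
    from hypothesis.strategies import sampled_from
except ImportError:
    # The unittest loader catches SkipTest raised during import and marks
    # the entire module as skipped rather than as an ImportError failure.
    raise unittest.SkipTest("hypothesis is not installed")
```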
`hypothesis` is a test dependency. I am not sure we need to skip these. We should make sure the test dependency is included in the setup.py, though. CI does have `hypothesis` installed, and these tests are run there.
To elaborate on why I am not sure we need to skip tests based on missing test dependencies: TraitsUI has `packaging` as its test dependency, and it was specified in `extras_require`. Whenever I do `pip install traitsui` and try to run the test suite, I am greeted with an import error due to the missing `packaging`: the tests requiring `packaging` are not silently skipped just because `packaging` is unavailable. That import error reminds me to reinstall with `pip install traitsui[test]`. If the test suite were permissive about missing test dependencies, I would risk forgetting to install those dependencies and end up running far fewer tests.

Test dependencies are still kept separate from the optional dependencies backing optional features of the tested system (e.g. ArrayEditor is an optional feature of TraitsUI that depends on NumPy). When a package has optional features with optional dependencies, the tests are expected to be runnable in the absence of those dependencies.
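A minimal sketch of how that separation looks in `setup.py` (the package name and dependency list here are illustrative assumptions, not the exact contents of this PR):

```python
from setuptools import setup

setup(
    name="enable",
    # ... install_requires, packages, etc. ...
    extras_require={
        # Test-only dependencies: installed with `pip install enable[test]`,
        # deliberately not pulled in by a plain `pip install enable`.
        "test": ["hypothesis"],
    },
)
```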
@@ -3,14 +3,12 @@
 from numpy import allclose, ravel

-import nose
-
 from kiva import agg

 # FIXME:
 # These tests are broken, and Peter promised to fix it at some point.
Might be better to just delete these tests? Peter Wang hasn't worked on this code for a long time, and I'm nearly certain that he has no interest in fixing them.
I noticed that only two of the four tests actually fail. It could also be platform-dependent behaviour, though.
FTR, these are the two tests that fail:
======================================================================
FAIL: test_bgr24_format (kiva.agg.tests.test_save.Test_Save)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 23, in test_bgr24_format
self.do_check_format('bgr24')
File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 43, in do_check_format
self.format_output_map[fmt])
AssertionError: Lists differ: [255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0] != [255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0, 255]
First differing element 6:
255
0
- [255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0]
? ------
+ [255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0, 255]
? ++++++
======================================================================
FAIL: test_bgra32_format (kiva.agg.tests.test_save.Test_Save)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 29, in test_bgra32_format
self.do_check_format('bgra32')
File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 43, in do_check_format
self.format_output_map[fmt])
AssertionError: Lists differ: [255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255] != [255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255, 255]
First differing element 8:
255
0
- [255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255]
? ------ ------
+ [255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255, 255]
? ++++++ ++++++
Mostly LGTM. Some minor questions / suggestions. I have not compared the number of tests run before and after this change, though.
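(If someone wants to compare, a quick way to count what unittest discovery collects — the start directory here is an assumption:)

```python
import unittest

# Count the tests that `python -m unittest discover` would collect.
suite = unittest.defaultTestLoader.discover("kiva")
print(suite.countTestCases())
```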
@unittest.skipIf(not etsdevtools_available, "test requires etsdevtools")
I couldn't find the skip message in the log. I believe that's because test discovery does not use the `test_suite` and `test` functions below, and none of the methods in this test case has a name with the prefix `test_`: they start with the prefix `check_` instead. I am not sure we still need to keep this `check_` prefix. Could we remove the `test_suite` and `test` functions and the `__main__` block below, and rename the methods to `test_...`, please? Such a change would be within the scope of this PR, because this PR changes how tests are discovered. The tests will still be skipped, though.
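A minimal sketch of the suggested cleanup (the class and method names here are hypothetical):

```python
import unittest

try:
    import etsdevtools  # noqa: F401
    etsdevtools_available = True
except ImportError:
    etsdevtools_available = False


@unittest.skipIf(not etsdevtools_available, "test requires etsdevtools")
class DebugTestCase(unittest.TestCase):
    # Renamed from check_something to test_something so that unittest
    # discovery picks it up; the module-level test_suite()/test() helpers
    # and the `if __name__ == "__main__"` block can then be deleted.
    def test_something(self):
        self.assertTrue(True)
```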
Should I also add `etsdevtools` as a test dependency, like what was done for `hypothesis`?
Er... `etsdevtools` is a worse dependency here. Let's not go there; let's kill this test in a separate PR. This PR should aim at minimal changes in the number of tests being run: the purpose is to switch the test runner. It is not meaningful to keep this test given that it has not been exercised, but that's orthogonal.
Opened #483.
…_extras_require__ under 'test'
…d remove unneeded functions
Codecov Report

@@            Coverage Diff             @@
##           master     #475      +/-   ##
==========================================
- Coverage   30.45%   30.43%   -0.03%
==========================================
  Files         206      206
  Lines       17852    17852
  Branches     2461     2461
==========================================
- Hits         5437     5433       -4
- Misses      12072    12076       +4
  Partials      343      343

Continue to review full report at Codecov.
Thank you.

(One minor suggestion in https://github.com/enthought/enable/pull/475/files#r539141323, as I forgot to submit it as part of a batch review.)
Fixes #426.

This PR simply updates edmtool to run using unittest instead of nose, and changes test file names to ensure they get discovered.
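For context on the renames: unittest discovery only collects files matching `test*.py` by default, so a module named e.g. `graphics_context_test.py` would be silently ignored until renamed (that example filename is an assumption). A minimal sketch of running discovery directly:

```python
import unittest

# Only files matching test*.py are collected by the default pattern;
# modules with other names are skipped silently, which is why the test
# files in this PR were renamed.
suite = unittest.defaultTestLoader.discover("kiva", pattern="test*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```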