
Use unittest as test runner instead of nose #475

Merged · 16 commits · Dec 9, 2020

Conversation

@aaronayres35 (Contributor, Author):

Fixes #426.

This PR simply updates edmtool to run the tests using unittest instead of nose, and renames test files so that they match unittest's default discovery pattern.
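
(Context added for clarity, not part of the original description: unittest discovery only collects modules whose names match its pattern, test*.py by default, which is why the renames matter. A minimal sketch of the equivalent discovery call; the start directory is an assumption:)

    import unittest

    # Equivalent to: python -m unittest discover -s enable -p "test_*.py"
    # Modules whose names do not match the pattern are silently skipped.
    loader = unittest.TestLoader()
    suite = loader.discover("enable", pattern="test_*.py")
    unittest.TextTestRunner(verbosity=2).run(suite)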

@aaronayres35 (Contributor, Author):

CI is failing with:

======================================================================
ERROR: testURLFallback (enable.savage.svg.tests.test_document.TestBrushFromColourValue)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/site-packages/enable/savage/svg/tests/test_document.py", line 69, in testURLFallback
    self.document.getBrushFromState().color,
  File "/home/travis/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/site-packages/enable/savage/svg/document.py", line 880, in getBrushFromState
    element = self.dereference(url)
  File "/home/travis/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/site-packages/enable/savage/svg/document.py", line 344, in dereference
    element = resources.open_svg(uri)
  File "/home/travis/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/site-packages/enable/savage/svg/document.py", line 191, in open_svg
    f = open(path)
  File "/home/travis/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/site-packages/enable/savage/svg/document.py", line 183, in <lambda>
    return path, lambda uri: BytesIO(urllib.request.urlopen(uri).read())
AttributeError: module 'urllib' has no attribute 'request'
----------------------------------------------------------------------
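
(Side note for readers, not part of the thread: this is the usual Python 3 pitfall where a bare import urllib does not import the urllib.request submodule. A minimal sketch of the fix, with the helper name made up for illustration:)

    import urllib.request  # a bare `import urllib` leaves .request undefined
    from io import BytesIO

    def open_remote_svg(uri):
        # Hypothetical helper mirroring the lambda in document.py.
        return BytesIO(urllib.request.urlopen(uri).read())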

@kitchoi (Contributor) commented Dec 8, 2020:

This PR and #455 will each create trivial merge conflicts for the other. Is it okay if we merge #455 first?

@aaronayres35 (Contributor, Author):

> This PR and #455 will each create trivial merge conflicts for the other. Is it okay if we merge #455 first?

Yep, go right ahead!

@kitchoi (Contributor) commented Dec 8, 2020:

> Yep, go right ahead!

Thank you. Done.

aaronayres35 changed the title from "Use unittest as test runner instead of nose" to "[WIP] Use unittest as test runner instead of nose" on Dec 8, 2020
Comment on lines 12 to 17:

    def given(_):
        def func(_):
            pass
        return func
    def sampled_from(_):
        pass
@aaronayres35 (Contributor, Author):

This was a bit awkward: because given and sampled_from are used in decorators, even if I try to skip those tests when hypothesis_available == False, the module still fails at load time with something like:

======================================================================
ERROR: kiva.tests.test_graphics_context (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: kiva.tests.test_graphics_context
Traceback (most recent call last):
  File "/Users/aayres/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
    module = self._get_module_from_name(name)
  File "/Users/aayres/.edm/envs/enable-test-3.6-null-pillow/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
    __import__(name)
  File "/Users/aayres/Desktop/enable/kiva/tests/test_graphics_context.py", line 27, in <module>
    class TestAlphaBlackImage(unittest.TestCase):
  File "/Users/aayres/Desktop/enable/kiva/tests/test_graphics_context.py", line 33, in TestAlphaBlackImage
    @given(sampled_from([1.0, 0.0, 0.5]))
NameError: name 'given' is not defined

There is probably a cleaner, more standard way to do this sort of thing? I am not sure.
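
(A sketch of one common idiom, not from the thread: provide no-op stand-ins at import time and pair them with a class-level skip, so the module always imports cleanly. The class name just mirrors the one in the traceback; the method body is made up:)

    import unittest

    try:
        from hypothesis import given
        from hypothesis.strategies import sampled_from
        hypothesis_available = True
    except ImportError:
        hypothesis_available = False

        def given(*args, **kwargs):
            # No-op decorator stand-in so the module still imports.
            def decorator(func):
                return func
            return decorator

        def sampled_from(*args, **kwargs):
            return None

    @unittest.skipUnless(hypothesis_available, "test requires hypothesis")
    class TestAlphaBlackImage(unittest.TestCase):
        @given(sampled_from([1.0, 0.0, 0.5]))
        def test_alpha(self, alpha):
            ...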

A Contributor replied:

hypothesis is a test dependency; I am not sure we need to skip these. We should make sure the test dependency is included in the setup.py, though. CI does have hypothesis installed, and these tests are run there.

The same Contributor added:

To elaborate on why I am not sure we need to skip tests based on missing test dependencies: TraitsUI has packaging as a test dependency, specified in extras_require. Whenever I do pip install traitsui and try to run the test suite, I am greeted with an import error due to the missing packaging: the tests requiring packaging are not skipped just because packaging is unavailable. That import error reminds me to reinstall with pip install traitsui[test]. If the test suite were permissive about missing test dependencies, I would risk forgetting to install them and end up running far fewer tests.

Test dependencies are still separate from optional dependencies that support optional features of the tested system (e.g. ArrayEditor is an optional feature of TraitsUI that depends on NumPy). When a package has optional features with optional dependencies, the tests are expected to run in the absence of those dependencies.
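
(Illustrative sketch only; the metadata below is assumed, not copied from enable's actual setup.py. Declaring test-only dependencies under extras_require lets pip install enable[test] pull them in:)

    from setuptools import setup, find_packages

    setup(
        name="enable",
        packages=find_packages(),
        # Runtime dependencies (illustrative).
        install_requires=["numpy", "pillow"],
        extras_require={
            # Test-only dependencies: installed with `pip install enable[test]`.
            "test": ["hypothesis"],
        },
    )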

aaronayres35 changed the title from "[WIP] Use unittest as test runner instead of nose" to "Use unittest as test runner instead of nose" on Dec 8, 2020
@@ -3,14 +3,12 @@
 
 from numpy import allclose, ravel
 
-import nose
-
 from kiva import agg
 
 
 # FIXME:
 # These tests are broken, and Peter promised to fix it at some point.
A Member commented:

Might be better to just delete these tests? Peter Wang hasn't worked on this code for a long time. I'm nearly certain that he has no interest in fixing these tests.

A Contributor replied:

I noticed that only two of the four tests actually fail. It could also be platform-dependent behaviour, though.

A Contributor commented:

FTR, these are the two tests that fail (in both cases the actual and expected lists differ only in that the first and third channel of each pixel are swapped):

======================================================================
FAIL: test_bgr24_format (kiva.agg.tests.test_save.Test_Save)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 23, in test_bgr24_format
    self.do_check_format('bgr24')
  File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 43, in do_check_format
    self.format_output_map[fmt])
AssertionError: Lists differ: [255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0] != [255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0, 255]

First differing element 6:
255
0

- [255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0]
?                                              ------

+ [255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 0, 255]
?                                ++++++


======================================================================
FAIL: test_bgra32_format (kiva.agg.tests.test_save.Test_Save)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 29, in test_bgra32_format
    self.do_check_format('bgra32')
  File "/Users/kchoi/.edm/envs/enable-test-3.6-pyqt5-pillow/lib/python3.6/site-packages/kiva/agg/tests/test_save.py", line 43, in do_check_format
    self.format_output_map[fmt])
AssertionError: Lists differ: [255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255] != [255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255, 255]

First differing element 8:
255
0

- [255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255]
?                                               ------          ------

+ [255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 255, 255]
?                                          ++++++          ++++++
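
(Aside, not from the thread: the mismatches above amount to reversing the colour-channel order of each pixel. A minimal numpy illustration with made-up pixel values:)

    import numpy as np

    # One red and one white pixel in RGB order...
    rgb = np.array([[255, 0, 0], [255, 255, 255]], dtype=np.uint8)
    # ...reversing the channel axis yields BGR order.
    bgr = rgb[:, ::-1]
    assert bgr.tolist() == [[0, 0, 255], [255, 255, 255]]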

@kitchoi (Contributor) left a comment:

Mostly LGTM. Some minor questions and suggestions. I have not compared the number of tests run before and after this change, though.

Review thread on enable/tests/test_test_assistant.py (resolved).
Review thread on enable/tests/test_test_assistant.py (resolved, outdated).


Comment on:

    @unittest.skipIf(not etsdevtools_available, "test requires etsdevtools")
A Contributor commented:

I couldn't find the skip message in the log. I believe that's because test discovery does not use the test_suite and test functions below, and none of the methods in this test case has a name with the test_ prefix: they start with check_ instead. I am not sure we still need to keep the check_ prefix. Could we remove test_suite, test, and the __main__ block below, and rename the methods to test_... please? Such a change would be within the scope of this PR, because this PR changes how tests are discovered. The tests will still be skipped, though.
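
(A sketch of the suggested cleanup; the class and method names below are assumptions, not the actual file contents:)

    import unittest

    try:
        import etsdevtools  # noqa: F401
        etsdevtools_available = True
    except ImportError:
        etsdevtools_available = False

    @unittest.skipIf(not etsdevtools_available, "test requires etsdevtools")
    class DebugToolsTestCase(unittest.TestCase):
        # Renamed from check_... so default discovery finds it; the
        # nose-era test_suite()/test() helpers and the __main__ block
        # are removed entirely.
        def test_something(self):
            ...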

@aaronayres35 (Contributor, Author):

Should I also add etsdevtools as a test dependency, as was done for hypothesis?

A Contributor replied:

Er... etsdevtools is a worse dependency here. Let's not go there; let's kill this test in a separate PR instead. This PR should aim for minimal changes in the number of tests being run: its purpose is to switch the test runner. It is not meaningful to keep this test given it has not been exercised, but that's orthogonal.

@aaronayres35 (Contributor, Author):

Opened #483.


Review thread on kiva/agg/tests/test_save.py (resolved, outdated).
@codecov-io commented Dec 8, 2020:

Codecov Report

Merging #475 (e34aeca) into master (b09f48e) will decrease coverage by 0.02%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #475      +/-   ##
==========================================
- Coverage   30.45%   30.43%   -0.03%     
==========================================
  Files         206      206              
  Lines       17852    17852              
  Branches     2461     2461              
==========================================
- Hits         5437     5433       -4     
- Misses      12072    12076       +4     
  Partials      343      343              
Impacted Files                    Coverage Δ
enable/__init__.py                75.00% <ø> (ø)
enable/savage/svg/document.py     26.80% <100.00%> (ø)
kiva/image.py                     85.71% <0.00%> (-14.29%) ⬇️
kiva/fonttools/font_manager.py    48.50% <0.00%> (-2.09%) ⬇️
enable/qt4/base_window.py         51.69% <0.00%> (+0.61%) ⬆️
enable/abstract_window.py         54.19% <0.00%> (+0.76%) ⬆️
enable/qt4/scrollbar.py           78.73% <0.00%> (+1.14%) ⬆️
enable/container.py               56.85% <0.00%> (+1.67%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@kitchoi (Contributor) left a comment:

Thank you.
(One minor suggestion in https://github.com/enthought/enable/pull/475/files#r539141323, as I forgot to submit it as part of a batch review.)

Successfully merging this pull request may close these issues: Use unittest as the test runner instead of nose (#426).

4 participants