PR: Use the Python interpreter set in Preferences to run tests #174

Merged: 27 commits into spyder-ide:master on Jun 10, 2022

Conversation

@stevetracvc
Contributor

This PR fixes #65 allowing the user to change which python executable will be used for the tests.

This is useful, eg, if you install spyder in its own conda environment and then only install the spyder-kernels in the env you want to run the tests in.

Note, you'll need to install qtpy and pyqt in the target env if it doesn't have spyder installed.
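
As a rough illustration (this helper is hypothetical, not part of the plugin), whether a target environment is usable boils down to whether its interpreter can import the worker's dependencies:

import subprocess

def target_env_is_ready(pyexec):
    """Return True if the interpreter at `pyexec` can import the modules the
    test worker relies on (qtpy with a Qt binding such as pyqt, and pyzmq)."""
    result = subprocess.run([pyexec, "-c", "import qtpy, zmq"],
                            capture_output=True, text=True)
    return result.returncode == 0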

@stevetracvc
Contributor Author

hmm that's odd, I don't remember installing zmq specifically

@stevetracvc
Contributor Author

@jitseniesen OK tests pass on my machine now, sorry about that. Oddly, I have to have one import to make the tests work and one import to make the plugin work (check the comments in that last commit, 7ae8b3b)

@codecov-commenter

codecov-commenter commented Apr 23, 2022

Codecov Report

Merging #174 (c2bbe2b) into master (b788ad9) will decrease coverage by 0.62%.
The diff coverage is 77.77%.

Impacted Files | Coverage Δ
spyder_unittest/backend/pytestworker.py | 92.30% <50.00%> (-2.57%) ⬇️
spyder_unittest/backend/pytestrunner.py | 94.44% <100.00%> (-5.56%) ⬇️
spyder_unittest/backend/runnerbase.py | 94.04% <100.00%> (-0.14%) ⬇️
spyder_unittest/widgets/unittestgui.py | 83.87% <100.00%> (+0.07%) ⬆️

@jitseniesen
Member

Thanks for figuring out how to make the imports work. It took me a while to convince myself that it is now fine.

Regarding translations and the _(...) function, I think it is best to get rid of it. I was never certain about whether to include translations in the pytestworker.py file and how it would work; now it will almost certainly not work as intended. So please remove all references to the _(...) function.
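
For illustration (with a made-up string, not the actual text in pytestworker.py), the change amounts to dropping the wrapper:

# Before: user-visible text routed through the _() translation helper
#     header = _('Test session starts')
# After: a plain string, so the worker no longer depends on the translation
# machinery being set up in the separate process
header = 'Test session starts'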

@stevetracvc
Contributor Author

Yea it was weird, I don't know. Clearly that isn't the best way but it seemed to work.

I removed the translation from pytestworker and also added a couple tests for the new input in the dialog box. Let me know if you think I should change anything else!

@jitseniesen
Member

I added a commit so that the tests now use the Python interpreter that is specified in the Spyder preferences if the user does not specify one in the test configuration. If you have time and feel up to it, can you please test whether that works for you?

I wonder whether it is now necessary to have a separate option to set the interpreter for tests. In practice, how often would people want to have one interpreter to run the code and another interpreter to run the tests? What do you think?

Apart from deciding on that, I want to have a good look at all the code in this pull request and then it should be good to go in.

@stevetracvc
Contributor Author

@jitseniesen you figured out the thing I couldn't 😢 Your solution is much more elegant :) I agree, we probably don't need to manually select the python executable in the config modal.

However, honestly, I wish I could toggle a setting so that when I create a new console, it asks which python executable I want to use (or maybe it could just be a menu option under the special consoles menu). Maybe I'm an odd duck in how I use Spyder, but sometimes I want to have two consoles open, each using a different python (it's been a little while since I did that, though). I guess the better way would be to have each as a separate project, and then just open two copies of Spyder so that each project would have its own python too. But, again, it would be nice if that were stored as a config for the project. Currently, I have to go to settings and change it for each project.

Another option is that I could hijack part of your addition and have it auto-fill the selection in the unittest config. But when should it change? Probably only when first opening Spyder, right? Load the default python executable when starting Spyder if the project doesn't already have one set (which would then actually set the project's pyexec config option). Oddly, your addition (decorated with on_conf_change) is called twice when starting up Spyder. I wouldn't have expected it to get called at all.

Your code now sets the unittest executable when the user changes the Spyder config (if not set), but there could also still be a console open that uses the old executable. When should the unittest plugin switch to the new executable? As it stands right now, if the user did NOT select a custom exec, then it'll switch. But if the user DID select a custom exec, then the main spyder config change won't affect the unittest exec. That is probably the behavior that someone would expect, right? Keep the manual override if set.
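
A minimal sketch of that override rule (the import path, decorator signature, and attribute names are assumptions for illustration, not the code in this PR):

# In the plugin this would live on a configuration-observing widget, which is
# what makes the decorator actually fire; shown here on a bare class only to
# keep the sketch short.
from spyder.api.config.decorators import on_conf_change  # assumed import path

class InterpreterTracker:
    """Follow Spyder's main interpreter unless the user set a manual override."""

    def __init__(self):
        self.custom_executable = None   # set when the user picks one in the config dialog
        self.executable = None          # interpreter actually used to run tests

    @on_conf_change(section='main_interpreter', option='executable')
    def on_executable_changed(self, value):
        # Only track the Spyder-wide setting while no manual override exists.
        if self.custom_executable is None:
            self.executable = value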

When using projects and no pyexec is set, the value None is stored in the project config file. When the config is read back, this is interpreted as the string "None" rather than the Python None.

Another option could be to change the default value of pyexec to '' instead of None.
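
A small illustration of the workaround (the helper name is made up):

def normalize_pyexec(raw):
    """Map a value read back from the project config to a usable path or None.

    A Python None written to the ini-style project config comes back as the
    string 'None', so treat that (and an empty '' default) as "not set"."""
    if raw in (None, '', 'None'):
        return None
    return raw
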
@ccordoba12 (Member) left a comment

Hey @stevetracvc, thanks a lot for your help with this!

Review threads on spyder_unittest/unittestplugin.py and spyder_unittest/widgets/configdialog.py (outdated, resolved)
@stevetracvc
Contributor Author

stevetracvc commented May 15, 2022

@ccordoba12 @jitseniesen So I removed the plugin-specific python executable config and cleaned things up, mostly putting it back the way it started 😆 Thanks Jitse for figuring out how to get the info from spyder itself, I think most of this PR is now yours 😄 .

However, this makes me think that the custom python interpreter should be a project-level setting rather than an application-level setting. Here's my specific use case, let me know what you think:

While working on this plugin, I needed to use my spyder-env conda environment for things like running the tests. But that is NOT the environment I use for most of my projects. Thus, with a unittest-specific python interpreter, it was pretty easy for me to run the tests for spyder-unittest using the unittest plugin. In a separately running instance of spyder, I'm running one of my other projects and can easily run the unit tests there too (it's using the custom interpreter that my spyder config is set to) since its unittest config was using the correct python.

Now, using just the application-level custom (or default) python, if I want to run the spyder-unittest tests in the unittest plugin, I have to go change the interpreter for the whole program, then set it back when I'm done.

This would have to be a change at the spyder level, but my earlier work on this seems to indicate it should work. Carlos, it looks like you discussed it a little back in September, has anyone started on this? I don't see anything in v5.3.0 spyder-ide/spyder#16299 (comment)

@jitseniesen
Member

PR spyder-ide/spyder#17788 got merged and will be part of Spyder 5.3.1, so we can use it here if we depend on that version.

As I see it, project-level settings were never properly implemented in Spyder even though we have talked about it for a long time (see spyder-ide/spyder#9804). It is a pity because it makes projects a lot less useful, and associating interpreters with projects in particular is an often requested feature going way back to 2015 (spyder-ide/spyder#1014). However, the basic functionality in Spyder is there and this plugin uses it in a very hacky way; I am actually surprised it still works.

@maurerle

Hi @stevetracvc,
You can look at the code I added to my fork here: maurerle@ab3429a#diff-3896c4e6d6471bfd70003914882842f06d3318322aeade9cfb2dc5175259714cR343

I think you can remove the on_conf_change by directly accessing the current python interpreter used for code execution:
executable = self.get_conf('executable', section='main_interpreter')
This makes the code a lot cleaner, since you don't have to fall back to the custom_interpreter option yourself when one is set.
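
A rough sketch of how that could look (names here are placeholders; only the get_conf call comes from the suggestion above):

def interpreter_for_tests(get_conf):
    """Return the interpreter Spyder is currently configured to run code with.

    `get_conf` stands in for the widget's config accessor (a Spyder mixin
    method); it is passed in only to keep the sketch self-contained."""
    # 'executable' in the 'main_interpreter' section already reflects a custom
    # interpreter chosen in Preferences, so no separate custom_interpreter
    # lookup is needed.
    return get_conf('executable', section='main_interpreter')

The runner could then call something like this wherever it builds the command that starts the test process.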

You should also update the spyder entry in REQUIREMENTS in setup.py to >=5.3.1; then this can be released once Spyder 5.3.1 is out?

@ccordoba12
Member

I agree with @maurerle's suggestion.

@stevetracvc
Contributor Author

OK neat, thanks @maurerle I'll look at this tomorrow. I need to update to 5.3.1.

If what you're saying is really that simple, then maybe some other plugins can be fixed too (like the pylint one)

@ccordoba12
Member

> I need to update to 5.3.1.

5.3.1 is not released yet, but I'll do it during the weekend.

> If what you're saying is really that simple, then maybe some other plugins can be fixed too (like the pylint one)

Yep, that's what @maurerle actually did for the Profiler plugin in Spyder.

@stevetracvc
Contributor Author

@maurerle thanks for the help. I bumped the spyder min version requirements and added your get_conf. Looks like it works!

Also, I can confirm that if you want to use a conda environment that doesn't have spyder installed, you will need both qtpy and pyqt (which installs several other things too) in order to get the unit-test plugin to work

@ccordoba12 changed the title from "Feat/use different python" to "Use the Python interpreter set in Preferences to run tests" on Jun 1, 2022
@stevetracvc
Contributor Author

OK @ccordoba12 ready for a retest. Is there a better way that I can run all those different OS tests locally?

@ccordoba12
Member

> Is there a better way that I can run all those different OS tests locally?

Meaning?

@stevetracvc
Contributor Author

stevetracvc commented Jun 1, 2022

> Is there a better way that I can run all those different OS tests locally?

> Meaning?

So I could run the Windows and OSX tests from my Linux laptop? Rather than having to push, wait for github, etc. I'm guessing I'd have to set up docker containers or something. (Sorry, I'm not a developer, I'm a chemical engineer 😆 )

Also, I can't figure out why all these pytestrunner tests are failing; they don't fail for me. I think they popped up when I updated the minimum spyder version? I think it's a pytest-qt issue.

@stevetracvc
Contributor Author

@ccordoba12 all of the failing tests use qtbot, so I just pushed an update to the tests.txt requirements file for a newer version; see if that fixes the test crashes?

@stevetracvc
Contributor Author

Well that didn't work. Any suggestions other than just skipping all tests using qtbot?

@dalthviz
Member

dalthviz commented Jun 2, 2022

Just in case, maybe changing the Linux system packages setup step in the workflow could help with the segfaults on Linux. So instead of:

run: |
sudo apt-get update
sudo apt-get install libegl1-mesa

Running something like this:

https://github.com/spyder-ide/spyder-line-profiler/blob/3b2bfb83497e0f49abb912eeea5da3d061502aca/.github/workflows/linux-tests.yml#L27-L29

@stevetracvc
Contributor Author

This isn't tenable; here are all the tests using qtbot:

backend/tests/test_zmqstream.py:def test_zmqstream(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_collected(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_collecterror(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_starttest(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_finished(qtbot, output, results):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_logreport_passed(qtbot):
tests/test_unittestplugin.py:def plugin(qtbot):
tests/test_unittestplugin.py:def test_plugin_config(plugin, tmpdir, qtbot):
tests/test_unittestplugin.py:def test_plugin_goto_in_editor(plugin, qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_uses_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_indicates_unvailable_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_disables_unavailable_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_sets_initial_config(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_click_ham(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_ok_initially_disabled(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_ok_setting_framework_initially_enables_ok(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_clicking_pytest_enables_ok(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_wdir_lineedit(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_wdir_button(qtbot, monkeypatch):
widgets/tests/test_unittestgui.py:def widget(qtbot):
widgets/tests/test_unittestgui.py:def test_unittestwidget_forwards_sig_edit_goto(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_unittestwidget_set_config_emits_newconfig(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_unittestwidget_set_config_does_not_emit_when_invalid(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_run_tests_and_display_results(qtbot, widget, tmpdir, monkeypatch, framework):
widgets/tests/test_unittestgui.py:def test_stop_running_tests_before_testresult_is_received(qtbot, widget, tmpdir):
widgets/tests/test_datatree.py:def view_and_model(qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_invalid_target(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_valid_target(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_lineno_none(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_shows_abbreviated_name_in_table(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_shows_full_name_in_tooltip(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_add_tests(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_replace_tests(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_sort_by_status_ascending(qtbot):

@stevetracvc
Contributor Author

OK, just pushed that, but this is way outside my knowledge base. I do remember seeing something about xinerama in some SO thread.

@stevetracvc
Contributor Author

OK looks like that fixed it, I'm going to remove the skips

@dalthviz changed the title from "Use the Python interpreter set in Preferences to run tests" to "PR: Use the Python interpreter set in Preferences to run tests" on Jun 2, 2022
@stevetracvc
Contributor Author

OK I reverted all of those skip commits, so I think it's back to normal now. Thanks @dalthviz

I also bumped pytest to v5 since I think that's why the python 3.7 tests were all failing.

@stevetracvc
Contributor Author

@ccordoba12 looks like you were right about the pytest version for python3.7. All tests are now passing 😄 Do you want me to remove it from this PR and do it as a separate PR? Or just leave it in here?

@ccordoba12 (Member) left a comment

This is great work @stevetracvc! I left a couple of minor comments for you, then this should be ready.

Review threads on spyder_unittest/backend/pytestworker.py (outdated, resolved)
@ccordoba12 (Member) left a comment

Thanks @stevetracvc!

@ccordoba12 merged commit 0b25301 into spyder-ide:master on Jun 10, 2022
stevetracvc added a commit to stevetracvc/spyder-unittest that referenced this pull request Jun 10, 2022
Merged in PR spyder-ide#174, use the python interpreter from settings
@jitseniesen mentioned this pull request on Aug 17, 2022