Test run results in status bar reporting incorrect count #2143
I believe I have encountered the test-count bug, but I've not seen the test Code Lens disappear during testing.
@d3r3kk, could you please update the issue with instructions to reproduce this?
I still face this issue! For me, every time I re-run the unit tests from a file, the count doubles and the Code Lens (didn't know this term, nice one though!) disappears.
@d3r3kk @MandarJKulkarni
A reproducer, thanks to @ElaineDi:

```python
import unittest

class PassingTest(unittest.TestCase):
    def test_passing(self):
        self.assertEqual(42, 42)

    def test_passing_still(self):
        self.assertEqual("silly walk", "silly walk")

class FailingTest(unittest.TestCase):
    def test_failure(self):
        self.assertEqual(42, -13)

    def test_failure_still(self):
        self.assertEqual("I'm right", "No I am")
```

This reports 2 passes and 2 failures the first time you run all tests, and 4/4 the second time.
+1, reproduced this as well. Never had this issue in previous versions. macOS Sierra 10.12.6.
Fixes microsoft#2143 - Unregister listeners from socket service after each test run
I think this may have broken re-running failing tests. With the following code:

```python
import unittest

class PassingTests(unittest.TestCase):
    def test_passing(self):
        self.assertEqual(42, 42)

    def test_passing_still(self):
        self.assertEqual("silly walk", "silly walk")

class FailingTests(unittest.TestCase):
    def test_failure(self):
        self.assertEqual(42, -13)

    def test_failure_still(self):
        self.assertEqual("I'm right!", "no, I am!")
```
If you run "All Tests", then run just the succeeding tests through the Code Lens, and then run "All Tests" again, it reports that no tests ran. In both failure cases the proper tests are actually run; only the results are reported incorrectly.
Notes for tomorrow-d3r3kk: I have found the source of the trouble. In my latest "fix", when we run tests I register all the event listeners on the […]. In the […], this works when we are running a single file/suite/test, but it fails when we run multiple files, multiple suites (which basically amounts to files), or multiple tests, because […]. I will rethink this design tonight and see if I can't come up with a more useful solution (such as queueing up test jobs, or rethinking how we register and unregister tests).
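The listener-lifetime pattern described in these notes can be shown in miniature with a toy emitter. This is a hypothetical Python sketch, not the extension's actual TypeScript code; `SocketService`, `run_tests`, and the `status` dict are all names invented here for illustration:

```python
class SocketService:
    """Toy stand-in for a test-result socket service (hypothetical)."""
    def __init__(self):
        self._listeners = []

    def on_result(self, listener):
        # Register a listener and hand back a disposer for later cleanup.
        self._listeners.append(listener)
        return lambda: self._listeners.remove(listener)

    def emit(self, result):
        for listener in list(self._listeners):
            listener(result)

def run_tests(service, status, unregister_after_run):
    # Each "run" resets the displayed count and registers a fresh listener.
    status["passed"] = 0
    dispose = service.on_result(lambda r: status.update(passed=status["passed"] + 1))
    for _ in range(2):  # pretend two tests pass during this run
        service.emit("passed")
    if unregister_after_run:
        dispose()  # the fix: drop this run's listener once the run finishes
    return status["passed"]

buggy, status = SocketService(), {}
buggy_first = run_tests(buggy, status, unregister_after_run=False)   # 2
buggy_second = run_tests(buggy, status, unregister_after_run=False)  # 4: the stale listener doubles the count

fixed, status2 = SocketService(), {}
fixed_first = run_tests(fixed, status2, unregister_after_run=True)   # 2
fixed_second = run_tests(fixed, status2, unregister_after_run=True)  # 2
```

Because the stale listener from the first run still fires on every result event, the second run increments the count once per accumulated listener, which matches the doubling users report.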
- Add test to ensure re-running failed tests counts correctly
Environment data
Actual behavior
I use VS Code for my Python projects, and we have unit tests written using Python's unittest module. I am facing a weird issue when debugging unit tests.
Let's say I have 20 unit tests in a particular project.
I run the tests by right-clicking on a unit test file and clicking 'Run all unit tests'. After the run is complete, the results bar displays how many tests passed and how many failed (e.g. 15 passed, 5 failed).
I can also run/debug an individual test, because there is a small link above every unit test function for that. If I re-run the tests from the same file, the results bar displays twice the number of tests (e.g. 30 passed, 10 failed).
The links above individual test functions also disappear, so I cannot run individual tests.
The only way to run/debug individual tests after this is to relaunch VS Code.
Expected behavior
Re-running unit tests should not double the reported count.
Debugging individual test functions should remain possible.
Steps to reproduce:
Run the Python tests by right-clicking on a unit test file and clicking 'Run all unit tests'.
After the run is complete, the results bar displays how many tests passed and how many failed (e.g. 15 passed, 5 failed), and you can run/debug individual tests via the small link above every unit test function.
Now re-run the tests from the same file; the results bar displays twice the number of tests.
Logs
Output for `Python` in the `Output` panel (View → Output, change the drop-down in the upper-right of the `Output` panel to `Python`):

```
Starting the classic analysis engine.
##########Linting Output - pylint##########
No config file found, using default configuration
Your code has been rated at 9.95/10 (previous run: 9.72/10, +0.23)
```
Output from `Console` under the `Developer Tools` panel (toggle Developer Tools on under `Help`):

```
[Extension Host] Python Extension: Error: read ECONNRESET
t.log @ workbench.main.js:sourcemap:270
```