Many tests fail when run locally #50

Closed
thebigmunch opened this issue Feb 3, 2019 · 17 comments
Labels
enhancement Improvement to an already existing feature

Comments

@thebigmunch

thebigmunch commented Feb 3, 2019

63 failed, 627 passed, 1 skipped, 1 xfailed

Same results on my Windows desktop and a Linux server.

Edit: Should've mentioned, just for completeness: Python 3.7 on Windows and Python 3.6 on the Linux server.

@Delgan
Owner

Delgan commented Feb 3, 2019

I ran the tests on both Windows and Linux but did not encounter any errors. I guess this is related to some local configuration.

Could you please share the stack traces of the failing tests with me?

@thebigmunch
Author

Here's the full output.

@Delgan
Owner

Delgan commented Feb 3, 2019

@thebigmunch Thanks for the logs.

I remember encountering the exact same problem. For some reason, it disappeared while I was investigating it.

This seems to be caused by my pyexec fixture. I need to find a way to reproduce it.

Delgan added a commit that referenced this issue Feb 16, 2019
Instead of comparing Loguru output with the standard built-in exception formatting,
add a function to generate formatted exceptions to a file, and compare it with
stderr while running tests.
@Delgan
Owner

Delgan commented Feb 16, 2019

So, I completely removed the dubious pyexec fixture and replaced it with a comparison of the exception formatting against expected generated output.

@thebigmunch Hopefully, tests should pass on your computer too, now. 👍

@thebigmunch
Author

21 failures, including the exception formatting tests, on both systems I tried (Windows and Linux).

Here's the output.

Just a quick note: assuming that python is on PATH and is a supported version is not a great idea. You can use sys.executable to run the same Python interpreter that is already running the code.
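For illustration, a minimal sketch of what that could look like in a test helper (the script path below is hypothetical, not the project's actual layout):

```python
import subprocess
import sys

# Use the interpreter that is already running the test suite instead of
# relying on a "python" command being available on PATH.
result = subprocess.run(
    [sys.executable, "tests/exceptions/example.py"],  # hypothetical script
    stderr=subprocess.PIPE,
    universal_newlines=True,  # keeps the sketch compatible with Python 3.6
)
print(result.stderr)
```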

@Delgan
Owner

Delgan commented Feb 16, 2019

Erf...

I'm too used to virtual environments. Thanks for the hint, I updated the tests with your suggestion.

I'm not sure why one test is failing in test_standard_handler.py while it passed according to your previous output. I will "vendorize" better_exceptions_fork anyway, which will make developing and testing exception formatting easier.

@thebigmunch
Author

thebigmunch commented Feb 16, 2019

Well, py may become the standard way to launch Python everywhere, as it is already the recommended way on Windows anyway. And virtualenv may not be the only way to isolate things in the future.

Moreover, I didn't even get into the security implications: you can't know whether the python on PATH is actually Python at all. Malware and the like is possible.

I haven't had time to look into the failing tests further as of yet. I probably won't have the time this weekend anyway.

@Delgan
Owner

Delgan commented Feb 16, 2019

I was aware of PEP 582, but I didn't know about py; it looks cool. Python packaging is hard, so it's nice to see experimental ideas to improve it.

I'm not satisfied at all with the use of subprocess in unit tests, but I could not find any other way to properly check exceptions formatting.

Thank you for taking the time to test this.

@thebigmunch
Author

On the topic of isolation and testing, I'm quite spoiled by most projects using tox, including my own, for that. Would you accept a PR adding tox for testing if I put it together?

On a related note, I've been quite pleased using poetry for many things, including mostly automatic virtual environments for development. Don't think it's needed like tox, but I just wanted to mention it in case it interested you.

I'm not satisfied at all with the use of subprocess in unit tests, but I could not find any other way to properly check exceptions formatting.

I haven't really looked at the testing there enough to get a grasp on what you're doing with them. That's kinda what I plan on doing next to help understand the landscape better. Perhaps if you could describe it for me in advance, that might help me figure out if there's an alternative or solution quicker.

@thebigmunch
Author

Note: For tox, I don't mean to change Travis to use it unless you want it to. It would just be available for easier local testing.

@blueyed
Contributor

blueyed commented Feb 17, 2019

I agree that it would be great to have a tox setup for testing.
It could later also be used for Travis, with the benefit that it runs the same environment/setup on CI.
I recommend a testing extras_require, which can be used from tox via extras, and also via pip install -e '.[testing]'. But I've just seen that there is a "dev" extra already, which could be used for this.
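For reference, a minimal sketch of what such a tox.ini could look like using the existing "dev" extra; the environment list and pytest invocation are illustrative assumptions, not the project's actual configuration:

```ini
# Sketch only: env list and test command are assumptions.
[tox]
envlist = py36, py37

[testenv]
extras = dev
commands = pytest {posargs}
```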

@blueyed
Contributor

blueyed commented Feb 17, 2019

See also #41 - apparently I have not found the time for this myself. Feel free to give it a go.

@thebigmunch
Author

thebigmunch commented Feb 17, 2019

Ah, right. I forgot that you had opened an issue about that already : P

I'd probably just start with local tox for now.

@Delgan
Owner

Delgan commented Feb 17, 2019

Yeah, as discussed with @blueyed in #41, tox is a must-have. Code quality improvements and easier development are to be expected from integrating it into the codebase. It's currently missing because I've actually never used it, but I should, and I look forward to doing so.

It is also quite possible to integrate it into the Travis tests, since it seems to be a good practice too. But this can be done gradually.

I know poetry and have followed the project closely since its launch. It looks well polished and very promising; I hope it will be widely adopted and become the default tool for Python packaging. I would prefer to wait for a stable 1.0.0 release before considering using it seriously. Also, I'm a little worried to see that there are about as many open issues as closed ones, which is more than 300... 😕

I haven't really looked at the testing there enough to get a grasp on what you're doing with them. That's kinda what I plan on doing next to help understand the landscape better. Perhaps if you could describe it for me in advance, that might help me figure out if there's an alternative or solution quicker.

Well, if you dig into the tests, you will see that they are not "unit" tests but actually "integration" tests. The functions and expected behaviors are all checked through the public API of the logger, which is reset and re-configured between each test.

Apart from that, I'm happy with the test suite, which I try to keep as exhaustive as possible.

The problem with testing exception formatting is that it depends on the environment the exception is raised from. If I raise an Exception in a test function and then format it, the traceback will contain frames from the pytest internals, so I can't compare it with my expected output. Another problem is the file path part of the traceback: depending on the cwd when the tests are run, it might be File "/home/user/loguru/tests/test_case.py" or just File "tests/test_case.py", so it's not reproducible. Also, the line number might change each time a test case higher up in the test file is modified.
There are many edge cases, and I care about testing them to avoid regressions.
The only solution I could think of was to isolate the tested exceptions in another .py file and capture the stderr output while running it in a subprocess, so the stack trace only contains the frames I'm interested in and is easily reproducible.
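As a rough sketch only (the file names and expected-output file below are hypothetical, not the actual test suite), that approach looks something like this:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical standalone script that configures the logger and raises an
# exception in isolation, so the traceback contains only its own frames.
script = Path("tests/exceptions/zero_division.py")
expected = Path("tests/exceptions/zero_division.txt")

# Run the script with the current interpreter and capture what was written
# to stderr, then compare it to the pre-generated expected formatting.
proc = subprocess.run(
    [sys.executable, str(script)],
    stderr=subprocess.PIPE,
    universal_newlines=True,
)
assert proc.stderr == expected.read_text()
```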

@thebigmunch
Author

Just wanted to touch base, since I haven't gotten to do anything loguru-related recently. Besides not having a lot of extra time lately, I also had a recurring shoulder/neck problem pop up much worse than previously. I've had to have my dominant arm in a sling for much of the last 8-9 days and am not sure how long until I'm back near 100%. The injury itself makes it difficult to sit and work at a computer for any useful amount of time. And the arm being in a sling makes the work much harder to do when I am.

Just wanted to let you know I haven't forgotten about this : )

@Delgan
Owner

Delgan commented Feb 28, 2019

@thebigmunch Oh, I'm sorry to hear that. I really hope you will get better soon.

Thank you for taking the time to inform me. Do not worry about Loguru development; things are moving at their own pace, given everyone's occupations and obligations.

Take care of yourself. I wish you a quick recovery. 🙂

@Delgan Delgan added the enhancement Improvement to an already existing feature label Mar 3, 2019
@Delgan
Owner

Delgan commented Jun 29, 2019

Hopefully, tests should now work fine on any platform. Many modifications have been made. Also, tox can now be used to ensure the environment is properly configured. If you notice there are still problems remaining, please tell me!

@Delgan Delgan closed this as completed Jun 29, 2019