Add automated GUI testing framework #1436
@freakboy3742 I haven't been using Toga/BeeWare for very long, but given that the examples dir seems to have a mini app for each widget, could we just use something like pyautogui? Spin up each app and test the widget?
@hyptocrypto This is an option I've considered. There are three notable complications:
So I tried to build a little POC test with pyautogui that runs a Toga app with a button, clicks this button, and checks that the press handler fires.
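The shape of such a POC can be sketched as follows: start the app, inject a click, then assert that the widget's handler actually ran. The real POC would drive the OS cursor with `pyautogui.click(x, y)`; here a stand-in injector triggers the handler directly so the sketch runs headless. All names (`FakeButton`, `inject_click`) are hypothetical illustrations, not Toga or pyautogui APIs.

```python
class FakeButton:
    """Minimal stand-in for a Toga button with an on_press handler."""

    def __init__(self, on_press):
        self.on_press = on_press


def inject_click(button):
    # A real harness would resolve the button's screen coordinates and
    # call pyautogui.click(); this stand-in fires the handler directly
    # so the test can run without a display.
    button.on_press(button)


def test_button_press_fires_handler():
    pressed = []
    button = FakeButton(on_press=lambda widget: pressed.append(widget))
    inject_click(button)
    assert pressed == [button]


test_button_press_fires_handler()
print("button press test passed")
```

The key design point is that only `inject_click` knows how input is delivered; swapping its body for real pyautogui calls would leave the test itself unchanged.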
Those points are absolutely valid. There is also the issue that pyautogui is largely unmaintained: the last commit to it was in September last year. I can see why that might be a problem when we try to use it in tests. The way I see it, we have three options:
I'm not sure which of the three is the best solution. I'm leaning towards option 2, but I'm really not sure. Please let me know what you think.
I'd vastly prefer not to fork or maintain our own GUI testing library if at all possible, but if there's no other option, then I guess that's what we'll need to do. Interestingly, it looks like Al merged the Rubicon changes, and then reverted them a few days later. I don't know what the story is there, because the original PR is still open. I'll reach out to Al and see if I can get some clarification. However, before we go down that path, we should verify whether the PyAutoGUI approach is viable at all. If we can't get a proof of concept working with PyObjC, we're not going to be able to get a proof of concept working with Rubicon, either. So my suggestion would be to try to get a proof of concept working with PyAutoGUI as is, and then re-evaluate our options. The proof of concept doesn't have to be thorough: a couple of simple tests working on all platforms would be sufficient to prove whether the approach is likely to be viable at all.
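Running a couple of simple tests on all platforms could look something like the following GitHub Actions sketch. The job name, test paths, and package list are illustrative assumptions, not the project's actual CI config; the one platform-specific detail is that Linux runners have no display, so GUI input automation needs `xvfb-run`.

```yaml
# Hypothetical workflow sketch; paths and job names are illustrative only.
name: GUI proof-of-concept
on: [pull_request]
jobs:
  gui-tests:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install pyautogui pytest
      - name: Run GUI tests (Linux needs a virtual display)
        shell: bash
        run: |
          if [ "$RUNNER_OS" == "Linux" ]; then
            xvfb-run -a pytest tests/gui
          else
            pytest tests/gui
          fi
```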
So, I created an example PR #1600 that shows how to write UI tests with pyautogui. I ran it on Windows and all the tests are passing! |
This was implemented in #1687. |
At present, we have a unit test suite that validates the Toga interface layer against a dummy backend. This enables us to verify the consistency of the interface, and have confidence that the public interface to widgets doesn't suffer any regressions. This test suite could be more extensive, but it at least exists, and can be built upon with time.

However, we have very little by way of testing of the actual GUI backends. We verify API completeness, but we don't do any automated testing of the GUI behavior of widgets.
This complicates the release process, as there's no way to have certainty that a release contains no unintended changes in widget behavior. It's trivial to introduce a subtle change to a widget's hinting mechanism or signal handling that has unintended consequences in one specific use case.
Describe the solution you'd like
We need a unit test suite on the GUI widget layer.
This needs to be able to run in a completely automated manner, and should be able to verify how a widget actually behaves in the presence of keyboard and mouse input.
Running the unit test suite should give us confidence that a PR doesn't inadvertently change existing behavior, and that releases will contain no regressions.
Describe alternatives you've considered
I suspect the answer will involve some sort of GUI automation; however, I don't know what that looks like in practice. Each platform will almost certainly require an independent approach.