Testing Improvements #602
I would like to create a test that feeds a reform (or a series of reforms) altering all of the parameters through the interface and validates that the inputs are processed as expected. However, the process of inputting over a hundred parameters into TaxBrain takes several hours and is painfully tedious. I've been trying to think of a way to come as close as possible to doing this without spending several hours punching values into the interface. Here are some of my ideas:

[…]

The idea is to figure out a way to simulate, as closely as possible, the act of feeding user inputs through TaxBrain and checking the results without actually doing it. Of course, I'm open to other ideas, and the three that I listed may not be very good. This is something that we could start thinking about. The next step would be to create a keywords-to-dropq dictionary like the one created in the […]
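For concreteness, here is a hedged sketch in Python of what such a keywords-to-dropq mapping could look like; every GUI field name and parameter key below is a hypothetical placeholder rather than TaxBrain's actual naming:

```python
# Hypothetical sketch: the GUI field names and dropq parameter keys here
# are illustrative placeholders, not TaxBrain's actual names.
GUI_FIELD_TO_PARAM = {
    "II_em": "_II_em",              # personal exemption amount
    "STD_single": "_STD_single",    # standard deduction, single filers
    "FICA_ss_trt": "_FICA_ss_trt",  # Social Security payroll tax rate
}

def to_reform_dict(gui_inputs, start_year):
    """Translate {gui_field: value} inputs into a {year: {param: [value]}} reform."""
    params = {GUI_FIELD_TO_PARAM[field]: [value]
              for field, value in gui_inputs.items()}
    return {start_year: params}

# A test could then assert that the interface produced the expected reform.
assert to_reform_dict({"II_em": 4500}, 2017) == {2017: {"_II_em": [4500]}}
```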
@hdoupe said in TaxBrain issue #602:
Automated testing of the TaxBrain GUI interface is definitely an important goal, so thank you for raising this issue for discussion. I don't know much about the internal workings of TaxBrain, but what I do know suggests that for the automated testing to be comprehensive, it needs to replicate the TaxBrain user experience using the GUI interface. I don't know enough to say whether your approaches 1 and 3 would accomplish that replication, but clearly your approach 2 does replicate the TaxBrain user experience. So, the rest of my response to your questions focuses on your second approach:
First, having the HTML source code is not essential for implementing this approach (but having it might make the implementation easier). Second, we already have source code that implements your second approach. That source code was used extensively to test the TaxBrain GUI interface during late 2016, especially the reform-delay feature using the […]. One strategy you could consider is to download the source code for release 0.7.3 and then copy the contents of the […]. If you have any questions about the old […].
@martinholmer said:
I had the same thoughts on this.
That's great. Thanks for pointing this out. I'll look into this approach. And, yes, this will be much better than reinventing the wheel. Thank you for your feedback.
@hdoupe said:
I see where you are going, and this is a good idea. There is a "tool" missing from your proverbial toolbox that will help you achieve this goal. First, I will explain what happens when someone does a TaxBrain submission: the user goes to […]. All of this can, of course, be done programmatically via a script instead of typing the data into the boxes displayed in the browser. In other words, one can write Python code (or code in many other languages) that loads the […]. You can find out more about how all of this works by searching online for something like "HTTP introduction POST form submission". A good Python package that is useful for these purposes is called requests.
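As a hedged sketch of that idea (the URL and form field names below are placeholders; the real ones would have to be read from TaxBrain's HTML source), a script using requests might look like this:

```python
import requests

# Placeholder URL and field names; TaxBrain's actual endpoint and form
# fields would need to be taken from the page's HTML source.
URL = "http://localhost:8000/taxbrain/"

session = requests.Session()

# GET the form page first so the session receives Django's CSRF cookie.
session.get(URL)
csrf_token = session.cookies.get("csrftoken")

# POST the form data, echoing the CSRF token back as Django expects.
form_data = {
    "csrfmiddlewaretoken": csrf_token,
    "start_year": "2017",
    "II_em": "4500",  # hypothetical parameter field
}
response = session.post(URL, data=form_data, headers={"Referer": URL})
print(response.status_code, response.url)
```

The same pattern would generalize to submitting all hundred-plus parameters from a dictionary instead of typing them into the browser.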
@talumbau explained in response to @hdoupe's questions on approaches to TaxBrain testing:
@hdoupe, I think the approach being suggested here by @talumbau is more promising than the Selenium approach (because Selenium can be touchy when working with a webpage). Just sending a POST form submission to the TaxBrain server is much easier and more reliable. However, it does seem to leave untested the first steps of submitting a job to the TaxBrain server. I don't know enough about what happens between the time a TaxBrain user clicks on the Show Me the Results button and the time the POST request arrives at the TaxBrain server.
@talumbau said:
Thank you for your response. I'm new to web development and have been operating under the assumption that some black-box magic occurred between submitting the form and the data being received by the functions in […].

@martinholmer said:
I agree with this.
I also do not know how much we would be leaving untested, and I will look into this. @talumbau @brittainhard @PeterDSteinberg, we would appreciate your thoughts on this.
@martinholmer @hdoupe Django is really great in that it provides a Client object that can handle GET/POST requests and database object creation. Documentation is here: https://docs.djangoproject.com/en/1.11/topics/testing/ When you run the test suite it usually creates a test database, which is then destroyed after the tests are done. You also get a mocked Client object that will handle any requests, so that you can test responses in isolation. Here is one such example:
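(The URL, form fields, and assertion below are hypothetical placeholders; a real test would use TaxBrain's own URL names, forms, and models.)

```python
# Hedged sketch of a Django test-client POST test; the URL and form
# fields are placeholders, not TaxBrain's actual ones.
from django.test import TestCase

class TaxBrainInputTest(TestCase):
    def test_post_inputs(self):
        data = {"start_year": "2017", "II_em": "4500"}  # placeholder fields
        response = self.client.post("/taxbrain/", data)
        # A successful submission typically redirects to a results page,
        # while a validation error re-renders the form with status 200.
        self.assertIn(response.status_code, (200, 302))
```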
The Django test client is also a sort of black box. In theory this should submit a dropq request, but it doesn't. It returns a mocked response object when your POST data does not cause any errors. I haven't investigated this test to see whether it also created the TaxSaveInputs object, but I suspect it didn't.
@brittainhard Thanks, I'll look into this. I was able to use the requests library that @talumbau mentioned to submit a form on my local machine, but I haven't figured out how to get the results. I'll open another issue about this so that I don't clutter up this thread while I'm figuring out how all of this POST and GET stuff works.
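For reference, one hedged guess at how the results step might work; everything here (the results URL, the readiness check, and the timing) is an assumption about TaxBrain's flow rather than its documented behavior:

```python
import time

import requests

session = requests.Session()

# Placeholder: in practice this would be the results page that the form
# POST redirected to (requests follows redirects by default, so
# response.url from the submission would supply it).
RESULTS_URL = "http://localhost:8000/taxbrain/42/"

for _ in range(30):
    page = session.get(RESULTS_URL)
    # Placeholder readiness check; the real "still computing" marker
    # would have to be identified from the page's HTML.
    if "not ready" not in page.text.lower():
        break
    time.sleep(10)  # results can take a while to compute
print(page.status_code, len(page.text))
```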
It would seem that issue #602, which asked for help in testing the TaxBrain GUI input logic, has generated much activity by @hdoupe. Thanks for all the work on this, Hank. Several bugs have been identified in issues, and unit tests have been added to document each one of those bugs. Also, an initial version of a systematic testing framework has been developed in pull request #627 and is being reviewed by Continuum Analytics staff. So #602 has been very effective at generating a discussion of how best to approach systematic TaxBrain input testing, and I assume that discussion will continue in comments on pull request #627.
I think it would be good to have a discussion about testing strategies that would go beyond unit tests and mocking functions. One goal for issue #543 was to add a mocked test, much of which we have in the MockCompute object. I think we should also talk about creating unit tests that run the actual services and test the output. These would only be run locally, since they would require the taxpuf file; one way to enforce that is sketched below.
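One hedged way to keep such tests local-only, assuming pytest is the test runner and that the presence of the taxpuf file can be checked on disk (the path below is a hypothetical placeholder):

```python
import os

import pytest

# Hypothetical path; the real location and name of the taxpuf file may differ.
TAXPUF_PATH = os.path.join(os.path.dirname(__file__), "puf.csv.gz")

requires_taxpuf = pytest.mark.skipif(
    not os.path.exists(TAXPUF_PATH),
    reason="requires the local taxpuf file, which is not distributed",
)

@requires_taxpuf
def test_real_services_run():
    # Would exercise the actual services end-to-end instead of MockCompute.
    ...
```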
Issue #600 is one we can get done quickly, and the issue in #601 needs to be dealt with.
@martinholmer @MattHJensen @hdoupe @PeterDSteinberg, if you have any suggestions, I think this is a good place to discuss them.