
Using requests for first iteration of automated webapp testing #615

Closed

hdoupe opened this issue Aug 11, 2017 · 4 comments
hdoupe (Collaborator) commented Aug 11, 2017

In this issue, I am continuing the conversation in #602 about using requests to post data to TaxBrain.

@talumbau gave an excellent explanation of how data is posted to TaxBrain (and other websites):

I see where you are going and this is a good idea. There is a "tool" that is missing from your proverbial toolbox that will help you achieve this goal. First I will explain what happens when someone does a TaxBrain submission:

The user goes to /taxbrain and the TaxBrain screen loads. They are given a "session ID" by the site and this is kept in the browser session as long as the tab stays open. The user fills out all desired boxes and hits the submit button. This generates a specific type of message from the user's browser sent to ospc.org using the HTTP protocol. The message is called a "POST". This HTTP POST contains all of the data that the user filled out in the forms, along with the session ID. The website is designed to run specific code when it receives such a POST message. This code does all of the work of storing the user's input as a row in the taxbrain_taxsaveinputs table (a Model "instance" in Django terms), submitting the job to the worker nodes, waiting for the results, etc.

All of this can, of course, be done programmatically via a script instead of typing the data in the boxes displayed in the browser. In other words, one can write Python code (or code in many other languages) that loads the /taxbrain page, but instead of displaying it in a browser, just keeps the data in a data structure. Then, one can construct an HTTP POST message with the desired parameter values (really just key-value pairs where the keys are the parameter names and values are what the user would type in the boxes) and the session ID, and then submit this POST to the site.
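For concreteness, a scripted submission along those lines might look roughly like the sketch below. The host URL, form-field names, and parameter values are illustrative assumptions, not the actual TaxBrain form fields; the real names come from the rendered /taxbrain page, and Django also expects a CSRF token to accompany the POST.

```python
# Minimal sketch of posting TaxBrain form data with `requests`.
# Field names and values below are hypothetical placeholders.
import requests

BASE_URL = "https://www.ospc.org"  # assumed TaxBrain host

with requests.Session() as session:
    # Load /taxbrain first so the session picks up cookies,
    # including the CSRF token that Django requires for POSTs.
    resp = session.get(BASE_URL + "/taxbrain")
    resp.raise_for_status()
    csrf_token = session.cookies.get("csrftoken")

    # Key-value pairs mirroring what a user would type into the boxes.
    # The parameter names here are illustrative only.
    form_data = {
        "csrfmiddlewaretoken": csrf_token,
        "start_year": "2017",
        "II_em": "8000",  # hypothetical reform value
    }

    post_resp = session.post(
        BASE_URL + "/taxbrain",
        data=form_data,
        headers={"Referer": BASE_URL + "/taxbrain"},
    )
    post_resp.raise_for_status()
    # On success Django typically redirects to a results page whose
    # URL identifies the submitted model run.
    print(post_resp.url)
```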

I was able to use the requests library to post reforms both through the TaxBrain GUI interface and through the file upload interface. I had some difficulty retrieving the results from TaxBrain. I hoped that I would be able to retrieve the CSV output but was unsuccessful. I found a hacky, short-term solution that pulls the results from the taxbrain_taxsaveinputs table in db.sqlite3. Then, I called the taxcalc.dropq.run_nth_year_tax_calc_model function and reshaped the results to match the format of the TaxBrain results. Finally, I compared the two results. All of this code is available in the reform_input_processing2 branch of my webapp-public fork.
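As a rough illustration of that comparison (not the exact code in the branch), the sketch below reads the most recent row from the SQLite table and leaves the dropq call schematic, since the arguments to run_nth_year_tax_calc_model vary across taxcalc versions; the table layout and comparison step are assumptions.

```python
# Rough sketch of the "hacky" comparison: pull stored TaxBrain results
# from db.sqlite3 and compare them with locally computed dropq results.
import sqlite3
import pandas as pd

conn = sqlite3.connect("db.sqlite3")

# Grab the most recently stored inputs/results row (layout assumed).
stored = pd.read_sql_query(
    "SELECT * FROM taxbrain_taxsaveinputs ORDER BY id DESC LIMIT 1",
    conn,
)
conn.close()

# Compute the expected results locally. The exact call signature depends
# on the installed taxcalc version, so it is left schematic here:
# from taxcalc import dropq
# expected = dropq.run_nth_year_tax_calc_model(...)

# After reshaping `expected` into the same layout as the stored TaxBrain
# results, compare the two tables (raises on any mismatch):
# pd.testing.assert_frame_equal(stored_results, expected_results)
print(stored.columns.tolist())
```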

I ran this on a series of test cases here. The reform files passed, but the tests posting data to the GUI interface failed. This is likely because of issues such as #609, #607, #598, and #596.

I'm sure that there is a better way of doing this using mocked functions. However, I tried to throw this together quickly so that I could check the results from dropq with a little more certainty than simply comparing the TaxBrain results and the taxcalc results before the dropq algorithm is applied.

What does everybody think of this approach? If everyone approves, I'll send an email to the Policy brains list with my findings as requested in RELEASE_PROCESS.md.

@martinholmer @PeterDSteinberg @brittainhard @andersonfrailey @Amy-Xu @GoFroggyRun

martinholmer (Contributor) commented

@hdoupe, This is an impressive expansion in the testing of TaxBrain. Good work!
Of course, as you gain experience I'm sure you will refine your testing procedures. But this is a major improvement.

My only question is shouldn't your testing procedures be added to the webapp-public repo? I see that you have added narrow specific tests for each failure you've found and that is great. Shouldn't the general testing framework you've created in your fork of webapp-public be merged into the main GitHub webapp-public repository? Or are you waiting for comments from the Continuum Analytics staff before doing that?

hdoupe (Collaborator, Author) commented Aug 14, 2017

@martinholmer Thanks. I'm just happy that I will not have to spend as much time manually populating fields in the TaxBrain GUI, manually uploading files, and manually checking answers against those from taxcalc.

@martinholmer said

My only question is shouldn't your testing procedures be added to the webapp-public repo? I see that you have added narrow specific tests for each failure you've found and that is great. Shouldn't the general testing framework you've created in your fork of webapp-public be merged into the main GitHub webapp-public repository? Or are you waiting for comments from the Continuum Analytics staff before doing that?

Yes, I hope that some form of these testing procedures will be added to the webapp-public repo. I am waiting on feedback from the Continuum Analytics staff before doing this.

hdoupe (Collaborator, Author) commented Jan 17, 2018

I'm closing #615. I think a better approach to testing for now is to continue beefing up the current test suite.

hdoupe (Collaborator, Author) commented Jan 17, 2018

Closed #615

hdoupe closed this as completed Jan 17, 2018