This repository has been archived by the owner on Dec 4, 2018. It is now read-only.

Update functional tests #839

Merged
merged 8 commits from functional-tests into master
Nov 29, 2017

Conversation

niqjohnson
Member

Update functional tests to more fully check that the eRegs user interface is working as expected

Additions

  • Added tests for search
  • Added tests for responsiveness

Changes

  • Updated tests for diffs
  • Updated tests for the table of contents
  • Updated tests for navigation
  • Updated tests for interpretation

Testing

  1. Break out those terminal tabs!
    1. In one tab, start up regulations-site (e.g. `python manage.py runserver 8001`)
    2. In a second tab, go to the root of regulations-site and start up the dummy API with `./dummy_api/start.sh 0.0.0.0:8282` (or whatever port you usually run the real API from regulations-core on)
    3. In a third tab, start up Sauce Connect however you usually do that
  2. Visit http://localhost:8001/ (or whatever port you're running regulations-site on) in a browser; you should see only a single regulation, 1005, on the home screen, and all the content for that regulation should be placeholder gibberish
  3. In a fourth terminal tab, go to the root of regulations-site and run `grunt test`
  4. Waaaaaaaaiiiiiiit for all the functional tests to run in both Chrome and IE on Sauce Labs (it'll take 6–7 minutes for everything to finish)
  5. Verify that all the tests pass
  6. If you want to, log into Sauce Labs to watch the recordings of the tests to make sure they're doing the right things
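Step 2's "is the site up?" check can also be scripted. A minimal sketch, assuming Python 3 and that regulations-site is on port 8001 (the URL is an assumption — adjust it to match your `runserver` port):

```python
import urllib.request


def site_is_up(url='http://localhost:8001/'):
    # Returns True only if the home page answers with HTTP 200.
    # Any connection error or timeout is treated as "not up".
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```

This only confirms the server is responding; you still need to eyeball the page to verify that regulation 1005 and its placeholder content appear.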

Screenshots

You should see this in your terminal when `grunt test` finishes:

[screenshot: eregs-terminal]

Notes

  • I know almost nothing about Python, yet here you are reviewing a bunch of Python code I wrote 😄 . Definitely let me know if it would make sense to refactor anything (at least I managed not to use any terminal semicolons, I think).

Todos

  • As noted in `search_test.py`, search isn't actually working locally. When you search for any term, the app returns every section of the dummy reg as a result. So right now the search test can't fully test searching a reg, but it does check to make sure that all the UI elements used to do a search work. When we get search working locally, it should be pretty easy to update `search_test.py` to account for correct search results. Should we hold off on adding `search_test.py` until then, though?

This commit adds a browser test that checks the functionality of searching a regulation. Specifically, the tests check to make sure that clicking the search icon in the nav panel displays the search bar, searching a term displays a results page, and clicking a search result takes the user to the appropriate subsection.
Update the functional test for the table of contents to more thoroughly check that the correct content and effective date are displayed when a TOC link is clicked.
Update the functional test for navigation to check that internal references to regulation subsections link to the proper subsection.
Update the functional test for interpretations to more fully check for the proper "show/hide" controls and to check for a link to the correct interpretation section in the interpretation expandable.
The functional test for search runs in Sauce Labs in a window that is too small to accommodate the 1366x768 resolution that was being used in this test. Also, the IE window cut off the result link that the test was trying to click, making it unclickable and making the test error.

Both problems are now fixed: the window is set to 1024x600 (as in the table of contents test) and the last result link on the first page of results—which is reliably always in the viewport and clickable—is now being tested.
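The fix described above comes down to two Selenium calls. A hedged sketch, using the Selenium 3-era element-lookup API and a placeholder CSS selector (not the suite's real one); `driver` stands in for the test's WebDriver:

```python
def click_last_result(driver):
    # Match the table of contents test's window size so the page fits
    # inside the Sauce Labs VM screen.
    driver.set_window_size(1024, 600)
    results = driver.find_elements_by_css_selector('.search-results a')
    # The last result on the first page is reliably inside the viewport,
    # so it stays clickable even when IE clips the bottom of the list.
    results[-1].click()
```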
Member

@chosak chosak left a comment


Tests work great for me!

The use of `nosetests -tc` here seems to require an installation of nose-testconfig, which, as far as I can tell, isn't in the requirements files anywhere.

When the tests do run, I get an `ImportError` (`No module named selenium`), I think again because there's a missing dependency on selenium.
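If those two packages really are missing dependencies, the fix would just be a couple of lines in a test requirements file. A sketch — the file name is a guess, and the entries are unpinned, not taken from the repo:

```
# e.g. requirements_test.txt (hypothetical file name)
nose-testconfig
selenium
```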

@@ -4,6 +4,7 @@
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.color import Color
Member


Minor Python thing, we typically try to alphabetize these imports from their source module (e.g. selenium.webdriver.support.color should come before the other selenium.webdriver.support imports).

Member Author


Fixed in 6381fa7

'2011-11111',
'2012-12121'
]
for index, item in enumerate(timeline_list_items):
Member


It might be better to write this test the other way around: iterate over the expected output and confirm that it matches what you actually got. Here you iterate over the observed output, so if you didn't get back any `.history-toc-list` elements at all, this test would still pass incorrectly.
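The suggestion above can be sketched with plain lists (`timeline_list_items` stands in for the text of the Selenium elements; the values mirror the quoted diff):

```python
expected_items = ['2011-11111', '2012-12121']


def check_timeline(timeline_list_items):
    # Iterate over the *expected* values: if the page yields no
    # .history-toc-list elements, the length check fails instead of
    # the loop silently running zero times and the test passing.
    assert len(timeline_list_items) == len(expected_items)
    for index, expected in enumerate(expected_items):
        assert timeline_list_items[index] == expected
```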

Member Author


GOOD CATCH! It turns out that, thanks to a dumb mistake I made with the selector, this exact thing was happening, and the test was passing despite an empty timeline_list_items list. This should be fixed (and now correctly passing) in 6381fa7.

self.test_url + '/1005-2/2011-11111',
self.test_url + '/1005-2/2012-12121'
]
for index, link in enumerate(timeline_links):
Member


Same comment as above.

Member Author


Fixed in 6381fa7. I also updated a similar test in search_test.py to iterate over the expected output instead of the actual output.

@@ -26,6 +26,9 @@ def test_interps(self):
# should have the appropriate header
self.assertTrue('OFFICIAL INTERPRETATION TO 2(h)' in interp_dropdown.text)

# should have the "SHOW" link
self.assertTrue('SHOW' in interp_dropdown.text)
Member


Slightly nicer way to do this in Python: `self.assertIn('SHOW', interp_dropdown.text)`
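For reference, the two assertion styles side by side; `dropdown_text` is stand-in data here, not pulled from the live page:

```python
import unittest


class InterpDropdownExample(unittest.TestCase):
    dropdown_text = 'OFFICIAL INTERPRETATION TO 2(h) SHOW'

    def test_show_link(self):
        # Both assertions pass, but on failure assertIn reports the
        # haystack ("'SHOW' not found in ...") instead of the opaque
        # "False is not true" that assertTrue produces.
        self.assertTrue('SHOW' in self.dropdown_text)
        self.assertIn('SHOW', self.dropdown_text)
```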

Member Author


Fixed in 6381fa7

for index, result in enumerate(search_results):
self.assertEqual(result.text, expected_results[index])

# clicking a serrch result loads the appropriate subsection
Member


Typo: "serrch".

Member Author


Fixed in 6381fa7

Fix a few functional test issues uncovered during code review:

- Alphabetize imports
- Iterate over lists of expected outputs rather than lists of actual outputs
- Use `assertIn` instead of `assertTrue` where applicable
- Fix typos in comments
@ascott1
Member

ascott1 commented Nov 29, 2017

Approved on my end. Let's see this 👶 merged!

@niqjohnson niqjohnson merged commit 716af4c into master Nov 29, 2017
@niqjohnson niqjohnson deleted the functional-tests branch November 29, 2017 20:17