Jonahstanley/add more tests #10
Conversation
Scenario: I can answer a problem with one attempt correctly
    Given I am viewing a "multiple choice" problem with "1" attempt
    Then I should see "You have used 0 of 1 submissions" somewhere in the page
Assertions should be on outcomes. We are following the Given-When-Then pattern.
Move the "you have used x of y submissions" and "final check" button assertions out of these tests and make them a separate test that will pass/fail discretely.
Agreed. You can also remove the assertions checking that a problem is marked correct/incorrect, because other test cases cover this logic.
Besides a few style things, it looks good. Did you manage to run it within the Ubuntu VirtualBox?
I still haven't gotten it fully set up, but once I do I will run it many times.
A new function was added: is_css_not_present. This function works like is_css_present in that it will wait, and it can take an optional argument to wait longer. It should be used everywhere INSTEAD of not is_css_present, because in the latter case you are telling Selenium to wait for the thing you don't want to be there to either appear or time out.
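A minimal sketch of such a helper, assuming the splinter browser object that the lettuce harness exposes as world.browser (the exact implementation may differ):

    from lettuce import world

    def is_css_not_present(css_selector, wait_time=1):
        # Polls until the element is absent or wait_time elapses.
        # Unlike `not is_css_present(...)`, this never waits for the
        # unwanted element to appear before giving up.
        return world.browser.is_element_not_present_by_css(
            css_selector, wait_time=wait_time)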
The step now calls is_text_present and is_text_not_present with a wait time of 5 seconds so that the page can be properly refreshed/reloaded if needed. This also gets rid of an "assert not".
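Sketched with the message strings from the scenarios above (the real step wraps these in helpers), the change amounts to this:

    # Old pattern: a single read that races the page reload.
    # assert not world.browser.is_text_present('You have used 1 of 1 submissions')

    # New pattern: poll for up to 5 seconds so the reload can finish.
    assert world.browser.is_text_present('You have used 0 of 1 submissions', wait_time=5)
    assert world.browser.is_text_not_present('You have used 1 of 1 submissions', wait_time=5)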
I accidentally had show_answer instead of showanswer. This error was hidden by a previous default of showanswer=always.
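For example, the corrected metadata entry looks like this:

    # 'showanswer' is the key that is actually read; the typo 'show_answer'
    # was silently ignored and masked by the old default of showanswer=always.
    metadata = {'showanswer': 'always'}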
Scenario: I can answer a problem with multiple attempts correctly and still reset the problem
    Given I am viewing a "multiple choice" problem with "3" attempts
    Then I should see "You have used 0 of 3 submissions" somewhere in the page
These scenarios should be rewritten to follow the Given-When-Then convention. It seems like what you are trying to test here should be broken out so that a test will fail discretely if the functionality is broken.
Let's get this PR merged in first, then refactor.
Looks good. 👍
👍
Suggested Reviewers: @wedaly @jzoldak
In this commit, I have done the following:
problems.py was split because it was a very large file. Now only the steps are in problems.py, whereas the information needed to create problems is in problems_setup. This way, it is much easier to see what is needed when adding new problems to the test suite.
Before, it was not possible to add problems to a test with metadata specified. Now, there are two main ways to add metadata: the problem's entry in the problem dictionary can have a metadata key corresponding to a dictionary of metadata keys and values, and/or the step can specify the metadata. add_problem_to_course now takes an optional third argument, metadata, which is a dictionary of metadata keys and values.
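A rough sketch of how the two sources of metadata combine, with PROBLEM_DICT and create_problem standing in for the real problems_setup helpers:

    def add_problem_to_course(course, problem_type, metadata=None):
        problem_info = PROBLEM_DICT[problem_type]
        # Metadata baked into the problem dictionary itself...
        combined = dict(problem_info.get('metadata', {}))
        # ...optionally extended or overridden by the step.
        if metadata:
            combined.update(metadata)
        create_problem(course, problem_info, combined)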
Now that we can have metadata, these tests could be added:
-- For showing and hiding answers, I check that I can click on a show answer button, that the answer appears, and that there is now a hide answer button; when that is clicked, the answer disappears.
-- For limited attempts, I check that if there is 1 attempt, the final check button is there, and when the answer is submitted (whether correct or incorrect) there is no option to reset the problem. If more than 1 attempt is allowed, I check that the user is properly updated on how many chances they have left, that once they are on the final guess the button changes to final check, and that after that point there is no reset button.
Both of these tests are slower due to needing to check for buttons that do not appear; see the sketch below.
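That slowness is inherent to absence checks: with Selenium's implicit wait in play, the driver keeps looking for the element until the timeout before it can conclude the element is gone. For instance (the selector is a made-up example):

    # This can only pass once the polling window confirms the reset
    # button never showed up, so it tends to cost the full wait.
    assert world.is_css_not_present('button.reset', wait_time=5)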
There were a few tests on the LMS that were running a bit slowly: high-level tabs, partners, and checking ids. They were slow due to unnecessary repeated logic in the scenario, and were sped up by using step tables. Since most of these tests were checking that something was on the page, the checks can be done all at once. Before, the entire scenario (adding a course, logging in, reloading the page, getting the page data) was repeated for each check.
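For illustration, a lettuce step can consume a scenario's table through step.hashes, so the setup happens once and each row becomes a cheap check (the step text and column name are invented):

    from lettuce import step, world

    @step(u'I should see the following tabs:')
    def i_should_see_tabs(step):
        # One browser session validates every row of the step table.
        for row in step.hashes:
            assert world.browser.is_text_present(row['tab'], wait_time=5)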
Hopefully, all of these tests will run nicely on the other machines, and the new scenarios and files are easy enough to read.