Setup integration test suite #41
Merged
…tcome. See GitHub for further report formatters: https://github.com/kern/minitest-reporters
Each test run automatically generates a coverage report in the `coverage` folder. The overhead should be negligible (e.g., 2s for 600 tests, https://twitter.com/qxjit/status/53102603533430784). Currently, ~70% coverage is reported.
Capybara (https://github.com/jnicklas/capybara) is the most popular integration testing tool in the Rails community according to Ruby Toolbox (https://www.ruby-toolbox.com/categories/browser_testing). Capybara provides a convenient DSL for writing browser tests.
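As a sketch of what the Capybara DSL looks like in a MiniTest-based integration test (the route helper, form labels, and page content are hypothetical, not taken from this repository):

```ruby
require 'test_helper'

# Hypothetical Capybara integration test; assumes Capybara::DSL is
# included into ActionDispatch::IntegrationTest in test_helper.rb.
class BlueprintFlowTest < ActionDispatch::IntegrationTest
  test 'creating a blueprint through the browser' do
    visit new_blueprint_path            # hypothetical route helper
    fill_in 'Name', with: 'My Blueprint'
    click_button 'Create Blueprint'
    assert page.has_content?('My Blueprint')
  end
end
```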
Adding support for JS-enabled tests has some important implications:
* The transactional database cleaning strategy can no longer be used because JS tests run in a separate thread.
* Loading all fixtures using an alternative DB cleaning strategy (e.g., deletion or truncation) imposes a performance penalty.

Preferring `poltergeist` over `selenium-webdriver`: Of the two most popular browser testing drivers (https://www.ruby-toolbox.com/categories/browser_testing), `poltergeist` is truly headless (no need for Firefox or a virtual framebuffer) and more suitable for CI builds.

Additional dependency `phantomjs` is required: PhantomJS (http://phantomjs.org/) needs to be installed on every machine running the integration tests.

Separate test configuration for integration tests:
* Selectively enable the JavaScript test driver for integration tests.
* Configure the database cleaner deletion strategy.
* Disable WebMock but allow connections to localhost. This addresses the issue `WebMock::NetConnectNotAllowedError: Real HTTP connections are disabled. Unregistered request:` => See: https://robots.thoughtbot.com/using-capybara-to-test-javascript-that-makes-http

Avoid hacks proposing to use transactions and JS tests together: A couple of websites suggest using shared DB connections with transactional fixtures (e.g., http://blog.plataformatec.com.br/2011/12/three-tips-to-improve-the-performance-of-your-test-suite/). However, multiple sources report inconsistent behavior such as race conditions with this monkey-patching approach (see http://infinitemonkeys.influitive.com/dont-use-a-shared-connection-on-full-stack-capybara-tests/).

Additional resources:
* Extensive guide on Rails 4.1 testing: http://www.ironhorserails.com/posts/1?locale=en
* Pro MiniTest and fixtures: http://brandonhilkert.com/blog/7-reasons-why-im-sticking-with-minitest-and-fixtures-in-rails/
* Using fixtures: http://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html
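A minimal `test_helper.rb` fragment for the driver and WebMock configuration described above could look roughly like this (a sketch; exact requires and options may differ from the actual setup):

```ruby
# Sketch: register Poltergeist as the JavaScript driver and allow
# WebMock connections to localhost for Capybara's app server.
require 'capybara/rails'
require 'capybara/poltergeist'
require 'webmock/minitest'

Capybara.javascript_driver = :poltergeist

# Real HTTP connections stay disabled, except for the local test server.
WebMock.disable_net_connect!(allow_localhost: true)
```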
Integration tests should not make requests to external services. Using environment-specific configurations in combination with the WebMock configuration that allows requests to localhost (see `test_helper.rb`) overcomes this issue. Therefore, requests made from JavaScript should be configurable and either be ignored (redirected to `localhost`), or suitable mock implementations must be provided and set up before running the test.
Fixtures cannot be safely and efficiently used for JS-enabled integration tests. FactoryGirl is the most popular replacement for fixtures, offering a convenient syntax. GitHub: https://github.com/thoughtbot/factory_girl Rubydoc: http://www.rubydoc.info/gems/factory_girl/file/GETTING_STARTED.md
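A factory definition sketch in FactoryGirl's syntax (the `Blueprint` model and its attributes are assumptions based on this project's domain, not verified against the codebase):

```ruby
# Hypothetical factory definition in test/factories/blueprints.rb.
FactoryGirl.define do
  factory :blueprint do
    sequence(:name) { |n| "Blueprint #{n}" }
  end
end

# Usage inside a test:
#   blueprint = FactoryGirl.create(:blueprint)
```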
A priori loaded fixtures are incompatible with JS-enabled tests, which require alternative DB cleaning strategies (deletion or truncation). Unless integration tests were run in isolation from other tests, subtle errors caused the tests to behave non-deterministically. Therefore, we need to lazily load test data via factories. In order to optimize performance, only JS-enabled integration tests use the much slower `deletion` DB cleaning strategy, whereas all other tests use `transaction`. NOTICE: In `test_helper.rb`, we have to distinguish between test types via `is_integration_test?`. Using separate `setup` hooks for `ActionDispatch::IntegrationTest` and `ActiveSupport::TestCase` does not work because both hooks would get executed and disturb each other.
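The strategy switch could be sketched as a single shared hook along these lines (the implementation of `is_integration_test?` shown here is an assumption; only the helper's name comes from the commit message):

```ruby
# Sketch: pick the DB cleaning strategy per test type in one shared
# setup hook instead of separate hooks per test base class.
class ActiveSupport::TestCase
  def is_integration_test?
    self.class < ActionDispatch::IntegrationTest
  end

  setup do
    DatabaseCleaner.strategy = is_integration_test? ? :deletion : :transaction
    DatabaseCleaner.start
  end

  teardown do
    DatabaseCleaner.clean
  end
end
```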
The code coverage analysis tool did not correctly report test coverage results if executed via guard-minitest. Externalizing the config into `.simplecov` is considered best practice: simplecov-ruby/simplecov#235 (comment). The `.simplecov` config is optimized for Cloud Stove based on the Rails defaults from: https://github.com/colszowka/simplecov/blob/master/lib/simplecov/defaults.rb

Positive side effect: Adding Spring support for MiniTest accelerates repeated test executions through reloading: https://github.com/guard/guard-minitest#spring

The new Rake task `rake test:coverage` also triggers coverage analysis. SimpleCov is very fast (i.e., a couple of seconds for a ~10 min Rails test suite), and thus coverage analysis is enabled by default. For conditional execution, see: https://github.com/colszowka/simplecov#running-coverage-only-on-demand
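An externalized `.simplecov` file based on the Rails defaults linked above might look like this (a sketch; the filter and group are illustrative, not the project's actual config):

```ruby
# .simplecov — loaded automatically by SimpleCov at startup.
SimpleCov.start 'rails' do
  add_filter '/test/'                  # exclude the tests themselves
  add_group 'Services', 'app/services' # illustrative extra group
end
```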
Knowing how to use Guard for continuous test execution motivates maintaining a valuable test suite. Debugging integration tests (e.g., via `save_and_open_page`) gives you a glimpse of what's going on: https://github.com/jnicklas/capybara#debugging
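A `Guardfile` sketch for continuous test execution with Spring support (watch patterns are typical guard-minitest defaults, not verified from this repository):

```ruby
# Guardfile sketch: rerun tests on change, using Spring for fast boots.
guard :minitest, spring: 'bin/rails test' do
  watch(%r{^app/(.+)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
  watch(%r{^test/.+_test\.rb$})
  watch('test/test_helper.rb') { 'test' }
end
```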
The JS-enabled tests require the `phantomjs` binary. Although the script conditionally installs PhantomJS, it needs to be downloaded on every build because wercker steps are not cached.
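A conditional install step for the CI script could be sketched as follows (the download URL and version are assumptions; the point is only skipping the download when the binary already exists):

```shell
#!/bin/sh
# Sketch: install PhantomJS only if the binary is not already present.
if ! command -v phantomjs >/dev/null 2>&1; then
  PHANTOMJS_VERSION="1.9.8"  # assumed version
  curl -sSL -o /tmp/phantomjs.tar.bz2 \
    "https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2"
  tar -xjf /tmp/phantomjs.tar.bz2 -C /tmp
  sudo cp "/tmp/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64/bin/phantomjs" /usr/local/bin/
fi
phantomjs --version
```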
Integration tests require a clean database, and a truncation/deletion DB cleaning strategy would erase the seeds anyway after the first test. Some tests fail if they are executed immediately after seeding the database. Example: a test that checks whether exactly n blueprints are displayed after creating n.
We should account for local wercker build files in the gitignore.
Each test suite run should start in clean state. Any trash in the test database, possibly caused by failing or interrupted previous tests, will get deleted.
* Top-down ordering (i.e., most high-level factories first) improves clarity
* Proper factory nesting reduces redundancy and clarifies membership
* Correctly reference *_component factories in concrete_component traits
Improve naming
Capybara's `save_and_open_page` HTML looks very different without assets such as CSS and JS. The `show_page` helper leverages the `public` directory of a concurrently running Rails server to load these assets so that the page looks authentic. Also enable asset debugging for the test environment. Capybara is also capable of launching its own server to serve assets during test execution:

```ruby
# Use a Capybara server to load assets
Capybara.server_port = 3001
Capybara.app_host = "http://localhost:#{Capybara.server_port}"
ActionController::Base.asset_host = Capybara.app_host
```
Overly long `more_attributes` fields are cumbersome to define within factories. This helper allows loading these hashes from JSON files in the fixtures directory (similar to WebMock).
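A minimal helper along these lines (the method name and fixtures path layout are assumptions, not the project's actual helper):

```ruby
require 'json'

# Hypothetical helper: load a `more_attributes` hash from a JSON file
# in the test fixtures directory (path layout is an assumption).
module FixtureJsonHelper
  FIXTURES_DIR = File.join(Dir.pwd, 'test', 'fixtures')

  def load_more_attributes(name)
    path = File.join(FIXTURES_DIR, "#{name}.json")
    JSON.parse(File.read(path))
  end
end
```

A factory could then set `more_attributes { load_more_attributes('aws_provider') }` instead of inlining a large hash literal.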
This test uses a subset of the AWS provider data loaded from a JSON file. Error: While the provider name gets displayed correctly, the price list does not show up within PhantomJS. The same behavior was observed using Selenium as the Capybara test driver.
Do not use this test helper: `include ActiveJob::TestHelper`. It allows using `assert_enqueued_jobs 1`, but the jobs won't be present in the database then (i.e., `Delayed::Job.all` always returns `[]`).
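Instead, one can assert directly against the `delayed_jobs` table, which keeps the enqueued jobs visible (a sketch; the job class and enqueue call are hypothetical):

```ruby
# Sketch: assert on the Delayed::Job table directly rather than via
# ActiveJob::TestHelper, so enqueued jobs remain in the database.
test 'enqueues a recommendation job' do
  assert_difference 'Delayed::Job.count', 1 do
    RecommendationJob.perform_later  # hypothetical job class
  end
end
```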
Fix `resource_type` for storage resource factories. The reason why no resources were displayed was that no resources were attached to the provider. The `more_attributes` field of the provider had no effect.
`cloud_application_stories_test`: Uncomment the failing assert when the issue within the deployment recommendations is fixed (always shows $0.00 costs).

`require 'simplecov'` in `config/boot.rb` before `require 'bundler/setup'` results in 2.51% coverage (covers parts of blueprint, component, and base).

609 / 818 LOC (74.45%) vs. 561 / 854 LOC (65.69%)

=> The latest local wercker build reported 95.03% coverage (vs. 81.1% via `rake test`).