# Capistrano::DBSync

Fast and sophisticated remote database import using the best of Postgres 9.2.x.

Features:
- Allows dumping data selectively - choose between entire, partial or no data
- No downtime on restore - it uses a temporary database while restoring
- Uses Postgres parallel restore
- Uses Postgres custom dump format that is automatically compressed
- MySQL will be supported in the near future

Requirements:

- Capistrano 3.x
- Postgres 9.2.x
- Tested with Rails only, but expected to work in any project containing a `config/database.yml` file on both the local and remote machines.
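
For reference, a minimal `config/database.yml` of the usual Rails shape the gem reads; every name and credential below is a placeholder (the copy on the deployed server supplies the production entry actually used for the dump):

```yaml
# config/database.yml -- placeholder values, adjust for your app
development:
  adapter:  postgresql
  database: myapp_development
  host:     localhost
  username: myapp
  password: secret

production:
  adapter:  postgresql
  database: myapp_production
  host:     localhost
  username: myapp
  password: secret
```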
## Installation
Add this line to your application's Gemfile:

```ruby
gem 'capistrano-db_sync', require: false
```
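
And then execute:

```
$ bundle install
```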
Available in RubyGems: https://rubygems.org/gems/capistrano-db_sync
## Usage
Define your custom settings, if needed. We suggest putting this file at `lib/capistrano/tasks/db_sync.rake`; Capistrano 3.x should load all `*.rake` files by default in the `Capfile`. See the complete configuration reference:

```ruby
require 'capistrano/db_sync'

set :db_sync_options, -> do
  {
    # Hash mapping a table name to a query with data selection, or nil in case
    # no data is wanted for a table. Tables not listed here are dumped entirely.
    data_selection: {
      posts:    "SELECT * FROM posts WHERE created_at > NOW() - interval '60 days'",
      comments: "SELECT * FROM comments WHERE created_at > NOW() - interval '30 days'",
      likes:    nil
    },
    local: {
      cleanup: false, # Whether the downloaded dump directory should be removed after restore
      pg_jobs: 2,     # Number of jobs to run in parallel on pg_restore
    },
    remote: {
      cleanup: true,  # Whether the remote dump directory should be removed after download
    }
  }
end
```
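
As a reminder, the default `Capfile` generated by `cap install` loads custom tasks with a glob along these lines; if your `Capfile` lacks it, the rake file above will not be picked up:

```ruby
# Capfile -- part of the default Capistrano 3 scaffold
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
```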
Then run the import:

```
$ cap production db_sync:import
```

The following steps describe what happens when executing `cap production db_sync:import` (a rough sketch of the equivalent Postgres commands follows the list):
- SSH into the production server with the primary db role in the Capistrano stages configuration
- Connect to the remote Postgres using the credentials of `config/database.yml` on the deployed server
- Dump the database schema, data, triggers, constraints, rules and indexes
- Download the compressed dump files to the local machine
- Restore the dumps on the local machine in the following sequence:
  - database schema
  - data for tables dumped entirely
  - data for tables with partial data specified in the configuration
  - triggers, constraints, rules and indexes
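
As a mental model only, the phases above correspond roughly to the plain Postgres tooling shown below. These commands are an illustrative sketch, not the gem's literal invocations; the database names, dump file names and table lists are assumptions taken from the sample configuration:

```
# Dump schema first; post-data objects (triggers, constraints, rules, indexes) separately
pg_dump --format=custom --section=pre-data  --file=pre_data.dump  myapp_production
pg_dump --format=custom --section=post-data --file=post_data.dump myapp_production

# Full data for tables not listed in data_selection
pg_dump --format=custom --data-only --exclude-table=posts --exclude-table=comments \
        --exclude-table=likes --file=data.dump myapp_production

# Partial data for a configured table, streamed out with COPY
psql myapp_production -c "COPY (SELECT * FROM posts WHERE created_at > NOW() - interval '60 days') TO STDOUT" > posts.copy

# Parallel restore on the local machine (mirrors the pg_jobs setting)
pg_restore --jobs=2 --dbname=myapp_development pre_data.dump
```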
## Contributors

- Rafael Sales @rafaelsales
- Jérémy Lecour @jlecour
- Ankur Agarwal @devilankur18
## TODO

- Increase test coverage
- Add support for MySQL
- Validate database versions before applying any changes
## Contributing

1. Fork it (https://github.com/heartbits/capistrano-db_sync/fork)
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request