
build(docker): add script to clean up docker environment #2013

Merged
merged 6 commits into datahub-project:master on Dec 17, 2020

Conversation

mars-lan
Contributor

Checklist

  • The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
  • Links to related issues (if applicable)
  • Tests for the changes have been added/updated (if applicable)
  • Docs related to the changes have been added/updated (if applicable)

@cobolbaby
Contributor

You can try the following command:

docker-compose -p datahub down -v
docker system prune --all --force
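For context: docker-compose -p datahub down -v stops and removes the compose project's containers, networks, and named volumes, while docker system prune --all --force additionally deletes all stopped containers, unused networks, build cache, and (because of --all) every image not used by a running container — host-wide, not just DataHub's. That host-wide scope is what the following comments push back on.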

@jplaisted
Contributor

LGTM if we want a "real" nuke, but AFAIK that command also gets rid of images we probably don't need to get rid of, like Kafka and Elasticsearch. Up to you if you want to modify the script to delete just the DataHub images and volumes.

@jplaisted
Contributor

Yeah, testing that command I had to re-download things like Neo4j and Elasticsearch, which all took a while. Can we kill just our images and volumes?
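For reference, a rough sketch of the more targeted cleanup being asked for here; the linkedin/datahub-* image prefix and the "datahub" volume-name match are assumptions for illustration, not names confirmed in this thread:

# Stop the stack and remove its containers, networks, and named volumes
docker-compose -p datahub down -v
# Remove only images matching the assumed DataHub repository prefix
docker images --filter "reference=linkedin/datahub-*" -q | xargs -r docker rmi
# Remove leftover volumes whose names contain "datahub" (assumed naming)
docker volume ls -q | grep datahub | xargs -r docker volume rm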

@mars-lan
Contributor Author

mars-lan commented Dec 17, 2020

Yeah, testing that command I had to re-download things like Neo4j and Elasticsearch, which all took a while. Can we kill just our images and volumes?

Since this is really to nuke everything so the user can start from a complete clean slate to avoid any possible issues, I think it's okay to drop and redownload those images?

@jplaisted
Contributor

Since this is really to nuke everything so the user can start from a complete clean slate to avoid any possible issues, I think it's okay to drop and redownload those images?

/shrug in theory they're isolated from the actual volumes, right? Try it yourself; it has to download a few GBs. Not good for time or data caps :)

@mars-lan
Contributor Author

Since this is really to nuke everything so the user can start from a complete clean slate to avoid any possible issues, I think it's okay to drop and redownload those images?

/shrug in theory they're isolated from the actual volumes, right? Try it yourself; it has to download a few GBs. Not good for time or data caps :)

Good point. Will use docker-compose rm instead of docker system prune.
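A minimal sketch of that docker-compose-based cleanup, assuming the same datahub project name used earlier in the thread:

# Stop and force-remove the project's containers along with their
# anonymous volumes, leaving all images (Kafka, Elasticsearch, etc.) intact
docker-compose -p datahub rm --force --stop -v

Unlike docker system prune, this leaves every image on disk, so nothing has to be re-downloaded.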

@jplaisted merged commit 36b79a3 into datahub-project:master on Dec 17, 2020
@jplaisted
Contributor

Thanks!
