All system components should have consistent continuous testing and build pipelines #34

gobengo opened this issue May 11, 2018 · 6 comments

gobengo commented May 11, 2018

@eenblam and I talked today about build pipelines and standardizing them so we can catch inevitable mistakes before pull requests are merged. Assuming everyone is okay with using travis-ci.org to start with (we can always add or switch later), the table below can track our progress in adding a baseline of automated testing to each of our major system component repositories.

Definitions

  • .travis.yml - The repo has this file that configures Travis (a rough example config follows this list)

  • Travis Badge - The README of the repo has a pretty badge to show how great the build is

  • Deployable - When the build passes on master, the project should be 'deployed' automatically. e.g. firmware builds might be deployed to a publicly accessible URL for people to download from. Web apps might get automatically deployed to staging.

    • sudomesh/monitor
      • Heroku Pipeline. Any Pull Request can be built/deployed as a "Review App". Once merged to master, Heroku can wait until CI passes (e.g. tests pass) and only then deploy to a peoplesopen-monitor-production app.

        (Heroku Pipeline is a little opaque and definitely nonfree, but I wanted to prototype with it as the quickest way to get a reasonable separation between deploying new branches and deploying prod. I hope we use something else in the future, and that the whole pipeline can be encoded in a file in this repo.)

        The Heroku Pipeline is called peoplesopen-monitor and is owned by a new sudomesh heroku team that @eenblam created. Inquire to be added to it.

      • Travis creates git tags and pushes them to GitHub as a release

    • sudomesh/exitnode
      • Travis should be able to build and run the process
      • It makes sense to do an automated staging deploy to ensure the deploy scripts work
      • Eventually we can deploy to production as well, using the same tested deploy script
      • tagged releases
    • sudowrt-firmware
      • Deploy built docker image
      • Deploy all builds, even pre-release ones, to ??? (a builds server? S3?)
      • Deploy master or tagged builds to build server, zemondo
      • tagged releases
    • peoplesopen-front
      • travis npm test
      • Automate deployment to staging
      • Automate deployment to production
      • tagged releases
    • peoplesopen-dash
      • travis-ci tests
      • Staging deploys (staging won't work exactly like a home node, but we can do cursory tests)
      • Tag git releases in release branch
  • Versioned releases - Not every commit in the repository makes sense to officially support. Instead, we can use git tags to mark known-working versions of the code, and only support or test at these milestones. If anyone finds a bug, it is invaluable to know from the bug report which version of the code they were using.

  • Staging - Even when the build passes, there can still be bugs in the software or the deployment process. Quality in production can be improved by first doing a practice deploy to a production-like 'staging' environment where actual humans can do a final round of testing and acceptance. Without staging, we test in production, and users are worse off for it.
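To make the baseline concrete, here is a rough sketch of the kind of .travis.yml I have in mind: run the tests on every push, and (for repos that produce artifacts) attach a tagged build to a GitHub release. This is only an illustration; the language, artifact path, and token variable are placeholders, not taken from any of our repos.

```yaml
# Illustrative baseline .travis.yml (placeholder values; adjust per repo)
language: node_js
node_js:
  - "8"
script:
  - npm test                 # the per-repo test suite runs on every push/PR
deploy:
  provider: releases         # attach the artifact to a GitHub release
  api_key: $GITHUB_TOKEN     # set in the Travis repo settings, never committed
  file: build/output.tar.gz  # hypothetical artifact name
  skip_cleanup: true
  on:
    tags: true               # only deploy versioned (tagged) builds
```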

| Software Project | .travis.yml | Travis Badge in README | Deployable | Versioned Releases | Staging |
| --- | --- | --- | --- | --- | --- |
| sudomesh/monitor | Build Status | yes | Heroku Pipeline | PR | peoplesopen-monitor-staging |
| sudomesh/exitnode | Build Status | PR | | | |
| sudomesh/sudowrt-firmware | Build Status | yes | | | |
| sudomesh/peoplesopen-dash | Build Status | yes | | | |
| sudomesh/peoplesopen-front | Build Status | yes | PR | | peoplesopen-front-staging |
| sudomesh/meshnode-database | Build Status | | | | |

eenblam commented May 13, 2018

Summarizing discussion about sudomesh/peoplesopen-front during BYOI office hours:

@sierkje:

  • Meant to be temporary; looking for ways to make this more accessible
  • Mostly static right now, but need to move to CMS to make it more accessible to non-coders.
  • Not sure what we'd test at the moment.
  • More important to improve the system administration of the DO droplet it's running on. No change management in terms of what happens on that system.

@gobengo: Yeah, we don't necessarily need tests for the website. I'm shooting for, at a minimum, getting coverage around the build process.

@eenblam: Since we're trying to answer the question of what "deployability" means for this repo, maybe we can focus on adding infrastructure-related scripts so we can smoothly spin up a new droplet, secure it, redeploy the site, etc.

Also cc @bennlich


eenblam commented May 13, 2018

I think that "verifying deployability" for a lot of our stuff will have more to do with having solid provisioning scripts that we can use to spin up test instances and run basic integration tests against.

To this end, we might consider setting up a Digital Ocean team.


eenblam commented May 13, 2018

We also discussed standardizing on adding a Makefile to every repo so that things can always be built with git clone <repo> then something like make build && make test && make deploy.
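If every repo exposes those targets, the Travis side of each repo could reduce to roughly the same few lines. A sketch, assuming a Makefile with build/test/deploy targets (not an actual config from any of our repos):

```yaml
# Hypothetical .travis.yml built on the proposed standard make targets
language: generic
script:
  - make build          # same entry point in every repo
  - make test
deploy:
  provider: script
  script: make deploy   # each repo's Makefile decides what "deploy" means
  on:
    branch: master      # only deploy builds of master
```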


eenblam commented May 14, 2018

Relevant notes from @jhpoelen in the mailing list:

As you probably know, I am big fan of build automation, especially when pushed to continuous deployment (CD). In my experience, CD helps to build testable software especially when changes propagate quickly and build processes are snappy. I am probably preaching to the choir.

Some (historic) notes:

sudomesh/firmware has some build automation (travis-ci cron jobs in addition to commit triggers) and pushes a build environment image to dockerhub. This automation helped catch a bunch of dependency issues (e.g., sudomesh/sudowrt-firmware#133) as they happened. At the time, we made an attempt to make the firmware build environment (docker image) process independent of internet access. For a thread, see sudomesh/sudowrt-firmware#111 (comment).

sudomesh/firmware has release tags. When a release is tagged, a tagged firmware build image is pushed to dockerhub automatically on a successful build. The .travis.yml and accompanying scripts describe how this works.
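For reference, a tag-gated Docker Hub push in .travis.yml usually looks roughly like the sketch below. The image name and credential variables are placeholders; the real wiring lives in sudowrt-firmware's own .travis.yml and accompanying scripts.

```yaml
# Rough sketch of a tag-gated Docker Hub push (placeholder image name and env vars)
services:
  - docker
script:
  # TRAVIS_TAG is empty on non-tag builds, so fall back to "latest" for the build step
  - docker build -t sudomesh/firmware-buildenv:${TRAVIS_TAG:-latest} .
deploy:
  provider: script
  script: >-
    echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin &&
    docker push sudomesh/firmware-buildenv:${TRAVIS_TAG}
  on:
    tags: true   # push to dockerhub only when a release tag builds successfully
```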

Travis deploy to Heroku - I've had good experiences with continuous deployment from GitHub to Travis to Heroku; see https://github.com/jhpoelen/fb-osmose-bridge for an example.
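Travis also has a built-in Heroku deploy provider, so that setup can stay entirely in the repo. A sketch, assuming a Node app; the app name is only an example target and the API key would live in the repo's Travis settings:

```yaml
# Sketch of continuous deployment from Travis to Heroku (example app name)
language: node_js
script:
  - npm test
deploy:
  provider: heroku
  api_key: $HEROKU_API_KEY          # stored as a hidden Travis setting
  app: peoplesopen-monitor-staging  # example target app
  on:
    branch: master                  # deploy only after master builds green
```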

gobengo changed the title from "All system components should have the same baseline of continuous testing and build pipelines" to "All system components should have consistent continuous testing and build pipelines" on May 14, 2018
@bennlich

Let's not forget meshnode-database -- one-stop shopping for all your mesh IP needs :)


eenblam commented May 15, 2018

Update: @jhpoelen transferred the monitor app to the new Heroku team, so I've set it as production and set master to deploy to it after successful builds. The staging app still exists for the moment, because I haven't figured out how to kill it without killing the review app setup.
