
Add habitat provisioner #16280

Merged

Conversation

nsdavidson
Contributor

This adds a Habitat (https://habitat.sh) provisioner.

Nolan Davidson added 2 commits October 6, 2017 12:22
First pass at loading the config data using the TF schema.

Signed-off-by: Nolan Davidson <ndavidson@chef.io>
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
@apparentlymart
Contributor

apparentlymart commented Oct 17, 2017

Hi @nsdavidson! Thanks for working on this.

We've traditionally been cautious about adding vendor-specific provisioners because the provisioner abstraction in Terraform is not very strong (compared to resources, which are the primary Terraform concept) and so they tend to be limiting in non-trivial environments. We added some while we were still learning, but we've generally been advising users to either bake necessary information into their machine images (avoiding the need for provisioning at all) or to build using the file and remote-exec provisioners, which tend to compose better with features such as destroy provisioners.

With that said, I'd like to explore the following with you:

  • Is installation of Habitat a stateless operation? That is, does it just affect the target machine, or does it also create records on some central service? The existing chef provisioner is stateful, for example, because it creates a record of the instance in the Chef server and that has, in practice, proven problematic since Terraform has no means to de-register an instance when it's destroyed.
  • Is this provisioner doing something that would be impossible or very difficult to achieve using file and remote-exec? I had a quick read of the steps and it didn't seem so, but given that you mentioned that it works only with the ssh communicator I wonder if I missed something complex going on here.
  • I assume there'd be no good reason for this provisioner to be used as an "on-destroy" provisioner, since it doesn't really make sense to install new software while you're in the process of destroying an instance.

@bixu

bixu commented Oct 19, 2017

@apparentlymart, I think I can help here, in answer to your questions:

1. Yes, Habitat is essentially stateless. There's some gossip that happens between Habitat services, but that conversation is designed to allow members of the gossip ring to depart unexpectedly (such as during a destroy operation).

2. At my org we do use the file and remote-exec provisioners, and have found them to work well, although we don't gain the advantages of HCL's readability/clarity, so I can understand the desire for a native provisioner.

3. See my first answer: Habitat is designed with self-departure of group/ring members in mind.

@nsdavidson
Contributor Author

@apparentlymart Thanks for taking a look at it!

@bixu is pretty much on point with the responses. It is stateless, and no action needs to be taken on destroy. I think most folks currently provisioning Habitat services with Terraform are using file and remote-exec, but that approach isn't terribly clean or clear about what is being provisioned. My goal here was to provide a way to clearly describe the infrastructure and service configuration together.

@nsdavidson
Contributor Author

@apparentlymart Is there anything else I can help answer here?

@bixu

bixu commented Dec 1, 2017 via email

@smartb-pair

@apparentlymart, we'd love to see this integrated. We're trying to be an all Terraform+Habitat shop at the moment.

@apparentlymart
Contributor

Hi @nsdavidson! Sorry for the silence here.

To be transparent with you, we ended up stalling on this because we've found it pretty hard to support the service-specific provisioners already present (chef, salt-masterless): we have neither in-house expertise with them nor infrastructure in place to test them. That makes us pretty nervous about adding new ones that may end up in the same situation.

Given that there are far fewer provisioners than providers, we haven't developed a process for dealing with community-driven provisioner development/maintenance and so we found ourselves a bit stalled here, not really knowing how best to proceed.

Do you think that, were this to be merged, you'd be able to help field issues and PRs submitted against this provisioner on an ongoing basis? Provisioners still live inside the main repository (unlike providers, which were split out), so we don't yet have a proper mechanism for direct community maintenance of them. But if you'd be willing to monitor for new issues and to test/review new PRs (which we could then merge, trusting your review), I think that would assuage the concern here. Hypothetically, a new provisioner/habitat issue/PR label would let you find these more easily.

@nsdavidson
Contributor Author

@apparentlymart No worries, that all makes sense. I totally understand the reservations.

For sure, I'd be happy to help with issues and PRs.

@apparentlymart merged commit a50a383 into hashicorp:master Dec 8, 2017
@apparentlymart
Contributor

Thanks @nsdavidson! I've merged this for inclusion in the next release.

@apparentlymart
Contributor

Any issues and PRs for this provisioner will show up under the provisioner/habitat label, once we tag them during our triage/labelling process.

@bixu

bixu commented Dec 8, 2017

@nsdavidson, I could be missing something because I’m on thumbs, but I don’t see anything in this PR to handle GitHub Auth Tokens, which are used to access private Builder Origins. We’d need to pass such a token in to the hab command and also make it available to the systemd service in the unit file, I think.

Feel free to ping me in the community Slack channel if you have questions. Very excited to use this :)

Description=Habitat Supervisor

[Service]
ExecStart=/bin/hab sup run %s

@smartb-pair commented Dec 8, 2017

I think here we additionally (and optionally) need something like:

Environment="HAB_AUTH_TOKEN=<github_token>"

for anyone accessing private origins in Builder.
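
As a rough sketch of how that could be wired into the unit-file template this PR generates, assuming a hypothetical renderUnitFile helper and authToken parameter (neither is in the current diff):

package main

import "fmt"

// Sketch only: the Supervisor unit template, extended with an optional
// Environment line so the token is visible to the Supervisor process.
// The [Unit] and [Install] sections here are standard systemd boilerplate
// added for completeness.
const unitTemplate = `[Unit]
Description=Habitat Supervisor

[Service]
%sExecStart=/bin/hab sup run %s

[Install]
WantedBy=default.target
`

// renderUnitFile is a hypothetical helper, not code from this PR.
func renderUnitFile(authToken, supArgs string) string {
	env := ""
	if authToken != "" {
		env = fmt.Sprintf("Environment=%q\n", "HAB_AUTH_TOKEN="+authToken)
	}
	return fmt.Sprintf(unitTemplate, env, supArgs)
}

func main() {
	fmt.Print(renderUnitFile("<github_token>", "--peer 203.0.113.10"))
}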

command = fmt.Sprintf("sudo -E %s", command)
}

command = fmt.Sprintf("env HAB_NONINTERACTIVE=true %s", command)


@nsdavidson, I think we also need to be able to pass HAB_AUTH_TOKEN in here.
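
For illustration, one way the env prefix could be extended; buildEnv and the BuilderAuthToken field are assumed names, not code from this PR:

// Sketch only: build the env prefix, with the auth token included when set.
func buildEnv(authToken string) string {
	env := "HAB_NONINTERACTIVE=true"
	if authToken != "" {
		env += " HAB_AUTH_TOKEN=" + authToken
	}
	return env
}

// ...and at the call site shown above:
// command = fmt.Sprintf("env %s %s", buildEnv(p.BuilderAuthToken), command)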

func (p *provisioner) startHabService(o terraform.UIOutput, comm communicator.Communicator, service Service) error {
	var command string
	if p.UseSudo {
		command = fmt.Sprintf("env HAB_NONINTERACTIVE=true sudo -E hab pkg install %s", service.Name)


@nsdavidson, I think we also need to be able to pass HAB_AUTH_TOKEN in here.
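
The same optional token could apply to this install path as well; again only a sketch, reusing the hypothetical buildEnv helper from the previous comment:

// Sketch only: p.BuilderAuthToken is an assumed field name.
if p.UseSudo {
	command = fmt.Sprintf("env %s sudo -E hab pkg install %s",
		buildEnv(p.BuilderAuthToken), service.Name)
} else {
	command = fmt.Sprintf("env %s hab pkg install %s",
		buildEnv(p.BuilderAuthToken), service.Name)
}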

@ghost

ghost commented Apr 5, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 5, 2020