
Backport of [QT-525] and [QT-530] into release/1.12.x #20161

Merged · 2 commits from backport/qt-525/basically-helpful-katydid into release/1.12.x on Apr 23, 2023

Conversation

@hc-github-team-secure-vault-core (Collaborator) commented Apr 13, 2023

The previous strategy for provisioning infrastructure targets was to use
the cheapest instances that could reliably perform as Vault cluster
nodes. This change introduces a new model for target node
infrastructure: we've replaced on-demand instances with a spot fleet.
While the spot price fluctuates with dynamic pricing, capacity, region,
instance type, and platform, cost savings for our most common
combinations range from 20% to 70%.

This change only includes spot fleet targets for Vault clusters.
We'll be updating our Consul backend bidding in another PR.

  • Create a new `vault_cluster` module that handles installing,
    configuring, initializing, and unsealing Vault clusters.
  • Create a `target_ec2_instances` module that can provision a group of
    on-demand instances.
  • Create a `target_ec2_spot_fleet` module that can bid on a fleet of
    spot instances (a sketch of such a bid follows this description).
  • Extend every Enos scenario to use the spot fleet target acquisition
    strategy and the `vault_cluster` module.
  • Update our Enos CI modules to handle both the `aws-nuke` permissions
    and the privileges required to provision spot fleets.
  • Only use us-east-1 and us-west-2 in our scenario matrices, as costs
    there are lower than in us-west-1.

Signed-off-by: Ryan Cragun <me@ryan.ec>
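
For a sense of what the spot fleet bid looks like, here is a minimal
Terraform sketch using the AWS provider's `aws_spot_fleet_request`
resource. It is not the actual `target_ec2_spot_fleet` module; the role
ARN, AMI, and instance types below are placeholder assumptions.

```hcl
# Minimal sketch of a spot fleet bid; not the actual target_ec2_spot_fleet
# module. The role ARN, AMI, and instance types are placeholders.
resource "aws_spot_fleet_request" "vault_targets" {
  # IAM role the spot fleet service assumes to request and tag instances.
  iam_fleet_role = "arn:aws:iam::123456789012:role/spot-fleet-tagging-role"

  # Number of instances to keep running; the fleet bids across all launch
  # specifications and fills capacity at the lowest spot price.
  target_capacity     = 3
  allocation_strategy = "lowestPrice"

  # Tear the instances down when the request expires or is cancelled.
  terminate_instances_with_expiration = true

  # Offering multiple instance types gives the fleet more pools to bid on.
  launch_specification {
    ami           = "ami-0123456789abcdef0" # placeholder AMI
    instance_type = "t3.medium"
  }

  launch_specification {
    ami           = "ami-0123456789abcdef0" # placeholder AMI
    instance_type = "t3a.medium"
  }
}
```

Because the fleet fills capacity from whichever pool is cheapest at
request time, the realized savings vary with region, instance type, and
platform, which is where the 20% to 70% range above comes from.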

@hc-github-team-secure-vault-core force-pushed the backport/qt-525/basically-helpful-katydid branch 2 times, most recently from 62bdc38 to 4138b95 on April 13, 2023 at 20:53
@hashicorp-cla commented Apr 13, 2023

CLA assistant check: All committers have signed the CLA.

@ryancragun force-pushed the backport/qt-525/basically-helpful-katydid branch from 1d5576a to 802ee55 on April 23, 2023 at 22:35
@ryancragun changed the title from "Backport of [QT-525] enos: use spot instances for Vault targets into release/1.12.x" to "Backport of [QT-525] and [QT-530] into release/1.12.x" on Apr 23, 2023
The security groups that allow access to remote machines in Enos
scenarios are configured to allow only port 22 (SSH) from the public IP
address of the machine executing the Enos scenario. To achieve this we
previously utilized the `enos_environment.public_ip_address` attribute.
Sometime in mid-March we started seeing sporadic SSH i/o timeout errors
when attempting to execute Enos resources against SSH transport
targets. We've only ever seen this when communicating from Azure-hosted
runners to AWS-hosted machines.

While testing we were able to confirm that in some cases the public IP
address resolved via DNS over UDP4 against the Google and OpenDNS name
servers did not match the address returned by the HTTPS/TCP IP address
service hosted by AWS. The Enos data source was implemented to attempt
resolution against a single name server, only moving on to the next one
if the previous name server could not produce a result. We'd then
allow-list that single IP address. That's a problem when we can resolve
two different public IP addresses depending on which endpoint we ask.
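
Roughly, the old allow-listing pattern looked like the following
Terraform sketch. The security group is a simplified stand-in, not the
actual scenario module; the singular `public_ip_address` attribute is
the one named above.

```hcl
# Simplified sketch of the previous allow-listing behavior.
data "enos_environment" "localhost" {}

resource "aws_security_group" "ssh_ingress" {
  name = "enos-ssh-ingress" # placeholder name

  ingress {
    description = "SSH from the machine executing the Enos scenario"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    # A single /32: if this resolved address differs from the address we
    # actually egress with, the security group black-holes our SSH traffic.
    cidr_blocks = ["${data.enos_environment.localhost.public_ip_address}/32"]
  }
}
```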

This change utilizes the new `enos_environment.public_ip_addresses`
attribute and its changed behavior. The data source now attempts to
resolve our public IP address via name servers hosted by Google,
OpenDNS, Cloudflare, and AWS, and returns the unique set of addresses
it finds. We then allow-list all of them in our security group
(sketched below). Our hope is that this resolves the i/o timeout
errors, which look like the security group black-holing our access
attempts because the IP address we resolved does not match the address
we're actually egressing with.

Signed-off-by: Ryan Cragun <me@ryan.ec>
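
The corresponding sketch of the new behavior expands the single /32
into the unique set of resolved addresses via the plural
`public_ip_addresses` attribute (again, a simplified stand-in for the
actual module):

```hcl
# Simplified sketch of the new allow-listing behavior.
data "enos_environment" "localhost" {}

resource "aws_security_group" "ssh_ingress" {
  name = "enos-ssh-ingress" # placeholder name

  ingress {
    description = "SSH from every public IP resolved for the executing machine"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    # Allow-list each address returned by the Google, OpenDNS, Cloudflare,
    # and AWS resolvers so a mismatched resolution no longer locks us out.
    cidr_blocks = [
      for ip in data.enos_environment.localhost.public_ip_addresses : "${ip}/32"
    ]
  }
}
```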
@ryancragun force-pushed the backport/qt-525/basically-helpful-katydid branch from 802ee55 to 0f608a8 on April 23, 2023 at 22:39
@ryancragun marked this pull request as ready for review on April 23, 2023 at 22:40
@ryancragun requested a review from a team as a code owner on April 23, 2023 at 22:40
@ryancragun merged commit 5515070 into release/1.12.x on Apr 23, 2023
@ryancragun deleted the backport/qt-525/basically-helpful-katydid branch on April 23, 2023 at 23:07