The GitHub UI reordered the commits. The correct order is:
BaseCluster already implements it. harness.H does too, and since cluster.TestCluster also does, we need to disambiguate every reference to TestCluster.Name().
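The ambiguity this commit resolves is Go's usual embedded-method conflict. A minimal sketch, with simplified stand-ins for the real `BaseCluster`, `harness.H`, and `cluster.TestCluster` types:

```go
package sketch

type BaseCluster struct{}

func (BaseCluster) Name() string { return "cluster-name" }

type H struct{}

func (*H) Name() string { return "test-name" }

// TestCluster embeds both, so an unqualified tc.Name() is an
// "ambiguous selector" compile error.
type TestCluster struct {
	BaseCluster
	*H
}

func name(tc *TestCluster) string {
	return tc.H.Name() // must qualify through the embedded field
}
```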
Add Flight interface and corresponding types as the parent object of a Cluster. Move cloud API objects into flight types, allowing them (and their rate-limiting mechanisms) to be shared across clusters.
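A minimal sketch of the parent/child relationship this commit introduces; the method names below are illustrative, not kola's exact API:

```go
package sketch

// Machine stands in for a provisioned instance.
type Machine interface {
	ID() string
	Destroy() error
}

// Cluster groups the machines created for a single test.
type Cluster interface {
	NewMachine(userdata string) (Machine, error)
	Destroy() error
}

// Flight owns per-run state (cloud API clients, rate limiters, shared
// network services) and hands out Clusters that share it.
type Flight interface {
	NewCluster() (Cluster, error)
	Destroy() error
}
```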
Cloud API objects can handle their own rate-limiting now that they're shared between clusters.
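A hedged sketch of what flight-level rate limiting can look like, assuming `golang.org/x/time/rate`; the actual throttling mechanism in the cloud API packages may differ:

```go
package sketch

import (
	"context"

	"golang.org/x/time/rate"
)

// APIClient is created once per flight and shared by its clusters, so a
// single limiter throttles all of their cloud calls together.
type APIClient struct {
	limiter *rate.Limiter
}

// NewAPIClient allows at most 10 requests/second with a burst of 5
// (illustrative numbers, not kola's).
func NewAPIClient() *APIClient {
	return &APIClient{limiter: rate.NewLimiter(10, 5)}
}

// CreateInstance blocks until the shared limiter permits the call.
func (c *APIClient) CreateInstance(ctx context.Context) error {
	if err := c.limiter.Wait(ctx); err != nil {
		return err
	}
	// ... issue the real cloud API request here ...
	return nil
}
```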
We don't validate that the port isn't already in use, but we're running in a network namespace, so hopefully there's no contention.
The other segments are apparently unused.
When all clusters are in the same netns, we'll eventually need more than 254 interfaces. Support 16 bits' worth, but only create 500 for now, since a large number of dhcp-host directives significantly increases dnsmasq's startup time. We don't actually need to retain the entire Interfaces array in memory after dnsmasq.conf is rendered, but it's a small enough amount of RAM not to be worth fixing for now.
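One hypothetical way to pack a 16-bit interface index into MAC and IPv4 octets and render the resulting `dhcp-host` directives; the `52:54:00` prefix and `10.0.0.0/16` layout are assumptions, not necessarily kola's:

```go
package sketch

import "fmt"

// renderDHCPHosts packs a 16-bit interface index into the low MAC and
// IPv4 octets and emits one dnsmasq dhcp-host directive per interface.
func renderDHCPHosts(n int) []string {
	lines := make([]string, 0, n)
	for i := 1; i <= n; i++ {
		mac := fmt.Sprintf("52:54:00:00:%02x:%02x", i>>8, i&0xff)
		ip := fmt.Sprintf("10.0.%d.%d", i>>8, i&0xff)
		lines = append(lines, fmt.Sprintf("dhcp-host=%s,%s", mac, ip))
	}
	return lines
}
```

Capping n at 500 rather than the full 16-bit space is what keeps dnsmasq's startup time reasonable.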
Share the dnsmasq, discovery etcd, and NTP server between clusters. Continue to use a per-cluster Omaha server, since tests can configure it.
No longer upload SSH key once per cluster.
It'll now be handled automatically.
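A sketch, under assumed names, of hoisting the key upload into flight construction so every cluster inherits it:

```go
package sketch

// flight owns the uploaded key; hypothetical shapes, not kola's real types.
type flight struct {
	sshKeyID string
}

type cluster struct {
	parent *flight
}

// newFlight uploads the key exactly once per kola run.
func newFlight(uploadKey func() (string, error)) (*flight, error) {
	id, err := uploadKey()
	if err != nil {
		return nil, err
	}
	return &flight{sshKeyID: id}, nil
}

// newCluster reuses the flight's key instead of uploading its own.
func (f *flight) newCluster() *cluster {
	return &cluster{parent: f}
}
```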
Move the old "kola mkimage" functionality into flight and run it by default on CL.
nice work @bgilbert!
Initial Pass
```go
// NewFlight will consume the environment variables $AWS_REGION,
// $AWS_ACCESS_KEY_ID, and $AWS_SECRET_ACCESS_KEY to determine the region to
// spawn instances in and the credentials to use to authenticate.
func NewFlight(opts *aws.Options) (platform.Flight, error) {
```
This might just be because it's the initial implementation, but should the platform-specific flights handle the creation of test-run-wide / global resources (e.g. networking resources in AWS) during `NewFlight` creation, and then provide accessors to the individual `Cluster` objects?
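One possible shape for this suggestion, with hypothetical names: `NewFlight` creates the run-wide resource up front and exposes an accessor for clusters:

```go
package sketch

// awsFlight caches run-wide networking created once in NewFlight.
type awsFlight struct {
	vpcID string
}

// NewFlight creates the shared infrastructure up front.
func NewFlight() (*awsFlight, error) {
	vpcID, err := createVPC()
	if err != nil {
		return nil, err
	}
	return &awsFlight{vpcID: vpcID}, nil
}

// VPC is the accessor clusters use instead of creating networking themselves.
func (f *awsFlight) VPC() string { return f.vpcID }

// createVPC stands in for the real AWS call.
func createVPC() (string, error) { return "vpc-0example", nil }
```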
To add onto this comment, I'm fine with having these actions done in separate PRs after the landing of this one and am more than happy to file issues once this lands.
Hmm, that would require some refactoring of `platform/api/*`, e.g. to have a setup function returning a state struct which is then passed to the instance-creation function. (Or instance creation could become a method on the state struct.) Makes sense, but probably for a separate PR.
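A sketch of that refactoring under illustrative names: a setup function returns a state struct, and instance creation becomes a method on it:

```go
package sketch

import "context"

// apiState caches everything the old free functions re-derived per call.
type apiState struct {
	region string
	// ... credentials, cached network IDs, rate limiter, etc.
}

// setup performs the one-time work and returns the state struct.
func setup(region string) (*apiState, error) {
	return &apiState{region: region}, nil
}

// CreateInstance is a method on the state rather than a free function.
func (s *apiState) CreateInstance(ctx context.Context, userdata string) (string, error) {
	// ... use s.region and the cached state to spawn the instance ...
	return "i-0example", nil
}
```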
I hadn't thought of creating a `State` struct that has a built-in method for instance creation, but I do like the idea. My initial thought was around caching global / test-run-wide variables with simple accessor methods.
Once this PR lands I'll file a couple of issues for the different clouds that have distinct overarching state.
minor nit, fix it or leave it. LGTM
LGTM
Add Flight object, which serves as a parent for all Clusters in a kola run. Make the Flight responsible for platform infrastructure that shouldn't be recreated per Cluster.
Move the following into Flight:
- "kola mkimage" functionality

Fixes #803.