Adds platform-specific specs to Infrastructure in config.openshift.io #231
```diff
@@ -28,6 +28,11 @@ type InfrastructureSpec struct {
 
 // InfrastructureStatus describes the infrastructure the cluster is leveraging.
 type InfrastructureStatus struct {
+	// infrastructureName uniquely identifies a cluster with a human friendly name.
+	// Once set it should not be changed. Must be of max length 27 and must have only
+	// alphanumeric or hyphen characters.
+	InfrastructureName string `json:"infrastructureName"`
+
 	// platform is the underlying infrastructure provider for the cluster. This
 	// value controls whether infrastructure automation such as service load
 	// balancers, dynamic volume provisioning, machine creation and deletion, and
```
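The doc comment on `InfrastructureName` states two constraints: at most 27 characters, and only alphanumeric or hyphen characters. A minimal sketch of what enforcing those constraints could look like on the consumer side; the helper name and regular expression are illustrative and not part of this PR:

```go
package main

import (
	"fmt"
	"regexp"
)

// infraNameRE matches only alphanumeric and hyphen characters, per the
// doc comment on InfrastructureName. This helper is an illustration,
// not code from the openshift/api PR.
var infraNameRE = regexp.MustCompile(`^[a-zA-Z0-9-]+$`)

// isValidInfrastructureName checks the two documented constraints:
// non-empty, max length 27, and only alphanumeric or hyphen characters.
func isValidInfrastructureName(name string) bool {
	return len(name) > 0 && len(name) <= 27 && infraNameRE.MatchString(name)
}

func main() {
	for _, n := range []string{
		"dgoodwin-dev-6vzj7",                    // valid: 18 chars, allowed charset
		"has_underscore",                        // invalid: underscore not allowed
		"this-name-is-way-too-long-to-be-valid", // invalid: longer than 27 chars
	} {
		fmt.Printf("%q valid: %v\n", n, isValidInfrastructureName(n))
	}
}
```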
```diff
@@ -38,6 +43,11 @@ type InfrastructureStatus struct {
 	// not support that platform.
 	Platform PlatformType `json:"platform,omitempty"`
 
+	// platformStatus holds status information that is specific to the underlying
+	// infrastructure provider of the cluster.
+	// +optional
+	PlatformStatus *PlatformStatus `json:"platformStatus,omitempty"`
+
 	// etcdDiscoveryDomain is the domain used to fetch the SRV records for discovering
 	// etcd servers and clients.
 	// For more info: https://github.com/etcd-io/etcd/blob/329be66e8b3f9e2e6af83c123ff89297e49ebd15/Documentation/op-guide/clustering.md#dns-discovery
```
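Because `platformStatus` is `+optional` and a pointer, consumers have to tolerate it (and its nested per-provider structs) being nil. A sketch of a nil-safe accessor, with the API types redeclared locally and trimmed to the fields used here; the `awsTags` helper is made up for illustration:

```go
package main

import "fmt"

// Local stand-ins for the types in this diff, trimmed to the fields used
// below; the real definitions live in config.openshift.io.
type PlatformType string

type AWSPlatformStatus struct {
	Tags map[string]string
}

type PlatformStatus struct {
	AWSPlatformStatus *AWSPlatformStatus
}

type InfrastructureStatus struct {
	Platform       PlatformType
	PlatformStatus *PlatformStatus
}

// awsTags returns the AWS resource tags, tolerating the optional
// (nillable) platformStatus field introduced by this PR.
func awsTags(status InfrastructureStatus) map[string]string {
	if status.PlatformStatus == nil || status.PlatformStatus.AWSPlatformStatus == nil {
		return nil // not AWS, or status not populated yet
	}
	return status.PlatformStatus.AWSPlatformStatus.Tags
}

func main() {
	s := InfrastructureStatus{
		Platform: "AWS",
		PlatformStatus: &PlatformStatus{
			AWSPlatformStatus: &AWSPlatformStatus{Tags: map[string]string{"team": "infra"}},
		},
	}
	fmt.Println(awsTags(s)["team"])                                    // prints "infra"
	fmt.Println(awsTags(InfrastructureStatus{Platform: "None"}) == nil) // prints "true"
}
```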
```diff
@@ -53,28 +63,43 @@ type InfrastructureStatus struct {
 type PlatformType string
 
 const (
-	// AWSPlatform represents Amazon Web Services infrastructure.
-	AWSPlatform PlatformType = "AWS"
+	// AWSPlatformType represents Amazon Web Services infrastructure.
+	AWSPlatformType PlatformType = "AWS"
 
-	// AzurePlatform represents Microsoft Azure infrastructure.
-	AzurePlatform PlatformType = "Azure"
+	// AzurePlatformType represents Microsoft Azure infrastructure.
+	AzurePlatformType PlatformType = "Azure"
 
-	// GCPPlatform represents Google Cloud Platform infrastructure.
-	GCPPlatform PlatformType = "GCP"
+	// GCPPlatformType represents Google Cloud Platform infrastructure.
+	GCPPlatformType PlatformType = "GCP"
 
-	// LibvirtPlatform represents libvirt infrastructure.
-	LibvirtPlatform PlatformType = "Libvirt"
+	// LibvirtPlatformType represents libvirt infrastructure.
+	LibvirtPlatformType PlatformType = "Libvirt"
 
-	// OpenStackPlatform represents OpenStack infrastructure.
-	OpenStackPlatform PlatformType = "OpenStack"
+	// OpenStackPlatformType represents OpenStack infrastructure.
+	OpenStackPlatformType PlatformType = "OpenStack"
 
-	// NonePlatform means there is no infrastructure provider.
-	NonePlatform PlatformType = "None"
+	// NonePlatformType means there is no infrastructure provider.
+	NonePlatformType PlatformType = "None"
 
-	// VSpherePlatform represents VMWare vSphere infrastructure.
-	VSpherePlatform PlatformType = "VSphere"
+	// VSpherePlatformType represents VMware vSphere infrastructure.
+	VSpherePlatformType PlatformType = "VSphere"
 )
 
+// PlatformStatus holds the current status specific to the underlying infrastructure provider
+// of the current cluster. Since these are used at status level for the underlying cluster, it
+// is expected that only one of the status structs is set.
+type PlatformStatus struct {
+	// AWSPlatformStatus contains settings specific to the Amazon Web Services infrastructure provider.
+	// +optional
+	AWSPlatformStatus *AWSPlatformStatus `json:"awsPlatformStatus,omitempty"`
+}
+
+// AWSPlatformStatus holds the current status of the Amazon Web Services infrastructure provider.
+type AWSPlatformStatus struct {
+	// tags holds key and value pairs that are set as tags in the AWS resources created by the cluster.
+	Tags map[string]string `json:"tags,omitempty"`
+}
```

Review thread on `Tags`:

Reviewer: Does this actually extend to load balancers, volumes, or machines created by the Machine API?

Reviewer: For the in-tree AWS cloud provider, it uses the cluster id here:

Author: I think we can say that it should after this merges. I can notify those teams to make sure they're respecting it, if possible. The Machine API already has this info on each MachineSet, so user-created ones wouldn't match unless the user thought to add the same tags. I'm not sure how ingress/registry/storage are tagging, but it would be nice to get everyone on the same page (`kubernetes.io/cluster/dgoodwin-dev-6vzj7=owned`). I believe the `infrastructureName` being added above should match the kube one you link to.

Reviewer: The installer currently tags all the machines/machinesets and load balancers with these tags on AWS. These are the tags important to OpenShift on AWS; all controllers must add these, and each may then add more if it likes.

Reviewer: This is a bit chicken/egg, I guess. The in-tree AWS cloud provider from kube only applies the tags it knows about here: which right now is of the form: which we are getting assigned to entities by a bit of luck because of how we pre-tag EC2 instances. On startup, the kube cloud provider sees it's AWS, sees no cloud.conf file to pull the cluster id from, and then falls back to reading the EC2 instance for its tags. I would prefer that we update the language a bit here to reflect that the full set of tags enumerated may not be used by all entities, unless that is the only tag we will set here. What will we set in this map? Do we care about both key names and values?

Author: If it's holding up this PR, I'm fine adding language about anything beyond

Reviewer: For our purposes it was really the `InfrastructureName` we need, as that should be the basis for the KubernetesClusterID; the tag can be inferred. If we drop specification of arbitrary tags, can we get this through, @derekwaynecarr?

Reviewer: I would be cautious of operators making the same decision independently.

Reviewer: A best-effort API is hard to understand and enforce.

Reviewer: How can we make this explicit then? `InfrastructureTag: kubernetes.io/cluster/infraName=owned` (string). My vote would be to just leave it implicit: if you need to tag cloud resources, we expect you to apply the standard kube tag using the `InfrastructureName` as the Kube cluster-id.
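The thread above converges on inferring the standard kube ownership tag from `InfrastructureName` rather than specifying arbitrary tags in the API. A minimal sketch of that inference; the `clusterOwnedTag` helper is made up for illustration and is not part of the API:

```go
package main

import "fmt"

// clusterOwnedTag derives the standard in-tree Kubernetes cloud-provider
// ownership tag (kubernetes.io/cluster/<cluster-id>=owned) from the
// infrastructureName, as suggested in the review discussion above.
// The helper name is illustrative, not part of the openshift/api types.
func clusterOwnedTag(infrastructureName string) (key, value string) {
	return "kubernetes.io/cluster/" + infrastructureName, "owned"
}

func main() {
	k, v := clusterOwnedTag("dgoodwin-dev-6vzj7")
	fmt.Printf("%s=%s\n", k, v)
	// prints: kubernetes.io/cluster/dgoodwin-dev-6vzj7=owned
}
```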
```go
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// InfrastructureList is
```
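The doc comment on `platform` says the value controls whether infrastructure automation (service load balancers, dynamic volume provisioning, machine management) is enabled. A hypothetical sketch of the kind of branching the renamed `PlatformType` constants enable in controllers, with the constants reproduced locally (only a subset) for a self-contained example:

```go
package main

import "fmt"

type PlatformType string

// Mirrors a subset of the renamed constants from this diff.
const (
	AWSPlatformType     PlatformType = "AWS"
	LibvirtPlatformType PlatformType = "Libvirt"
	NonePlatformType    PlatformType = "None"
)

// supportsCloudLoadBalancers is a hypothetical example of how a
// controller might branch on the platform field; it is not part of
// the openshift/api package.
func supportsCloudLoadBalancers(p PlatformType) bool {
	switch p {
	case AWSPlatformType:
		return true
	case LibvirtPlatformType, NonePlatformType:
		return false
	default:
		// Per the API doc comment, operators should not enable cloud
		// automation for platforms they do not recognize.
		return false
	}
}

func main() {
	fmt.Println(supportsCloudLoadBalancers(AWSPlatformType))  // true
	fmt.Println(supportsCloudLoadBalancers(NonePlatformType)) // false
}
```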
Review thread on `InfrastructureName`:

Reviewer: nit: `InfrastructureID`, because it is an identifier. But it looks like @smarterclayton wanted it this way, so feel free to keep it as-is.

Reviewer: I'm good with either `InfrastructureID` or `InfrastructureName`. To me the difference sounds more like it is about uniqueness (a unique IDentifier) than about the content and readability. So in that sense I'd have it as `InfrastructureID`. But `Name` seems to be consistent with similar use cases in the API, e.g. `GroupName`.

Reviewer: Name in our API is a unique-in-space (but not time) identifier. Names are things that refer to entities in other systems. UID is reserved for unique-in-time-and-space things we generate ourselves. If you can set it, it's not a UID, because someone else could set it to the same value.

Reviewer: I'm going to pin this comment in my bookmarks :)