Remove restriction of single cluster per namespace #177

Closed
staebler opened this issue May 16, 2018 · 6 comments

@staebler
Contributor

staebler commented May 16, 2018

It makes for a bad user experience that a user can break an existing Cluster simply by adding another Cluster to the same namespace. Are there use cases that motivate restricting each namespace to a single Cluster? This restriction appears to have come out of kubernetes-retired/kube-deploy#463, but I don't see any arguments in that issue for why it is needed.

Red Hat would like users to be able to create multiple Clusters in a namespace so that all of those Clusters can share the same Secrets for things like account credentials. The alternative is requiring the user to copy their account credentials for every Cluster that they wish to create.

See #145 for a discussion on using labels to associate MachineSets and MachineDeployments with Clusters when a namespace has multiple Clusters.

Here is an example of a place where a single Cluster per namespace is required in the current implementation.

func (c *MachineControllerImpl) getCluster(machine *clusterv1.Machine) (*clusterv1.Cluster, error) {
	clusterList, err := c.clientSet.ClusterV1alpha1().Clusters(machine.Namespace).List(metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	switch len(clusterList.Items) {
	case 0:
		return nil, errors.New("no clusters defined")
	case 1:
		return &clusterList.Items[0], nil
	default:
		// A second Cluster in the namespace makes this lookup fail for every Machine.
		return nil, errors.New("multiple clusters defined")
	}
}
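
For comparison with the label-based association discussed in #145, here is a minimal sketch of what a per-Machine lookup could look like, assuming the same controller and clientset as the snippet above and a hypothetical cluster.k8s.io/cluster-name label on the Machine; it is illustrative only, not the project's actual API.

// Hypothetical sketch: resolve the owning Cluster by name from a label on the
// Machine instead of assuming the namespace contains exactly one Cluster.
func (c *MachineControllerImpl) getClusterByLabel(machine *clusterv1.Machine) (*clusterv1.Cluster, error) {
	clusterName, ok := machine.Labels["cluster.k8s.io/cluster-name"] // hypothetical label key
	if !ok || clusterName == "" {
		return nil, errors.New("machine does not reference a cluster")
	}
	// Fetch the named Cluster directly; additional Clusters in the namespace no longer break the lookup.
	return c.clientSet.ClusterV1alpha1().Clusters(machine.Namespace).Get(clusterName, metav1.GetOptions{})
}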

@medinatiger
Contributor

Can you elaborate on the Red Hat use case in more detail? Does the Cluster object live outside the cluster?

@staebler
Contributor Author

@medinatiger Red Hat will have what we are loosely terming a root cluster, where users can create Clusters and MachineSets. The Red Hat product will create a remote OpenShift cluster in AWS for each Cluster in the root cluster. A Cluster in the root cluster will have, in its ProviderConfig, a reference to a Secret containing the AWS account credentials used to create the remote OpenShift cluster. It would be a better experience for Red Hat's users if they could create that Secret once in a single namespace and have all of the Clusters that they create in that namespace share it. That way, if the account credentials need to change, the user has only one place to make the change.

When the Red Hat product creates a remote OpenShift cluster corresponding to a Cluster in the root cluster, it will also create a Cluster and MachineSets in the remote cluster. The Cluster and MachineSets in the root cluster will remain the ultimate source of truth, though.
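
To make the shared-Secret pattern concrete, here is a minimal sketch that assumes the provider has already decoded the referenced Secret's name out of the Cluster's ProviderConfig (that field and the decoding are hypothetical); it is not Red Hat's actual implementation.

// Hypothetical sketch: every Cluster in the namespace can reference the same Secret,
// so account credentials are created and rotated in exactly one place.
func getAccountCredentials(kubeClient kubernetes.Interface, cluster *clusterv1.Cluster, secretName string) (*corev1.Secret, error) {
	// Look the Secret up in the Cluster's own namespace; any number of Clusters may share it.
	return kubeClient.CoreV1().Secrets(cluster.Namespace).Get(secretName, metav1.GetOptions{})
}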

@dgoodwin
Contributor

We talked about opening an issue for this on a call today, but this one may be sufficient. Essentially, the proposal is to revisit the decision taken in kubernetes-retired/kube-deploy#463.

Reasons given were:

  • It is difficult for the apiserver to actually validate that only one cluster is created in a namespace unless we enforce a single cluster name.
  • There is no need to copy shared objects (Secrets, RBAC policies, etc.).

This spawned from a discussion around how to link machines to machinesets and clusters. The proposed solution was a required local reference from machineset/machinedeployment/machine to an owning cluster, enforced by apiserver validation. No use cases were uncovered on the call where machines are in use but no cluster exists for them to belong to.

If, however, we maintain the assumption that only one cluster can exist per namespace, this link remains unnecessary, and the additional step of setting a local cluster reference on your machine types would not be required.
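
As a rough illustration of that proposed local reference, here is a minimal sketch of a spec field on a machine type; the field name is hypothetical, not the project's actual API.

// Hypothetical sketch: each Machine (and likewise MachineSet/MachineDeployment)
// names its owning Cluster in the same namespace, and apiserver validation
// rejects objects that leave the reference empty.
type MachineSpec struct {
	// ClusterName names the owning Cluster in the same namespace. Required.
	ClusterName string `json:"clusterName"`

	// ... existing MachineSpec fields ...
}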

CC @craigtracey @roberthbailey @krousey

@wangzhen127
Contributor

/cc @wangzhen127

@roberthbailey
Contributor

De-duping with #41.

/close

@k8s-ci-robot
Contributor

@roberthbailey: Closing this issue.

In response to this:

De-duping with #41.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

chuckha pushed a commit to chuckha/cluster-api that referenced this issue Oct 2, 2019
Add capdctl, capd-manager versions to bug report template
chuckha pushed a commit to chuckha/cluster-api that referenced this issue Oct 2, 2019
✨ Calculate cert hashes and verify CA certificates
jayunit100 pushed a commit to jayunit100/cluster-api that referenced this issue Jan 31, 2020
* add standalone esx support

* move all glog to klog

* Fixed machine provisioning on ESXi.

- fixed boot sequence on some images (e.g. xenial)
- fixed sudo on machines without DNS access
- fixed cloud provider bootstrap
- fixed rbac role preventing machine deletion
- refactored templates.go and the esx cloning code

Fixed the boot sequence on some images by adding a serial port to allow random
number initialization. This affects some images, such as Xenial. It currently
adds a serial port to all machines if one is not already present in the VM spec.
Fixed sudo access for machines without DNS access, which is the case for most
development scenarios with nested ESXi on dev laptops. Fixed cloud provider
bootstrapping on infrastructure that does not have cloud provider support (e.g. ESXi).

issue kubernetes-sigs#177
fxierh pushed a commit to fxierh/cluster-api that referenced this issue Sep 14, 2024
OCPCLOUD-2121: Add openshift/e2e-tests for CAPI E2E testing