Describe the bug
We have been testing both the S3 and ElastiCache controllers in an OpenShift cluster, and we ran into an issue where, if one controller is installed and we try to install a second controller, OLM fails to install the second controller.
If we only install one controller at a time (S3 or ElastiCache), OLM installs the controller successfully, and the controller is then able to manage its custom resources.
We have pinpointed it to the AdoptedResource custom resource. We noticed that across all the controllers the AdoptedResource CRD has the same name, adoptedresources.services.k8s.aws, and the same apiVersion, services.k8s.aws/v1alpha1, which causes a collision in OLM and makes the second operator fail to install.
This would mean that only one controller can be installed on a cluster at a time.
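For illustration, this is a trimmed sketch of the AdoptedResource CRD as we understand every bundle ships it; only the fields relevant to the collision are shown, and the permissive schema is a placeholder, not the generated one:

```yaml
# Sketch of the AdoptedResource CRD shipped by every ACK controller bundle
# (trimmed; the schema below is a permissive placeholder, not the real one).
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD names are cluster-scoped and unique, so the S3 and ElastiCache
  # bundles both claim ownership of this one object in OLM.
  name: adoptedresources.services.k8s.aws
spec:
  group: services.k8s.aws        # identical group in every bundle
  scope: Namespaced
  names:
    kind: AdoptedResource
    plural: adoptedresources
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```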
Taking a further look, we noticed that in the proposed feature request doc for AdoptedResource, the apiVersion for each controller is proposed to be service-scoped, e.g. apiVersion: s3.services.k8s.aws/v1alpha1 for S3 and apiVersion: apigateway.services.k8s.aws/v1alpha1 for API Gateway.
Has that proposal been implemented yet, or is the plan for the AdoptedResource group to stay the same across all controllers?
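If the proposal were implemented as we read it, each controller would ship its own copy of the CRD under a service-scoped group, so the names would no longer collide. A sketch of the S3 variant, based on our reading of the doc rather than any actual generated output:

```yaml
# Hypothetical S3-scoped AdoptedResource CRD, per our reading of the proposal.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: adoptedresources.s3.services.k8s.aws   # unique per controller
spec:
  group: s3.services.k8s.aws                   # service-scoped group
  scope: Namespaced
  names:
    kind: AdoptedResource
    plural: adoptedresources
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```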
Steps to reproduce
- Build and bundle two controllers with ACK_GENERATE_OLM=true
- Try to install both via OLM (see the Subscription sketch below)
- The second controller never finishes installing
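For reference, this is roughly how we subscribe to the two controllers via OLM; the package names, channel, catalog source, and namespaces are placeholders for our setup, not official ACK values:

```yaml
# Placeholder Subscriptions for the two controllers; package/catalog names
# and namespaces are specific to our setup.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-s3-controller
  namespace: openshift-operators
spec:
  channel: alpha                        # placeholder channel
  name: ack-s3-controller               # package name in our catalog
  source: my-ack-catalog                # placeholder CatalogSource
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-elasticache-controller
  namespace: openshift-operators
spec:
  channel: alpha
  name: ack-elasticache-controller
  source: my-ack-catalog
  sourceNamespace: openshift-marketplace
```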
Expected outcome
Would expect multiple controllers to be able to be installed within an OpenShift cluster.
Environment
- Kubernetes version: v1.21
- Using EKS (yes/no), if so version? no
- AWS service targeted (S3, RDS, etc.): all