
OLM Fails to Install 2nd Controller if Any Other Controller is Present within a Cluster #1006

@acornett21

Description

Describe the bug
We have been testing both the S3 and Elasticache controllers in an OpenShift cluster,
and we ran into an issue where, if one controller is already installed and we try to install a second, OLM fails to install the second controller.
If we install only one controller at a time (S3 or Elasticache), OLM installs it successfully, and the controller is then able to
manage its CustomResources successfully.
We have pinpointed the problem to the AdoptedResource custom resource. We noticed that across all the controllers the AdoptedResource's
apiVersion: adoptedresources.services.k8s.aws/v1alpha1 is identical, which causes a collision in OLM and makes the second operator fail to install.
This would mean that only one controller can be installed on a cluster at a time.
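
To make the collision concrete, here is a sketch of the AdoptedResource CRD as we understand it ships in every controller bundle. This is an assumption reconstructed from the CRD name above, not a dump from an actual bundle, and the schema stub is a placeholder:

```yaml
# Sketch (not an actual bundle dump): the AdoptedResource CRD that every
# controller bundle appears to ship. metadata.name, spec.group, and the
# kind are identical across bundles, so OLM treats the second install as
# a conflict over ownership of the same CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: adoptedresources.services.k8s.aws
spec:
  group: services.k8s.aws
  names:
    kind: AdoptedResource
    listKind: AdoptedResourceList
    plural: adoptedresources
    singular: adoptedresource
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Placeholder schema; the real CRD defines full spec/status fields.
          x-kubernetes-preserve-unknown-fields: true
```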
Taking a further look, we noticed in the proposed feature request doc
for AdoptedResource that the apiVersion is proposed to be per service, for example apiVersion: s3.services.k8s.aws/v1alpha1
for the S3 controller and apiVersion: apigateway.services.k8s.aws/v1alpha1 for the API Gateway controller. Is the proposed doc yet to be implemented?
Or is the plan for the AdoptedResource group to stay the same across all controllers?
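
For contrast, a hypothetical pair of AdoptedResource manifests under the proposal's per-service groups, which would no longer collide; the resource names here are made up for illustration:

```yaml
# Hypothetical manifests using the per-service apiVersions from the
# proposal doc; the metadata names are illustrative only.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: adopt-my-bucket        # made-up name
---
apiVersion: apigateway.services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: adopt-my-api           # made-up name
```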

Steps to reproduce

  1. Build and bundle two controllers with ACK_GENERATE_OLM=true
  2. Try to install both via OLM (for example with Subscriptions like the sketch after this list)
  3. The second controller never completely gets installed
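
For step 2, this is a minimal sketch of the Subscriptions we apply; the package names, channel, and CatalogSource are hypothetical placeholders for our environment:

```yaml
# Sketch of the two Subscriptions from step 2. Package names, the channel,
# and the CatalogSource are hypothetical placeholders.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-s3-controller
  namespace: openshift-operators
spec:
  channel: alpha                     # assumed channel
  name: ack-s3-controller            # hypothetical package name
  source: my-ack-catalog             # hypothetical CatalogSource
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-elasticache-controller
  namespace: openshift-operators
spec:
  channel: alpha
  name: ack-elasticache-controller   # hypothetical package name
  source: my-ack-catalog
  sourceNamespace: openshift-marketplace
```

In our runs, the first Subscription resolves and installs fine; the second one never completes.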

Expected outcome
We would expect multiple controllers to be installable within an OpenShift cluster.

Environment

  • Kubernetes version v1.21
  • Using EKS (yes/no), if so version? no
  • AWS service targeted (S3, RDS, etc.) - all

Labels

kind/bug
