
Helm operator unable to create ClusterRole resource due to setting ownerRefs. #1815

Closed
jsm84 opened this issue Aug 12, 2019 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. language/helm Issue is related to a Helm operator project


@jsm84

jsm84 commented Aug 12, 2019

Bug Report

What did you do?
The Kubeturbo operator deploys the helm chart at https://github.com/esara/kubeturbo/tree/master/deploy/kubeturbo.

Specifically, it trips up when creating the ClusterRole object defined here in the helm chart: https://github.com/turbonomic/kubeturbo/blob/master/deploy/kubeturbo/templates/serviceaccount.yaml#L43-L85

What did you expect to see?
The Helm operator should install the chart successfully, but it cannot, because it attempts to set ownerReferences on a cluster-scoped resource.

What did you see instead? Under which circumstances?
The error produced by the operator pod is:

"logger":"helm.controller","msg":"Failed to install release","namespace":"kubeturbo","name":"kubeturbo-example","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Kubeturbo","release":"kubeturbo-example-3a8ax3z7yi6wl1v0f5qhmtrea","error":"release kubeturbo-example-3a8ax3z7yi6wl1v0f5qhmtrea failed: clusterroles.rbac.authorization.k8s.io \"turbo-cluster-admin\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>
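For context, the Helm operator injects an ownerReference pointing at the (namespaced) Kubeturbo CR into each rendered resource, roughly of the shape below (field values here are illustrative, not taken from the cluster). Kubernetes rejects this on a cluster-scoped object such as a ClusterRole, because a cluster-scoped resource cannot have a namespaced owner:

```yaml
# Illustrative sketch of the ownerReference the Helm operator adds to each
# rendered chart resource; invalid on cluster-scoped kinds like ClusterRole.
metadata:
  ownerReferences:
    - apiVersion: charts.helm.k8s.io/v1alpha1
      kind: Kubeturbo
      name: kubeturbo-example
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID
      controller: true
      blockOwnerDeletion: true
```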

Environment

  • operator-sdk version: v0.8.0-24-gfd7d925d

  • go version: go1.12.7 linux/amd64

  • Kubernetes version information:

Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.3-201906191409+458fcdd-dirty", GitCommit:"458fcdd", GitTreeState:"dirty", BuildDate:"2019-06-19T18:47:30Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+abe1830", GitCommit:"abe1830", GitTreeState:"clean", BuildDate:"2019-06-19T18:48:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind: OpenShift 4.1 release

  • Are you writing your operator in ansible, helm, or go? Helm


@joelanford
Member

@jsm84 Thanks for reporting this issue. I've submitted #1817, which should resolve it. Can you try it out and confirm if that resolves this issue?

@joelanford joelanford self-assigned this Aug 13, 2019
@joelanford joelanford added language/helm Issue is related to a Helm operator project kind/bug Categorizes issue or PR as related to a bug. labels Aug 13, 2019
@jsm84
Author

jsm84 commented Aug 13, 2019

> @jsm84 Thanks for reporting this issue. I've submitted #1817, which should resolve it. Can you try it out and confirm if that resolves this issue?

I will try this on the metadata I have for kubeturbo. Interestingly, Endre Sara with Turbonomic was able to take my metadata, adjust the RBAC, and get it to work. I verified this on my own cluster today. The diff -u output for the CSV is:

--- bundle/kubeturbo-operator.v6.3.0.clusterserviceversion.yaml	2019-08-13 15:08:34.430524099 -0400
+++ endre/kubeturbo-operator.v6.3.0.clusterserviceversion.yaml	2019-08-13 14:37:15.312585739 -0400
@@ -42,11 +42,13 @@
         - rules:
             - apiGroups:
                 - ""
+                - apps
+                - extensions
               resources:
+                - nodes
                 - pods
                 - configmaps
                 - endpoints
-                - persistentvolumeclaims
                 - events
                 - deployments
                 - persistentvolumeclaims
@@ -58,27 +60,32 @@
               verbs:
                 - '*'
             - apiGroups:
+                - ""
                 - apps
+                - extensions
+                - policy
               resources:
-                - deployments
                 - daemonsets
-                - replicasets
-                - statefulsets
-              verbs:
-                - '*'
-            - apiGroups:
-                - ""
-              resources:
+                - endpoints
+                - limitranges
                 - namespaces
+                - persistentvolumes
+                - persistentvolumeclaims
+                - poddisruptionbudget
+                - resourcequotas
+                - services
+                - statefulsets
               verbs:
                 - get
+                - list
+                - watch
             - apiGroups:
                 - ""
               resources:
-                - configmaps
-                - secrets
+                - nodes/spec
+                - nodes/stats
               verbs:
-                - '*'
+                - get
             - apiGroups:
                 - charts.helm.k8s.io
               resources:
@@ -95,9 +102,11 @@
               resources:
                 - nodes
                 - pods
+                - configmaps
                 - deployments
                 - replicasets
                 - replicationcontrollers
+                - serviceaccounts
               verbs:
                 - '*'
             - apiGroups:
@@ -127,6 +136,12 @@
               verbs:
                 - get
             - apiGroups:
+                - charts.helm.k8s.io
+              resources:
+                - '*'
+              verbs:
+                - '*'
+            - apiGroups:
                 - rbac.authorization.k8s.io
               resources:
                 - clusterroles

My guess (as Daniel Messer mentioned in Slack) is that the fix comes from the addition of the apiGroup charts.helm.k8s.io under the clusterPermissions RBAC rules.

I should also say, thanks a ton for the immediate response time and code fix @joelanford !

@jsm84
Copy link
Author

jsm84 commented Aug 14, 2019

I confirmed that adding the apiGroup charts.helm.k8s.io in the clusterPermissions block does function as a workaround.
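For anyone hitting the same error, the workaround amounts to adding a rule like the following (a sketch, with names taken from the diff above) under the clusterPermissions block of the CSV, so the operator's service account has cluster-scoped access to its own CR group:

```yaml
# Sketch of the workaround rule added under
# spec.install.spec.clusterPermissions[].rules in the ClusterServiceVersion.
- apiGroups:
    - charts.helm.k8s.io
  resources:
    - '*'
  verbs:
    - '*'
```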

However, I tried updating the operator-sdk binary to the latest master branch, and also updating the Dockerfile for kubeturbo-operator to use helm-operator:v0.10.0, hoping the changes were reflected there, but they don't appear to be. I don't know how to force a build of the helm-operator base image using the operator-sdk binary, unfortunately, so the operator pod is still failing with the same error as before. Any thoughts @joelanford ?
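In case it helps anyone else testing this, the base-image change is the only edit needed in the operator's Dockerfile; the sketch below assumes the typical Dockerfile scaffolded by operator-sdk for a Helm operator:

```dockerfile
# Assumed shape of an operator-sdk-scaffolded Helm operator Dockerfile;
# the relevant change is pointing FROM at the updated base image tag.
FROM quay.io/operator-framework/helm-operator:master

COPY helm-charts/ ${HOME}/helm-charts/
COPY watches.yaml ${HOME}/watches.yaml
```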

@joelanford
Member

@jsm84 I merged the fix to master this morning, but the CI build that kicks off the new master image builds flaked. Once that finishes, you should be able to try it out with the helm-operator:master image.

https://travis-ci.org/operator-framework/operator-sdk/builds/571919523

@jsm84
Author

jsm84 commented Aug 15, 2019

I just completed testing the same helm-operator build using the helm-operator:master base image in the Dockerfile, and it works as expected.

Closing issue. Thanks again @joelanford !

@jsm84 jsm84 closed this as completed Aug 15, 2019