Add Helm chart for installing Kubeapps #349
Conversation
@arapulido the latest changes should be testable, you'll need to install Tiller using
@prydonius I have checked this and I am getting a CRD validation error:
I have been checking around but it seems that there is not yet a solution for this. Did you encounter this error? How did you fix it?
Oh woops, I forgot to mention you also need the RC Helm version: https://github.com/helm/helm/releases/tag/v2.10.0-rc.1, it adds support for a CRD hook. I was hoping for this to be released this week, but if not it will be a blocker for our release, or we will need a workaround.
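For reference, the CRD hook mentioned above is the `crd-install` annotation added in Helm 2.10; a minimal sketch of how it sits on a CRD manifest (the CRD name and group here are illustrative, not taken from this PR):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: apprepositories.kubeapps.com   # illustrative name
  annotations:
    # Tells Helm >= 2.10.0-rc.1 to install this CRD before
    # validating the rest of the chart's templates
    "helm.sh/hook": crd-install
spec:
  group: kubeapps.com   # assumed group for illustration
  version: v1alpha1
  scope: Namespaced
  names:
    kind: AppRepository
    plural: apprepositories
```

Without the hook (or on older Helm), templates that reference the custom resource are validated before the CRD exists, which is what produces the CRD validation error above.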
chart/kubeapps/templates/NOTES.txt
Outdated
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "kubeapps.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.frontend.service.port }}
{{- else if contains "ClusterIP" .Values.frontend.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kubeapps.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
The pod gets the app label as .Release.Name-.kubeapps.name, instead of just kubeapps, so the command shown here will match 0 pods
fixed, thanks
chart/kubeapps/templates/NOTES.txt
Outdated
{{- else if contains "ClusterIP" .Values.frontend.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kubeapps.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
The pod is listening on port 8080, so the command above should be kubectl port-forward $POD_NAME 8080
fixed, thanks
Installing charts doesn't work for me either
@tompizmor @javsalgar It would be very useful if you could start reviewing this as well, thanks!
Hmm, thanks @arapulido. Seems to be an issue when building the docker image for the Dashboard, because with Telepresence the AppRepositories work for me. With the image you built, I can see that the namespace it retrieves from the config is
also renames API URL method to getResourceLink
The issue was that the ConfigMap contained incorrect JSON (oops!); when using telepresence it uses the local config.json, which was correct. The creation error for AppRepositories was also due to using the wrong resource link, fixed that too. Note: when doing a Helm upgrade, please make sure you set the same password for MongoDB, otherwise it will get regenerated (known Helm issue - helm/helm#3053) and break chart syncs and the chartsvc:
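To avoid the regeneration, the existing password can be pinned explicitly at upgrade time; a sketch assuming the `mongodbRootPassword` value name used by the stable MongoDB chart:

```yaml
# values passed to `helm upgrade` (key name assumed from stable/mongodb;
# replace the placeholder with the password generated at install time)
mongodb:
  mongodbRootPassword: "<password-set-at-install-time>"
```

Because Helm re-renders the randomly-generated password on every upgrade unless it is supplied, pinning it keeps the existing MongoDB credentials valid for the sync jobs and the chartsvc.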
@prydonius I have rebuilt and deployed the chart in a new cluster, and now the Dashboard doesn't show the login form, but goes directly to the App list (getting a 401 in response)
@prydonius I have tested the chart, and with the cluster-admin role things seem to work, but as soon as I create a serviceaccount with fewer permissions, it seems that we need to create the roles manually. I get this error when deploying to the namespace my serviceaccount has write access to, and it also talks about the
Once I created a role to read apprepositories in the
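A minimal sketch of such a read-only Role for AppRepositories (the Role name, namespace, and API group here are assumptions for illustration, not taken from this PR):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: apprepositories-read   # hypothetical name
  namespace: kubeapps          # the namespace Kubeapps is installed in
rules:
  - apiGroups: ["kubeapps.com"]       # assumed CRD API group
    resources: ["apprepositories"]
    verbs: ["get", "list"]
```

Binding this Role to the restricted serviceaccount (via a RoleBinding in the same namespace) would let the Dashboard list repositories without cluster-admin.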
@@ -0,0 +1,5 @@
apiVersion: v1
Should you also add the `maintainers` and other metadata like `home`, `sources` and `icon`?
yes, I'll do that, thanks!
metadata:
  name: {{ template "kubeapps.apprepository.fullname" . }}
  labels:
    app: {{ template "kubeapps.apprepository.fullname" . }}
I see that when there is an issue deploying a chart from kubeapps, the following message is shown:
Check the health of Kubeapps components 'kubectl get po --all-namespaces -l created-by=kubeapps'.
However, it seems none of the pods created by the chart have the label created-by=kubeapps
Where are you seeing that? I thought that was from the `kubeapps` CLI, it's not in the NOTES.txt?
Oh sorry, it's actually shown in the UI, will fix that thanks. Note you'll need to build images for the apprepository-controller and dashboard to make this actually work.
yes, I saw that Ara already built the images so I used the same ones
chart/kubeapps/values.yaml
Outdated
repository: kubeapps/chart-repo
tag: latest
repos:
  - name: stable
Have you thought about also adding the bitnami charts repo here?
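If the Bitnami repo were added, the list might look like this (the `url` key is an assumption based on the snippet above; the Bitnami chart repository URL is the public one):

```yaml
# values.yaml sketch: initial repos synced by the apprepository controller
repos:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
```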
serviceAccountName: {{ template "kubeapps.apprepository.fullname" . }}
containers:
  - name: controller
    image: "{{ .Values.apprepository.image.repository }}:{{ .Values.apprepository.image.tag }}"
do you think it makes sense to also allow specifying the registry for the container image?
Maybe we should do it for consistency, but I was just going off the `helm create` template. TBH I don't really like adding the registry value, because I don't think the value needs to be so fine-grained; setting repository=myregistry.com/apprepository-controller works fine, right?
It's also confusing if you're used to the upstream charts that just have the `repository` value: if we had `registry` default to `docker.io`, then the above would actually render `docker.io/myregistry.com/apprepository-controller`, which would be incorrect.
In other charts the image is set using a template. This could be easier down the line if we want to change the way we set the images ("registry, repository, image" or "repository,image")
@javsalgar I couldn't find where the template was used, we seem to be just doing this: https://github.com/helm/charts/blob/master/stable/drupal/templates/deployment.yaml#L27. In any case, since this chart will eventually be going into bitnami/charts I am going to update it for consistency
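For reference, the helper-template pattern being discussed usually lives in `_helpers.tpl`; a hypothetical sketch (the helper name and value keys are assumptions, not taken from this PR):

```yaml
{{/*
_helpers.tpl: compose the full image reference from registry/repository/tag
(helper name and value keys are hypothetical)
*/}}
{{- define "kubeapps.apprepository.image" -}}
{{- printf "%s/%s:%s" .Values.apprepository.image.registry .Values.apprepository.image.repository .Values.apprepository.image.tag -}}
{{- end -}}
```

The deployment would then use `image: {{ template "kubeapps.apprepository.image" . }}`, so a later switch between "registry, repository, tag" and "repository, tag" only touches the helper, not every template.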
I gave this a test and it worked as expected. For Kubeless I would suggest that instead of adding it as a subchart, I would do the same as we do with the service catalog: detect that it's missing and point to the `kubeless` chart that is in the `incubator` registry. If that gets complicated for any reason, we should show a custom error message in the Functions view.
Let me know when you want me to take a look to this again!
chart/kubeapps/templates/NOTES.txt
Outdated
{{- else if contains "ClusterIP" .Values.frontend.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kubeapps.fullname" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:8080
kubectl port-forward --namespace {{ .Release.Namespace}} $POD_NAME 8080:8080
ah good catch, thanks!
Is that what `kubeapps dashboard` technically used to do?
Does helm support subdirectories under `templates/`, i.e. `apprepository/`, `chart-svc/`?
Not for now, but I am curious if it would make sense to make them separated charts and the dashboard imports them as subcharts. As I said, not important since nobody is going to re-use them for now and we are going to package them all at the same time but food for thought.
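If the components were ever split out, Helm's dependency mechanism could pull them back into the dashboard chart; a hypothetical `requirements.yaml` sketch (chart names and versions are assumptions for illustration):

```yaml
# requirements.yaml (Helm 2) -- hypothetical layout with local subcharts
dependencies:
  - name: apprepository
    version: 0.1.0
    repository: "file://../apprepository"
  - name: chartsvc
    version: 0.1.0
    repository: "file://../chartsvc"
```

Each subchart would then be independently versionable, at the cost of the extra packaging overhead noted above.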
@@ -75,4 +78,7 @@ func init() {
flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
flag.StringVar(&repoSyncImage, "repo-sync-image", "kubeapps/chart-repo:latest", "container repo/image to use in CronJobs")
flag.StringVar(&namespace, "namespace", "kubeapps", "Namespace to discover AppRepository resources")
May I ask why these changes are related to the creation of a chart? Couldn't this have been done in another previous patch?
Just trying to understand the dynamics :)
This was needed because previously the apprepository-controller was hardcoded to use the `kubeapps` namespace, which was fine as `kubeapps up` didn't make that configurable. The chart can be installed in any namespace though (we could hardcode it, but that's bad practice), so this is needed for that to work.
@migmartri good points, I don't know if Helm supports directories inside templates, but another option is to just have the subcharts inside the
- update hardcoded references to the `kubeapps` namespace to the variable Kubeapps installation namespace
- remove Kubeless roles, as Kubeapps no longer bundles it and so is unable to create the necessary Roles in the `kubeless` namespace. The permission error alerts in Kubeapps should be sufficient for users using Kubeless within Kubeapps
@arapulido I will fix this in a separate PR as I need to rework the way we get the namespace in the dashboard a bit and don't want to add more to this. For now, this PR functionally works and so I think we can merge as-is. I believe I've addressed all the other review changes. Things to address in other PRs:
docs/user/access-control.md
Outdated
```

#### Write access to App Repositories

In order to create and refresh App Repositories in Kubeapps, apply the
`kubeapps-repositories-write` Role in the kubeapps namespace.
`kubeapps-repositories-write` Role in the namespace KUbeapps is installed in.
s/KUbeapps/Kubeapps
oops, thanks
For me, I am okay with merging this PR as it is and opening different ones to address the missing items, to avoid an even bigger PR.
chart/kubeapps/templates/NOTES.txt
Outdated
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.frontend.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w {{ template "kubeapps.fullname" . }}'
Remove "of"
httpGet:
  path: /healthz
  port: 8080
initialDelaySeconds: 60
In our production-ready infrastructure assets, these kinds of values go in values.yaml
right! We usually export health check settings so users can fine-tune them
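A sketch of what exporting the probe settings could look like (the value names here are hypothetical, chosen to mirror the probe fields above):

```yaml
# values.yaml (hypothetical keys)
livenessProbe:
  initialDelaySeconds: 60
  timeoutSeconds: 5

# deployment template: probe tunables read from values
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
  timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
```

Users can then tune the delays for slow-starting environments without editing the template.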
@prydonius you seem to have added the tarball
@sameersbn I was thinking of vendoring it, but I'll remove it
chart/kubeapps/values.yaml
Outdated
registry: docker.io
repository: kubeapps/chart-repo
tag: latest
jobsImage:
Add comment explaining that this is to perform a workaround.
Do we run it even if you are using the version of helm that does not require the workaround?
We do run it, yes. This workaround works with all versions of Helm, and I don't know if we can detect the new version easily enough to change the way we do this (also, it's still not released).
@migmartri fixed
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
Add hook annotations.
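As an illustration, hook annotations on the RBAC objects might look like this (the hook phase and delete policy are assumptions for a bootstrap-style RoleBinding, not taken from this PR):

```yaml
metadata:
  annotations:
    # run this object as a pre-install hook...
    "helm.sh/hook": pre-install
    # ...and clean it up once the hook has succeeded
    "helm.sh/hook-delete-policy": hook-succeeded
```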
chart: {{ template "kubeapps.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
Eventually it might make sense to bake this configuration into the image.
fixes #325
Todos: