Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
CLA signed
/assign
/assign @unguiculus
Thanks for the PR. Before I continue reviewing it, please make sure you follow our review guidelines and apply best practices. In general we want separate files per resource (with an exception for RBAC stuff).
https://github.com/kubernetes/charts/blob/master/REVIEW_GUIDELINES.md
https://github.com/kubernetes/helm/blob/master/docs/chart_best_practices/rbac.md
incubator/ark-server/README.md
Outdated
## ConfigMap customization
Since we want to have a customizable chart, it's important that the ConfigMap is a template and not a static file.
To do this we add the keyword `tpl` when reading the file:
- {{ (tpl (.Files.Glob "static/*").AsConfig .) | indent 2 }}
Why don't you just add it to the `templates` folder? It doesn't make sense to me to have it in a `static` folder when it is not static.
For the reason I explained in the Premise: Helm cannot properly handle CRDs because it has a validation mechanism that checks the installation before the CRDs are actually created, hence each resource that uses a CRD cannot be validated, because the CRD doesn't exist yet!
If you move the file into `templates` you get the error `Error: apiVersion "ark.heptio.com/v1" in ark-server/templates/01-config-deploy.yaml is not available`.
But I agree that the name `static` is not appropriate.
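For reference, a minimal sketch of a ConfigMap template built around the `tpl`/`.Files.Glob` line from the README snippet above. It assumes the manifests live under a `static/` directory in the chart and that a standard `ark.fullname` naming helper exists; both are assumptions, not necessarily what the final chart uses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # "ark.fullname" is an assumed helper name, not confirmed by the chart
  name: {{ template "ark.fullname" . }}
data:
  # Render every file under static/ through the template engine before
  # embedding it, so template expressions inside the files are resolved
{{ (tpl (.Files.Glob "static/*").AsConfig .) | indent 2 }}
```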
incubator/ark-server/README.md
Outdated
### Heptio Secret
The Ark server needs an IAM service account in order to run; if you don't have one you must create it:
https://github.com/heptio/ark/blob/v0.6.0/docs/cloud-provider-specifics.md#gcp
0.7.0
incubator/ark-server/README.md
Outdated
### Bucket and Project name
Please change the bucket and project/region name in the values.yaml file.
See here for possible values: https://github.com/heptio/ark/blob/v0.6.0/docs/cloud-provider-specifics.md
0.7.0
---
apiVersion: apps/v1beta1
kind: Deployment
Why is this not in `templates`?
I moved it to `templates`.
stable/ark/templates/deployment.yaml
Outdated
- name: plugins
  mountPath: /plugins
{{- if (or (eq .Values.configuration.cloudprovider "aws") (eq .Values.configuration.cloudprovider "gcp")) }}
- name: cloud-credentials
In my opinion, the name should be a configurable variable.
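A sketch of how that could look with the name driven from values. Here `credentials.volumeName` and the `/credentials` mount path are hypothetical, not keys the chart necessarily defines:

```yaml
{{- if or (eq .Values.configuration.cloudprovider "aws") (eq .Values.configuration.cloudprovider "gcp") }}
# credentials.volumeName is a hypothetical values key used for illustration
- name: {{ .Values.credentials.volumeName | default "cloud-credentials" }}
  mountPath: /credentials
{{- end }}
```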
- name: delete-ark-config
  image: {{ required "A docker image with kubectl" .Values.kubectl.image.repository }}/{{ required "A docker image with kubectl" .Values.kubectl.image.tag }}
  imagePullPolicy: Always
  command: ["kubectl", "delete", "-f", "/tmp/"]
If the image has entrypoint `kubectl` or command `kubectl`, you can pass just the arguments, without the `kubectl`, such as here: https://hub.docker.com/r/lachlanevenson/k8s-kubectl/ imo.
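For illustration, a sketch of that suggestion using an image whose entrypoint is already `kubectl`; the tag and pull policy here are assumptions, not the chart's actual values:

```yaml
- name: delete-ark-config
  # lachlanevenson/k8s-kubectl ships kubectl as its entrypoint, so only
  # the arguments need to be passed
  image: lachlanevenson/k8s-kubectl:latest
  imagePullPolicy: IfNotPresent
  args: ["delete", "-f", "/tmp/"]
```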
I'll probably have some time to more thoroughly test and review the PR today. If you don't mind, I'm going to help fix things.
@unguiculus you're welcome to help :)
I made quite a few changes. I'll continue with this next week and hope to push something then.
/ok-to-test
I pushed a few changes:
I tested it successfully on my GKE cluster.
@domcar @svyotov @sortigoza @ncdc PTAL
@unguiculus I'm fine with the changes and I could successfully deploy using an existing secret. But it doesn't work when putting the secrets into `values.yaml`.
@domcar I tested it again. It works with limitations. Note that base64 encoding is done by the chart; you need to specify the values in plain text. A few examples:

credentials:
  secretContents:
    AZURE_SUBSCRIPTION_ID: ...
    AZURE_TENANT_ID: ...
    AZURE_RESOURCE_GROUP: ...
    AZURE_CLIENT_ID: ...
    AZURE_CLIENT_SECRET: ...
    AZURE_STORAGE_ACCOUNT_ID: ...
    AZURE_STORAGE_KEY: ...

credentials:
  secretContents:
    cloud: |
      {
        "type": "service_account",
        "project_id": "my-project",
        "private_key_id": "123456789",
        "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "client_email": "service@my-project.iam.gserviceaccount.com",
        "client_id": "123456789",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service%40my-project.iam.gserviceaccount.com"
      }

I didn't manage to pass the GCP service account key from the last example on the command line. Some escaping would probably be necessary.
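For context, a minimal sketch of how a chart template could turn those plain-text values into a Secret; the `ark.fullname` helper name is an assumption and the actual template in the chart may differ:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # "ark.fullname" is an assumed helper name
  name: {{ template "ark.fullname" . }}
type: Opaque
data:
{{- range $key, $value := .Values.credentials.secretContents }}
  {{ $key }}: {{ $value | b64enc | quote }}
{{- end }}
```

Regarding the command-line escaping issue: newer Helm versions also offer `--set-file` (e.g. `--set-file credentials.secretContents.cloud=service-account.json`), which avoids quoting the JSON by hand; whether that applies here depends on the Helm version in use.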
@unguiculus ok thanks, I managed to deploy on GCP with the secret in the `values.yaml` file.
@domcar Can we merge it?
yes :)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: domcar, unguiculus. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
* first commit
* changes SA name
* adds correct rbac rules
* renames files; add more customizable vars
* updates readme
* adds notes
* removes config values
* changes email in chart
* updates readme
* test changes author
* test change email
* test change email
* adds prerequisites in readme
* fixes typo
* adds AWS
* updates to version 0.7
* updates version in chart
* adds repo source; removes unnecessary values
* moves deployment to templates
* renames folder
* updates to 0.7.1
* creates ark sa in helpers; separates files according to object type
* updates version in chart
* adds home to chart
* updates to v0.7.1
* modifies chart according to best practices
* removes blank line
* adds delete backup hook to solve issue crd backup not deleting
* adjusts indentation, renames file using hyphen
* moves folder to stable
* remove unnecessary test condition
* Update to v0.8.1; Add support for Azure
* Update readme
* Add annotation for kube2iam
* Add image for hooks. Update readme
* Rename serviceaccount
* Use Get instead of Glob
* Remove namespace
* Add PullPolicy; Modify readme
* Rename Chart ark
* Add standard labels to resources
* Add customizable toleration and nodeselector
* Add missing labels; Use image with tag
* Implement suggestion
* Various updates
* Add missing if block around delete hook
Premise
Helm cannot properly handle CRDs because it has a validation mechanism that checks the installation before the CRDs are actually created; hence each resource that uses a CRD cannot be validated, because the CRD doesn't exist yet!
Solution
The solution here is to create the CRDs via the Helm chart, and only afterwards (using a `post-install` hook) install the resources with a container. The container's only job is to execute `kubectl create -f filename` and create the resources. At the same time, the resources created with the hook are completely transparent to Helm, that is, when you delete the chart those resources remain there. Hence we need a second hook for deleting them.
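A minimal sketch of what such a pair of hook Jobs could look like, assuming the rendered manifests are mounted at `/tmp` from a ConfigMap; the names, image, and mount details are illustrative, not the chart's exact implementation:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-ark-config            # illustrative name
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: create-ark-config
          image: lachlanevenson/k8s-kubectl:latest   # any image with kubectl
          args: ["create", "-f", "/tmp/"]
          volumeMounts:
            - name: ark-config
              mountPath: /tmp
      volumes:
        - name: ark-config
          configMap:
            name: ark-config         # illustrative ConfigMap name
---
# Mirror job that removes the hook-created resources when the release is deleted
apiVersion: batch/v1
kind: Job
metadata:
  name: delete-ark-config            # illustrative name
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: delete-ark-config
          image: lachlanevenson/k8s-kubectl:latest
          args: ["delete", "-f", "/tmp/"]
          volumeMounts:
            - name: ark-config
              mountPath: /tmp
      volumes:
        - name: ark-config
          configMap:
            name: ark-config
```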