
.yaml refresher #625

Merged: 39 commits merged into jupyterhub:master from yaml-polishing-pr on Jun 6, 2018

Conversation

consideRatio (Member) commented Apr 2, 2018

About

The PR's size

Making many separate PRs would have been good for reviewability. I had that ambition initially, but making many PRs that each touched a lot of files, and the same files over and over, proved hard, so I gave up on it. I'm hoping this won't cause too much trouble - sorry in advance!

In this PR

I have aimed to make the .yaml consistent in various ways.

  • Labels (using helper)
  • Selectors (using helper)
  • Block indentation (nindent introduced)
  • List element indentation
  • Whitespace chomp left ({{- }})
  • Template function (include replaces template)
  • YAML and Helm comments (# vs {{/* */}})
  • Passing scope (with merge (dict "key1" "val1") .)
  • Utilize helm lint
  • Utilize yamllint like kubernetes/charts
  • Set up values for yamllint (lint-chart-values.yaml)
  • Utilize kubeval
  • Linting / Validation tooling (tools/lint.py)
  • Update guidelines (CONTRIBUTING.md)
  • Consider dynamic .yaml in chartpress
  • Consider dynamic .yaml in pod culler
  • Consider dynamic .yaml in kubespawner (kubespawner#159, kubespawner#161)
  • Test run cluster upgrades
  • Merge kubespawner#161 then bump to utilize it.

Related

Issues and PRs

#538 #622 #624 helm#3811 helm#3470 helm#3837 helm#3854 kubespawner#159 kubespawner#161

Documentation

Refactored from PR

betatim (Member) commented Apr 2, 2018

I am a sucker for consistent style, so + 💯 for this! :)

Do you want to split this into (many) smaller PRs, roughly one per bullet point maybe? This might reduce the amount of rebasing and trying to keep up with changes to the charts made by others.

From experience, if there isn't a linter nudging people to stick to a particular style, things drift apart over time. So I'd support an effort to add a linter or somehow automate fixing up the style.

consideRatio (Member, Author):

@betatim thanks for the encouragement! =D

I've mostly considered breaking the PR up to make code review easier for others. I'm currently trying to figure out how to do things properly for a small set of .yaml files; once that is done, I think I'll know better how to proceed.

It might become quite easy to make several PRs for the various bullet points if I manage to set up a linter that automates parts of this for me. I'll look into linter usage in kubernetes/charts!

consideRatio (Member, Author) commented Apr 2, 2018

Research

helm lint and yamllint

The kubernetes/charts repo contained contribution guidelines that suggested the use of helm lint and yamllint in conjunction with a yamllint config.
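A rough sketch of how that could be wired up here; the file names and flags below are assumptions rather than settled tooling:

# lint the chart itself, then the rendered output
helm lint ./jupyterhub --values lint-chart-values.yaml
helm template ./jupyterhub --values lint-chart-values.yaml > rendered.yaml
yamllint -c .yamllint.yaml rendered.yaml   # style checks on the rendered yaml
kubeval rendered.yaml                      # validate against the k8s schemas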

template and include

Using include instead of template adds benefits without drawbacks, as far as I know: include returns its output so it can be piped, while template cannot be.
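A minimal sketch of the difference, using our jupyterhub.labels helper:

# include is a function, so its output can be piped onward
{{ include "jupyterhub.labels" . | indent 4 }}

# template is an action rather than a function, so it can't be piped;
# a line like the following fails to parse
{{ template "jupyterhub.labels" . | indent 4 }}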

Sprig's nindent

Sprig extends Helm's Go templating, and Sprig's nindent function is available to us in Helm 2.7+. It first adds a newline, then indents. It allows us to create more readable YAML files.

# with nindent
metadata:
  labels:
    {{- include "jupyterhub.labels" . | nindent 4 }}
    extraLabel: fun

# without nindent
metadata:
  labels:
{{ include "jupyterhub.labels" . | indent 4 }}
    extraLabel: fun

# output
metadata:
  labels:
    component: "continuous-image-puller"
    app: jupyterhub
    chart: jupyterhub-v0.7-dev
    release: RELEASE-NAME
    heritage: Tiller
    extraLabel: fun
Passing merged scopes to templates

We can pass a dictionary of things to a template, but if we do, we might lose track of our previous scope. It was possible to use $. to access it, but that led to issues with a nested chart, which I have. A nicer solution is to pass an augmented scope to the template instead.

# Don't miss the . at the end; it is merged into the first dictionary
# The first dictionary is the one written to, so put the . last!
{{- $_ := merge (dict "key1" "value1" "key2" "value2") . -}}
{{ include "jupyterhub.templatename" $_ }}
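Inside the named template, the merged keys then sit next to the usual built-ins on the same scope (a hypothetical helper for illustration):

{{- define "jupyterhub.templatename" -}}
{{- /* both the merged-in key and the built-in resolve directly on . */}}
name: {{ .Release.Name }}-{{ .key1 }}
{{- end }}
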
Template comments

The details matter a lot when it comes to comments within Helm templates, especially when combined with whitespace chomping. In short, you should comment like {{/* or {{- /* and not {{ /* or {{-/*.

# OK
{{/* A helm comment */}}
# Helm template rendering error: unexpected "/" in command
{{ /* Causes error */ }}

# OK
{{- /* A helm comment with whitespace chomping */ -}}
# Helm template rendering error: illegal number syntax: "-"
{{- /* Causes error */ -}}

# OK
# A Yaml comment
The toYaml function

Apparently it emits a trailing newline that is hard to chomp away with {{- tricks. It can be removed with | trimSuffix "\n"; otherwise it trips yamllint's trailing-spaces rule, which we could alternatively disable in the yamllint config.
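For example (the values path here is illustrative, not from our chart):

# without trimSuffix: toYaml's trailing newline becomes a line of
# trailing spaces after nindent, which yamllint's trailing-spaces rule flags
resources:
  {{- .Values.singleuser.resources | toYaml | nindent 2 }}

# with trimSuffix: the trailing newline is chomped before indenting
resources:
  {{- .Values.singleuser.resources | toYaml | trimSuffix "\n" | nindent 2 }}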

Whitespace chomping

I established a system for whitespace chomping to follow in the .yaml files. I find it easier to reason about chomping backwards/up ({{-) than chomping ahead/down (-}}). If either would solve a task, prefer backwards/up chomping; things almost always turn out well if you consistently chomp backwards/up rather than ahead/down.

1. Always chomp left and not right.

ports:
  {{- if $manualHTTPS }}
  - containerPort: 8443
    name: proxy-https
  {{- end }}

2. Except when no content is available above (beginning of a file), then chomp right as well once before the first content.

{{- $manualHTTPS := (and .Values.proxy.https.enabled (eq .Values.proxy.https.type "manual")) }}
{{- if $manualHTTPS -}}
apiVersion: v1
# ...
{{- end }}

3. And don't chomp if you are inline with other content

data:
  tls.crt: {{ .Values.proxy.https.manual.cert | b64enc }}
  tls.key: {{ .Values.proxy.https.manual.key | b64enc }}
Future stuff - Sprig's ternary

Sprig gained a ternary function on 28 March 2018. My understanding is that this only influences Helm on the client side when rendering the chart, allowing us to make use of it quickly. It can resolve some of the challenges discussed here.
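A minimal sketch of what ternary would let us write (the value path is illustrative):

# picks "https" when https is enabled, "http" otherwise
{{- $scheme := ternary "https" "http" .Values.proxy.https.enabled }}
url: {{ $scheme }}://proxy-public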

consideRatio added a commit to consideRatio/zero-to-jupyterhub-k8s that referenced this pull request Apr 5, 2018
consideRatio added a commit to consideRatio/zero-to-jupyterhub-k8s that referenced this pull request Apr 6, 2018
consideRatio (Member, Author) commented Apr 6, 2018

I'm thinking I should document preliminary code conventions for this repo, but I'm not sure where it would be suitable to document them.

Within CONTRIBUTING.md? Suggestions?
/cc @betatim @yuvipanda

choldgraf (Member):

I think it's a good idea! +1 to CONTRIBUTING.md

yuvipanda (Collaborator) left a comment

I like this very much! Left some minor comments. Needs testing to make sure we don't break people upgrading from last released version of the chart.

Thank you very much for doing this work! It's important for the long term health of the project!


It might take a few minutes for it to appear!

Note that this is still an alpha release! If you have questions, feel free to
1. Come chat with us at https://gitter.im/jupyterhub/jupyterhub
2. File issues at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues
3. Read the guide at https://z2jh.jupyter.org
yuvipanda (Collaborator):

I like this addition!

@@ -0,0 +1,54 @@
{{- /*
- component:
- A: .componentLabel
yuvipanda (Collaborator):

Can you expand a bit more on the A / B / C and what they mean?

@@ -1,30 +1,32 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
yuvipanda (Collaborator):

We should make sure that upgrading from an older version of the chart continues to work after this change!

consideRatio (Member, Author):

  • I wonder how Helm behaves; I figure the old deployment will be deleted fully and then recreated, but I don't think it should be an issue.
  • Would you suggest we simply test this on some deployment to make sure it works before merging?

spec:
  podSelector:
    matchLabels:
      app: jupyterhub
      component: singleuser-server
{{- $_ := merge (dict "componentLabel" "singleuser-server") . }}
yuvipanda (Collaborator):

Can you expand a bit on what this is doing?

consideRatio (Member, Author) commented Apr 18, 2018:

Yes, check out the text I added to #625 (comment) under "Passing merged scopes to templates". It might sometimes be overkill, but often it is a nice practice.

For example, we used to have a daemonset template that was passed a dictionary with two keys, one called "name" and one called "top". With this technique, you instead pass a scope that merges the name key/value into the current scope/dict, removing the need to write .top.Release.Name instead of simply .Release.Name within the helper.
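A compact before/after sketch of that (the names are hypothetical):

{{- /* before: the parent scope is tucked under a "top" key,
       so the helper has to write .top.Release.Name */}}
{{- $oldScope := dict "name" "image-puller" "top" . }}

{{- /* after: merge folds the parent scope in,
       so the helper can simply write .Release.Name */}}
{{- $newScope := merge (dict "name" "image-puller") . }}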

consideRatio (Member, Author) commented Apr 18, 2018:

Thank you for reviewing @yuvipanda I find it really motivating ❤️

consideRatio added a commit to consideRatio/kubespawner that referenced this pull request Apr 18, 2018
consideRatio added a commit to consideRatio/zero-to-jupyterhub-k8s that referenced this pull request Apr 18, 2018
consideRatio added a commit to consideRatio/kubespawner that referenced this pull request Apr 18, 2018
@consideRatio consideRatio force-pushed the yaml-polishing-pr branch 2 times, most recently from dbb2e70 to 02debad Compare April 18, 2018 22:52
@@ -0,0 +1,58 @@
#!/usr/bin/env python3
gsemet (Contributor):

What about using pipenv for that, at least to guarantee reproducibility in CI?

pipenv install yamllint
wget https://github.com/garethr/kubeval/releases/download/0.6.0/kubeval-darwin-amd64.tar.gz
tar xf kubeval-darwin-amd64.tar.gz
cp kubeval $(pipenv --venv)/bin

That way, pipenv run yamllint or pipenv run kubeval will always work, whether inside CI or in the user's env.

consideRatio (Member, Author) commented May 3, 2018:

Ah, thanks @gsemet for taking the time to review! I have a lot to learn regarding execution environments; I'll start by considering this!

My TODO list

  • Read up on pipenv and figure out differences with venv and virtualenv.
  • Ensure a robust execution environment

Member:

The shebang here is already correct. While individuals are free to use pipenv, CI already has an env, so pip install is all that should be used there.

minrk (Member) commented May 7, 2018

@consideRatio I'm excited to get this landed, since it will inevitably conflict with many new PRs and I don't want you to have to spend your whole life rebasing and resolving conflicts! What are the next steps and how can we help get this finished?

consideRatio (Member, Author):

@minrk thanks for the encouragement! I will try to split out some details related to the kubespawner labels that are keeping me from removing WIP. Then some pre-merge testing would be great, since I'm influencing so much code with this! I test this on my cluster, but I would appreciate it if someone else also tried it.

It might take a week until then though; I'm on a climbing vacation without easy access to my Linux computer at home where I code.

consideRatio added a commit to consideRatio/kubespawner that referenced this pull request May 13, 2018
willingc (Collaborator):

Beautiful!!! Thanks @consideRatio

choldgraf (Member):

What's left to do on this PR? Just have a few pairs of eyeballs take a look at it?

gsemet (Contributor) commented Jun 6, 2018

this looks great :)

minrk (Member) commented Jun 6, 2018

This is really great! I was able to deploy master and upgrade to this, and it all went smoothly. (I did have to use --force due to certain changes, but users were running during the upgrade, the running user stayed accessible, and no volumes were disrupted.)

My notes:

2018-06-06

Starting from v0.7-f7fa978

helm install --name test jupyterhub/jupyterhub --version=v0.7-f7fa978 -f config.yaml

config.yaml:

hub:
  cookieSecret: "..."
proxy:
  secretToken: "..."
  https:
    hosts:
      - test-k8s.jovyan.org
    letsencrypt:
      contactEmail: my@email.com

Log in and start a user, so that we are doing the upgrade with running users.

Upgrade, in zero-to-jupyterhub:

  • checkout pull request 625
  • edit chartpress.yaml, set imagePrefix to minrk/k8s- to avoid pushing to official repo
  • chartpress --push

Run the upgrade:

helm upgrade test /path/to/zero-to-jupyterhub/jupyterhub -f config.yaml

This fails (as expected) because not everything can upgrade with a patch:

Error: UPGRADE FAILED: Deployment.apps "hub" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"hub.jupyter.org/network-access-singleuser":"true", "heritage":"Tiller", "release":"test", "hub.jupyter.org/network-access-proxy-api":"true", "hub.jupyter.org/network-access-proxy-http":"true", "app":"jupyterhub", "component":"hub", "chart":"jupyterhub-v0.7-4a728c5"}: `selector` does not match template `labels` && Deployment.apps "proxy" is invalid: [spec.template.metadata.labels: Invalid value: map[string]string{"component":"proxy", "heritage":"Tiller", "hub.jupyter.org/network-access-hub":"true", "hub.jupyter.org/network-access-singleuser":"true", "app":"jupyterhub", "chart":"jupyterhub-v0.7-4a728c5", "release":"test"}: `selector` does not match template `labels`, spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"hub.jupyter.org/network-access-singleuser":"true", "name":"proxy", "release":"test", "app":"jupyterhub", "component":"proxy", "heritage":"Tiller", "hub.jupyter.org/network-access-hub":"true"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable] && PodDisruptionBudget.policy "hub" is invalid: spec: Forbidden: updates to poddisruptionbudget spec are forbidden. && PodDisruptionBudget.policy "proxy" is invalid: spec: Forbidden: updates to poddisruptionbudget spec are forbidden.

Now, re-run upgrade with --force:
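That is, presumably the previous command with --force appended:

helm upgrade test /path/to/zero-to-jupyterhub/jupyterhub -f config.yaml --force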

Success!

Verify:

  • hub and proxy pods are restarted
  • users running from before the upgrade are still running and accessible
  • volumes/database are preserved
  • yay!

@minrk minrk merged commit a41c9c6 into jupyterhub:master Jun 6, 2018
willingc (Collaborator) commented Jun 6, 2018

Thanks @consideRatio 🎉 ☀️

consideRatio (Member, Author):

Wieee! Thank you all for the attention and the help with this PR! Great that you also tested it, @minrk!

I'm looking into the chart label issue causing restarts on hub/proxy right now, @minrk!

choldgraf (Member):

super awesome! TBH, given how many things this PR touched, I'm surprised that it only took 2 months!
