
Trigger upgrades automatically on statefulset change #55

Merged: 2 commits into master, Nov 28, 2018

Conversation

@solsson (Contributor) commented Aug 2, 2017

Any votes on this? No need to do kubectl delete manually, but on the other hand the pod restart sequence may come as a surprise. Also I think it requires Kubernetes 1.7.
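Concretely, the upgrade flow this enables should amount to no more than the following (a sketch, not part of the PR; the manifest path and kafka namespace are assumed from this repo's layout):

kubectl -n kafka apply -f kafka/50kafka.yml
kubectl -n kafka get pods -w    # kafka-2, kafka-1, kafka-0 are terminated and recreated in turn, highest ordinal first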

@BenjaminDavison commented

Yeah, it requires 1.7.

@solsson (Contributor, Author) commented Nov 3, 2017

Proposing that this gets merged to master for the next major release; see #84.

@solsson added this to the v3.0 milestone Nov 7, 2017
@solsson (Contributor, Author) commented Nov 7, 2017

Apparently I didn't read up on this enough. For example, the readiness probe change in #81 did not trigger restarts.

@solsson removed the merge1.8 label Nov 7, 2017
@elm- (Contributor) commented Nov 7, 2017

Has the update behaviour changed in 1.8? I had an issue with this when testing rolling updates on StatefulSets: when I tried upgrading Kafka, the set was first scaled down to 0 and then back up to full size, which means downtime. The way I did it instead was to stage the update with apply, delete one pod, wait until it was back up again, and then pick the next one.
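For reference, that stage-then-delete procedure amounts to roughly the following (a sketch: the manifest path matches this repo, while the three replicas, pod names and namespace are illustrative, and it assumes OnDelete-style behaviour where apply alone restarts nothing):

kubectl -n kafka apply -f kafka/50kafka.yml   # stage the new pod template; nothing restarts yet
for i in 2 1 0; do
  kubectl -n kafka delete pod kafka-$i
  sleep 10    # give the controller a moment to replace the pod before polling readiness
  until [ "$(kubectl -n kafka get pod kafka-$i -o jsonpath='{.status.containerStatuses[0].ready}' 2>/dev/null)" = "true" ]; do
    sleep 5
  done
done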

@solsson modified the milestones: v3.0, v3.1 Nov 9, 2017
@solsson (Contributor, Author) commented Nov 9, 2017

Under the circumstances I will postpone this. My thinking goes, as in #81 (comment), that the scope of this repo isn't - at least not yet :) - to automate Kafka ops tasks that are manual in a typical non-k8s setup.

Also, there won't be any automatic upgrade on configmap changes, though they can be as fundamental as an image or env change, so RollingUpdate might be misleading.

@solsson removed this from the v3.1 milestone Nov 9, 2017
@solsson (Contributor, Author) commented Nov 9, 2017

Haha, just discovered that RollingUpdate is default. Will produce a PR that reverts to OnDelete, based on the argument above.

Upon a fresh create from master in minikube v1.8.0:

$ kubectl get statefulset kafka -o yaml | grep -A 3 updateStrategy
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate

To see this in action, with no other changes:

$ kubectl -n kafka patch statefulset kafka --type='json' -p='[{"op": "replace", "path": "/spec/updateStrategy/type", "value": "RollingUpdate"}]'
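Reverting to OnDelete, as proposed above, would presumably be the mirror image of that patch (a sketch of the effect on a live cluster, not the actual manifest change in the PR):

$ kubectl -n kafka patch statefulset kafka --type='json' -p='[{"op": "replace", "path": "/spec/updateStrategy", "value": {"type": "OnDelete"}}]'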
@solsson added this to the 5.0 - Java 11 milestone Nov 28, 2018
@solsson (Contributor, Author) commented Nov 28, 2018

I guess that recent adopters of Kubernetes will be surprised by not having RollingUpdate. Oldies like me should simply embrace pod volatility :)

@solsson requested a review from atamon November 28, 2018 14:31
@solsson (Contributor, Author) commented Nov 28, 2018

The biggest flaw here is that we don't restart on changes to the -config ConfigMap. Is anyone aware of a kube-native mechanism for restarting based on such a dependency?

@atamon (Contributor) commented Nov 28, 2018

A common pattern seems to be to have a sidecar container that mounts the same ConfigMap as a volume, detects changes inside it, and then triggers a reload. https://github.com/coreos/prometheus-operator does this for its Prometheus StatefulSets, for example.

Googled a bit and found https://stackoverflow.com/a/37331352/1740999, which has some interesting ideas.

EDIT: I find this quite ingenious (quoted from the link above): “You can also eg: mount the same config map in 2 containers, expose a http health check in the second container that fails if the hash of config map contents changes, and shove that as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.”

Adding sidecars always feels a bit cumbersome though.
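A sidecar-free variant of the same idea, sketched here with an assumed ConfigMap name (broker-config) rather than anything taken from this repo's manifests: stamp a hash of the config onto the pod template, so that a config change becomes a pod-template change and the RollingUpdate strategy from this PR handles the restarts.

CHECKSUM=$(kubectl -n kafka get configmap broker-config -o yaml | sha256sum | cut -d' ' -f1)
kubectl -n kafka patch statefulset kafka -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"config-checksum\":\"$CHECKSUM\"}}}}}"

The downside is that someone (or something, e.g. a CI job) still has to run this after editing the config; it just replaces the manual pod deletes with a single patch.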

@solsson (Contributor, Author) commented Nov 28, 2018

Adding sidecars always feels a bit cumbersome though.

Not when they're ingenious 😬 ... I created #225 to track this.

@solsson merged commit 7cf0988 into master Nov 28, 2018