Add scripts/manifests for new daemonset kubeadm approach #34
Conversation
This allows subsequent args (namely --pod-infra-container-image) to override any linux-specific flags we get
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: benmoss. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
kubeadm/kube-proxy/kube-proxy.yml

```yaml
valueFrom:
  fieldRef:
    fieldPath: status.podIP
image: sigwindowstools/kube-proxy:1.17.3
```
ideally we should not pin the kube-proxy version.
is there a way for the user to customize this during the initial script execution - e.g. using search and replace (sed)?
(i think) kube-proxy has a skew of +1 / -1 releases compared to core kube, so if this version is not updated and someone tries to deploy e.g. 1.19 it might fail.
Yeah, I was not sure what to do about this problem. This manifest is not deployed/edited by a script at all; the user is doing a kubectl apply with this file directly. So in the step where we direct the user to do this, we can document that they should pick the version that matches their node/cluster. We can provide a sed command to do so, that's pretty straightforward.
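A hedged sketch of what such a sed command could look like (the version numbers, file path, and one-line stand-in manifest are illustrative assumptions, not taken from the repo's docs):

```shell
# Hypothetical example: patch the pinned kube-proxy image tag to match the
# cluster's version before applying the manifest (versions illustrative).
# One-line stand-in for the real manifest, just for demonstration:
printf 'image: sigwindowstools/kube-proxy:1.17.3\n' > /tmp/kube-proxy.yml
KUBE_PROXY_VERSION="1.18.0"
sed "s|kube-proxy:1.17.3|kube-proxy:${KUBE_PROXY_VERSION}|" /tmp/kube-proxy.yml
# the patched output would then be fed to: kubectl apply -f -
```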
ideally it should be propagated from $KubernetesVersion in PrepareNode, so when the user picks a version the PowerShell patches the YAMLs that the user later applies.
The problem is that PrepareNode/InstallNssm are run on the Windows nodes, whereas kube-proxy.yml is applied from your workstation where you have kubectl access.
ok, i overlooked this. i guess it makes sense to document this and possibly give a sed example too. i don't think we want the users to apply the stock manifest without making sure it has the version they want.
kubeadm/kube-proxy/Dockerfile

```dockerfile
ARG k8sVersion="1.17.3"
```
Yeah, this is just the default, and so can be overridden with docker build --build-arg k8sVersion="1.18.0". We'll need to set up some automation to produce new images on new releases though.
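A hedged sketch of how a build wrapper could derive the image tag from the target version (the variable names and image name are assumptions mirroring the default above, not an actual script from this repo):

```shell
# Hypothetical wrapper logic: default to the Dockerfile's pinned version
# unless the caller exports K8S_VERSION (names illustrative).
K8S_VERSION="${K8S_VERSION:-1.17.3}"
IMAGE="sigwindowstools/kube-proxy:${K8S_VERSION}"
echo "$IMAGE"
# the actual build would then run:
#   docker build --build-arg k8sVersion="$K8S_VERSION" -t "$IMAGE" .
```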
i've been playing with GitHub actions in this regard, so automation seems possible.
but this goes back to the topic of whether we want to branch / tag this repository and/or have the install scripts in folders.
i think this is always the safest bet, because otherwise, if a single set of scripts and docker files is maintained for the whole support skew of k8s, it might end up with some if version ... branches.
What do you think of using github releases for the script and manifest artifacts? I don't want to create version subdirectories since they are going to be 99% identical to each other.
having releases / tags is fine, but without branches it will not allow backports.
one option is to start with linear tag increments on a single branch (master); if a change is required for a backport, a branch can be created.
thanks for adding this @benmoss @gab-satchi. what is the plan for deprecating and removing the old code here?
Avoid using the Windows directory to store non-official binaries
wins, for whatever reason, validates the path both inside the container and on the host, meaning the file has to exist and have the same checksum inside as well as outside.
Not sure, what do you think we should do? Delete it? Make some "archive" subdirectory or something?
given:
i think we can just delete the old code in a new PR.
I actually just realized that if we remove the old code it will break the old documentation. It links to things like https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip, which is just a zip of the current master of the repo. Not sure if that is a concern or not.
i'd leave it to sig-windows to establish the level of concern for this issue. k/website has a no-backport policy, but a backport for an old dev-1.x branch there might be needed here. using HEAD for documentation links is a bad idea, and ideally the old docs should be updated to point to commit SHAs.
Docs will specify that users need to substitute this with their k8s version; this avoids people accidentally using the wrong version, since now it will break.
thanks for the updates. /lgtm
No reason to run two scripts when one will do
Apparently the official site has some stability issues
This seems to mess up cloudbase-init somehow; the stderr from curl causes a "RemoteException"/"NativeCommandError"
```powershell
mkdir -force C:\etc\kubernetes\pki
New-Item -path C:\var\lib\kubelet\etc\kubernetes\pki -type SymbolicLink -value C:\etc\kubernetes\pki\

$StartKubeletFileContent = '$FileContent = Get-Content -Path "/var/lib/kubelet/kubeadm-flags.env"
```
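For context, kubeadm stores the kubelet's extra arguments in kubeadm-flags.env as a single KUBELET_KUBEADM_ARGS="..." line, which the snippet above reads at service start. A minimal POSIX-shell sketch of extracting those flags (the file path and flag value here are illustrative assumptions, not taken from this repo):

```shell
# Hypothetical sketch: extract the kubelet args kubeadm writes to
# kubeadm-flags.env (file contents and flag value are illustrative).
printf 'KUBELET_KUBEADM_ARGS="--pod-infra-container-image=pause:3.1"\n' > /tmp/kubeadm-flags.env
# Strip the variable name and the surrounding quotes to get the raw flags.
args=$(sed -e 's/^KUBELET_KUBEADM_ARGS=//' -e 's/^"//' -e 's/"$//' /tmp/kubeadm-flags.env)
echo "$args"   # prints --pod-infra-container-image=pause:3.1
```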
just pointing out that this PR will improve the kubelet flag management for kubeadm on Windows:
kubernetes/kubernetes#88287
/lgtm
/hold cancel
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@neolit123 @PatrickLang @michmike
kubeadm-flags.env file

Finalizing documentation here kubernetes/website#19217