Update kubeadm Windows KEP to reflect new DaemonSet-based approach #1456
Conversation
Signed-off-by: Gab Satchi <gsatchithanantham@pivotal.io>
thanks for the update,
👍
this is unfortunately not true currently, and i'm hoping that people will sign up to do that:
people gave some feedback and there are a number of issues and PRs logged in the repository, but nobody is taking ownership.
"complete" might be demanding a bit too much. i think we might want to move this to the GA graduation section.
this section needs an update:
we should definitely say that we are going to use CAPA for this.
it will need updates for the beta docs, given the changes in the script.
PR is here:
Signed-off-by: Gab Satchi <gsatchithanantham@pivotal.io>
We won't be adding any additional preflight checks
The plan is that @gab-satchi and I will take ownership of the scripts once they've been updated in sig-windows-tools. We just haven't been supporting the existing scripts because we've been investigating this alternate approach, which we think should make them a lot simpler to maintain and extend.
I've updated this section
Yes, once the KEP is merged we will move new scripts into sig-windows-tools and update docs.
I think we might be able to just document
thanks for the updates!
/hold
/assign @timothysc
@@ -115,23 +115,35 @@ The motivation for this KEP is to provide a tool to allow users to take a Window
A user will have a Windows machine that they want to join to an existing Kubernetes cluster.
The user value or user story could be re-framed to be
"A user can join a Windows VM to an existing Kubernetes Cluster"
Somehow Github didn't allow me to select the line that I actually wanted to review, so here it is!
A user will have a Windows machine that they want to join to an existing Kubernetes cluster.
The user value or user story could be re-framed to be
"A user can join a Windows VM to an existing Kubernetes Cluster"
Windows Nodes in a k8s cluster are not necessarily VMs only, thus "machines" is a better generalization.
/approve
/hold
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: benmoss, neolit123

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
The Windows land is out of my area of knowledge, but my personal understanding is that the user story is much simpler now. If introducing the new dependency on wins is OK for the majority, I'm +1 on these changes.
/milestone v1.18
/hold cancel
#### Provisioning script creates a fake host network

In order for the Flannel and kube-proxy DaemonSets to run before CNI has been initialized, they need to be running with `hostNetwork: true` on their Pod specs. This is the established pattern on Linux for bootstrapping CNI, and we are utilizing it here as well. This is in spite of the fact that our containers will not actually need networking at all, since the actual Flannel/kube-proxy process will be running outside of the container through wins.
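For illustration, a trimmed sketch of what such a DaemonSet could look like is below. The names, labels, and image are placeholder assumptions rather than the manifests this KEP will ship, and the `rancher_wins` pipe path is an assumption about where wins listens:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-windows   # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-windows
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-windows
    spec:
      # hostNetwork lets the pod start before any CNI plugin is initialized,
      # even though the container itself does no real networking.
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: kube-proxy
        image: example.com/kube-proxy-windows:v1.17   # placeholder image
        volumeMounts:
        - name: wins-pipe
          mountPath: \\.\pipe\rancher_wins            # assumed wins pipe path
      volumes:
      - name: wins-pipe
        hostPath:
          path: \\.\pipe\rancher_wins
```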
What are the advantages to running this way, as opposed to just using a native service (as with kubelet)?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
kube-proxy and flanneld require a scoped kubeconfig to access certain resources related to these components. the kubeconfig has to be copied to the workers, using e.g. SSH.
running kube-proxy and flannel as DaemonSets:
- solves the problem of having to copy the privileged kubeconfig as part of the setup steps.
- aligns the deployment of these components with kubeadm on Linux, where they are run as DaemonSets.
How will kube-proxy access its service account credentials?
this is currently done like so:
https://github.com/benmoss/kubeadm-windows/blob/master/kube-proxy/kube-proxy.yml#L10
the KubeProxyConfiguration has:
`certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
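For context, an in-cluster kubeconfig built on the mounted service account credentials generally looks like the minimal sketch below (illustrative only; not the exact contents of the linked file):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    # The CA bundle and token are projected into every pod
    # from its service account by the kubelet.
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://kubernetes.default.svc
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- name: default
  context:
    cluster: default
    namespace: kube-system
    user: default
current-context: default
```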
### Risks and Mitigations

*Mitigation*: Access to the wins named pipe can be restricted using a PSP that either disables `hostPath` volume mounts or restricts the paths that can be mounted. A sample PSP will be provided.
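As a rough sketch of the restrictive option, a policy that simply omits `hostPath` from the allowed volume types could look like this (illustrative only; not the sample PSP the KEP promises, and the name is hypothetical):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-wins-pipe   # hypothetical name
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # hostPath is deliberately absent from this list, so pods governed by
  # this policy cannot mount the wins named pipe (or any other host path).
  volumes:
  - configMap
  - secret
  - emptyDir
  - downwardAPI
  - projected
  - persistentVolumeClaim
```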
I'm not wild about depending on PSP to enforce this behavior. Here are 2 alternative ideas:

1. Custom admission controller for dealing with windows pods. Only allow the `wins` pipe to be mounted (applies to the whole directory tree - does windows support hard linking?) if the requesting container is running as `privileged`.
2. Put an authenticating proxy in front of wins. Require a service account token to make privileged requests, and check against the podspec whether a request with that privilege level should be allowed. I'm not sure I understand how windows containers / wins work well enough to know whether this approach would be viable.
3. (A weaker version of 2) Use the authenticating proxy, but just check for the presence of a static secret. The secret would be added to a kubernetes secret, and mounted into containers that need to access wins. Perhaps rancher would consider an upstream patch to build this capability in?
> I'm not wild about depending on PSP to enforce this behavior.

i concur.
would defer to @benmoss and @PatrickLang for the Windows-specific questions.
1 seems fine to me, but if I understand this correctly it requires that the controller is included in core k8s. AFAIK, wins currently lacks an authorization / authentication model, so maybe this is the way forward as long as the project is OK with that.
Thanks @tallclair @neolit123
We arrived at using PSPs by following the pattern for privileged containers on Linux, since privileged containers are similar to what is being offered by wins. Can you elaborate on your reluctance to use PSPs?
- Are PSPs insufficient in protecting against malicious pods mounting that wins pipe?
- Is it the idea of relying on an operator to apply PSPs and not having things secured out of the box that's hard to accept?
thanks for the comments @tallclair. @benmoss is the main driver for this change, but he is on PTO until mid-February.
Based on discussions we've had with SIG Windows and SIG Cluster Lifecycle, we've decided to revise this KEP to reflect a new approach that uses DaemonSets to run CNI and kube-proxy. This is the first pass at updating it to reflect the new workflow.
@PatrickLang
@neolit123
/assign @michmike @fabriziopandini @timothysc
/sig windows