Add multi-node support for host path driver #367
Conversation
Welcome @jsanda!
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Hi @jsanda. Thanks for your PR. I'm waiting for a kubernetes-csi or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
/assign
pkg/controller/controller.go (Outdated)

```diff
@@ -380,6 +380,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
 		}
 	}
 
+	if options.SelectedNode.Name != os.Getenv("NODE_NAME") {
```
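For context, the check relies on a NODE_NAME environment variable, which would typically be injected into each provisioner pod via the Downward API (fieldRef: spec.nodeName) — that deployment detail is an assumption, not part of this diff. A minimal standalone sketch of validating it at startup:

```go
package main

import (
	"fmt"
	"os"
)

// Sketch: validate NODE_NAME at startup. If it is empty, the node check in
// the diff above could never match a selected node, so failing fast is safer.
func main() {
	nodeName := os.Getenv("NODE_NAME")
	if nodeName == "" {
		fmt.Fprintln(os.Stderr, "NODE_NAME is not set; node check would reject every volume")
		os.Exit(1)
	}
	fmt.Println("provisioning restricted to node:", nodeName)
}
```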
I would add a flag to this binary to control this behavior. Otherwise this will break normal CSI drivers, where the provisioner can run on any node.
I added a flag, but not sure about the name.
I signed it
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jsanda
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
/cc
@aojea: GitHub didn't allow me to request PR reviews from the following users: aojea, bentheelder. Note that only kubernetes-csi members and repo collaborators can review this PR, and authors cannot review their own PRs.
I signed it
/ok-to-test
```diff
@@ -64,6 +64,7 @@ var (
 	leaderElectionType      = flag.String("leader-election-type", "endpoints", "the type of leader election, options are 'endpoints' (default) or 'leases' (strongly recommended). The 'endpoints' option is deprecated in favor of 'leases'.")
 	leaderElectionNamespace = flag.String("leader-election-namespace", "", "Namespace where the leader election resource lives. Defaults to the pod namespace if not set.")
 	strictTopology          = flag.Bool("strict-topology", false, "Passes only selected node topology to CreateVolume Request, unlike default behavior of passing aggregated cluster topologies that match with topology keys of the selected node.")
+	enableNodeCheck         = flag.Bool("enable-node-check", false, "Enables a check to see that the node selected by the scheduler for provisioning is this node.")
```
I would probably call it something like "local-node-mode", and a description like "when enabled, the provisioner only processes PVCs that have been scheduled to the same node it's running on".
```diff
@@ -395,6 +398,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
 		}
 	}
 
+	if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
+		return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
```
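The hunk above is truncated at the review anchor. Based on the error text quoted later in this thread ("Selected node () is not current node (xxxx)"), the return plausibly continues roughly as follows — a reconstruction, not the verbatim PR code; the Reason field matches lib-external-provisioner's IgnoredError type, the exact wording is an assumption:

```go
// Skip PVCs scheduled to other nodes without treating that as a hard failure.
if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
	return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
		Reason: fmt.Sprintf("Selected node (%s) is not current node (%s)",
			options.SelectedNode.Name, os.Getenv("NODE_NAME")),
	}
}
```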
I think we should return ProvisioningFinished here, so that lib-external-provisioner will stop retrying to call Provision(). cc @jsafrane to confirm

Hmmm, although will that cause lib-external-provisioner to remove the "selected-node" annotation and make the scheduler retry? Maybe we need lib-external-provisioner to be aware of this behavior too...
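To make the trade-off concrete, here is a standalone toy sketch of the two states under discussion. The constants are redefined locally so the sketch compiles on its own; in the real code they come from sig-storage-lib-external-provisioner's controller package, and whether ProvisioningFinished also clears the selected-node annotation is exactly the open question above:

```go
package main

import "fmt"

// ProvisioningState mirrors the two constants debated above; redefined here
// only so this sketch runs standalone.
type ProvisioningState string

const (
	// Nothing was done; the claim stays queued and Provision() is retried.
	ProvisioningNoChange ProvisioningState = "NoChange"
	// The attempt is final; the library stops retrying this claim.
	ProvisioningFinished ProvisioningState = "Finished"
)

func describe(s ProvisioningState) string {
	switch s {
	case ProvisioningNoChange:
		return "keep retrying Provision() for this claim"
	case ProvisioningFinished:
		return "stop retrying; on error the scheduler may need to pick a new node"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(ProvisioningNoChange, "=>", describe(ProvisioningNoChange))
	fmt.Println(ProvisioningFinished, "=>", describe(ProvisioningFinished))
}
```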
```diff
@@ -395,6 +398,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
 		}
 	}
 
+	if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
```
Isn't a node only selected for a PVC with late binding? What about PVCs without late binding? Are those not supported anymore when deploying the hostpath driver with provisioner on each node with node check enabled?
Correct, that will be a prerequisite for using this mode. Maybe we need to detect when delayed binding isn't in use and return an error message; a sketch of such a check follows.
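A hypothetical pre-check along those lines (not part of the PR), assuming the provisioner has the StorageClass object at hand:

```go
package controller

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
)

// checkDelayedBinding is a hypothetical pre-check: SelectedNode is only
// populated for delayed binding, so reject anything else up front with a
// clear message instead of a confusing node-mismatch error later.
func checkDelayedBinding(sc *storagev1.StorageClass) error {
	if sc.VolumeBindingMode == nil || *sc.VolumeBindingMode != storagev1.VolumeBindingWaitForFirstConsumer {
		return fmt.Errorf("node check requires volumeBindingMode: WaitForFirstConsumer on StorageClass %q", sc.Name)
	}
	return nil
}
```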
We could enable leader election in this mode and then provision volumes that don't have a node chosen yet on whichever node currently holds the leadership. I suspect that the csi-driver-host-path will not pass driver testing in this mode unless it still supports volumes with immediate binding.

If we combine that with yielding leadership after each provisioned volume, then such volumes would even be spread across the entire cluster. Not sure how important that is.
Is there a TL;DR or doc on "immediate" and "late" binding in this context somewhere? I don't think I've seen these referenced in the context of Kubernetes.
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode

Sorry, I wasn't using the official terminology. "late" = "WaitForFirstConsumer".

I think using leader election to support the "immediate" mode is an interesting idea, but it's going to be really hard to support both Immediate and WaitForFirstConsumer at the same time, because we don't want leader election in the latter case.

For now, I am fine with saying that you must use the WaitForFirstConsumer binding mode if you run the hostpath driver like this.
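To make "late binding" concrete for readers, here is the requirement expressed as the Go API object — an illustration only; the provisioner name hostpath.csi.k8s.io is the CSI hostpath driver's, the rest are example values:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A StorageClass with delayed binding: provisioning waits until a pod using
// the PVC is scheduled, so SelectedNode is populated for the provisioner.
func main() {
	wffc := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "csi-hostpath-sc"},
		Provisioner:       "hostpath.csi.k8s.io",
		VolumeBindingMode: &wffc,
	}
	fmt.Printf("%s: bindingMode=%s\n", sc.Name, *sc.VolumeBindingMode)
}
```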
> For now, I am fine with saying that you must use the WaitForFirstConsumer binding mode if you run the hostpath driver like this.

Yes, that might be good enough. Then a better error message when the provisioner encounters a volume without a node name would be nice, though. Right now the resulting message will be `Selected node () is not current node (xxxx)` (note the empty first parenthesis), which is a bit misleading.
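A sketch of the friendlier behavior suggested here, layered on the PR's check (the wording and structure are assumptions, not code from the PR):

```go
// Hypothetical refinement: report a useful error when no node was selected
// at all (immediate binding), instead of the misleading "Selected node ()".
if p.enableNodeCheck {
	if options.SelectedNode == nil || options.SelectedNode.Name == "" {
		return nil, controller.ProvisioningNoChange, fmt.Errorf(
			"node check is enabled but no node was selected; the StorageClass likely needs volumeBindingMode: WaitForFirstConsumer")
	}
	if options.SelectedNode.Name != os.Getenv("NODE_NAME") {
		return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
			Reason: fmt.Sprintf("selected node %q is not current node %q",
				options.SelectedNode.Name, os.Getenv("NODE_NAME")),
		}
	}
}
```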
Anyway, to get a better understanding of how complicated it would be, I implemented my idea. The result is here: pohly@df7b61e

I haven't tested whether it actually works, and I don't intend to pursue this further unless there is interest in the approach. My biggest concern is how costly leader election gets when many processes are involved. No idea how many will be "too many"...
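For readers who don't want to dig through the commit, the shape of the yield-after-each-volume idea using client-go's leaderelection package might look roughly like this — a sketch under stated assumptions, not pohly@df7b61e itself; provisionOnePendingVolume is a hypothetical helper and the lock construction is elided:

```go
package sketch

import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runOneProvisioningTurn competes for a lock shared by all per-node
// provisioners, handles one pending volume while leading, then cancels so
// the lock is released and another node can win the next turn.
func runOneProvisioningTurn(lock resourcelock.Interface, provisionOnePendingVolume func()) error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	le, err := leaderelection.NewLeaderElector(leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true, // release promptly so another node can take a turn
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				provisionOnePendingVolume() // hypothetical helper
				cancel()                    // yield leadership after one volume
			},
			OnStoppedLeading: func() {},
		},
	})
	if err != nil {
		return err
	}
	le.Run(ctx) // blocks until cancelled; the lock is released on return
	return nil
}
```

This also illustrates the cost concern: every pending volume forces a full round of elections across all participating nodes.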
/cc @andrewsykim
> For now, I am fine with saying that you must use the WaitForFirstConsumer binding mode if you run the hostpath driver like this.

As a first step that's okay. Just beware that in that mode it won't pass the E2E storage test suite, because I'm pretty sure that it expects immediate binding semantics in several places (create PVC, wait for it to be provisioned). Applications or operators may fail the same way.
Other plugins are running the external storage e2es using WaitForFirstConsumer with no issues. We made sure to write the tests in a way that supports both semantics.
Indeed, the wait for "PVC bound" is optional. Sorry, didn't know that. Please disregard my comment. It would still be interesting to know whether the full set of storage tests passes when using external-provisioner+hostpath driver in this mode.
gentle poke, what can we do to move this forward?
I would like to proceed with this PR with the current approach, with the limitation that the WaitForFirstConsumer binding mode must be used.
@jsanda: The following test failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
@jsanda: PR needs rebase.
My apologies for not getting back to this sooner. I had been out for a while (wife had a baby) and I was missing some notifications. I will go ahead and rebase my branch off master. |
@jsanda still interested in driving this?
Unfortunately I am not able to right now. I do want to acknowledge and say thank you to @msau42 for taking the time to work with me. I really appreciate it, and it makes for an awesome community experience!
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR makes it possible to run the host path driver across multiple nodes. I also need to submit a PR to https://github.com/kubernetes-csi/csi-driver-host-path with specs for running the driver as a DaemonSet.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
No