
Add multi-node support for host path driver #367

Closed
wants to merge 5 commits

Conversation

@jsanda (Contributor) commented Oct 9, 2019

What type of PR is this?

/kind feature

What this PR does / why we need it:
This PR makes it possible to run the host path driver across multiple nodes. I also need to submit a PR to https://github.com/kubernetes-csi/csi-driver-host-path with specs for running the driver as a DaemonSet.
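For context, a minimal hypothetical sketch of the per-node check this PR adds (not the PR's exact code): each provisioner instance in the DaemonSet reads its own node name from the NODE_NAME environment variable (typically injected via the downward API) and skips claims that the scheduler assigned to a different node. The helper name shouldProvisionLocally is made up for illustration.

import (
	"fmt"
	"os"
)

// shouldProvisionLocally returns nil when the PVC's selected node matches the node
// this provisioner instance runs on, and an error to be ignored otherwise.
func shouldProvisionLocally(selectedNode string) error {
	// NODE_NAME is assumed to be set per pod, e.g. from spec.nodeName via the downward API.
	nodeName := os.Getenv("NODE_NAME")
	if selectedNode != nodeName {
		return fmt.Errorf("selected node (%s) is not current node (%s)", selectedNode, nodeName)
	}
	return nil
}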

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:
No

Add support for running host path driver on multiple nodes

@k8s-ci-robot (Contributor)

Welcome @jsanda!

It looks like this is your first PR to kubernetes-csi/external-provisioner 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-csi/external-provisioner has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot added the release-note and kind/feature labels on Oct 9, 2019.
@k8s-ci-robot (Contributor)

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the cncf-cla: no label on Oct 9, 2019.
@k8s-ci-robot (Contributor)

Hi @jsanda. Thanks for your PR.

I'm waiting for a kubernetes-csi or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-ok-to-test and size/XS labels on Oct 9, 2019.
@msau42 (Collaborator) commented Oct 9, 2019

/assign

@@ -380,6 +380,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
}
}

if options.SelectedNode.Name != os.Getenv("NODE_NAME") {
Collaborator:

I would add a flag to this binary to control this behavior; otherwise this will break normal CSI drivers where the provisioner can run on any node.

Contributor Author:

I added a flag, but not sure about the name.

@jsanda (Contributor Author) commented Oct 9, 2019

I signed it

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jsanda
To complete the pull request process, please assign msau42
You can assign the PR to them by writing /assign @msau42 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the needs-rebase and size/S labels and removed the size/XS label on Oct 9, 2019.
@k8s-ci-robot removed the needs-rebase label on Oct 9, 2019.
@aojea commented Oct 9, 2019

/cc
/cc @BenTheElder

@k8s-ci-robot (Contributor)

@aojea: GitHub didn't allow me to request PR reviews from the following users: aojea, bentheelder.

Note that only kubernetes-csi members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc
/cc @BenTheElder

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jsanda (Contributor Author) commented Oct 9, 2019

I signed it

@k8s-ci-robot added the cncf-cla: yes label and removed the cncf-cla: no label on Oct 9, 2019.
@msau42 (Collaborator) commented Oct 9, 2019

/ok-to-test

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Oct 9, 2019.
@k8s-ci-robot added the size/M label and removed the size/S label on Oct 11, 2019.
@@ -64,6 +64,7 @@ var (
leaderElectionType = flag.String("leader-election-type", "endpoints", "the type of leader election, options are 'endpoints' (default) or 'leases' (strongly recommended). The 'endpoints' option is deprecated in favor of 'leases'.")
leaderElectionNamespace = flag.String("leader-election-namespace", "", "Namespace where the leader election resource lives. Defaults to the pod namespace if not set.")
strictTopology = flag.Bool("strict-topology", false, "Passes only selected node topology to CreateVolume Request, unlike default behavior of passing aggregated cluster topologies that match with topology keys of the selected node.")
enableNodeCheck = flag.Bool("enable-node-check", false, "Enables a check to see that the node selected by the scheduler for provisioning is this node.")
Collaborator:

I would probably call it something like "local-node-mode", with a description like "when enabled, the provisioner only processes PVCs that have been scheduled to the same node it's running on".
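Purely for illustration, the flag might then look like this (a sketch of the reviewer's suggestion, not what the PR currently defines; the variable name localNodeMode is hypothetical):

localNodeMode = flag.Bool("local-node-mode", false, "When enabled, the provisioner only processes PVCs that have been scheduled to the same node it is running on.")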

@@ -395,6 +398,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
}
}

if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
Collaborator:

I think we should return ProvisioningFinished here, so that lib-external-provisioner will stop retrying to call Provision(). cc @jsafrane to confirm

Hmmm, although will that cause lib-external-provisioner to remove the "selected-node" annotation and make the scheduler retry? Maybe we need lib-external-provisioner to be aware of this behavior too...
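To make the trade-off easier to see, here is a hedged sketch (not code from this PR) contrasting the two return values being discussed; the exact retry and annotation behavior should be confirmed against sig-storage-lib-external-provisioner:

if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
	// What the PR currently returns: ProvisioningNoChange plus an IgnoredError,
	// which leaves the claim to be retried by this provisioner instance even
	// though another node's provisioner should handle it.
	return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
		Reason: "selected node is not the current node",
	}

	// The alternative suggested above: ProvisioningFinished, so the library stops
	// retrying Provision() here. The open question is whether that also removes
	// the selected-node annotation and sends the PVC back to the scheduler.
	// return nil, controller.ProvisioningFinished, &controller.IgnoredError{...}
}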

@@ -395,6 +398,12 @@ func (p *csiProvisioner) ProvisionExt(options controller.ProvisionOptions) (*v1.
}
}

if p.enableNodeCheck && options.SelectedNode.Name != os.Getenv("NODE_NAME") {
Contributor:

Isn't a node only selected for a PVC with late binding? What about PVCs without late binding? Are those no longer supported when deploying the hostpath driver with a provisioner on each node and the node check enabled?

Collaborator:

Correct, that will be a prerequisite for using this mode. Maybe we need to detect when they're not using delayed binding and return an error message.

Contributor:

We could enable leader election in this mode and then provision volumes that don't have a node chosen on whichever node currently holds the leadership. I suspect that csi-driver-host-path will not pass driver testing in this mode unless it still supports volumes with immediate binding.

If we combine that with yielding leadership after each provisioned volume, then such volumes would even be spread across the entire cluster. Not sure how important that is.
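Purely as an illustration of this idea (untested and not part of this PR), a rough sketch using client-go's leader election, where the current leader handles one volume that has no selected node and then cancels its election context to yield; provisionOne is a hypothetical callback supplied by the caller:

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runImmediateBindingLeader loops forever: acquire leadership, provision one PVC that
// has no selected node (immediate binding), then give up leadership so another node's
// provisioner can pick up the next volume.
func runImmediateBindingLeader(client kubernetes.Interface, namespace, identity string, provisionOne func(context.Context)) {
	for {
		ctx, cancel := context.WithCancel(context.Background())
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			namespace, "hostpath-provisioner-immediate",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: identity})
		if err != nil {
			panic(err)
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					provisionOne(ctx) // hypothetical: handle one immediate-binding PVC
					cancel()          // yield leadership so volumes spread across nodes
				},
				OnStoppedLeading: func() {},
			},
		})
		cancel()
	}
}

How costly this is with many per-node provisioners contending for the lease is exactly the open question raised above.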

A question from another participant:

Is there a TL;DR or doc on "immediate" and "late" binding in this context somewhere? I don't think I've seen these referenced in the context of Kubernetes.

Collaborator:

https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode

Sorry I wasn't using the official terminology. "late" = "WaitForFirstConsumer"

I think using leader election to support the "immediate" mode is an interesting idea, but it's going to be really hard to support both immediate and waitforfirstconsumer at the same time, because we don't want leader election in the latter case.

For now, I am fine with saying that you must use the WaitForFirstConsumer binding mode if you run the hostpath driver like this.

Contributor:

For now, I am fine with saying that you must use the WaitForFirstConsumer binding mode if you run the hostpath driver like this.

Yes, that might be good enough. A better error message when it encounters a volume without a node name would be nice, though. Right now the resulting error message will be Selected node () is not current node (xxxx) (note the empty first parentheses), which is a bit misleading.

Anyway, to get a better understanding of how complicated it would be, I implemented my idea. The result is here: pohly@df7b61e

I've not tested whether it actually works and I don't intend to pursue this further unless there is interest in this approach. The biggest concern that I have myself is about how costly leadership election is when many processes are involved. No idea how "many" will be "too many"...

/cc @andrewsykim
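A possible shape for the friendlier handling suggested above (a sketch, not code from this PR; it reuses the enableNodeCheck flag and IgnoredError from the diff shown earlier):

if p.enableNodeCheck {
	if options.SelectedNode == nil || options.SelectedNode.Name == "" {
		// No node selected: with the node check enabled this setup expects
		// volumeBindingMode: WaitForFirstConsumer, so say that explicitly
		// instead of printing an empty node name.
		return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
			Reason: "no node selected for this PVC; this mode requires volumeBindingMode: WaitForFirstConsumer",
		}
	}
	if options.SelectedNode.Name != os.Getenv("NODE_NAME") {
		return nil, controller.ProvisioningNoChange, &controller.IgnoredError{
			Reason: fmt.Sprintf("selected node (%s) is not current node (%s)",
				options.SelectedNode.Name, os.Getenv("NODE_NAME")),
		}
	}
}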

@pohly (Contributor) commented Oct 22, 2019 via email

@msau42 (Collaborator) commented Oct 22, 2019

Other plugins are running the external storage e2es using waitforfirstconsumer with no issues. We made sure to write the tests in a way that supports both semantics

@pohly (Contributor) commented Oct 30, 2019

Other plugins are running the external storage e2es using waitforfirstconsumer with no issues. We made sure to write the tests in a way that supports both semantics

Indeed, the wait for "PVC bound" is optional. Sorry, didn't know that. Please disregard my comment. It would still be interesting to know whether the full set of storage tests passes when using external-provisioner+hostpath driver in this mode.

@BenTheElder

gentle poke, what can we do to move this forward?

@msau42 (Collaborator) commented Dec 6, 2019

I would like to proceed with this PR with the current approach, with the limitation that volumeBindingMode: Immediate (the default) will not work properly. There's also some error handling/retry behavior that needs to be verified. We may need more changes in lib-external-provisioner to handle this properly.

@k8s-ci-robot (Contributor)

@jsanda: The following test failed, say /retest to rerun them all:

Test name: pull-kubernetes-csi-external-provisioner-1-17-on-kubernetes-1-17
Commit: 740a4a2
Rerun command: /test pull-kubernetes-csi-external-provisioner-1-17-on-kubernetes-1-17

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot (Contributor)

@jsanda: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-rebase label on Dec 10, 2019.
@jsanda (Contributor Author) commented Dec 17, 2019

My apologies for not getting back to this sooner. I had been out for a while (wife had a baby) and I was missing some notifications. I will go ahead and rebase my branch off master.

@j-griffith (Contributor)

@jsanda still interested in driving this?

@jsanda (Contributor Author) commented Feb 20, 2020 via email

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 20, 2020.
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 19, 2020.
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
kind/feature - Categorizes issue or PR as related to a new feature.
lifecycle/rotten - Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.
size/M - Denotes a PR that changes 30-99 lines, ignoring generated files.

8 participants