
[ISSUE-1051] Handle Kubelet's Wrong CSI Call Inconsistent with Real Volume Status #1050

Open

wants to merge 14 commits into base: master

Conversation

@CraneShiEMC (Collaborator) commented Aug 26, 2023

Purpose

Resolves #1051

  1. Handle Kubelet's wrong CSI calls that are inconsistent with the real volume status (sketched below)
  2. Also proceed with Kubelet's CSI calls on a volume in Failed status
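
As an illustration of point 1, here is a minimal sketch of the defensive check, assuming a hypothetical staging path and a simplified /proc/mounts scan in place of the driver's real mount check (this is not the PR's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isMountPoint reports whether path appears as a mount target in
// /proc/mounts: a simplified stand-in for the driver's real check.
func isMountPoint(path string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[1] == path {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	// srcPath stands for the volume's global device (staging) path;
	// the value is illustrative only.
	srcPath := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"

	mounted, err := isMountPoint(srcPath)
	if err != nil {
		fmt.Println("mountpoint check failed:", err)
		return
	}
	if !mounted {
		// Kubelet believes NodeStageVolume already succeeded, but the
		// mountpoint is gone: redo the stage step before publishing.
		fmt.Println("staging path not mounted; redo NodeStageVolume first")
	}
}
```

If the check reports the path unmounted, the idea is to redo the stage step before the bind mount in NodePublishVolume, instead of failing the publish outright.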

PR checklist

  • Add link to the issue
  • Choose Project
  • Choose PR label
  • New unit tests added
  • Modified code has meaningful comments
  • All TODOs are linked with the issues
  • All comments are resolved

Testing

I've simulated the scenario where the volume's k8s global device mountpoint is missing while kubelet solely issues NodePublishVolume, in my standalone test. This defensive CSI enhancement works as expected in the test.
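
For illustration only, the missing-mountpoint condition can be reproduced on a test node roughly like this (the path is hypothetical and this snippet is not part of the PR):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Illustrative global device mountpoint of the test volume; the real
	// path depends on the PV name on the test node.
	globalMount := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"

	// Unmount it behind kubelet's back, so kubelet's next
	// NodePublishVolume call is inconsistent with the real volume status.
	if err := unix.Unmount(globalMount, 0); err != nil {
		fmt.Println("unmount failed:", err)
		return
	}
	fmt.Println("global device mountpoint removed; NodePublishVolume should now hit the defensive re-stage path")
}
```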

custom-ci passed: https://asd-ecs-jenkins.isus.emc.com/job/csi-custom-ci/1562/

custom-acceptance passed:
Atlantic (rke2): https://asd-ecs-jenkins.isus.emc.com/job/csi-custom-acceptance-tar_b_ona/39/
Openshift: https://asd-ecs-jenkins.isus.emc.com/job/csi-custom-acceptance-oil_bd/255/

…ed volume

Signed-off-by: Shi, Crane <crane.shi@emc.com>
@CraneShiEMC CraneShiEMC changed the title Need to Retry on Failed Volume Also in NodePublish/NodeUnpublish/NodeUnstage Need to Still Process Failed Volume Also in NodePublish/NodeUnpublish/NodeUnstage Aug 26, 2023
codecov bot commented Aug 26, 2023

Codecov Report

Attention: 9 lines in your changes are missing coverage. Please review.

Comparison is base (1f86bc6) 72.76% compared to head (5a36eea) 72.81%.

Files              Patch %   Lines
pkg/node/node.go   82.69%    6 Missing and 3 Partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1050      +/-   ##
==========================================
+ Coverage   72.76%   72.81%   +0.04%     
==========================================
  Files          63       63              
  Lines        8949     8994      +45     
==========================================
+ Hits         6512     6549      +37     
- Misses       2147     2153       +6     
- Partials      290      292       +2     
Flag Coverage Δ
unittests 72.81% <82.69%> (+0.04%) ⬆️

Flags with carried forward coverage won't be shown.


@CraneShiEMC CraneShiEMC changed the title Need to Still Process Failed Volume Also in NodePublish/NodeUnpublish/NodeUnstage [ISSUE-1051] Handle Kubelet's Problematic CSI Call Inconsistent with Real Volume Status Aug 26, 2023
@CraneShiEMC CraneShiEMC changed the title [ISSUE-1051] Handle Kubelet's Problematic CSI Call Inconsistent with Real Volume Status [ISSUE-1051] Handle Kubelet's Problematic CSI Call and Process Failed Volume in Kubelet's CSI call Aug 26, 2023
@@ -488,6 +494,27 @@ func (s *CSINodeService) NodePublishVolume(ctx context.Context, req *csi.NodePub
resp, errToReturn = nil, fmt.Errorf("failed to publish volume: fake attach error %s", err.Error())
}
} else {
// will check whether srcPath is mounted, if not, need to redo NodeStageVolume
Collaborator commented:

I am a little concerned about this process: some mount logic (like the global path setup) is done in kubelet, so redoing NodeStage may not help in such a case. Do we have a test that can prove this will help in the failure case?

@CraneShiEMC (Collaborator, Author) replied on Aug 27, 2023:

In this case, the wrong NodePublishVolume CSI call issued by kubelet assumed that the mountpoint at the volume's global device path had been successfully set up by some earlier successful NodeStageVolume call. But actually the volume's global device path was unmounted, possibly during a forceful node removal, and kubelet could not sync the volume's real status because its cleanup of the "orphan" pod's volume, done by calling CSI NodeUnpublish, kept failing while the CSI pods were not yet initialized.

Despite this, in most cases the global device path still exists as long as the volume exists. And even if the global device path has also been removed in the worst case, the current CSI logic can recreate it itself.

I will do the functional test on this code change to verify it.
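
A minimal sketch of the worst-case recovery described above, with an illustrative global device path and simplified error handling (not the PR's code):

```go
package main

import (
	"fmt"
	"os"
)

// ensureGlobalPath recreates the volume's global device directory when the
// worst case above has happened and the directory itself was removed.
func ensureGlobalPath(globalPath string) error {
	if _, err := os.Stat(globalPath); os.IsNotExist(err) {
		// Recreate the directory so the re-staged mount has a target.
		return os.MkdirAll(globalPath, 0750)
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	// Illustrative path; the real one is derived from the PV name.
	if err := ensureGlobalPath("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"); err != nil {
		fmt.Println("recovery failed:", err)
	}
}
```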

@CraneShiEMC (Collaborator, Author) replied:

I've simulated the scenario where the volume's k8s global device mountpoint is missing in my standalone test. This defensive CSI enhancement works as expected in the test.

@CraneShiEMC CraneShiEMC changed the title [ISSUE-1051] Handle Kubelet's Problematic CSI Call and Process Failed Volume in Kubelet's CSI call [ISSUE-1051] Handle Kubelet's Wrong CSI Call and Process Failed Volume in Kubelet's CSI call Aug 27, 2023
@CraneShiEMC CraneShiEMC changed the title [ISSUE-1051] Handle Kubelet's Wrong CSI Call and Process Failed Volume in Kubelet's CSI call [ISSUE-1051] Handle Kubelet's Wrong CSI Call Inconsistent with Real Volume Status Aug 27, 2023
Signed-off-by: Shi, Crane <crane.shi@emc.com>
…e failed volume status

Signed-off-by: Shi, Crane <crane.shi@emc.com>
@libzhang (Collaborator) commented:

It seems the UT in the PR validation failed. Please fix it.

@@ -602,6 +649,9 @@ func (s *CSINodeService) NodeUnpublishVolume(ctx context.Context, req *csi.NodeU
volumeCR.Spec.Owners = owners
if len(volumeCR.Spec.Owners) == 0 {
volumeCR.Spec.CSIStatus = apiV1.VolumeReady
} else {
Collaborator commented:

It seems this is not needed; the default state is Published in this function.

@CraneShiEMC (Collaborator, Author) replied on Aug 27, 2023:

Because we now proceed on Failed volumes too, setting the status to Published here handles a Failed volume that has been successfully unpublished for one pod but is still used by other pods.
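
A simplified sketch of the resulting branch, with the volume CR spec and the apiV1 status constants reduced to plain Go stand-ins for illustration:

```go
package main

import "fmt"

// Simplified stand-ins for the apiV1 status constants used in the diff above.
const (
	VolumeReady = "VOLUME_READY"
	Published   = "PUBLISHED"
)

// VolumeSpec is a reduced stand-in for the volume CR spec.
type VolumeSpec struct {
	Owners    []string
	CSIStatus string
}

// updateStatusAfterUnpublish mirrors the discussed intent: once a pod is
// removed from Owners, a previously Failed volume still used by other pods
// goes back to Published instead of staying Failed.
func updateStatusAfterUnpublish(spec *VolumeSpec, owners []string) {
	spec.Owners = owners
	if len(spec.Owners) == 0 {
		spec.CSIStatus = VolumeReady
	} else {
		spec.CSIStatus = Published
	}
}

func main() {
	spec := &VolumeSpec{CSIStatus: "FAILED"}
	updateStatusAfterUnpublish(spec, []string{"pod-b"}) // pod-a unpublished, pod-b remains
	fmt.Println(spec.CSIStatus)                         // PUBLISHED
}
```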

CraneShiEMC and others added 3 commits August 28, 2023 22:50
Labels: None yet
Projects: None yet
Development

Successfully merging this pull request may close these issues.

Need to Handle Kubelet's Wrong CSI Call Inconsistent with Real Volume Status
3 participants