
[BUG] The Name of the extra disk in the dashboard does not match the actual device path #1976

Closed
weihanglo opened this issue Mar 8, 2022 · 5 comments
Labels
area/node-disk-manager area/ui Harvester standalone UI or Rancher UI extension kind/bug Issues that are defects reported by users or that we know have reached a real release priority/0 Must be fixed in this release

Comments

@weihanglo
Contributor

Describe the bug
The Name field of an extra disk does not match its actual device path on the system. This typically happens after the slots of the disks are swapped.

To Reproduce
Steps to reproduce the behavior:

  1. Prepare a Harvester cluster (a single node is sufficient).
  2. Prepare two additional disks and format both of them.
  3. Hotplug both disks and add them to the host via Harvester Dashboard ("Hosts" > "Edit Config" > "Disks")
  4. Shut down the host.
  5. Swap the address and slot of the two disks so that their device paths are swapped.
    • For a libvirt environment, you can swap <address> and <target> in the disk XML of the domain.
  6. Reboot the host

Expected behavior
The name matches the actual device path on the system (e.g., lsblk's output).

Possible solution

My impression is that the dashboard displays the Name field using blockdevice.spec.fileSystem.devPath. Unfortunately, the value of devPath is static: once a blockdevice is created, it never reflects the actual device path on the system again.

Off the top of my head, there are a couple of possible solutions:

  1. Display an alternative field instead of devPath, such as blockdevice.name, though the name itself is meaningless to users.
  2. Have the NDM controller report the real device path in status.deviceStatus.devPath (or another status field) for the dashboard to display (see the sketch below).
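
For the second option, here is a minimal sketch (not the actual node-disk-manager implementation; the helper name and the example WWN are hypothetical) of how a controller could resolve the current device path at runtime from a stable identifier such as a WWN symlink under /dev/disk/by-id, and then publish it in the status for the dashboard:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveCurrentDevPath is a hypothetical helper: given a stable by-id
// symlink (e.g. /dev/disk/by-id/wwn-0x5000c500aabbccdd), it follows the
// udev-managed link to the device node that currently backs it
// (e.g. /dev/vdb). Unlike the static spec.fileSystem.devPath, the result
// changes when disk slots are swapped.
func resolveCurrentDevPath(byIDLink string) (string, error) {
	return filepath.EvalSymlinks(byIDLink)
}

func main() {
	devPath, err := resolveCurrentDevPath("/dev/disk/by-id/wwn-0x5000c500aabbccdd")
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	// A controller could write this value into status.deviceStatus.devPath
	// so the dashboard always shows the device path currently in use.
	fmt.Println("current device path:", devPath)
}
```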

Environment:

  • Harvester ISO version: master-fd872c9-head

Additional context

This is a screenshot copied from #1874.

You can see that /dev/vdc (29 GiB) is shown as mounted at 88d81368 on the left, but in the console the disk mounted at 88d81368 (29 GiB) actually has the path /dev/vdb.

[screenshot]

@weihanglo weihanglo added kind/bug Issues that are defects reported by users or that we know have reached a real release area/node-disk-manager area/ui Harvester standalone UI or Rancher UI extension labels Mar 8, 2022
@rebeccazzzz rebeccazzzz added this to the v1.0.2 milestone Mar 8, 2022
@n313893254 n313893254 self-assigned this Mar 22, 2022
@guangbochen guangbochen added the priority/0 Must be fixed in this release label Apr 11, 2022

harvesterhci-io-github-bot commented Apr 15, 2022

Pre Ready-For-Testing Checklist

  • Where is the reproduce steps/test steps documented?
    The reproduce steps/test steps are at:
  • Is there a workaround for the issue? If so, where is it documented?
    The workaround is at:

  • Does the PR include the explanation for the fix or the feature?

  • Does the PR include deployment change (YAML/Chart)? If so, where are the PRs for both the YAML file and the Chart?
    The PR for the YAML change is at:
    The PR for the chart change is at:

  • Has the backend code been merged (harvester, harvester-installer, etc.) (including backport-needed/*)?
    The PR is at

  • Which areas/issues might this PR have potential impacts on?
    Area
    Issues

  • If labeled: require/HEP Has the Harvester Enhancement Proposal PR been submitted?
    The HEP PR is at

  • If labeled: area/ui Has the UI issue been filed or is it ready to be merged?
    The UI issue/PR is at
  • If labeled: require/doc Has the necessary document PR been submitted or merged?
    The documentation issue/PR is at

  • If labeled: require/automation-e2e Has the end-to-end test plan been merged? Have QAs agreed on the automation test case? If there is only a test case skeleton without implementation, have you created an implementation issue?
    The automation skeleton PR is at
    The automation test case PR is at
    The issue of automation test case implementation is at (bot will auto create one using the template)

  • If labeled: require/integration-test Does the PR include the integration test?
    The integration test PR is at

  • If labeled: require/manual-test-plan Has the manual test plan been documented?
    The updated manual test plan is at

  • If the fix introduces code for backward compatibility, has a separate issue been filed with the label release/obsolete-compatibility?
    The compatibility issue is filed at

@harvesterhci-io-github-bot

Automation e2e test issue: harvester/tests#288


n313893254 commented Apr 15, 2022

The first solution is currently implemented; the second solution will be implemented in issue #1249.

@lanfon72 lanfon72 self-assigned this Apr 18, 2022
@lanfon72
Member

After discussing with @weihanglo, the implementation is not enough for users to add new disks (snapshot below):
users will be confused when they get multiple disks with the same size.
So this issue is blocked by #2149, and they should be shipped together.

[screenshot]

@lanfon72 lanfon72 removed their assignment Apr 18, 2022
@lanfon72 lanfon72 self-assigned this Apr 26, 2022
@lanfon72
Member

verified along with #2149
