This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Update README.md to add a known issue regarding storage vmotion of VM [SKIP CI] #1639

Merged

Conversation

lipingxue
Contributor

Update README.md to add a known issue regarding storage vmotion of VM.

Fixes #1618

Update README.md to add a known issue regarding storage vmotion of VM.

Minor fix.
README.md Outdated
@@ -163,6 +163,7 @@ logging config format for content details.
- Full volume name with format like "volume@datastore" cannot be specified in the compose file for stack deployment. [#1315](https://github.com/vmware/docker-volume-vsphere/issues/1315). It is a docker compose issue and a workaround has been provided in the issue.
- Specifying "Datastore Cluster" name during volume creation is not supported. Datastore clusters (as a part of Storage DRS) is a VC feature and not available on individual ESX. [#556](https://github.com/vmware/docker-volume-vsphere/issues/556)
- Volume creation using VFAT filesystem is not working currently. [#1327](https://github.com/vmware/docker-volume-vsphere/issues/1327)
- Volume is not shown when running ```docker volume ls``` and ```vmdkops_admin volume ls``` commands after performing storage vMotion of a VM while the volume is attached to the VM. It is an expected behavior since storage vMotion messes up the location and names of attached vmdk files. [#1618](https://github.com/vmware/docker-volume-vsphere/issues/1618)
Contributor

  1. storage vMotion => Storage vMotion
  2. I won't say "messes up", probably "changes" or any word more neutral

Contributor

Agreed, we should not say "messes up". Also, are we even able to do a Storage vMotion when the volume is attached in the default independent mode?

The wording should state exactly the scenarios where this behavior is observed: volumes attached as independent vs. persistent, the impact when volumes are not attached vs. attached, and how to avoid the issue (for example, by addressing the volume via its qualified name, vol@ds).

@ashahi1
Contributor

ashahi1 commented Jul 26, 2017

Please make sure to update gh-pages as well.


@lipingxue
Contributor Author

@govint
See my answer to your questions below.

Actually are we even able to svmotion when attaching the volume in the default independent mode?
[Liping] Yes. In the steps that Anup gave, the volume is attached as "independent_persistent":

```
root@sc-rdops-vm02-dhcp-52-237:~# docker volume inspect testVolXXX@sharedVmfs-0
[
    {
        "Driver": "vsphere:latest",
        "Labels": null,
        "Mountpoint": "/mnt/vmdk/testVolXXX@sharedVmfs-0",
        "Name": "testVolXXX@sharedVmfs-0",
        "Options": {},
        "Scope": "global",
        "Status": {
            "access": "read-write",
            "attach-as": "independent_persistent",
            "capacity": {
                "allocated": "13MB",
                "size": "100MB"
            },
            "clone-from": "None",
            "created": "Fri Jul 21 19:07:53 2017",
            "created by VM": "ubuntu-VM0.0",
            "datastore": "sharedVmfs-0",
            "diskformat": "thin",
            "fstype": "ext4",
            "status": "detached"
        }
    }
]
```

@lipingxue
Contributor Author

@govint We have two modes for disk attach, "independent_persistent" and "persistent". I don't quite understand how the disk attach mode affects svMotion. Could you explain more?
I believe that if the volume is not attached, then svMotion will not affect it, but I need to do some testing to verify that.

Addressing the volume via a qualified name (vol@ds) may not solve this issue if ds is the datastore where the VM resides. For example, if svMotion moves the VM from ds to ds1, then the volume attached to the VM will be placed in the VM folder on ds1 instead of the dockervol folder as we would hope.

@govint
Contributor

govint commented Jul 27, 2017

When an independent disk is attached to a VM, some operations, such as snapshots, aren't allowed. That's why I was asking whether the svMotion happens at all. The documentation indicates svMotion requires disks in persistent mode; see https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html

@lipingxue
Contributor Author

@govint We allow two attach modes, "independent_persistent" (the default) and "persistent". Both modes attach the disk as "persistent", so svMotion should work.
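To make the two attach modes above concrete, here is a minimal sketch of how a user would choose between them at volume-creation time. The volume and datastore names are illustrative; the `attach-as` and `size` option names are assumed from the plugin's documented create options.

```shell
# Create a volume with the default attach mode (independent_persistent).
# Independent disks are excluded from VM snapshots but still persist writes.
docker volume create --driver=vsphere --name=myVol@sharedVmfs-0 -o size=1gb

# Explicitly request "persistent" mode instead, so the disk participates
# in VM snapshots (and is documented as required for Storage vMotion).
docker volume create --driver=vsphere --name=myVolPersistent@sharedVmfs-0 \
    -o size=1gb -o attach-as=persistent

# Check which mode a volume was created with.
docker volume inspect myVolPersistent@sharedVmfs-0 | grep attach-as
```

Either way the underlying vmdk is attached as a "persistent" disk (writes survive power-off), which is why svMotion is expected to be allowed in both modes.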

@ashahi1
Contributor

ashahi1 commented Jul 27, 2017

@lipingxue Issue #1618 also happens with X-vMotion, since X-vMotion changes the datastore as well as the host of a VM.
Can you please mention X-vMotion as well? Something like "Storage vMotion/X-vMotion".

@lipingxue
Contributor Author

@ashahi1 As far as I know, you need to shut down the VM before running X-vMotion. So running X-vMotion while the VM is not shut down is not a valid test.

@lipingxue lipingxue force-pushed the svmotion_known_issue.liping branch from 6ae3463 to 63fe482 Compare July 27, 2017 21:55
Address comments from Sam and Govindan.

Minor fix.

Small fix.
@lipingxue lipingxue force-pushed the svmotion_known_issue.liping branch from 63fe482 to c11e980 Compare July 27, 2017 22:01
@lipingxue
Contributor Author

@shaominchen @govint I have addressed your comments; please take a look.

Contributor

@shaominchen shaominchen left a comment

LGTM.

@ashahi1
Contributor

ashahi1 commented Jul 27, 2017

@lipingxue Yes, we do allow X-vMotion of a VM from the powered-on state. X-vMotion does not need the VM to be shut down.

@lipingxue
Contributor Author

@govint I have addressed your comments; please review. Thanks!

Contributor

@govint govint left a comment

Looks good.

@lipingxue lipingxue merged commit 36e24f5 into vmware-archive:master Aug 1, 2017