This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Volume status not updated after vfile plugin disable #2005

Closed
pshahzeb opened this issue Nov 29, 2017 · 2 comments

Comments

@pshahzeb
Contributor

pshahzeb commented Nov 29, 2017

The volume status is not updated to reflect actual usage when the plugin is disabled and then re-enabled.

Steps to reproduce:

  1. Create a volume and attach it
root@sc-rdops-vm18-dhcp-57-89:~# docker volume create --driver=vfile --name=SharedVol -o size=10gb
SharedVol

root@sc-rdops-vm18-dhcp-57-89:~# docker run --rm -it -v SharedVol:/mnt/myvol --name busybox-on-node1 busybox

  2. Check the volume status, then disable the vFile plugin
root@sc-rdops-vm18-dhcp-57-89:~# docker volume ls
DRIVER              VOLUME NAME
vfile:latest        SharedVol
vfile:latest        SharedVol2
vfile:latest        SharedVol3
vsphere:latest      _vF_SharedVol2@vsanDatastore
vsphere:latest      _vF_SharedVol3@vsanDatastore
vsphere:latest      _vF_SharedVol@vsanDatastore

root@sc-rdops-vm18-dhcp-57-89:~# docker volume inspect SharedVol
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vfile:latest",
        "Labels": {},
        "Mountpoint": "/mnt/vfile/SharedVol/",
        "Name": "SharedVol",
        "Options": {
            "size": "10gb"
        },
        "Scope": "global",
        "Status": {
            "Clients": [
                "10.161.114.68"
            ],
            "File server Port": 30000,
            "Global Refcount": 1,
            "Service name": "vFileServerSharedVol",
            "Volume Status": "Mounted"
        }
    }
]


root@sc-rdops-vm18-dhcp-57-89:~# docker plugin disable -f vfile

  3. Exit from the container and check the status of the volume
/ # exit

  4. Enable the vfile plugin
docker plugin enable vfile
  5. The status of the volume is still shown as Mounted
root@sc-rdops-vm18-dhcp-57-89:~# docker volume inspect SharedVol
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vfile:latest",
        "Labels": {},
        "Mountpoint": "/mnt/vfile/SharedVol/",
        "Name": "SharedVol",
        "Options": {
            "size": "10gb"
        },
        "Scope": "global",
        "Status": {
            "Clients": [
                "10.161.114.68"
            ],
            "File server Port": 30000,
            "Global Refcount": 1,
            "Service name": "vFileServerSharedVol",
            "Volume Status": "Mounted"
        }
    }
]
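The mechanism behind the stale status can be thought of as a lost unmount event: the plugin records volume state in an external store (ETCD in vFile's case), and while the plugin is disabled it never sees the container's detach. A minimal Python simulation of this bookkeeping (a hypothetical `VolumeTracker` class for illustration, not vFile's actual code):

```python
class VolumeTracker:
    """Simulates a plugin that records volume state in a KV store (ETCD-like)."""

    def __init__(self):
        self.store = {}       # volume name -> {"status": ..., "refcount": ...}
        self.enabled = True   # models `docker plugin enable/disable`

    def create(self, name):
        self.store[name] = {"status": "Detached", "refcount": 0}

    def mount(self, name):
        vol = self.store[name]
        vol["refcount"] += 1
        vol["status"] = "Mounted"

    def unmount(self, name):
        # While the plugin is disabled, this event is silently lost:
        # Docker cannot deliver it, so the store is never updated.
        if not self.enabled:
            return
        vol = self.store[name]
        vol["refcount"] -= 1
        if vol["refcount"] == 0:
            vol["status"] = "Detached"


t = VolumeTracker()
t.create("SharedVol")
t.mount("SharedVol")        # container attaches -> status "Mounted", refcount 1
t.enabled = False           # docker plugin disable -f vfile
t.unmount("SharedVol")      # container exits; the event is lost
t.enabled = True            # docker plugin enable vfile
print(t.store["SharedVol"])  # status is still "Mounted", refcount still 1
```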
@luomiao
Contributor

luomiao commented Jan 17, 2018

The reason for this behavior is that there is only one master node in this setup; when the plugin on this single master node is broken (disabled), no ETCD server is running. There is no way to keep an up-to-date status for the vFile volumes when there is no functioning master node.

However, the volume status is only a reference for users; it doesn't stop them from mounting the volume again or deleting it. Even though the status shows an incorrect "Mounted" state in this case, the volume can still be reused or deleted.

For these reasons, we will update the user guide to include more information about possible situations with a single-master swarm cluster.
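The point that the status is advisory can be sketched the same way: a delete operation need not consult the (possibly stale) status field at all. This is an illustrative snippet, not vFile's actual removal logic:

```python
def remove_volume(store, name):
    """Remove a volume entry; the stale 'Mounted' status is informational only
    and does not block removal."""
    store.pop(name, None)


# A stale entry left behind after the plugin was disabled and re-enabled:
store = {"SharedVol": {"status": "Mounted", "refcount": 1}}
remove_volume(store, "SharedVol")
print("SharedVol" in store)  # False: volume removed despite the stale status
```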

@pshahzeb Hi Shahzeb, can you check if the above solution sounds good to you? Thanks.

@pshahzeb
Contributor Author

Thank you @luomiao for taking a look.
Given that this is a corner-case negative scenario, documenting it should suffice.

@luomiao luomiao closed this as completed Jan 31, 2018