This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Docker volume reverted to very old version after reboot #2079

Open
aleskinen opened this issue Apr 4, 2018 · 10 comments

Comments

@aleskinen

aleskinen commented Apr 4, 2018

I am running a 3-node Photon 2 cluster and using vSphere docker volumes. It seems that the docker volume backing our MariaDB data has reverted to a several-months-old version. I tried copying a 2-day-old vmdk file from an EqualLogic backup, and it still seems to contain old files. How is this possible?

Is there any tool to show the files inside a vSphere docker volume vmdk?

I was using a few-months-old version of the plugin (tagged latest) and tried updating to the new latest.

@govint
Contributor

govint commented Apr 5, 2018

@aleskinen,

  1. Could you share the /var/log/docker-volume-vsphere.log or the /var/log/vsphere-storage-for-docker.log? Please also upload the /var/log/vmware/vmdk_ops.log from the ESX host.
  2. Can you confirm whether the logs cover the period from when the most recent data was available up to the present, where the data appears to have reverted?
  3. Please also indicate which versions of docker, the plugin, and the VIB on ESX are in use - both the earlier and the upgraded versions.
  4. Can you also share where the volumes are located on ESX - they should be in a folder called "dockvols" on the same datastore as the VM(s). Please upload a listing of the contents of the folder where the volumes reside and indicate which volume is the one in question.
  5. You can list all the volumes via docker volume ls; it should show the ones created and managed via the vsphere plugin.

I'm curious whether, for some reason, a different volume is now getting attached that was perhaps earlier in use for this DB. Once we know the volume used by the DB, we should be able to track its usage from the time the data was available until the present.
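For point 5, a quick way to isolate just the vsphere-managed volumes from the docker volume ls output is to filter on the driver column. A minimal sketch (the sample output is abridged from the listing posted later in this thread; in practice you would pipe the real command output into it):

```python
# Filter `docker volume ls` output down to vsphere-managed volumes.
# SAMPLE is abridged from this issue; feed in real output instead.
SAMPLE = """\
DRIVER VOLUME NAME
local 5292c0afe62a9663dd29d66313b9d252a4c08d4f36b32e5e70c5ea564b6e2111
vsphere:latest mariadb-data@d1
vsphere:latest odoo-db@d1
local registry
"""

def vsphere_volumes(ls_output: str) -> list[str]:
    vols = []
    for line in ls_output.splitlines()[1:]:   # skip the header row
        driver, _, name = line.partition(" ")
        if driver.startswith("vsphere"):
            vols.append(name.strip())
    return vols

print(vsphere_volumes(SAMPLE))  # ['mariadb-data@d1', 'odoo-db@d1']
```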

@aleskinen
Author

Attached logs. I ran a test: I created a new volume, Util, with the same properties, ssh'd to the VMware host, and renamed mariadb-data-flat.vmdk to Util-flat.vmdk while the container using Util was not running. Then I restarted the container. The file list on the volume did not change. How do the propagated mount and the vmdk contents get synchronized?

docker-volume-vsphere.log
vmdk_ops.log

@aleskinen
Author

I have copies of the vmdk files from before the accident. It would be really nice to be able to check what the situation was. Is there any easy way to do that?
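On inspecting an old copy: the small text .vmdk descriptor records which -flat extent the volume points at, which is worth confirming before trying to mount a restored file. A sketch of pulling the extent file names out of a descriptor (the sample below is a typical monolithicFlat descriptor written for illustration, not one taken from this issue):

```python
import re

# Illustrative monolithicFlat descriptor; descriptors are small
# plain-text files, so they can be read and grepped directly.
DESCRIPTOR = """\
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="monolithicFlat"

# Extent description
RW 83886080 FLAT "mariadb-data-flat.vmdk" 0

# The Disk Data Base
ddb.virtualHWVersion = "11"
"""

def extent_files(descriptor: str) -> list[str]:
    """Return the extent file names referenced by a VMDK descriptor."""
    # Extent lines look like: RW <sectors> FLAT "<file>" <offset>
    return re.findall(r'^(?:RW|RDONLY|NOACCESS)\s+\d+\s+\w+\s+"([^"]+)"',
                      descriptor, flags=re.MULTILINE)

print(extent_files(DESCRIPTOR))  # ['mariadb-data-flat.vmdk']
```

Once you know the extent, a read-only loop mount of the -flat file on a Linux host is one way to browse the files, assuming the plugin formatted the extent as a bare filesystem with no partition table; if there is a partition table you would need to pass a byte offset to the mount.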

@govint
Contributor

govint commented Apr 5, 2018

@aleskinen, is there any reason the flat vmdk names are being changed? Users aren't expected to do this. Was it done to use the mariadb data via a different vmdk (Util.vmdk)?

VMDK content is the same as what's available via the propagated mount.

The logs are the only way to figure which disk was in use earlier and what happened recently.

Are you saying the problem happened after the update of the plugin?

@aleskinen
Author

No, the problem occurred before the update; I updated only to see if it would help. And copying vmdk files on top of the same or similar files on the VMware host is the only way I know to mount the content so that I can manage the files inside. I have once managed to save a backup that way. We have snapshotting enabled on our iSCSI disk server, so it is possible to find old vmdk files, but I do not know any way to access the content in those.

@aleskinen
Author

docker info
Containers: 8
Running: 8
Paused: 0
Stopped: 0
Images: 238
Server Version: 17.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
Is Manager: true
Managers: 3
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Root Rotation In Progress: false
Manager Addresses:
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.80-1.ph2-esx
Operating System: VMware Photon OS/Linux
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.86GiB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

And the plugin has been "latest" both before and after the update, so I do not know which version was in use before.

@aleskinen
Author

DRIVER VOLUME NAME
local 5292c0afe62a9663dd29d66313b9d252a4c08d4f36b32e5e70c5ea564b6e2111
local customtheme
vsphere:latest mariadb-data@d1
vsphere:latest odoo-db@d1
vsphere:latest odoo-lib@d1
local registry
vsphere:latest tek-webroot@d1
vsphere:latest ttkirja-webroot@d1
vsphere:latest tukoke-webroot@d1
vsphere:latest yhteiso-files@d1

@aleskinen
Author

[root@vhost1r630:/vmfs/volumes/58da1e8b-9ac19bfb-5c9c-f04da23e46d4/dockvols/11111111-1111-1111-1111-111111111111] ls -l
total 109743176
-rw------- 1 root root 4096 Apr 5 07:51 mariadb-data-3797b56d3a7e9693.vmfd
-rw------- 1 root root 4096 Apr 4 09:37 mariadb-data-3797b56d3a7e9693.vmfd04-02
-rw------- 1 root root 42949672960 Apr 5 07:51 mariadb-data-flat.vmdk
-rw------- 1 root root 42949672960 Apr 5 10:06 mariadb-data-flat.vmdk04-02
-rw------- 1 root root 628 Apr 5 07:51 mariadb-data.vmdk
-rw------- 1 root root 628 Apr 4 09:37 mariadb-data.vmdk04-02
-rw------- 1 root root 42949672960 Apr 5 09:51 mariadb-flat.vmdk
-rw------- 1 root root 4096 Jan 9 10:13 odoo-db-4d54519c5d4faaea.vmfd
-rw------- 1 root root 3277312 Apr 4 18:30 odoo-db-ctk.vmdk
-rw------- 1 root root 53687091200 Jan 9 10:14 odoo-db-flat.vmdk
-rw------- 1 root root 678 Feb 23 15:01 odoo-db.vmdk
-rw------- 1 root root 4096 Mar 9 12:11 odoo-lib-61e5a6b3044be2bb.vmfd
-rw------- 1 root root 6554112 Apr 4 18:30 odoo-lib-ctk.vmdk
-rw------- 1 root root 107374182400 Mar 9 12:11 odoo-lib-flat.vmdk
-rw------- 1 root root 682 Mar 9 12:11 odoo-lib.vmdk
-rw------- 1 root root 6554112 Apr 4 18:25 tek-webroot-ctk.vmdk
-rw------- 1 root root 4096 Apr 3 19:57 tek-webroot-f0298c874b2b09ab.vmfd
-rw------- 1 root root 107374182400 Apr 5 11:40 tek-webroot-flat.vmdk
-rw------- 1 root root 691 Apr 4 18:24 tek-webroot.vmdk
-rw------- 1 root root 4096 Apr 4 10:59 ttkirja-webroot-3960a17f36abba62.vmfd
-rw------- 1 root root 2621952 Apr 4 18:11 ttkirja-webroot-ctk.vmdk
-rw------- 1 root root 42949672960 Apr 4 10:59 ttkirja-webroot-flat.vmdk
-rw------- 1 root root 701 Apr 4 10:59 ttkirja-webroot.vmdk
-rw------- 1 root root 4096 Apr 4 12:54 tukoke-webroot-0233489a55f88b3c.vmfd
-rw------- 1 root root 2621952 Apr 4 18:11 tukoke-webroot-ctk.vmdk
-rw------- 1 root root 42949672960 Apr 4 12:54 tukoke-webroot-flat.vmdk
-rw------- 1 root root 698 Apr 4 12:54 tukoke-webroot.vmdk
-rw------- 1 root root 2621952 Apr 4 19:15 yhteiso-files-ctk.vmdk
-rw------- 1 root root 4096 Apr 4 19:14 yhteiso-files-debd7c3a1e235e48.vmfd
-rw------- 1 root root 42949672960 Apr 4 19:15 yhteiso-files-flat.vmdk
-rw------- 1 root root 641 Apr 4 19:14 yhteiso-files.vmdk
-rw------- 1 root root 4096 Jan 30 10:13 yhteiso-tmp-7944d313e7fc767c.vmfd.orig
-rw------- 1 root root 42949672960 Jan 30 10:13 yhteiso-tmp-flat.vmdk.orig
-rw------- 1 root root 572 Jan 30 09:28 yhteiso-tmp.vmdk.orig
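The listing above contains several manually renamed copies (mariadb-data-flat.vmdk04-02, mariadb-flat.vmdk, the .orig files). One way to spot flat extents that no descriptor alongside could be pairing with is to go by the plugin's <name>.vmdk / <name>-flat.vmdk naming convention - an assumption about the layout, so treat hits as hints only, and check the descriptor contents to be sure:

```python
# File names abridged from the dockvols listing in this issue.
FILES = [
    "mariadb-data-flat.vmdk", "mariadb-data-flat.vmdk04-02",
    "mariadb-data.vmdk", "mariadb-data.vmdk04-02",
    "mariadb-flat.vmdk",
    "odoo-db-flat.vmdk", "odoo-db.vmdk",
]

def orphan_flats(files: list[str]) -> list[str]:
    """Flat extents with no matching <name>.vmdk descriptor alongside."""
    present = set(files)
    orphans = []
    for f in files:
        if f.endswith("-flat.vmdk"):
            descriptor = f.replace("-flat.vmdk", ".vmdk")
            if descriptor not in present:
                orphans.append(f)
    return orphans

print(orphan_flats(FILES))  # ['mariadb-flat.vmdk']
```

In this sample, mariadb-flat.vmdk has no mariadb.vmdk descriptor next to it, which would fit a leftover from the rename experiment described earlier in the thread.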

@aleskinen
Author

Based on host backups, it seems that the content of the propagated mount has not been the same as what was in the running container. I use independent-persistent volumes, so they are not included in the host backup.

Compose file for mariadb was:
version: '3.3'

services:
  mariadb:
    image: mariadb:10.3
    volumes:
      - mariadb-data:/var/lib/mysql
      - /srv/docker/mariadb-backups:/backups/mysql
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mariadb_root_pwd
    networks:
      - mariadb
    secrets:
      - mariadb_root_pwd
    deploy:
      replicas: 1
      placement:
        constraints: [node.hostname == dca]

networks:
  mariadb:
    external: true

volumes:
  mariadb-data:
    external: true

secrets:
  mariadb_root_pwd:
    external: true
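As an aside: instead of external: true, compose v3 also allows declaring the volume with a driver and driver_opts directly in the stack file, so the stack itself records that the volume is vsphere-backed. A hedged sketch only - the size option name follows the plugin's documented volume-create usage, but verify it against your installed plugin version:

```yaml
volumes:
  mariadb-data:
    driver: vsphere:latest
    driver_opts:
      size: 40gb   # matches the 40 GiB flat extent seen in the dockvols listing
```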

@govint
Contributor

govint commented Apr 10, 2018

@aleskinen thanks for the logs and the details, I'll review and update.
