
Docker EE 2.0 with UCP : Error in creating snapshot. #390

Closed
dockerciuser opened this issue Oct 29, 2018 · 26 comments

Comments

@dockerciuser
Contributor

While creating a snapshot I got an error due to the size parameter.
Below are the detailed steps I followed -

  1. Created storage class -

     [root@CSSOSBE03-B11 new_yaml]# cat sc.yml
     kind: StorageClass
     apiVersion: storage.k8s.io/v1
     metadata:
       name: test-sc
     provisioner: hpe.com/hpe
     parameters:
       size: "17"
     [root@CSSOSBE03-B11 new_yaml]# kubectl create -f sc.yml
     storageclass "test-sc" created
     [root@CSSOSBE03-B11 new_yaml]# kubectl get sc
     NAME      PROVISIONER
     test-sc   hpe.com/hpe

  2. Created PVC -

     [root@CSSOSBE03-B11 new_yaml]# cat pvc.yml
     kind: PersistentVolumeClaim
     apiVersion: v1
     metadata:
       name: pvc-test
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 20Gi
       storageClassName: test-sc
     [root@CSSOSBE03-B11 new_yaml]# kubectl create -f pvc.yml
     persistentvolumeclaim "pvc-test" created
     [root@CSSOSBE03-B11 new_yaml]# docker volume ls
     DRIVER   VOLUME NAME
     hpe      FP-VOLUME
     hpe      test-sc-9c1413bb-db39-11e8-a0ae-0242ac110010
  3. Created new storage class required for snapshot -

     [root@CSSOSBE03-B11 new_yaml]# cat sc_2.yml
     kind: StorageClass
     apiVersion: storage.k8s.io/v1
     metadata:
       name: test-sc-snp
     provisioner: hpe.com/hpe
     parameters:
       virtualCopyOf: test-sc-9c1413bb-db39-11e8-a0ae-0242ac110010
     [root@CSSOSBE03-B11 new_yaml]# kubectl create -f sc_2.yml
     storageclass "test-sc-snp" created

  4. Tried to create a PVC and got an error -

[root@CSSOSBE03-B11 new_yaml]# cat pvc_2.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test-snp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: test-sc-snp

ERROR -

01:27:51 dockervol.go:267: unable to create docker volume using test-sc-snp-d2300f9a-db3a-11e8-a0ae-0242ac110010 & map[virtualCopyOf:test-sc-9c1413bb-db39-11e8-a0ae-0242ac110010 size:20 mountConflictDelay:30] - Invalid input received: Invalid option(s) ['size'] specified for operation create snapshot. Please check help for usage.

Removed the size parameter -

[root@CSSOSBE03-B11 new_yaml]# cat pvc_2.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test-snp
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: test-sc-snp

ERROR -

[root@CSSOSBE03-B11 new_yaml]# kubectl create -f pvc_2.yml
The PersistentVolumeClaim "pvc-test-snp" is invalid: spec.resources[storage]: Required value

Kept the value of the size parameter blank (null) -

[root@CSSOSBE03-B11 new_yaml]# cat pvc_2.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test-snp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:

ERROR -

[root@CSSOSBE03-B11 new_yaml]# kubectl create -f pvc_2.yml
error: error validating "pvc_2.yml": error validating data: unknown object type "nil" in PersistentVolumeClaim.spec.resources.requests.storage; if you choose to ignore these errors, turn validation off with --validate=false

With the flag --validate=false, the snapshot could be created.

[root@CSSOSBE03-B11 new_yaml]# kubectl create -f pvc_2.yml --validate=false
persistentvolumeclaim "pvc-test-snp" created

[root@CSSOSBE03-B11 new_yaml]# docker volume ls
DRIVER VOLUME NAME
hpe test-sc-9c1413bb-db39-11e8-a0ae-0242ac110010
hpe test-sc-snp-58a9c097-db3e-11e8-a0ae-0242ac110010

@imran-ansari
Collaborator

imran-ansari commented Oct 29, 2018

The 'size' parameter is not allowed when creating a snapshot. However, the size parameter seems to be required by K8S.
Since Tushar was able to work around this by setting the size to 0 and using --validate=false, I feel that this is a candidate for a documentation issue.
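Since Kubernetes makes spec.resources.requests.storage mandatory, a possible documented workaround (assuming the plugin is changed to accept and ignore size for snapshots, as the eventual fix "Allow 'size' in snapshot options" does) would be a PVC sketch like this, where the value is arbitrary, e.g. the parent volume's size:

```
# Hypothetical snapshot PVC: the storage value is required by Kubernetes
# but would be ignored by the plugin once 'size' is allowed for snapshots.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test-snp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi        # any value; the snapshot inherits the parent's size
  storageClassName: test-sc-snp
```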

@dockerciuser
Contributor Author

Same behavior is seen while importing the volume.

@nilangekarss
Collaborator

@dockerciuser could you tell what size the snapshot gets created with when size is not specified and --validate=false is used?

@dockerciuser
Contributor Author

@nilangekarss , The size of the snapshot is the same as the size of the parent volume when using --validate=false

@dockerciuser
Contributor Author

@nilangekarss
Docker Engine - 17.06.2-ee-16
Kubernetes - v1.8.11-docker-8d637ae

@nilangekarss
Collaborator

@imran-ansari @wdurairaj @prablr79 We should allow the size parameter to be passed along with virtualCopyOf and importVol in our plugin, but ignore this parameter inside the code. The PVC is associated with a StorageClass, and a PVC has a required parameter spec.resources[storage]. Clone accepts the size specified during PVC creation, hence we do not see any error while creating a clone of a volume. So, due to the dependency on K8S, we should remove the restriction on size for the snapshot, schedule, and import functionality.
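The proposal above (accept size, then ignore it for snapshots and imports) can be sketched as a small option-normalizing helper. This is illustrative only, not the actual plugin code; the function name is hypothetical:

```python
def normalize_create_options(opts):
    """Sketch (not the actual plugin code): accept 'size' on snapshot
    and import requests, but drop it, since the created volume inherits
    the size of the parent / imported volume anyway."""
    opts = dict(opts)  # don't mutate the caller's map
    if 'virtualCopyOf' in opts or 'importVol' in opts:
        opts.pop('size', None)
    return opts

# Options as Kubernetes would pass them for a snapshot PVC:
snap_opts = {'virtualCopyOf': 'test-sc-9c1413bb', 'size': '20',
             'mountConflictDelay': '30'}
print(normalize_create_options(snap_opts))
# -> {'virtualCopyOf': 'test-sc-9c1413bb', 'mountConflictDelay': '30'}
```

For a plain volume create (no virtualCopyOf/importVol), the size key would pass through untouched, so clone behavior is unchanged.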

@dockerciuser
Contributor Author

This bug was raised while executing the tests on Kubernetes 1.8.11. As a workaround we were able to create a snapshot / import a volume with the flag --validate=false.
Now I see the same behavior on Kubernetes 1.12, where this flag no longer works either.

When I tried to import a volume I got the following error -

[root@kuber-master STORAGE-CLASS]# kubectl create -f pvc-importvol.yaml --validate=false
The PersistentVolumeClaim "pvc-importvolsc" is invalid: spec.resources[storage]: Invalid value: "0": must be greater than zero

Used the following YAMLs -

[root@kuber-master STORAGE-CLASS]# cat sc-importvol.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-importvolsc
provisioner: hpe.com/hpe
parameters:
  importVol: sample1
[root@kuber-master STORAGE-CLASS]#
[root@kuber-master STORAGE-CLASS]# cat pvc-importvol.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-importvolsc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:
  storageClassName: sc-importvolsc

Now we can't create a snapshot or import a volume in Kubernetes 1.12.

@wdurairaj
Collaborator

wdurairaj commented Nov 15, 2018

@dockerciuser , @imran-ansari , I believe we can allow size with the -o virtualCopyOf parameter in the request validator, and inside the code we have to skip processing the size, since the snapshot size is the same as the volume size. This will require a code fix.

@imran-ansari
Collaborator

Yes. Code fix + Documentation for Docker users.

@imran-ansari
Collaborator

I think this fix will be required for snapshot-schedule as well.

@wdurairaj
Collaborator

Even for -o importVol size should be supported.

@imran-ansari
Collaborator

Ideally this needs to be fixed in Dory/Doryd.

@wdurairaj
Collaborator

wdurairaj commented Nov 16, 2018

Even for -o importVol size should be supported.

I totally forgot that this option will be used in a storageClass:

  1. Create storageClass s1 with
     parameters:
       importVol: i1
  2. Create a PVC based on the above storage class s1.
  3. When a new PVC is again created, the volume creation will fail, since i1 has already been renamed.

So we shouldn't ideally use this in a storageClass.

wdurairaj added a commit to wdurairaj/python-hpedockerplugin that referenced this issue Nov 16, 2018
wdurairaj added a commit to wdurairaj/python-hpedockerplugin that referenced this issue Nov 16, 2018
@dockerciuser
Contributor Author

@wdurairaj @imran-ansari , I am able to create the snapshot with the latest plugin. When I tried to create a snapshot schedule, I got an error.
Error log from DORY -

05:44:53 dockervol.go:262: unable to create docker volume using sc-snapschedule-566bb67b-ebe7-11e8-917a-6cc21739c360 & map[virtualCopyOf:FC-VOLUME mountConflictDelay:30 scheduleFrequency:10 * * * * scheduleName:schedule-tushar snapshotPrefix:tushar] - Post http://unix/VolumeDriver.Create: http: ContentLength=217 with Body length 0 response - &{{ map[]} }
05:44:53 provisioner.go:520: failed to create docker volume, error = Post http://unix/VolumeDriver.Create: http: ContentLength=217 with Body length 0

The storage class YAML I used is -
[root@kuber-master testing_size_parameter]# cat sc-snapSched.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-snapschedule
provisioner: hpe.com/hpe
parameters:
  name: snaptushar
  virtualCopyOf: FC-VOLUME
  scheduleFrequency: "10 * * * *"
  scheduleName: schedule-tushar
  snapshotPrefix: tushar
[root@kuber-master testing_size_parameter]#

@imran-ansari
Collaborator

Could you please check if you are able to create a snapshot schedule directly using Docker with the same parameters?
Also please attach all the logs.

@dockerciuser
Contributor Author

@imran-ansari , yes, with the docker command I am able to create the snapshot schedule.

/var/log/messages snippet -

snap_sch.txt

@wdurairaj
Collaborator

05:44:53 dockervol.go:262: unable to create docker volume using sc-snapschedule-566bb67b-ebe7-11e8-917a-6cc21739c360 & map[virtualCopyOf:FC-VOLUME mountConflictDelay:30 scheduleFrequency:10 * * * * scheduleName:schedule-tushar snapshotPrefix:tushar] - Post http://unix/VolumeDriver.Create: http: ContentLength=217 with Body length 0 response - &{{ map[]} }

I somewhat suspect the scheduleFrequency:10 * * * *, which is not passed with double quotes in the k8s environment.

@wdurairaj
Collaborator

Hi Tushar,

Can you check how this volume got created?

[root@kuber-master testing_size_parameter]# docker volume ls
DRIVER              VOLUME NAME
hpe                 FC-VOLUME
hpe                 FC-VOLUME_SNAP
hpe                 FC-scheduling
hpe                 VOLUME
local               ed27581f9ec69055760c16874e680243624befae4d50d13b9bc8386c5f79430e
hpe                 sc-snapschedule-c85564f2-ebe2-11e8-917a-6cc21739c360
hpe                 sc-snapshot-b03b5b02-ebd8-11e8-917a-6cc21739c360
hpe                 sc-snapshot-sch-1f40a52f-ebdf-11e8-917a-6cc21739c360
hpe                 sc-snapshot2-f0f1c6a1-ebdf-11e8-917a-6cc21739c360
hpe                 tushar1235_qos_snap
[root@kuber-master testing_size_parameter]# docker volume inspect sc-snapschedule-c85564f2-ebe2-11e8-917a-6cc21739c360
[
    {
        "Driver": "hpe",
        "Labels": null,
        "Mountpoint": "/",
        "Name": "sc-snapschedule-c85564f2-ebe2-11e8-917a-6cc21739c360",
        "Options": {},
        "Scope": "global",
        "Status": {
            "snap_detail": {
                "3par_vol_name": "dcs-V4jKuKeOQS29UjiBphZR7A",
                "backend": "3PAR",
                "compression": null,
                "expiration_hours": null,
                "fsMode": null,
                "fsOwner": null,
                "is_snap": true,
                "mountConflictDelay": 30,
                "parent_id": "870ed217-4425-444b-b170-5c9b15536dce",
                "parent_volume": "FC-VOLUME",
                "provisioning": "thin",
                "retention_hours": null,
                "size": 100,
                "snap_cpg": "SHASHI-SSD",
                "snap_schedule": {
                    "sched_frequency": "20, * * * *",
                    "sched_snap_exp_hrs": null,
                    "sched_snap_ret_hrs": null,
                    "schedule_name": "schedule-tushar",
                    "snap_name_prefix": "tushar"
                }
            }
        }
    }
]

Was the above volume created using a storage class?

@wdurairaj
Collaborator

wdurairaj commented Nov 19, 2018

Actually, as per my analysis, even scheduleName can't be used in a storage class, for the same reason we can't have importVol:

  1. When the first PVC is created with an SC having scheduleName, it will succeed.
  2. When a second PVC is created with an SC having the same scheduleName as the previous one, the following error is thrown in the docker logs:
2018-11-19 11:45:33.541 14 INFO hpedockerplugin.hpe.hpe_3par_common [-] Created a snapshot schedule - command is: ['createsched', '"createsv -f test_prefix.@y@@m@@d@@H@@M@@S@ dcv-hw7SF0QlREuxcFybFVNtzg"', '"10 6,8,10 * * *"', 'doc_schedule', '\r']...^[[00m
2018-11-19 11:45:33.541 14 INFO hpedockerplugin.hpe.hpe_3par_common [-] Create schedule response is: ['CSIM-8K02_MXN6072AC7 cli% createsched "createsv -f test_prefix.@y@@m@@d@@H@@M@@\r', 'S@ dcv-hw7SF0QlREuxcFybFVNtzg" "10 6,8,10 * * *" doc_schedule \r', 'Error: Scheduled task name: doc_schedule is already in use\r', '\r', 'CSIM-8K02_MXN6072AC7 cli% \r']...^[[00m
2018-11-19 11:45:33.541 14 ERROR hpedockerplugin.hpe.hpe_3par_common [-] Create snapschedule failed Error is 'Error: Scheduled task name: doc_schedule is already in use' ^[[00m

You can see the 'doc_schedule' already exists.
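One way a provisioner could avoid this "already in use" collision, purely illustrative and not what the plugin does, is to derive a unique 3PAR schedule name per PVC from the user-supplied base name (the helper name here is hypothetical):

```python
import uuid

def unique_schedule_name(base):
    """Suffix the user-supplied scheduleName with a short unique id so
    that each dynamically provisioned PVC gets its own 3PAR schedule.
    (Hypothetical helper, for illustration only.)"""
    return "%s-%s" % (base, uuid.uuid4().hex[:8])

print(unique_schedule_name("doc_schedule"))  # e.g. doc_schedule-3f9a1c2b
```

Whether the array accepts names of that length would still need checking against 3PAR's schedule-name limits.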

@wdurairaj
Collaborator

You can confirm the above theory by creating a new storage class with a scheduleName different from the ones already existing in 3PAR; you will see that only the first schedule based on an SC will work correctly. So, it's expected behaviour.

@dockerciuser
Contributor Author

@wdurairaj , Did more experimentation. My observations are -

-> You are right, the snapshot is created on 3PAR
-> I can see it from the docker side as well.

But the PVC status remains "Pending" (expected "Bound"), which is a bit confusing.

Also saw the following error in the dory logs -

08:50:08 provisioner.go:237: statusLogger: provision chains=1, delete chains=0, parked chains=0, ids tracked=1, connection=valid
08:50:09 dockervol.go:262: unable to create docker volume using sc-sch-tushar-f1d54065-ec01-11e8-917a-6cc21739c360 & map[snapshotPrefix:tru virtualCopyOf:vol_test_sch mountConflictDelay:30 scheduleFrequency:10 * * * * scheduleName:trusch] - Post http://unix/VolumeDriver.Create: http: ContentLength=206 with Body length 0 response - &{{ map[]} }
08:50:09 provisioner.go:520: failed to create docker volume, error = Post http://unix/VolumeDriver.Create: http: ContentLength=206 with Body length 0
08:50:13 provisioner.go:237: statusLogger: provision chains=1, delete chains=0, parked chains=0, ids tracked=1, connection=valid

That means the functionality is working as expected, but it is giving an unwanted error and the "Pending" status.

@wdurairaj
Collaborator

I do agree with your statements @dockerciuser. But if our guideline were to use snapshot scheduling only as part of static provisioning (as part of PV/POD only) and not for dynamic provisioning (using an SC), would that suffice?

@prablr79
Contributor

@dockerciuser @wdurairaj are we thinking of a fix here? Or shall we live with documentation for this scenario, describing how the customer can handle this issue? I believe in the other scenarios we are able to create the snapshot schedule without any issues. Kindly confirm.

@nilangekarss
Collaborator

nilangekarss commented Nov 20, 2018

@dockerciuser @wdurairaj While creating a snapshot schedule we use an ssh connection; if the ssh response from the array is slow, the docker http client will time out in 30 seconds. If the plugin responds within that time, the snapshot schedule gets created properly (the client doesn't time out and a proper success response is returned from the plugin side). When we enable the plugin, the default client http timeout is set to 30 seconds.
One way to overcome this is to create the snapshot schedule on an array that has a good ssh response time; otherwise we need to increase the http client timeout while enabling the plugin and allow more time for the plugin APIs.

@dockerciuser , Tushar, can you try creating a schedule on a 3PAR array which has a good ssh response time? Also please try to create a schedule on a problematic array which has a bad ssh response time. The docker http client timeout can be increased up to 2 min while enabling the plugin (but it LOOKS like this is limited to plugin enable only).

I believe that when dory/the plugin sends an error response while creating a PVC, the PVC is expected to go into a Pending state. Please correct me if that is not correct.

Few of the moby links:
Note: API timeout and plugin timeout are 2 different things.
https://docs.docker.com/engine/reference/commandline/plugin_enable/
moby/moby#37426
moby/moby#37835
moby/moby#37908
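Per the docker plugin enable reference linked above, the HTTP client timeout can be raised from its 30-second default at enable time. A sketch, with the plugin name taken from later in this thread and the value chosen arbitrarily:

```
# Raise the HTTP client timeout from the 30 s default while enabling the
# plugin, so slow ssh round-trips to the array don't abort the create call:
docker plugin enable --timeout 120 hpestorage/legacyvolumeplugin:3.0
```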

wdurairaj added a commit that referenced this issue Nov 20, 2018
Fix duplicate schedule name when -o scheduleName is passed in SC (issue #390)
@dockerciuser
Contributor Author

Created the snapshot schedule successfully on a Kubernetes setup with both arrays, 10.50.3.9 and 10.50.3.24.
I can also see the status of the PVC as "Bound" and no exception in the dory log.

  1. Removed the old dory binary.
  2. Removed the containerized plugin.
  3. Installed new binaries from - https://github.com/hpe-storage/python-hpedockerplugin/raw/master/dory_installer
  4. Installed the latest plugin - hpestorage/legacyvolumeplugin:3.0

YAMLs used -

[root@kuber-master tushar]# cat sc_sch_tushar.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-sch-tushar
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: SNAPSCH
  scheduleFrequency: "10 * * * *"
  scheduleName: SCHTESTING
  snapshotPrefix: SNAPSHOT
[root@kuber-master tushar]# cat pvc_sch_tushar.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sch-tushar
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10
  storageClassName: sc-sch-tushar

Attaching the console output for the reference.
final_scheduling_testing.txt

@wdurairaj
Collaborator

Based on Tushar's previous update, I'm closing this bug.

wdurairaj added a commit that referenced this issue Apr 10, 2019
…in v300 branch (#569)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Feature: Add support for RotatingFileHandler in logging
nilangekarss pushed a commit that referenced this issue Apr 15, 2019
* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue #534 - invalid config entry creates session leak
wdurairaj added a commit that referenced this issue Apr 16, 2019
* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fixed issue #513

-Added rollback to mount flow for any cleanup in case of any failure
-Added validation for fsOwner

* Pep8 fixed
wdurairaj added a commit to wdurairaj/python-hpedockerplugin that referenced this issue Apr 25, 2019
* Fix Issue hpe-storage#390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue hpe-storage#534 - invalid config entry creates session leak
wdurairaj added a commit that referenced this issue Apr 25, 2019
* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue #534 - invalid config entry creates session leak
wdurairaj added a commit that referenced this issue Jun 24, 2019
* Fix Issue #534 (#576)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue #534 - invalid config entry creates session leak

* Fix for chcon error -- issue #640
wdurairaj added a commit that referenced this issue Jul 3, 2019
… not present (#669)

* Fix Issue #534 (#576)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue #534 - invalid config entry creates session leak

* Fix pop from empty list error on FC driver
nilangekarss pushed a commit that referenced this issue Sep 25, 2019
* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Added space between words

* Ability to create a regular volume from a replicated backend

* Fixed pep8 issue regarding redundant back slashes

* Pep8 Fix attempt 2: Indentation

* Update hpe_storage_api.py

* Fixed an issue when a regular volume is mounted while the backend is replication enabled

* Fixed pep8 line too long issues

* Not treating as a replication backend if the volume created is not replicated volume

* Allowing the non replicated volume from a replicated backend to be successfully imported to docker volume plugin

* Added back some required checks

* Removed unwanted space

* Update .travis.yml

* Fix for issue 518-encrypted password decrytion fails when the passphrase length is of 16,24,32 characters

* Fix for issue 502 (#555)

* Added needed check to see mgr object is available or not (#559)

* Feature: Add logging to file -- Include changes for Pull Request #563 in v300 branch (#569)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Feature: Add support for RotatingFileHandler in logging

* Fix Issue #534 (#576)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fix issue #534 - invalid config entry creates session leak

* Fix issue #513 on v300 branch (#583)

* Fix Issue #390, Allow 'size' in snapshot options

* Updated usage doc

* Fixed issue #513

-Added rollback to mount flow for any cleanup in case of any failure
-Added validation for fsOwner

* Pep8 fixed

* Use deferred thread for processing REST calls in twistd

* Fixed msg creation

* Retry on lock exception

* Another attempt on processing lock failed exception

* Changes in mount_volume to avoid lock during mount conflict check

* Fix _is_vol_mounted_on_this_node definition

* Minor change

* Backport pull request #650 and related changes

* Implemented blocking Etcd lock + Eviction fixes merged

* Fix problem with mount entry check

* Returning multiple enums from _is_vol_mounted_on_this_node + inspect output to have volume id

* Expect node_mount_info to be absent for the first mount

-Also removed dead code

* path_info to be handled as JSON object + handled stale mount_id in reboot case

* Fix for UTs

* Replaced path.path with path

* Fixed snap related TC

* PEP8 errors fixed

* Added more information to the logs

* For UT 3pardcv.log location changed

* Added check for manager-list initialization

* Removed redundant code

* Removed duplicate functions from fileutil

As part of merge process, fileutil ended up having two duplicate functions. Fixed it.
Also UT needed to use un-deferred thread code to avoid handling multi-threaded UTs.

* Fixed UTs for File

* Added exception handling for mount_dir()

* Adopted 3.2 async initialization fix required for UT

* Reintroduced sleep of 3 secs

* Corrected usage of sleep() call

* Disabled detailed logging due to Travis CI log size restriction

* Pep8 fix

* Fix for issue #735

* Fixed removal of redundant old_path_info entries

* Added missing argument to rollback call

* Removed code that was added to look for iscsi devices

Ideally, we should remove this file altogether... to be taken up later
wdurairaj pushed a commit that referenced this issue Oct 17, 2019

* pyparsing ImportError fix

* Changed setuptools version to 41.0.0

41.0.0 was used by v3.1 of the plugin

* Device remapping fix

On reboot, the volumes that were mapped to the multipath devices earlier are remapped to different devices. This fix handles that case.

* Fixed lock name

* Fixed PEP8 issues

* Missed a PEP8 conformance

* Fixed log statement