
Merge dev (v2.0.0) into master branch #280

Closed
wants to merge 28 commits into from

Conversation

shay-berman
Contributor

@shay-berman shay-berman commented Dec 6, 2018

GA v2.0.0

IBM Storage Enabler for Containers allows IBM storage systems to be used as persistent volumes for stateful applications running in Kubernetes clusters. It is the official IBM solution for Kubernetes, based on the Ubiquity project. The solution supports both IBM block- and file-based storage systems.

  • IBM block storage is supported via IBM Spectrum Connect. Ubiquity communicates with the IBM storage systems through Spectrum Connect. Spectrum Connect creates a storage profile (for example, gold, silver or bronze) and makes it available for Kubernetes.
  • New - Support for IBM Spectrum Scale. Ubiquity communicates with the Spectrum Scale system directly via the Spectrum Scale REST API v2 (see the sketch after this list).
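
A minimal sketch of what a direct call to the Spectrum Scale REST API v2 might look like (this is not the Ubiquity code; the management host, credentials, and the decision to skip certificate verification are assumptions for illustration only):

```go
// Sketch: list filesystems through the Spectrum Scale REST API v2.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Certificate verification is skipped here only to keep the sketch short;
	// the release notes mention TLS ca-cert support for secure communication.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// Hypothetical management endpoint and credentials.
	req, err := http.NewRequest("GET",
		"https://scale-mgmt.example.com:443/scalemgmt/v2/filesystems", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("admin", "password")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```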

New in this release


deeghuge and others added 27 commits August 28, 2018 10:03
UB-1487 : Using ManagementIP and Port instead of accepting URL for endpoint
UB-1488 : Using SPECTRUMSCALE_ prefix instead of SSC_
UB-1495 : Improve the check in backend initialization. Added a check for the scbe backend. A check for Spectrum Scale will be added later.
Removed MMCLI and MMSSH backend initialization as they are no longer valid
Removed sshconfig from resource.go because it is no longer required
- Reverted changes to NFS, Connector, SSH as they should not be part of epic1 changes.
- Added variable for SpectrumScaleParamPrefix
This PR solves the following idempotency issue:
If the umount process fails after the volume is already detached on the storage but the multipath device still exists, the next umount attempt will fail and get stuck on the "umount" command forever.
The solution is to run the dmsetup command before the umount command (dmsetup sets queue_if_no_path to 0). This seems to solve the issue for this scenario and the umount finishes without further issues. (A sketch of the idea follows below.)
(This was also checked for staging, and the regular flow works fine as well.)
#247)
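
A minimal sketch of the dmsetup-before-umount idea (not the actual Ubiquity implementation; the device name and mount point are hypothetical):

```go
// Sketch: run dmsetup on the multipath device before umount so that queued I/O
// fails instead of hanging forever when the volume was already detached.
package main

import (
	"fmt"
	"os/exec"
)

func umountWithDmsetup(mpathDevice, mountPoint string) error {
	// "fail_if_no_path" sets queue_if_no_path to 0 for the device.
	if out, err := exec.Command("dmsetup", "message", mpathDevice, "0", "fail_if_no_path").CombinedOutput(); err != nil {
		return fmt.Errorf("dmsetup failed: %v (%s)", err, out)
	}
	if out, err := exec.Command("umount", mountPoint).CombinedOutput(); err != nil {
		return fmt.Errorf("umount failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	// Hypothetical device and mount point.
	if err := umountWithDmsetup("mpathb", "/ubiquity/fake-wwn"); err != nil {
		fmt.Println(err)
	}
}
```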

If we encounter a bad mpath device, we continue with the scan of the other devices.

(This is based on #170 plus some code review and test fix changes.)
In this PR we check, before deleting the /ubiquity/wwn folder, whether any files are left there. If there are still files (e.g., for some reason the mountpoint directory /ubiquity/wwn had some files in it before the block device was mounted on it), an ERROR will be raised until the folder is cleaned of files.
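
A minimal sketch of the emptiness check (the helper name and paths are illustrative, not the Ubiquity code):

```go
// Sketch: only delete the /ubiquity/<WWN> mountpoint directory if it is empty;
// otherwise raise an error until the leftover files are cleaned up.
package main

import (
	"fmt"
	"os"
)

func removeMountPointIfEmpty(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	if len(entries) > 0 {
		return fmt.Errorf("mountpoint %s is not empty (%d entries), refusing to delete", dir, len(entries))
	}
	return os.Remove(dir)
}

func main() {
	if err := removeMountPointIfEmpty("/ubiquity/fake-wwn"); err != nil {
		fmt.Println("ERROR:", err)
	}
}
```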
Spectrumscale-epic1 fix for UB-1487,UB-1488,UB-1495
Changed parameter prefix to SPECTRUMSCALE from SSC
Changed the way of accepting the REST endpoint
- Added support for PostgreSQL for the Spectrum Scale backend along with UT
- Added support for a TLS ca-cert for Spectrum Scale REST secure communication
- Logger changes
- Removed scale specific checks from Activate
- Removed rest v1, mmcli and ssh connector support.
- Removed nfs and lightweight volume backend support
- Removed hostname from Restconfig
- Removed getClusterID, SetClusterID functionality
* Upgrade to Alpine 3.8, openssl=1.0.2p-r0 and ca-certificates-20171114-r3
In case a device is faulty, we identify the situation, skip sg_inq, and continue to do the rest of the umount flow as needed. (A sketch of the detection follows below.)
This is done to prevent a situation where, if a device has been faulty for a while and we try to umount it, the sg_inq command usually gets stuck (~6 minutes).
This should work for both Ubuntu and Red Hat (the multipath output is a bit different between them).
This PR has unit tests added to cover the no-vendor scenario.
The umount itself of a no-vendor device was fixed as part of UB-600.
#252)
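
A minimal sketch of the faulty-device detection (the string match on "faulty" is an assumption; the real multipath output and parsing differ between Ubuntu and Red Hat, and the device name is hypothetical):

```go
// Sketch: check `multipath -ll` output for a faulty device and skip sg_inq,
// so the umount flow can continue instead of getting stuck.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func isFaultyDevice(mpathDevice string) (bool, error) {
	out, err := exec.Command("multipath", "-ll", mpathDevice).CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "faulty"), nil
}

func main() {
	faulty, err := isFaultyDevice("mpathb") // hypothetical device
	if err != nil {
		fmt.Println("multipath check failed:", err)
		return
	}
	if faulty {
		fmt.Println("device is faulty: skipping sg_inq and continuing the umount flow")
		return
	}
	// Otherwise run sg_inq to read the vendor and continue as usual.
}
```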

spectrumscale epic3 - Ignore backend sent by Provisioner/flex during Activate
In this PR the idempotency issues in the delete request are fixed.
The issues are as follows (in the delete volume function):

* In the get back-end step in the storage_api_handler delete function: if the volume does not exist, return nil.
* In get volume: if the volume does not exist, return success.
* In volume delete from XIV: if a 404 error is returned from SC, continue with the flow (to delete from the ubiquity DB).
* In delete volume from DB (for the case where we have a non-DB volume): if a volume-not-found error is returned, return success.

In all the above idempotent cases a warning message is written to the logger.
Comment: an assumption is made that if a volume is removed from the DB then it is definitely removed from the storage, since the removal from the DB is the later operation.

Note: an additional PR handles delete idempotency on the flex side -> #266
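
A minimal sketch of the idempotent-delete idea (the types, helpers, and error values here are hypothetical, not the Ubiquity API):

```go
// Sketch: during delete, treat "volume not found" on the backend or in the DB
// as success and only write a warning to the logger, so retried deletes pass.
package main

import (
	"errors"
	"fmt"
	"log"
)

var errVolumeNotFound = errors.New("volume not found")

func deleteVolume(name string, backendDelete, dbDelete func(string) error) error {
	if err := backendDelete(name); err != nil {
		if !errors.Is(err, errVolumeNotFound) {
			return err
		}
		log.Printf("WARNING: volume %s not found on the backend, continuing delete", name)
	}
	if err := dbDelete(name); err != nil {
		if !errors.Is(err, errVolumeNotFound) {
			return err
		}
		log.Printf("WARNING: volume %s not found in the DB, treating delete as success", name)
	}
	return nil
}

func main() {
	err := deleteVolume("vol1",
		func(string) error { return errVolumeNotFound }, // already deleted on the backend
		func(string) error { return nil },
	)
	fmt.Println("delete result:", err) // prints <nil>: the retried delete succeeds
}
```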
Parameter Validation
- Added a check before initializing the Spectrum Scale backend
- Added a check before the creation of a fileset
- Changed chown to the Spectrum Scale native REST command
- Get the mount point from the Spectrum Scale configuration
- Check for quota before the creation of a fileset
- UT changes and additions
In this PR the idea is to allow the user to use iSCSI + FC on the same Kubernetes cluster.
Before this PR the user had to define a config parameter, skipRescanIscsI, to specify whether or not to use iSCSI in ubiquity.
This PR removes this config and works under the assumption that if a user has installed iscsiadm on the node, then this node should be connected to the storage system with iSCSI. Also, the absence of the iscsiadm command or a failed login to the storage will not fail the rescan and the mount/umount process!
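
A minimal sketch of the iscsiadm-based decision (the command flags shown are standard open-iscsi usage, not necessarily what Ubiquity runs):

```go
// Sketch: rescan iSCSI only when iscsiadm is installed on the node, and never
// fail the mount/umount flow because the rescan or login did not succeed.
package main

import (
	"fmt"
	"os/exec"
)

func rescanIscsiIfAvailable() {
	if _, err := exec.LookPath("iscsiadm"); err != nil {
		fmt.Println("iscsiadm not installed, skipping iSCSI rescan (FC-only node)")
		return
	}
	// Rescan all iSCSI sessions; failures are logged and ignored.
	if out, err := exec.Command("iscsiadm", "-m", "session", "--rescan").CombinedOutput(); err != nil {
		fmt.Printf("iSCSI rescan failed (%v): %s - continuing anyway\n", err, out)
	}
}

func main() {
	rescanIscsiIfAvailable()
}
```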
- Using the REST JobID in the REST call instead of in a filter.
- Updated test cases.
- Handled existing fileset functionality differently, using the isPreexisting flag in the storageClass.
This PR fixed the DS8k and Storwize Lun0 volume rescan issue.
Fixing the 'Paging' variable type in Spectrum Scale REST calls
* UB-1564: updated notices file
Added the WWN to the logging message for automation purposes
In this PR the issue that is solved is the following:
As part of the mount, a check is made to see whether there are any soft links pointing to the desired mount point (/ubiquity/WWN); this is needed in order to not allow 2 pods to be attached to the same PVC.
An issue may occur when one of those soft links is not actually active (for example, a pod moving to another node in the cluster and back to the original one due to shutdowns); this is what this PR solves.
Besides checking whether any soft links exist, a check is made to see whether the mount point is mounted. If it is, then this is a valid soft link and the current mount should fail; otherwise the current mount can continue uninterrupted.
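
A minimal sketch of the "is the mount point actually mounted" check (reading /proc/mounts is one common way to do this; the path is hypothetical and this is not the Ubiquity code):

```go
// Sketch: before failing a mount because a soft link already points to
// /ubiquity/<WWN>, check whether that mount point is really mounted.
// A stale link to an unmounted path should not block the current mount.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func isMounted(mountPoint string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[1] == mountPoint {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	mounted, err := isMounted("/ubiquity/fake-wwn") // hypothetical mount point
	if err != nil {
		fmt.Println(err)
		return
	}
	if mounted {
		fmt.Println("mount point is mounted: the existing soft link is valid, fail this mount")
	} else {
		fmt.Println("stale soft link: the current mount can continue uninterrupted")
	}
}
```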
Fix notices from openssl 1.0.2p -> 1.0.2q
Upgrade the ubiquity image to use openssl 1.0.2q instead of 1.0.2p. Also update notices file.
- Add scale support text
- Improve text
- New Fix Central URL to point to the new location
- Remove the Scale md file (not needed here; only in the ubiquity-k8s repo)
@shay-berman shay-berman self-assigned this Dec 6, 2018
@shay-berman shay-berman changed the title Merge v2.0.0 into master branch Merge dev (v2.0.0) into master branch Dec 6, 2018
@shay-berman
Contributor Author

Closing this PR; instead I opened a release branch for v2.0.0 -> #281
so that dev will be available for future progress before the v2.0.0 release. (Thanks @olgashtivelman for reminding me about the release branch.)

@shay-berman shay-berman closed this Dec 6, 2018
Collaborator

@olgashtivelman olgashtivelman left a comment


:lgtm:

Reviewed 64 of 64 files at r1.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @ranhrl, @deeghuge, and @yadaven)
