This repository has been archived by the owner on Jul 22, 2024. It is now read-only.
Conversation
This PR is needed for the automated testing of the idempotent issues. (Yes, it is known that this is not ideal, but that is the current state of the automation; hopefully this can be avoided and changed in the future.) The automation needs a WWN to be present in the warning message, so a small refactor was done in order to avoid making multiple calls to the Ubiquity server.
Currently there are errors filling the provisioner logs that look like this: ERROR: logging before flag.Parse: I0716 13:33:58.655590 1 controller.go:1063] volume "ibm-ubiquity-db" deleted from database ERROR: logging before flag.Parse: I0716 13:34:00.224200 1 controller.go:826] volume "ibm-ubiquity-db" for claim "ubiquity/ibm- They seem to be caused by a change in glog (used by Kubernetes). This small fix removes them from the log (the fix was taken from cloudnativelabs/kube-router@78a0aeb).
… PV mountpoint (#209)
- Added idempotent changes for the IsAttached/Attach/Detach operations.
- For the Spectrum Scale backend, IsAttached/Attach/Detach will return "Not Supported".
- Fixed lnpath for Spectrum Scale.
- Added a new function, getMountpointForVolume, which returns the appropriate mountpoint for the respective backend.
- Added UTs for the changes.
Additional changes pulled into this PR because of a rebase:
- UB-1491: Add log message for flex detach (#208)
- UB-599: discover by sg_inq, continue on bad mpath device
- Fix/ub 1431 fix glog errors in provisioner (#212)
…iqutiy cli status
- Provide a basic helm chart for ubiquity instead of the original ubiquity_installer.sh script. - This phase-1 helm chart is just the baseline; it currently has some limitations, which are mentioned in PR #213.
Align ubiquity-k8s with changes from the ubiquity dev branch
…d add pod spec serviceAccount: ubiquity
…ent) and add update for event
…biquity-k8s-provisioner
Unified Installer
- Added new storage class yaml for Spectrum Scale: storage-class-spectrumscale.yml
- Added new configuration file for Spectrum Scale: ubiquity_installer_scale.conf
- Added new secret file for Spectrum Scale: spectrumscale-credentials-secret.yml
- Added Spectrum Scale specific parameters in the configmap and deployment yml
- The installer will be used to install either Spectrum Scale or SCBE, not both
- Added functions to find the backend to be installed from the configmap and the configuration file
- The backend to be installed is decided based on the values of SPECTRUMSCALE_MANAGEMENT_IP and SCBE_MANAGEMENT_IP
- The secret will be created based on the backend
- Added the Spectrum Scale ca-cert in the configmap based on the backend
- Removed the check for username and password from the flex setup
- The default backend will be updated in the configmap based on the backend being installed
- SKIP_RESCAN_ISCSI_VALUE is set to false in the case of Spectrum Scale
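The backend-selection rule above can be sketched as follows. selectBackend is a hypothetical helper (the real installer implements this decision in shell); it illustrates the stated rule that exactly one of the two management IPs must be set, and that this choice picks the backend:

```go
package main

import (
	"errors"
	"fmt"
)

// selectBackend illustrates the installer's decision rule: the backend is
// chosen by which management IP is configured, and configuring both (or
// neither) is an error, since the installer supports one backend at a time.
func selectBackend(spectrumScaleIP, scbeIP string) (string, error) {
	switch {
	case spectrumScaleIP != "" && scbeIP != "":
		return "", errors.New("only one backend can be installed, not both")
	case spectrumScaleIP != "":
		return "spectrum-scale", nil
	case scbeIP != "":
		return "scbe", nil
	default:
		return "", errors.New("no management IP set; cannot pick a backend")
	}
}

func main() {
	backend, err := selectBackend("", "9.1.2.3") // hypothetical SCBE address
	fmt.Println(backend, err)
}
```

The function names and the exact error messages are assumptions for illustration only.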
By mistake, I merged #215 without a GitHub squash of the commits and without rebasing first. This commit fixes that gap (instead of doing some reverts in dev, we decided to just fix it with this commit).
During the Ubiquity uninstall process we wait for all objects to be deleted; one of them is the PV object. The problem is that there are currently some cases in which the PV remains in Kubernetes even though the PVC was deleted (an idempotent delete issue). Because of this, another way is needed to get the PV name, instead of getting it from the PVC as we do now. The solution here is to use the name in the ubiquity-configmap to check whether a PV with that name remains. If the PV is still there, the uninstall process stops at this step, no matter how many times the user invokes it, until someone intervenes, removes the PV, and fixes the issue.
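The guard described above can be sketched like this. uninstallGuard and pvExists are hypothetical names, not the installer's actual code; pvExists stands in for the real lookup (e.g. a `kubectl get pv <name>` check in the installer script):

```go
package main

import "fmt"

// uninstallGuard stops the uninstall if the PV named in the
// ubiquity-configmap still exists, regardless of how many times the
// uninstall is retried. The lookup is injected so the rule itself
// stays independent of how the cluster is queried.
func uninstallGuard(pvName string, pvExists func(string) bool) error {
	if pvExists(pvName) {
		return fmt.Errorf("PV %q still exists; stopping uninstall until it is removed manually", pvName)
	}
	return nil
}

func main() {
	// Simulate a cluster where the leftover PV is still present.
	stillThere := func(name string) bool { return name == "ibm-ubiquity-db" }
	fmt.Println(uninstallGuard("ibm-ubiquity-db", stillThere))
}
```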
…chart (#247) Add the provisioner service account, clusterRoles and clusterRoleBinding into the new helm chart (there is no longer a need for a dedicated k8s-config file\configmap). In addition, this commit drops the use of the KUBECONFIG env in the provisioner, so the provisioner can access the API server only via the service account (no option to bring your own k8s config file any more).
In this PR the idempotent issues in the delete request are fixed. The issue is as follows (in the delete volume function in the provisioner): if, during get volume, a "volume does not exist" error is returned, then the operation should return success. Comment: the assumption is made that if a volume was removed from the DB then it was definitely removed from the storage, since the removal from the DB is the later operation.
Added a template for the Spectrum Scale storage class. Added comments in the sanity-pvc yaml and pvc-template yaml. Removed SPECTRUMSCALE-SSL-MODE as it was not referenced by the installer.
UB-1602: Support Spectrum Scale inside helm chart
* If the Scale backend is selected, only the Scale-specific params (configmap\flex\provisioner and ubiquity objects) will be defined in the templates, and only backend-specific objects (storage class and secret) will be defined based on the backend type. The same applies to the Spectrum Connect backend type. Distinguished by {{ if .Values.spectrumConnect }} and {{ if .Values.spectrumScale }}.
* Create a credential secret and storage class dedicated to Spectrum Scale.
* sslMode moved to the generic section; aligned all if statements accordingly.
Other items:
* Add the maintainers, versions and sources to the Chart.
* Align the original installer config file to use v2.0.0.
* Allow setting flex Tolerations and db NodeSelector.
* Set the storage class in the PVC using the v1 spec storageClassName instead of the old annotation volume.beta.kubernetes.io/storage-class.
* Address all of Olga's and Deepak's code review comments (clearer if/else statements, allow setting the storage class as default, add an empty line at EOF, and more).
Avoid the volume symlink check for the spectrum-scale backend. Because of this change, spectrum-scale can allow multiple PODs to mount the same volume.
* New layout of the values.yml file. * Update all the templates according to the new layout. * Update all text in the values.yml file based on Dima's feedback.
* UB-1564: updated notices file
In this PR the following issue is solved: as part of the mount, a check is made to see whether any soft links point to the desired mount point (to /ubiquity/WWN); this is needed in order not to allow two pods to be attached to the same PVC. An issue may occur when one of those soft links is not actually active (for example, a pod moving to another node in the cluster and back to the original one due to shutdowns); this is what this PR solves. Besides checking whether any soft links exist, a check is made to see whether the mount point is mounted. If it is, then this is a valid soft link and the current mount should fail; otherwise the current mount can continue uninterrupted.
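The two-step decision above can be sketched as follows. canMount and the injected predicates are hypothetical names; hasSymlink stands in for the existing soft-link scan and isMounted for the new mountpoint check added by this PR:

```go
package main

import "fmt"

// canMount applies the rule described above: a soft link to the
// mountpoint blocks a new mount only if the mountpoint is also actively
// mounted; a stale leftover link alone no longer blocks it.
func canMount(mountpoint string, hasSymlink, isMounted func(string) bool) (bool, error) {
	if hasSymlink(mountpoint) && isMounted(mountpoint) {
		// An active link from another pod: refuse the second mount.
		return false, fmt.Errorf("%s already has an active mount", mountpoint)
	}
	// No link, or only a stale link left behind (e.g. after a pod moved
	// to another node and back): the mount may proceed.
	return true, nil
}

func main() {
	stale := func(string) bool { return true }       // a leftover symlink exists
	notMounted := func(string) bool { return false } // but nothing is mounted
	ok, err := canMount("/ubiquity/WWN", stale, notMounted)
	fmt.Println(ok, err) // true <nil>
}
```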
UB-1731 - update the yaml files to point to new cert location mount
Update notices file with openssl 1.0.2q
- Align the README file with version 2.0.0 (adding Scale support, fixing some syntax, and new URLs) - Improve the scale md file with up-to-date examples by @deeghuge - Align glide with the latest ubiquity
Fixed the installer for ssl-mode=verify-full. Fixed the steps mentioned at the end of the create-secrets-for-certificates step. Fixed the names used by create_serviceaccount_and_clusterroles for the existence check.
Due to master merge conflict.
shay-berman requested review from 27149chen, yadaven, ranhrl, deeghuge and olgashtivelman on December 6, 2018.
olgashtivelman approved these changes on Dec 19, 2018.
deeghuge approved these changes on Dec 19, 2018.
GA v2.0.0
IBM Storage Enabler for Containers allows IBM storage systems to be used as persistent volumes for stateful applications running in Kubernetes clusters. The IBM official solution for Kubernetes, based on the Ubiquity project, is referred to as IBM Storage Enabler for Containers. The solution supports both IBM block- and file-based storage systems.
New in this release
Spectrum Scale Support
RBAC for Dynamic Provisioner
Expanded support matrix:
(Fix/ub 1675 support fc and iscsi #254). Ubiquity Flex automatically identifies whether an iSCSI rescan is needed on the host, in addition to the regular SCSI rescan operations. Note: the skipRescanIscsi param in the configmap is deprecated.
Stability improvements:
Related pull requests: PRs-ubiquity and PRs-ubiquity-k8s, and also "Stability: remove mountpoint safely" ubiquity#248.
Helm Chart (not officially ready yet; it's a tech preview)