}}
| COP/OS | Supported Versions |
|-|-|
-| Kubernetes | 1.23, 1.24, 1.25 |
+| Kubernetes | 1.23, 1.24, 1.25, 1.26 |
| Red Hat OpenShift | 4.10, 4.11 |
| RHEL | 7.x, 8.x |
| CentOS | 7.8, 7.9 |
diff --git a/content/docs/applicationmobility/deployment.md b/content/docs/applicationmobility/deployment.md
index 7950919f77..38e03da3b7 100644
--- a/content/docs/applicationmobility/deployment.md
+++ b/content/docs/applicationmobility/deployment.md
@@ -39,7 +39,7 @@ This table lists the configurable parameters of the Application Mobility Helm ch
| - | - | - | - |
| `replicaCount` | Number of replicas for the Application Mobility controllers | Yes | `1` |
| `image.pullPolicy` | Image pull policy for the Application Mobility controller images | Yes | `IfNotPresent` |
-| `controller.image` | Location of the Application Mobility Docker image | Yes | `dellemc/csm-application-mobility-controller:v0.2.0` |
+| `controller.image` | Location of the Application Mobility Docker image | Yes | `dellemc/csm-application-mobility-controller:v0.3.0` |
| `cert-manager.enabled` | If set to true, cert-manager will be installed during Application Mobility installation | Yes | `false` |
| `veleroNamespace` | If Velero is already installed, set to the namespace where Velero is installed | No | `velero` |
| `licenseName` | Name of the Secret that contains the License for Application Mobility | Yes | `license` |
@@ -57,7 +57,7 @@ This table lists the configurable parameters of the Application Mobility Helm ch
| `velero.configuration.backupStorageLocation.config` | Additional provider-specific configuration. See https://velero.io/docs/v1.9/api-types/backupstoragelocation/ for specific details. | Yes | ` ` |
| `velero.initContainers` | List of plugins used by Velero. Dell Velero plugin is required and plugins for other providers can be added. | Yes | ` ` |
| `velero.initContainers[0].name` | Name of the Dell Velero plugin. | Yes | `dell-custom-velero-plugin` |
-| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.2.0` |
+| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.3.0` |
| `velero.initContainers[0].volumeMounts[0].mountPath` | Mount path of the volume mount. | Yes | `/target` |
| `velero.initContainers[0].volumeMounts[0].name` | Name of the volume mount. | Yes | `plugins` |
| `velero.restic.privileged` | If set to true, Restic Pods will be run in privileged mode. Note: Set to true when using Red Hat OpenShift | No | `false` |
diff --git a/content/docs/applicationmobility/release.md b/content/docs/applicationmobility/release.md
index 996587bd98..b0f6d68579 100644
--- a/content/docs/applicationmobility/release.md
+++ b/content/docs/applicationmobility/release.md
@@ -6,6 +6,19 @@ Description: >
Release Notes
---
+## Release Notes - CSM Application Mobility 0.3.0
+### New Features/Changes
+
+There are no new features in this release.
+
+### Fixed Issues
+
+- [CSM app-mobility can delete restores but they pop back up after 10 seconds.](https://github.com/dell/csm/issues/690)
+- [dellctl crashes on a "backup get" when a trailing "/" is added to the namespace](https://github.com/dell/csm/issues/691)
+
+### Known Issues
+
+There are no known issues in this release.
## Release Notes - CSM Application Mobility 0.2.0
diff --git a/content/docs/authorization/_index.md b/content/docs/authorization/_index.md
index 12808a5a7e..bc9ac035f1 100644
--- a/content/docs/authorization/_index.md
+++ b/content/docs/authorization/_index.md
@@ -33,7 +33,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
{{
}}
## Roles and Responsibilities
diff --git a/content/docs/authorization/cli.md b/content/docs/authorization/cli.md
index eee82e73bd..a1b72e0434 100644
--- a/content/docs/authorization/cli.md
+++ b/content/docs/authorization/cli.md
@@ -38,6 +38,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl tenant list](#karavictl-tenant-list) | Lists tenant resources within CSM |
| [karavictl tenant revoke](#karavictl-tenant-revoke) | Get a tenant resource within CSM |
| [karavictl tenant delete](#karavictl-tenant-delete) | Deletes a tenant resource within CSM |
+| [karavictl tenant update](#karavictl-tenant-update) | Updates a tenant resource within CSM |
## General Commands
@@ -1095,3 +1096,42 @@ karavictl tenant delete [flags]
$ karavictl tenant delete --name Alice
```
On success, there will be no output. You may run `karavictl tenant get --name ` to confirm the deletion occurred.
+
+
+
+---
+
+
+
+### karavictl tenant update
+
+Updates a tenant's resource within CSM
+
+##### Synopsis
+
+Updates a tenant resource within CSM
+
+```
+karavictl tenant update [flags]
+```
+
+##### Options
+
+```
+  -h, --help   help for update
+ -n, --name string Tenant name
+ --approvesdc boolean (Usage: --approvesdc=true/false | This flag is only applicable to PowerFlex. This flag will Approve/Deny a tenant's SDC request )
+```
+
+##### Options inherited from parent commands
+
+```
+ --addr string Address of the server (default "localhost:443")
+ --config string config file (default is $HOME/.karavictl.yaml)
+```
+
+##### Output
+```
+$ karavictl tenant update --name Alice --approvesdc=false
+```
+On success, there will be no output. You may run `karavictl tenant get --name ` to confirm the update was persisted.
\ No newline at end of file
diff --git a/content/docs/authorization/configuration/proxy-server/_index.md b/content/docs/authorization/configuration/proxy-server/_index.md
index 5b41203732..0a0f583ec4 100644
--- a/content/docs/authorization/configuration/proxy-server/_index.md
+++ b/content/docs/authorization/configuration/proxy-server/_index.md
@@ -14,7 +14,7 @@ The storage administrator must first configure the proxy server with the followi
- Bind roles to tenants
>__Note__:
-> - The `RPM deployment` will use the address and port of the server (i.e. grpc.DNS-hostname:443).
+> - The `RPM deployment` will use the address and port of the server (i.e. grpc.<DNS-hostname>:443).
> - The `Helm deployment` will use the address and port of the Ingress hosts for the storage, tenant, and role services.
### Configuring Storage
@@ -24,15 +24,17 @@ A `storage` entity in CSM Authorization consists of the storage type (PowerFlex,
```yaml
# RPM Deployment
-karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --array-insecure
+karavictl storage create --type powerflex --endpoint ${powerflexIP} --system-id ${systemID} --user ${user} --password ${password} --array-insecure
# Helm Deployment
-karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --insecure --array-insecure --addr storage.csm-authorization.com:
+karavictl storage create --type powerflex --endpoint ${powerflexIP} --system-id ${systemID} --user ${user} --password ${password} --insecure --array-insecure --addr storage.csm-authorization.com:
```
>__Note__:
> - The `insecure` flag specifies to skip certificate validation when connecting to the CSM Authorization storage service.
> - The `array-insecure` flag specifies to skip certificate validation when proxy-service connects to the backend storage array. Run `karavictl storage create --help` for help.
+> - The `powerflexIP` is the endpoint of your PowerFlex machine. The `systemID` can be found on the `https://<powerflexIP>/dashboard/performance` page, next to the `System` title.
+> - The `user` and `password` arguments are the credentials for the PowerFlex UI.
### Configuring Tenants
@@ -40,7 +42,7 @@ A `tenant` is a Kubernetes cluster that a role will be bound to. For example, to
```yaml
# RPM Deployment
-karavictl tenant create --name Finance --insecure --addr grpc.DNS-hostname:443
+karavictl tenant create --name Finance --insecure --addr grpc.<DNS-hostname>:443
# Helm Deployment
karavictl tenant create --name Finance --insecure --addr tenant.csm-authorization.com:
@@ -48,6 +50,17 @@ karavictl tenant create --name Finance --insecure --addr tenant.csm-authorizatio
>__Note__:
> - The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl tenant create --help` for help.
+> - `DNS-hostname` refers to the hostname of the system on which the CSM for Authorization server is installed. It can be found by running `nslookup` against the IP address of that system.
+
+> - For the PowerFlex pre-approved GUID feature, the `approvesdc` boolean flag is `true` by default. If the `approvesdc` flag is set to `false` for a tenant, the proxy server will deny requests to approve SDCs that are not already approved. In order to change this flag for an existing tenant, see the `tenant update` command in the CLI section.
+
+```yaml
+# RPM Deployment
+karavictl tenant create --name Finance --approvesdc=false --insecure --addr grpc.DNS-hostname:443
+
+# Helm Deployment
+karavictl tenant create --name Finance --approvesdc=false --insecure --addr tenant.csm-authorization.com:
+```
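+
+To change this flag later for an existing tenant, the `tenant update` command documented in the CLI section can be used. The sketch below assumes the same tenant name as above; the `--addr` flag can be added exactly as in the create commands when running the CLI remotely:
+
+```yaml
+karavictl tenant update --name Finance --approvesdc=true
+```
+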
### Configuring Roles
@@ -70,7 +83,7 @@ A `role binding` binds a role to a tenant. For example, to bind the `FinanceRole
```yaml
# RPM Deployment
-karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr grpc.DNS-hostname:443
+karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr grpc.<DNS-hostname>:443
# Helm Deployment
karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr tenant.csm-authorization.com:
@@ -93,7 +106,7 @@ After creating the role bindings, the next logical step is to generate the acces
```
echo === Generating token ===
- karavictl generate token --tenant ${tenantName} --insecure --addr grpc.DNS-hostname:443 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml
+    karavictl generate token --tenant ${tenantName} --insecure --addr grpc.<DNS-hostname>:443 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml
echo === Copy token to Driver Host ===
sshpass -p ${DriverHostPassword} scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml
diff --git a/content/docs/authorization/deployment/rpm/_index.md b/content/docs/authorization/deployment/rpm/_index.md
index ddddc0ed30..2528d8ea7c 100644
--- a/content/docs/authorization/deployment/rpm/_index.md
+++ b/content/docs/authorization/deployment/rpm/_index.md
@@ -124,6 +124,16 @@ A Storage Administrator can execute the shell script, install_karavi_auth.sh as
sh install_karavi_auth.sh
```
+   Optionally, on version 1.6.0, the NodePorts for the ingress controller can be specified:
+
+ ```
+ sh install_karavi_auth.sh --traefik_web_port --traefik_websecure_port
+
+ Ex.:
+
+ sh install_karavi_auth.sh --traefik_web_port 30001 --traefik_websecure_port 30002
+   ```
+
5. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
If errors occur during installation, review the [Troubleshooting](../../troubleshooting) section.
diff --git a/content/docs/authorization/release/_index.md b/content/docs/authorization/release/_index.md
index 16ff0da3bb..bf2037e278 100644
--- a/content/docs/authorization/release/_index.md
+++ b/content/docs/authorization/release/_index.md
@@ -6,13 +6,14 @@ Description: >
Dell Container Storage Modules (CSM) release notes for authorization
---
-## Release Notes - CSM Authorization 1.5.1
-### New Features/Changes
+## Release Notes - CSM Authorization 1.6.0
-- Show volumes associated with the tenant from the k8s server. ([#408](https://github.com/dell/csm/issues/408))
-- CSM 1.5.1 release specific changes. ([#582](https://github.com/dell/csm/issues/582))
+### New Features/Changes
+- Restrict the version of TLS to v1.2 for all requests to CSM authorization proxy server. ([#642](https://github.com/dell/csm/issues/642))
+- PowerFlex preapproved GUIDs. ([#402](https://github.com/dell/csm/issues/402))
+- CSM 1.6 release specific changes. ([#583](https://github.com/dell/csm/issues/583))
### Bugs
-
-- CSM Authorization installation fails due to a PATH not looking in /usr/local/bin. ([#580](https://github.com/dell/csm/issues/580))
+- CSM Authorization quota of zero should allow infinite use for PowerFlex and PowerMax. ([#654](https://github.com/dell/csm/issues/654))
+- CSM Authorization CRD in the CSM Operator doesn't read custom configurations. ([#633](https://github.com/dell/csm/issues/633))
diff --git a/content/docs/authorization/upgrade.md b/content/docs/authorization/upgrade.md
index 8be889ac83..72fae3377e 100644
--- a/content/docs/authorization/upgrade.md
+++ b/content/docs/authorization/upgrade.md
@@ -12,7 +12,7 @@ This section outlines the upgrade steps for Container Storage Modules (CSM) for
### Upgrading CSM for Authorization proxy server
-Obtain the latest single binary installer RPM by following one of our two options [here](../deployment/#single-binary-installer).
+Obtain the latest single binary installer RPM by following one of our two options [here](../deployment/#single-binary-installer).
To update the rpm package on the system, run the below command from within the extracted folder:
@@ -20,6 +20,16 @@ To update the rpm package on the system, run the below command from within the e
sh install_karavi_auth.sh --upgrade
```
+Optionally, on version 1.6.0, the NodePorts for the ingress controller can be specified:
+
+```
+sh install_karavi_auth.sh --upgrade --traefik_web_port --traefik_websecure_port
+
+Ex.:
+
+sh install_karavi_auth.sh --upgrade --traefik_web_port 30001 --traefik_websecure_port 30002
+```
+
To verify that the new version of the rpm is installed and K3s has been updated, run the below commands:
```
diff --git a/content/docs/csidriver/_index.md b/content/docs/csidriver/_index.md
index 8f3b093ee9..8be8c5c1bb 100644
--- a/content/docs/csidriver/_index.md
+++ b/content/docs/csidriver/_index.md
@@ -16,16 +16,16 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
{{
}}
@@ -33,7 +33,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
{{
}}
| Features | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
|--------------------------|:--------:|:---------:|:---------:|:----------:|:----------:|
-| CSI Driver version | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.1 |
+| CSI Driver version | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 |
| Static Provisioning | yes | yes | yes | yes | yes |
| Dynamic Provisioning | yes | yes | yes | yes | yes |
| Expand Persistent Volume | yes | yes | yes | yes | yes |
diff --git a/content/docs/csidriver/features/powerflex.md b/content/docs/csidriver/features/powerflex.md
index cba623fc29..d2b2663551 100644
--- a/content/docs/csidriver/features/powerflex.md
+++ b/content/docs/csidriver/features/powerflex.md
@@ -642,23 +642,85 @@ To accomplish this, two new parameters are introduced in the storage class: band
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
-name: vxflexos
-annotations:
-storageclass.kubernetes.io/is-default-class: "true"
+ name: vxflexos
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-vxflexos.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
-storagepool: # Insert Storage pool
-systemID: # Insert System ID
-bandwidthLimitInKbps: # Insert bandwidth limit in Kbps
-iopsLimit: # Insert iops limit
-csi.storage.k8s.io/fstype: ext4
+ storagepool: "pool2" # Insert Storage pool
+ systemID: # Insert System ID
+ bandwidthLimitInKbps: "10240" # Insert bandwidth limit in Kbps
+ iopsLimit: "11" # Insert iops limit
+ csi.storage.k8s.io/fstype: ext4
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
-- matchLabelExpressions:
- - key: csi-vxflexos.dellemc.com/ # Insert System ID
- values:
- - csi-vxflexos.dellemc.com
+ - matchLabelExpressions:
+ - key: csi-vxflexos.dellemc.com/ # Insert System ID
+ values:
+ - csi-vxflexos.dellemc.com
+```
+Once the volume is created, ControllerPublishVolume will set the QoS limits for the volumes mapped to the SDC.
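+
+As an illustration only (the claim name and size below are made up, not taken from the product samples), a PVC that consumes the storage class above might look like the following; the bandwidth and IOPS limits are applied when the resulting volume is mapped to the SDC:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: vxflexos-qos-pvc        # illustrative name
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi              # illustrative size
+  storageClassName: vxflexos    # storage class defined above
+```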
+
+## Rename SDC
+
+Starting with version 2.6, the CSI driver for PowerFlex supports renaming of SDCs. To use this feature, set the `renameSDC` keys (`enabled` and, optionally, `prefix`) in the node section of values.yaml.
+
+To enable renaming of SDC, make the following edits to [values.yaml](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml) file:
+```yaml
+# "node" allows to configure node specific parameters
+node:
+ ...
+ ...
+
+ # "renameSDC" defines the rename operation for SDC
+ # Default value: None
+ renameSDC:
+ # enabled: Enable/Disable rename of SDC
+ # Allowed values:
+ # true: enable renaming
+ # false: disable renaming
+ # Default value: "false"
+ enabled: false
+ # "prefix" defines a string for the new name of the SDC.
+ # "prefix" + "worker_node_hostname" should not exceed 31 chars.
+ # Default value: none
+ # Examples: "rhel-sdc", "sdc-test"
+ prefix: "sdc-test"
+```
+The renameSDC section is used by the node service and has two keys, `enabled` and `prefix`:
+* `enabled`: Boolean that specifies whether the SDC rename is performed. If true, the driver performs the rename operation. The default value is false.
+* `prefix`: String used as the prefix for the new SDC name.
+
+Based on these two keys, the driver performs the rename SDC operation as follows:
+* If enabled and a prefix is given, the SDC name is set to prefix + worker_node_hostname.
+* If enabled and no prefix is given, the SDC name is set to the worker node hostname.
+
+> NOTE: The SDC name cannot exceed 31 characters, so choose the prefix such that the prefix plus the worker node hostname stays within the 31 character limit.
+
+## Pre-approving SDC by GUID
+
+Starting with version 2.6, the CSI Driver for PowerFlex will support pre-approving SDC by GUID.
+CSI PowerFlex driver will detect the SDC mode set on the PowerFlex array and will request SDC approval from the array prior to publishing a volume. This is specific to each SDC.
+
+To request SDC approval for GUID, make the following edits to [values.yaml](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml) file:
+```yaml
+# "node" allows to configure node specific parameters
+node:
+ ...
+ ...
+
+ # "approveSDC" defines the approve operation for SDC
+ # Default value: None
+ approveSDC:
+ # enabled: Enable/Disable SDC approval
+ #Allowed values:
+ # true: Driver will attempt to approve restricted SDC by GUID during setup
+ # false: Driver will not attempt to approve restricted SDC by GUID during setup
+ # Default value: false
+ enabled: false
```
-Once the volume gets created, the ControllerPublishVolume will set the QoS limits for the volumes mapped to SDC.
\ No newline at end of file
+> NOTE: Currently, the CSI-PowerFlex driver only supports GUID for the restricted SDC mode.
+
+If SDC approval is denied, provisioning of the volume is not attempted, and an appropriate error message is reported in the logs/events to inform the user.
diff --git a/content/docs/csidriver/features/powermax.md b/content/docs/csidriver/features/powermax.md
index 40dce2261e..f97abf01a6 100644
--- a/content/docs/csidriver/features/powermax.md
+++ b/content/docs/csidriver/features/powermax.md
@@ -56,8 +56,6 @@ status:
### Creating PVCs with VolumeSnapshots as Source
->Note: This is not supported for metro volumes.
-
The following is a sample manifest for creating a PVC with a VolumeSnapshot as a source:
```yaml
apiVersion: v1
@@ -80,8 +78,6 @@ spec:
### Creating PVCs with PVCs as source
->Note: This is not supported for replicated volumes.
-
This is a sample manifest for creating a PVC with another PVC as a source:
```yaml
apiVersion: v1
@@ -114,7 +110,7 @@ When the driver is installed and all the node plug-ins have initialized successf
`symaccess -sid -iscsi set chap -cred -secret `
-Where is the name of the iSCSI initiator of a host IQN, and is the chapsecret that is used at the time of the installation of the driver.
+Where `host IQN` is the name of the host's iSCSI initiator, and `CHAP secret` is the CHAP secret that is used at the time of the installation of the driver.
*NOTE*: The host IQN is also used as the username when setting up the CHAP credentials.
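+
+For illustration only, with made-up placeholder values (the SID, host IQN, and secret below are not taken from a real array), the command might look like:
+
+```bash
+symaccess -sid 000197900123 -iscsi iqn.1993-08.org.debian:01:abcd1234 set chap -cred iqn.1993-08.org.debian:01:abcd1234 -secret MySecret123456
+```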
@@ -574,7 +570,9 @@ This feature supports volume provisioning on Kubernetes clusters running on vSph
It will be supported only on new/freshly installed clusters where the cluster is exclusively deployed in a virtualized vSphere environment. Having hybrid topologies like ISCSI or FC (in pass-through) is not supported.
-To use this feature, set vSphere.enabled to true
+To use this feature:
+- Set `vSphere.enabled` to true.
+- Create a secret that contains the vCenter credentials. Follow the steps [here](../../installation/helm/powermax/#auto-rdm-for-vsphere-over-fc-requirements) to create it. Update `vCenterCredSecret` with the name of the secret created.
```
VMware/vSphere virtualization support
@@ -595,11 +593,8 @@ vSphere:
fcHostGroup: "csi-vsphere-VC-HG"
# vCenterHost: URL/endpoint of the vCenter where all the ESX are present
vCenterHost: "00.000.000.01"
- # vCenterUserName: username from the vCenter credentials
- vCenterUserName: "user"
- # vCenterPassword: password from the vCenter credentials
- vCenterPassword: "pwd"
-
+ # vCenterCredSecret: secret name for the vCenter credentials
+ vCenterCredSecret: vcenter-creds
```
>Note: Replication is not supported with this feature.
diff --git a/content/docs/csidriver/installation/helm/isilon.md b/content/docs/csidriver/installation/helm/isilon.md
index fc1f9975bf..b7888484fa 100644
--- a/content/docs/csidriver/installation/helm/isilon.md
+++ b/content/docs/csidriver/installation/helm/isilon.md
@@ -48,14 +48,14 @@ controller:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -74,7 +74,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 6.1.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.2.x version of snapshotter/snapshot-controller.
### (Optional) Volume Health Monitoring
Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via helm.
@@ -93,7 +93,7 @@ controller:
# false: disable checking of health condition of CSI volumes
# Default value: None
enabled: false
- # healthMonitorInterval: Interval of monitoring volume health condition
+ # interval: Interval of monitoring volume health condition
# Allowed values: Number followed by unit (s,m,h)
# Examples: 60s, 5m, 1h
# Default value: 60s
@@ -125,7 +125,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy *the helm/csi-isilon/values.yaml* into a new location with name say *my-isilon-settings.yaml*, to customize settings for installation.
@@ -185,7 +185,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| image | image for podmon. | No | " " |
| **encryption** | [Encryption](../../../../secure/encryption/deployment) is an optional feature to apply encryption to CSI volumes. | - | - |
| enabled | A boolean that enables/disables Encryption feature. | No | false |
- | image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
+ | image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.3.0" |
*NOTE:*
@@ -226,7 +226,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| ISI_PRIV_SYNCIQ | Read Write |
Create isilon-creds secret using the following command:
- `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
+ `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml`
*NOTE:*
- If any key/value is present in all *my-isilon-settings.yaml*, *secret*, and storageClass, then the values provided in storageClass parameters take precedence.
@@ -268,7 +268,7 @@ If the 'skipCertificateValidation' parameter is set to false and a previous inst
CSI Driver for Dell PowerScale now provides supports for Multi cluster. Now users can link the single CSI Driver to multiple OneFS Clusters by updating *secret.yaml*. Users can now update the isilon-creds secret by editing the *secret.yaml* and executing the following command
-`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
+`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -`
**Note**: Updating isilon-certs-x secrets is a manual process, unlike isilon-creds. Users have to re-install the driver in case of updating/adding the SSL certificates or changing the certSecretCount parameter.
@@ -300,3 +300,18 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c
Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. Sample volume snapshot class manifests are available at `samples/volumesnapshotclass/`. Use these sample manifests to create a volumesnapshotclass for creating volume snapshots; uncomment/ update the manifests as per the requirements.
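+
+A minimal sketch of such a class is shown below; the class name is illustrative and the driver name and deletionPolicy are assumed defaults, so prefer the annotated manifests in `samples/volumesnapshotclass/` as the starting point:
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: isilon-snapclass          # illustrative name
+driver: csi-isilon.dellemc.com    # assumed default driver name
+deletionPolicy: Delete
+```
+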
+## Silent Mount Re-tries (v2.6.0)
+There is a race condition in which the ControllerPublish call that adds the client to a volume's export list can take longer than usual, because a background NFS refresh process on OneFS has not yet completed. The initial mount attempts then fail with a "mount failed" error even though a later retry succeeds, producing false-positive "mount failed" log entries. To avoid this, the driver silently retries the mount every two seconds (five attempts maximum) for every NodePublish call, allowing the mount to succeed within those retries without logging any mount error messages.
+A "mount failed" error is logged only after all five retry attempts are exhausted and the client is still not present in the export list.
+
+Mount retries handle the below scenarios:
+- Access denied by server while mounting (NFSv3)
+- No such file or directory (NFSv4)
+
+*Sample*:
+```
+level=error clusterName=powerscale runid=10 msg="mount failed: exit status 32
+mounting arguments: -t nfs -o rw XX.XX.XX.XX:/ifs/data/csi/k8s-ac7b91962d /var/lib/kubelet/pods/9f72096a-a7dc-4517-906c-20697f9d7375/volumes/kubernetes.io~csi/k8s-ac7b91962d/mount
+output: mount.nfs: access denied by server while mounting XX.XX.XX.XX:/ifs/data/csi/k8s-ac7b91962d
+```
+
diff --git a/content/docs/csidriver/installation/helm/powerflex.md b/content/docs/csidriver/installation/helm/powerflex.md
index 7d219de6b2..f5c8851e6c 100644
--- a/content/docs/csidriver/installation/helm/powerflex.md
+++ b/content/docs/csidriver/installation/helm/powerflex.md
@@ -47,7 +47,7 @@ Verify that zero padding is enabled on the PowerFlex storage pools that will be
### Install PowerFlex Storage Data Client
The CSI Driver for PowerFlex requires you to have installed the PowerFlex Storage Data Client (SDC) on all Kubernetes nodes which run the node portion of the CSI driver.
-SDC could be installed automatically by CSI driver install on Kubernetes nodes with OS platform which support automatic SDC deployment; for Red Hat CoreOS (RHCOS), RHEL 7.9 and RHEL 8.x. On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
+The SDC can be installed automatically by the CSI driver installation on Kubernetes nodes whose OS platform supports automatic SDC deployment: Red Hat CoreOS (RHCOS), RHEL 7.9, RHEL 8.4, and RHEL 8.6. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions.
*NOTE:* To install CSI driver for Powerflex with automated SDC deployment, you need below two packages on worker nodes.
@@ -112,7 +112,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
## Install the Driver
**Steps**
-1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
@@ -160,7 +160,7 @@ Use the below command to replace or update the secret:
- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check in helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.27.0-0"
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
@@ -176,9 +176,9 @@ Use the below command to replace or update the secret:
| Parameter | Description | Required | Default |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- |
-| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.5.0 |
+| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.6.0 |
| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
-| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | No | dellemc/sdc:3.6 |
+| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | Yes | dellemc/sdc:3.6.0.6 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. | No | 0 |
| logLevel | CSI driver log level. Allowed values: "error", "warn"/"warning", "info", "debug". | Yes | "debug" |
| logFormat | CSI driver log format. Allowed values: "TEXT" or "JSON". | Yes | "TEXT" |
@@ -203,6 +203,10 @@ Use the below command to replace or update the secret:
| healthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
| nodeSelector | Defines what nodes would be selected for pods of node daemonset. Leave as blank to use all nodes. | Yes | " " |
| tolerations | Defines tolerations that would be applied to node daemonset. Leave as blank to install node driver only on worker nodes. | Yes | " " |
+| **renameSDC** | This section allows the rename operation for SDC. | - | - |
+| enabled | A boolean that enables/disables the rename SDC feature. | No | false |
+| prefix | Defines a string for the prefix of the SDC name. | No | " " |
+| approveSDC.enabled | A boolean that enables/disables the SDC approval feature. | No | false |
| **monitor** | This section allows the configuration of the SDC monitoring pod. | - | - |
| enabled | Set to enable the usage of the monitoring pod. | Yes | false |
| hostNetwork | Set whether the monitor pod should run on the host network or not. | Yes | true |
diff --git a/content/docs/csidriver/installation/helm/powermax.md b/content/docs/csidriver/installation/helm/powermax.md
index 3b6dd65f86..56c1a3cf8e 100644
--- a/content/docs/csidriver/installation/helm/powermax.md
+++ b/content/docs/csidriver/installation/helm/powermax.md
@@ -16,6 +16,8 @@ The controller section of the Helm chart installs the following components in a
- Kubernetes External Resizer, which resizes the volume
- (optional) Kubernetes External health monitor, which provides volume health status
- (optional) Dell CSI Replicator, which provides Replication capability.
+- (optional) Dell CSI Migrator, which provides migration capability within and across arrays
+- (optional) Node rescanner, which rescans the node for new data paths after migration
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
- CSI Driver for Dell PowerMax
@@ -36,6 +38,21 @@ The following requirements must be met before installing CSI Driver for Dell Pow
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
- If using Powerpath , install the PowerPath for Linux requirements
+### Prerequisite for CSI Reverse Proxy
+
+CSI PowerMax Reverse Proxy is an HTTPS server and has to be configured with an SSL certificate and a private key.
+
+The certificate and key are provided to the proxy via a Kubernetes TLS secret (in the same namespace). The SSL certificate must be an X.509 certificate encoded in PEM format. The certificates can be obtained via a Certificate Authority or can be self-signed and generated by a tool such as openssl.
+
+Here is an example showing how to generate a private key and use that to sign an SSL certificate using the openssl tool:
+
+```bash
+openssl genrsa -out tls.key 2048
+openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
+kubectl create secret -n <namespace> tls revproxy-certs --cert=tls.crt --key=tls.key
+kubectl create secret -n <namespace> tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
+```
### Install Helm 3
@@ -78,8 +95,19 @@ Set up the environment as follows:
- Add initiators from all ESX/ESXis to a host(initiator group) where the cluster is hosted.
+- Edit the `samples/secret/vcenter-secret.yaml` file to point to the correct namespace, and replace the values for the username and password parameters.
+ These values can be obtained using base64 encoding as described in the following example:
+ ```bash
+ echo -n "myusername" | base64
+ echo -n "mypassword" | base64
+ ```
+ where *myusername* and *mypassword* are credentials for a user with vCenter privileges.
+
>Note: Initiators from all ESX/ESXi should be part of a single host(initiator group) and not hostgroup(cascaded intitiator group).
+Create the secret by running the below command:
+`kubectl create -f samples/secret/vcenter-secret.yaml`.
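+
+For reference, a sketch of what the populated secret might look like, assuming the sample file uses `username`/`password` data keys and that the secret name matches `vCenterCredSecret` in my-powermax-settings.yaml:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: vcenter-creds             # assumed to match vCenterCredSecret
+  namespace: powermax
+type: Opaque
+data:
+  username: bXl1c2VybmFtZQ==      # echo -n "myusername" | base64
+  password: bXlwYXNzd29yZA==      # echo -n "mypassword" | base64
+```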
+
### Certificate validation for Unisphere REST API calls
As part of the CSI driver installation, the CSI driver requires a secret with the name _powermax-certs_ present in the namespace _powermax_. This secret contains the X509 certificates of the CA which signed the Unisphere SSL certificate in PEM format. This secret is mounted as a volume in the driver container. In earlier releases, if the install script did not find the secret, it created an empty secret with the same name. From the 1.2.0 release, the secret volume has been made optional. The install script no longer attempts to create an empty secret.
@@ -141,7 +169,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.0/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots.
@@ -149,7 +177,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.2.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.0/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -189,9 +217,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
-3. Edit the `samples/secret/secret.yaml file, point to the correct namespace, and replace the values for the username and password parameters.
+3. Edit the `samples/secret/secret.yaml` file to point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
```bash
echo -n "myusername" | base64
@@ -269,6 +297,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| **migration** | [Migration](../../../../replication/migrating-volumes) is an optional feature to enable migration between storage classes | - | - |
| enabled | A boolean that enables/disables migration feature. | No | false |
| image | Image for dell-csi-migrator sidecar. | No | " " |
+| nodeRescanSidecarImage | Image for node rescan sidecar which rescans nodes for identifying new paths. | No | " " |
| migrationPrefix | enables migration sidecar to read required information from the storage class fields | No | migration.storage.dell.com |
| **replication** | [Replication](../../../../replication/deployment) is an optional feature to enable replication & disaster recovery capabilities of PowerMax to Kubernetes clusters.| - | - |
| enabled | A boolean that enables/disables replication feature. | No | false |
@@ -280,8 +309,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| fcPortGroup | Existing portGroup that driver will use for vSphere. | Yes | "" |
| fcHostGroup | Existing host(initiator group) that driver will use for vSphere. | Yes | "" |
| vCenterHost | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
-| vCenterUserName | Username from the vCenter credentials. | Yes | "" |
-| vCenterPassword | Password from the vCenter credentials. | Yes | "" |
+| vCenterCredSecret | Secret name for the vCenter credentials. | Yes | "" |
8. Install the driver using `csi-install.sh` bash script by running `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ../helm/my-powermax-settings.yaml`
@@ -293,20 +321,14 @@ CRDs should be configured during replication prepare stage with repctl as descri
- This script also runs the verify.sh script in the same directory. You will be prompted to enter the credentials for each of the Kubernetes nodes. The `verify.sh` script needs the credentials to check if the iSCSI initiators have been configured on all nodes. You can also skip the verification step by specifying the `--skip-verify-node` option
- In order to enable authorization, there should be an authorization proxy server already installed.
- PowerMax Array username must have role as `StorageAdmin` to be able to perform CRUD operations.
-- If the user is using complex K8s version like “v1.22.3-mirantis-1”, use below kubeVersion check in [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) file. kubeVersion: “>= 1.22.0-0 < 1.25.0-0”.
+- If the user is using complex K8s version like “v1.23.3-mirantis-1”, use this kubeVersion check in [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) file. kubeVersion: “>= 1.23.0-0 < 1.27.0-0”.
- User should provide all boolean values with double-quotes. This applies only for values.yaml. Example: “true”/“false”.
- controllerCount parameter value should be <= number of nodes in the kubernetes cluster else install script fails.
- Endpoint should not have any special character at the end apart from port number.
## Storage Classes
-Starting CSI PowerMax v1.6, `dell-csi-helm-installer` will not create any storage classes as part of the driver installation. A wide set of annotated storage class manifests has been provided in the `samples/storageclass` folder. Please use these samples to create new storage classes to provision storage.
-
-### What happens to my existing storage classes?
-
-Upgrading from an older version of the driver: The storage classes will be deleted if you upgrade the driver. To continue using those storage classes, you can patch them and apply the annotation “helm.sh/resource-policy”: keep before performing an upgrade.
-
->Note: If you continue to use the old storage classes, you may not be able to take advantage of any new storage class parameter supported by the driver.
+A wide set of annotated storage class manifests has been provided in the `samples/storageclass` folder. Please use these samples to create new storage classes to provision storage.
## Volume Snapshot Class
diff --git a/content/docs/csidriver/installation/helm/powerstore.md b/content/docs/csidriver/installation/helm/powerstore.md
index 1726937351..8ddafe634a 100644
--- a/content/docs/csidriver/installation/helm/powerstore.md
+++ b/content/docs/csidriver/installation/helm/powerstore.md
@@ -74,6 +74,7 @@ If you want to use the protocol, set up the NVMe initiators as follows:
modprobe nvme
modprobe nvme_tcp
```
+- The NVMe modules may not be available after a node reboot. Loading the modules at startup is recommended.
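+
+One way to load them at startup on systemd-based hosts is a modules-load.d drop-in; the file name below is only an example:
+
+```
+# Example only: load the NVMe/TCP modules automatically at boot
+cat <<'EOF' | sudo tee /etc/modules-load.d/nvme.conf
+nvme
+nvme_tcp
+EOF
+```
+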
**Requirements for NVMeFC**
- NVMeFC Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel port must be done.
@@ -177,7 +178,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.5.1 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace.
But make sure to align to the same namespace during the whole installation.
3. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
diff --git a/content/docs/csidriver/installation/helm/unity.md b/content/docs/csidriver/installation/helm/unity.md
index bd46ea332a..eba3659a90 100644
--- a/content/docs/csidriver/installation/helm/unity.md
+++ b/content/docs/csidriver/installation/helm/unity.md
@@ -60,7 +60,7 @@ If you use the iSCSI protocol, set up the iSCSI initiators as follows:
To do this, run the `systemctl enable --now iscsid` command.
- Ensure that the unique initiator name is set in _/etc/iscsi/initiatorname.iscsi_.
-For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/en-us/products/storage/technical-support/docu5128.pdf).
### Linux multipathing requirements
Dell Unity XT supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver for Dell
@@ -88,7 +88,7 @@ Install CSI Driver for Unity XT using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.5.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * As a pre-requisite for running this procedure, you must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.6.0 https://github.com/dell/csi-unity.git```.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -101,8 +101,8 @@ Procedure
**Note**:
* ArrayId corresponds to the serial number of Unity XT array.
* Unity XT Array username must have role as Storage Administrator to be able to perform CRUD operations.
- * If the user is using complex K8s version like "v1.21.3-mirantis-1", use below kubeVersion check in helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
+ * If the user is using a complex K8s version like "v1.24.6-mirantis-1", use this kubeVersion check in helm/csi-unity/Chart.yaml file.
+ kubeVersion: ">= 1.24.0-0 < 1.27.0-0"
2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory of `csi-install.sh`, to customize settings for installation.
@@ -252,14 +252,14 @@ Procedure
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster
#### Volume Snapshot CRD's
- The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd) for the installation.
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
- Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller) for the installation.
+  Use [v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/deploy/kubernetes/snapshot-controller) for the installation.
#### Installation example
diff --git a/content/docs/csidriver/installation/offline/_index.md b/content/docs/csidriver/installation/offline/_index.md
index 8707ec4051..8c0d380b8e 100644
--- a/content/docs/csidriver/installation/offline/_index.md
+++ b/content/docs/csidriver/installation/offline/_index.md
@@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git
+git clone -b v1.11.0 https://github.com/dell/dell-csi-operator.git
```
```
cd dell-csi-operator/scripts
@@ -76,9 +76,9 @@ cd dell-csi-operator/scripts
*
* Pulling and saving container images
- dellemc/csi-isilon:v2.3.0
dellemc/csi-isilon:v2.4.0
dellemc/csi-isilon:v2.5.0
+ dellemc/csi-isilon:v2.6.0
dellemc/csipowermax-reverseproxy:v2.4.0
dellemc/csi-powermax:v2.3.1
dellemc/csi-powermax:v2.4.0
@@ -89,10 +89,10 @@ cd dell-csi-operator/scripts
dellemc/csi-unity:v2.3.0
dellemc/csi-unity:v2.4.0
dellemc/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.3.0
dellemc/csi-vxflexos:v2.4.0
dellemc/csi-vxflexos:v2.5.0
- dellemc/dell-csi-operator:v1.10.0
+ dellemc/csi-vxflexos:v2.6.0
+ dellemc/dell-csi-operator:v1.11.0
dellemc/sdc:3.5.1.1-1
dellemc/sdc:3.6
dellemc/sdc:3.6.0.6
@@ -203,7 +203,7 @@ Preparing a offline bundle for installation
*
* Tagging and pushing images
- dellemc/dell-csi-operator:v1.10.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.10.0
+ dellemc/dell-csi-operator:v1.11.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.11.0
dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
@@ -217,9 +217,9 @@ Preparing a offline bundle for installation
dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- dellemc/csi-vxflexos:v2.3.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.3.0
dellemc/csi-vxflexos:v2.4.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.4.0
dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
+ dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
@@ -230,7 +230,7 @@ Preparing a offline bundle for installation
*
* Preparing operator files within /root/dell-csi-operator-bundle
- changing: dellemc/dell-csi-operator:v1.10.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.10.0
+ changing: dellemc/dell-csi-operator:v1.11.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.11.0
changing: dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
changing: dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
changing: dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
@@ -244,9 +244,9 @@ Preparing a offline bundle for installation
changing: dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
changing: dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
changing: dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
- changing: dellemc/csi-vxflexos:v2.3.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.3.0
changing: dellemc/csi-vxflexos:v2.4.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.4.0
changing: dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
+ changing: dellemc/csi-vxflexos:v2.6.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.6.0
changing: dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
changing: dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
diff --git a/content/docs/csidriver/installation/operator/_index.md b/content/docs/csidriver/installation/operator/_index.md
index ed99acf458..e95ebc0245 100644
--- a/content/docs/csidriver/installation/operator/_index.md
+++ b/content/docs/csidriver/installation/operator/_index.md
@@ -11,14 +11,14 @@ The Dell CSI Operator is a Kubernetes Operator, which can be used to install and
## Prerequisites
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.2.1](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.2.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
@@ -35,7 +35,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 6.1.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 6.2.1 version of the snapshotter/snapshot-controller.
## Installation
@@ -48,21 +48,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa
#### Full list of CSI Drivers and versions supported by the Dell CSI Operator
| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 2.3.0 | v2.3.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerMax | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerMax | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerMax | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerFlex | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerFlex | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerFlex | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerScale | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerScale | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerScale | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
| CSI Unity XT | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI Unity XT | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
-| CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI Unity XT | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerStore | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerStore | 2.5.0 | v2.5.0 | 1.23, 1.24. 1.25 | 4.10, 4.10 EUS, 4.11 |
+| CSI PowerStore | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
+| CSI PowerStore | 2.6.0 | v2.6.0 | 1.24, 1.25, 1.26 | 4.10, 4.10 EUS, 4.11 |
@@ -79,7 +79,7 @@ The installation process involves the creation of a `Subscription` object either
#### Pre-Requisite for installation with OLM
Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
```
-$ git clone https://github.com/dell/dell-csi-operator.git
+$ git clone -b v1.11.0 https://github.com/dell/dell-csi-operator.git
$ cd dell-csi-operator
$ tar -czf config.tar.gz driverconfig/
# Replace operator-namespace in the below command with the actual namespace where the operator will be deployed by OLM
@@ -95,7 +95,7 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.11.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Run `bash scripts/install.sh` to install the operator.
@@ -122,7 +122,7 @@ For installation of the supported drivers, a `CustomResource` has to be created
### Pre-requisites for upstream Kubernetes Clusters
On upstream Kubernetes clusters, make sure to install
* VolumeSnapshot CRDs
- * On clusters running v1.23,v1.24 & v1.25, make sure to install v1 VolumeSnapshot CRDs
+  * On clusters running v1.24, v1.25 & v1.26, make sure to install v1 VolumeSnapshot CRDs
* External Volume Snapshot Controller with the correct version
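+
+For reference, a minimal sketch of installing the v1 VolumeSnapshot CRDs and the common snapshot controller from the external-snapshotter project (using the v6.2.1 tag recommended above; the paths are the ones published in that repository) is shown below.
+```bash
+# Clone the external-snapshotter project at the recommended tag
+git clone -b v6.2.1 https://github.com/kubernetes-csi/external-snapshotter.git
+cd external-snapshotter
+# Install the v1 VolumeSnapshot CRDs
+kubectl create -f client/config/crd
+# Install the common snapshot controller (only once per cluster)
+kubectl create -f deploy/kubernetes/snapshot-controller
+```
+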
### Pre-requisites for Red Hat OpenShift Clusters
@@ -216,8 +216,8 @@ Or
{driver name}_{driver version}_ops_{OpenShift version}.yaml
For e.g.
-* samples/powermax_v250_k8s_125.yaml* <- To install CSI PowerMax driver v2.5.0 on a Kubernetes 1.25 cluster
-* samples/powermax_v250_ops_411.yaml* <- To install CSI PowerMax driver v2.5.0 on an OpenShift 4.11 cluster
+* samples/powermax_v260_k8s_126.yaml* <- To install CSI PowerMax driver v2.6.0 on a Kubernetes 1.26 cluster
+* samples/powermax_v260_ops_411.yaml* <- To install CSI PowerMax driver v2.6.0 on an OpenShift 4.11 cluster
Copy the correct sample file and edit the mandatory & any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification)
>NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields.
@@ -250,16 +250,44 @@ Please refer to the _Troubleshooting_ section [here](../../troubleshooting/opera
The CSI Drivers installed by the Dell CSI Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include –
* Modifying the installation directly via `kubectl edit`
- For example - If the name of the installed Unity XT driver is unity, then run
+   ```
+   $ kubectl get <driver-object-kind> -n <driver-namespace>
+   ```
+   For example, if the Unity XT driver is installed, run this command to get the name of the object of kind CSIUnity.
```
# Replace driver-namespace with the namespace where the Unity XT driver is installed
- $ kubectl edit csiunity/unity -n
+   $ kubectl get csiunity -n <driver-namespace>
+   ```
+   Use the object name in the `kubectl edit` command.
+   ```
+   $ kubectl edit <driver-object-kind>/<object-name> -n <driver-namespace>
+   ```
+   For example, for an object of kind CSIUnity:
+   ```
+   # Replace object-name with the name of the CSIUnity object
+   $ kubectl edit csiunity/<object-name> -n <driver-namespace>
+   ```
- and modify the installation. The usual fields to edit are the version of drivers and sidecars and the env variables.
+ and modify the installation. The usual fields to edit are the version of drivers, sidecars and the environment variables.
+
* Modify the API object in place via `kubectl patch` command.
+   For example, if you want to patch the Unity XT driver deployment to have two replicas, first run this command to get the deployment name.
+   ```
+   $ kubectl get deployments -n <driver-namespace>
+   ```
+   To patch the deployment with an inline patch object, run this command.
+   ```
+   # Replace deployment-name with the name of the deployment
+   $ kubectl patch deploy/<deployment-name> -n <driver-namespace> -p '{"spec":{"replicas": 2}}'
+   ```
+   To patch the deployment with a patch file, run this command (a sample patch file is sketched below).
+   ```
+   # Replace deployment-name with the name of the deployment
+   $ kubectl patch deployment <deployment-name> -n <driver-namespace> --patch-file patch-file.yaml
+   ```
+
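+A minimal sketch of such a patch file is shown below; the filename `patch-file.yaml` and the replica count are illustrative, and the file uses the same strategic merge patch format as the inline example above.
+```yaml
+# patch-file.yaml - strategic merge patch for the driver controller deployment
+spec:
+  replicas: 2
+```
+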
-To create patch file or edit deployments, refer [here](https://github.com/dell/dell-csi-operator/tree/master/samples) for driver version & env variables and [here](https://github.com/dell/dell-csi-operator/tree/master/driverconfig/config.yaml) for version of side-cars.
-The latest versions of drivers could have additional env variables or sidecars.
+To create patch file or edit deployments, refer [here](https://github.com/dell/dell-csi-operator/tree/master/samples) for driver version & environment variables and [here](https://github.com/dell/dell-csi-operator/tree/master/driverconfig/config.yaml) for version of side-cars.
+The latest versions of drivers could have additional environment variables or sidecars.
The below notes explain some of the general items to take care of.
@@ -267,7 +295,7 @@ The below notes explain some of the general items to take care of.
1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
```yaml
driver:
- configVersion: v2.5.0
+ configVersion: v2.6.0
```
2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
To enable this feature, we will have to modify the below block while upgrading the driver.To get the volume health state add
@@ -291,26 +319,26 @@ The below notes explain some of the general items to take care of.
- args:
- --volume-name-prefix=csiunity
- --default-fstype=ext4
- image: k8s.gcr.io/sig-storage/csi-provisioner:v3.3.0
+ image: k8s.gcr.io/sig-storage/csi-provisioner:v3.4.0
imagePullPolicy: IfNotPresent
name: provisioner
- args:
- --snapshot-name-prefix=csiunitysnap
- image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.1.0
+ image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.2.1
imagePullPolicy: IfNotPresent
name: snapshotter
- args:
- --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.7.0
+ image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.8.0
imagePullPolicy: IfNotPresent
name: external-health-monitor
- - image: k8s.gcr.io/sig-storage/csi-attacher:v4.0.0
+ - image: k8s.gcr.io/sig-storage/csi-attacher:v4.2.0
imagePullPolicy: IfNotPresent
name: attacher
- - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.0
+ - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.3
imagePullPolicy: IfNotPresent
name: registrar
- - image: k8s.gcr.io/sig-storage/csi-resizer:v1.6.0
+ - image: k8s.gcr.io/sig-storage/csi-resizer:v1.7.0
imagePullPolicy: IfNotPresent
name: resizer
```
@@ -405,7 +433,7 @@ spec:
You can set the field ***replicas*** to a higher number than `1` for the latest driver versions.
Note - The `image` field should point to the correct image tag for version of the driver you are installing.
-For e.g. - If you wish to install v2.5.0 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v2.5.0`
+For e.g. - If you wish to install v2.6.0 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v2.6.0`
### SideCars
Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support.
diff --git a/content/docs/csidriver/installation/operator/isilon.md b/content/docs/csidriver/installation/operator/isilon.md
index 6b5fcef159..a0d9eead69 100644
--- a/content/docs/csidriver/installation/operator/isilon.md
+++ b/content/docs/csidriver/installation/operator/isilon.md
@@ -86,7 +86,7 @@ User can query for CSI-PowerScale driver using the following command:
Use the following command to replace or update the secret
- `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run | kubectl replace -f -`
+ `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl replace -f -`
**Note**: The user needs to validate the YAML syntax and array related key/values while replacing the isilon-creds secret.
The driver will continue to use previous values in case of an error found in the YAML file.
diff --git a/content/docs/csidriver/installation/operator/powerflex.md b/content/docs/csidriver/installation/operator/powerflex.md
index 29b9b2693b..ee3228ea8b 100644
--- a/content/docs/csidriver/installation/operator/powerflex.md
+++ b/content/docs/csidriver/installation/operator/powerflex.md
@@ -43,7 +43,7 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
- Optionally, enable sdc monitor by uncommenting the section for sidecar in manifest yaml. Please note the following:
- **If using sidecar**, you will need to edit the value fields under the HOST_PID and MDM fields by filling the empty quotes with host PID and the MDM IPs.
- **If not using sidecar**, please leave this commented out -- otherwise, the empty fields will cause errors.
-##### Example CR: [config/samples/vxflex_v250_ops_411.yaml](https://github.com/dell/dell-csi-operator/blob/master/samples/vxflex_v250_ops_411.yaml)
+##### Example CR: [config/samples/vxflex_v260_ops_411.yaml](https://github.com/dell/dell-csi-operator/blob/main/samples/vxflex_v260_ops_411.yaml)
```yaml
sideCars:
# Comment the following section if you don't want to run the monitoring sidecar
@@ -161,13 +161,13 @@ metadata:
namespace: test-vxflexos
spec:
driver:
- configVersion: v2.5.0
+ configVersion: v2.6.0
replicas: 1
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
fsGroupPolicy: File
common:
- image: "dellemc/csi-vxflexos:v2.5.0"
+ image: "dellemc/csi-vxflexos:v2.6.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT
diff --git a/content/docs/csidriver/installation/operator/powermax.md b/content/docs/csidriver/installation/operator/powermax.md
index 2dc6f9ca74..e2fa576862 100644
--- a/content/docs/csidriver/installation/operator/powermax.md
+++ b/content/docs/csidriver/installation/operator/powermax.md
@@ -47,6 +47,7 @@ Set up the environment as follows:
- Add all FC array ports zoned to the ESX/ESXis to a port group where the cluster is hosted .
- Add initiators from all ESX/ESXis to a host(initiator group) where the cluster is hosted.
+- Create a secret that contains the credentials of a user with vCenter privileges. Follow the steps [here](#support-for-auto-rdm-for-vsphere-over-fc) to create it.
#### Linux multipathing requirements
@@ -136,8 +137,6 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| X_CSI_VSPHERE_PORTGROUP | Existing portGroup that driver will use for vSphere | Yes | "" |
| X_CSI_VSPHERE_HOSTGROUP | Existing host(initiator group) that driver will use for vSphere | Yes | "" |
| X_CSI_VCenter_HOST | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
- | X_CSI_VCenter_USERNAME | Username from the vCenter credentials | Yes | "" |
- | X_CSI_VCenter_PWD | Password from the vCenter credentials | Yes | "" |
| ***Node parameters***|
| X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
| X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disabe topology control. It filters out arrays, associated transport protocol available to each node and creates topology keys based on any such user input. | No | false |
@@ -147,7 +146,7 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
**Note** - If CSI driver is getting installed using OCP UI , create these two configmaps manually using the command `oc create -f `
1. Configmap name powermax-config-params
```yaml
- apiVersion: v1
+ apiVersion: v1
kind: ConfigMap
metadata:
name: powermax-config-params
@@ -195,10 +194,12 @@ Deployment and ClusterIP service will be created by dell-csi-operator.
Create a TLS secret that holds an SSL certificate and a private key which is required by the reverse proxy server.
Use a tool such as `openssl` to generate this secret using the example below:
-```
- openssl genrsa -out tls.key 2048
- openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
- kubectl create secret -n powermax tls revproxy-certs --cert=tls.crt --key=tls.key
+```bash
+openssl genrsa -out tls.key 2048
+openssl req -new -x509 -sha256 -key tls.key -out tls.crt -days 3650
+kubectl create secret -n <namespace> tls revproxy-certs --cert=tls.crt --key=tls.key
+kubectl create secret -n <namespace> tls csirevproxy-tls-secret --cert=tls.crt --key=tls.key
```
#### Set the following parameters in the CSI PowerMaxReverseProxy Spec
@@ -301,188 +302,10 @@ To update the log level dynamically user has to edit the ConfigMap `powermax-con
```
kubectl edit configmap -n powermax powermax-config-params
```
-### Sample CRD file for powermax
+### Sample CRD file for powermax
+You can find the sample CRD file [here](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml)
-``` yaml
-apiVersion: storage.dell.com/v1
-kind: CSIPowerMax
-metadata:
- name: test-powermax
- namespace: test-powermax
-spec:
- driver:
- # Config version for CSI PowerMax v2.5.0 driver
- configVersion: v2.5.0
- # replica: Define the number of PowerMax controller nodes
- # to deploy to the Kubernetes release
- # Allowed values: n, where n > 0
- # Default value: None
- replicas: 2
- dnsPolicy: ClusterFirstWithHostNet
- forceUpdate: false
- common:
- # Image for CSI PowerMax driver v2.5.0
- image: dellemc/csi-powermax:v2.5.0
- # imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
- # Allowed values:
- # Always: Always pull the image.
- # IfNotPresent: Only pull the image if it does not already exist on the node.
- # Never: Never pull the image.
- # Default value: None
- imagePullPolicy: IfNotPresent
- envs:
- # X_CSI_MANAGED_ARRAYS: Serial ID of the arrays that will be used for provisioning
- # Default value: None
- # Examples: "000000000001", "000000000002"
- - name: X_CSI_MANAGED_ARRAYS
- value: "000000000000,000000000001"
- # X_CSI_POWERMAX_ENDPOINT: Address of the Unisphere server that is managing the PowerMax arrays
- # Default value: None
- # Example: https://0.0.0.1:8443
- - name: X_CSI_POWERMAX_ENDPOINT
- value: "https://0.0.0.0:8443/"
- # X_CSI_K8S_CLUSTER_PREFIX: Define a prefix that is appended onto
- # all resources created in the Array
- # This should be unique per K8s/CSI deployment
- # maximum length of this value is 3 characters
- # Default value: None
- # Examples: "XYZ", "EMC"
- - name: X_CSI_K8S_CLUSTER_PREFIX
- value: "XYZ"
- # X_CSI_POWERMAX_PORTGROUPS: Define the set of existing port groups that the driver will use.
- # It is a comma separated list of portgroup names.
- # Required only in case of iSCSI port groups
- # Allowed values: iSCSI Port Group names
- # Default value: None
- # Examples: "pg1", "pg1, pg2"
- - name: "X_CSI_POWERMAX_PORTGROUPS"
- value: ""
- # "X_CSI_TRANSPORT_PROTOCOL" can be "FC" or "FIBRE" for fibrechannel,
- # "ISCSI" for iSCSI, or "" for autoselection.
- # Allowed values:
- # "FC" - Fiber Channel protocol
- # "FIBER" - Fiber Channel protocol
- # "ISCSI" - iSCSI protocol
- # "" - Automatic selection of transport protocol
- # Default value: ""
- - name: "X_CSI_TRANSPORT_PROTOCOL"
- value: ""
- # X_CSI_POWERMAX_PROXY_SERVICE_NAME: Refers to the name of the proxy service in kubernetes
- # Allowed values: "powermax-reverseproxy"
- # default values: "powermax-reverseproxy"
- - name: "X_CSI_POWERMAX_PROXY_SERVICE_NAME"
- value: "powermax-reverseproxy"
- # X_CSI_GRPC_MAX_THREADS: Defines the maximum number of concurrent grpc requests.
- # Set this value to a higher number (max 50) if you are using the proxy
- # Allowed values: n, where n > 4
- # default values: None
- - name: "X_CSI_GRPC_MAX_THREADS"
- value: "4"
-
- sideCars:
- # Uncomment the following to install 'external-health-monitor' sidecar to enable health monitor of CSI volumes from Controller plugin.
- # Also set the env variable controller.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true" for controller plugin.
- # Also set the env variable node.envs.X_CSI_HEALTH_MONITOR_ENABLED to "true" for node plugin.
- #- name: external-health-monitor
- # args: ["--monitor-interval=300s"]
-
- controller:
- envs:
- # X_CSI_HEALTH_MONITOR_ENABLED: Determines if the controller plugin will monitor health of CSI volumes- volume status, volume condition
- # Install the 'external-health-monitor' sidecar accordingly.
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- node:
- envs:
- # X_CSI_POWERMAX_ISCSI_ENABLE_CHAP: Determine if the node plugin is going to configure
- # ISCSI node databases on the nodes with the CHAP credentials
- # If enabled, the CHAP secret must be provided in the credentials secret
- # and set to the key "chapsecret"
- # Allowed values:
- # "true" - CHAP is enabled
- # "false" - CHAP is disabled
- # Default value: "false"
- - name: "X_CSI_POWERMAX_ISCSI_ENABLE_CHAP"
- value: "false"
- # X_CSI_HEALTH_MONITOR_ENABLED: Enable/Disable health monitor of CSI volumes from node plugin- volume usage, volume condition
- # Allowed values:
- # true: enable checking of health condition of CSI volumes
- # false: disable checking of health condition of CSI volumes
- # Default value: false
- - name: X_CSI_HEALTH_MONITOR_ENABLED
- value: "false"
- # X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node based on array and transport protocol
- # if enabled, user can create custom topology keys by editing node-topology-config configmap.
- # Allowed values:
- # true: enable the filtration based on config map
- # false: disable the filtration based on config map
- # Default value: false
- - name: X_CSI_TOPOLOGY_CONTROL_ENABLED
- value: "false"
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: powermax-config-params
- namespace: test-powermax
-data:
- driver-config-params.yaml: |
- CSI_LOG_LEVEL: "debug"
- CSI_LOG_FORMAT: "JSON"
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: node-topology-config
- namespace: test-powermax
-data:
- topologyConfig.yaml: |
- # allowedConnections contains a list of (node, array and protocol) info for user allowed configuration
- # For any given storage array ID and protocol on a Node, topology keys will be created for just those pair and
- # every other configuration is ignored
- # Please refer to the doc website about a detailed explanation of each configuration parameter
- # and the various possible inputs
- allowedConnections:
- # nodeName: Name of the node on which user wants to apply given rules
- # Allowed values:
- # nodeName - name of a specific node
- # * - all the nodes
- # Examples: "node1", "*"
- - nodeName: "node1"
- # rules is a list of 'StorageArrayID:TransportProtocol' pair. ':' is required between both value
- # Allowed values:
- # StorageArrayID:
- # - SymmetrixID : for specific storage array
- # - "*" :- for all the arrays connected to the node
- # TransportProtocol:
- # - FC : Fibre Channel protocol
- # - ISCSI : iSCSI protocol
- # - "*" - for all the possible Transport Protocol
- # Examples: "000000000001:FC", "000000000002:*", "*:FC", "*:*"
- rules:
- - "000000000001:FC"
- - "000000000002:FC"
- - nodeName: "*"
- rules:
- - "000000000002:FC"
- # deniedConnections contains a list of (node, array and protocol) info for denied configurations by user
- # For any given storage array ID and protocol on a Node, topology keys will be created for every other configuration but
- # not these input pairs
- deniedConnections:
- - nodeName: "node2"
- rules:
- - "000000000002:*"
- - nodeName: "node3"
- rules:
- - "*:*"
-```
-
-
-Note:
+>Note:
- `Kubelet config dir path` is not yet configurable in case of Operator based driver installation.
- Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation.
@@ -582,19 +405,25 @@ To enable this feature, set `X_CSI_VSPHERE_ENABLED` to `true` in the driver man
# Default value: ""
- name: "X_CSI_VSPHERE_HOSTGROUP"
value: ""
- # X_CSI_VCenter_HOST: URL/endpoint of the vCenter where all the ESX are present
- # Allowed value: valid vCenter host endpoint
- # Default value: ""
- - name: "X_CSI_VCenter_HOST"
- value: ""
- # X_CSI_VCenter_USERNAME: username from the vCenter credentials
- # Allowed value: valid vCenter host username
- # Default value: ""
- - name: "X_CSI_VCenter_USERNAME"
- value: ""
- # X_CSI_VCenter_PWD: password from the vCenter credentials
- # Allowed value: valid vCenter host password
- # Default value: ""
- - name: "X_CSI_VCenter_PWD"
- value: ""
-```
\ No newline at end of file
+```
+Edit the sample for the following `Secret` in the driver manifest and set the required values.
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: vcenter-creds
+ # Set driver namespace
+ namespace: test-powermax
+type: Opaque
+data:
+ # set username to the base64 encoded username
+ username: YWRtaW4=
+ # set password to the base64 encoded password
+ password: YWRtaW4=
+```
+These values can be obtained using base64 encoding as described in the following example:
+```bash
+echo -n "myusername" | base64
+echo -n "mypassword" | base64
+```
+where *myusername* and *mypassword* are credentials for a user with vCenter privileges.
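+
+Alternatively, a sketch of creating the same Secret directly with `kubectl` (which performs the base64 encoding for you; the namespace matches the sample above) is:
+```bash
+kubectl create secret generic vcenter-creds -n test-powermax \
+  --from-literal=username=myusername \
+  --from-literal=password=mypassword
+```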
diff --git a/content/docs/csidriver/installation/operator/powerstore.md b/content/docs/csidriver/installation/operator/powerstore.md
index 110fbca777..4be737301d 100644
--- a/content/docs/csidriver/installation/operator/powerstore.md
+++ b/content/docs/csidriver/installation/operator/powerstore.md
@@ -69,14 +69,14 @@ metadata:
namespace: test-powerstore
spec:
driver:
- configVersion: v2.5.0
+ configVersion: v2.6.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
fsGroupPolicy: ReadWriteOnceWithFSType
storageCapacity: true
common:
- image: "dellemc/csi-powerstore:v2.5.0"
+ image: "dellemc/csi-powerstore:v2.6.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_POWERSTORE_NODE_NAME_PREFIX
diff --git a/content/docs/csidriver/installation/operator/unity.md b/content/docs/csidriver/installation/operator/unity.md
index 1f485b3070..b4a1040cea 100644
--- a/content/docs/csidriver/installation/operator/unity.md
+++ b/content/docs/csidriver/installation/operator/unity.md
@@ -98,12 +98,12 @@ metadata:
namespace: test-unity
spec:
driver:
- configVersion: v2.5.0
+ configVersion: v2.6.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- image: "dellemc/csi-unity:v2.5.0"
+ image: "dellemc/csi-unity:v2.6.0"
imagePullPolicy: IfNotPresent
sideCars:
- name: provisioner
diff --git a/content/docs/csidriver/installation/test/certcsi.md b/content/docs/csidriver/installation/test/certcsi.md
new file mode 100644
index 0000000000..e017c58ee9
--- /dev/null
+++ b/content/docs/csidriver/installation/test/certcsi.md
@@ -0,0 +1,429 @@
+---
+title: Cert-CSI
+linktitle: Cert-CSI
+description: Tool to validate Dell CSI Drivers
+---
+
+Cert-CSI is a tool to validate Dell CSI Drivers. It contains various test suites to validate the drivers.
+
+## Installation
+To install this tool, you can download one of the binary files located in [RELEASES](https://github.com/dell/cert-csi/releases).
+
+You can build the tool by cloning the repository and running this command:
+```bash
+make build
+```
+
+You can also build a docker container image by running this command:
+```bash
+docker build -t cert-csi .
+```
+
+If you want to collect csi-driver resource usage metrics, then please provide the namespace where it can be found and install the metric-server using this command (kubectl is required):
+
+```bash
+make install-ms
+```
+[FOR UNIX] If you want to build and install the tool to your $PATH and enable the **auto-completion** feature, then run this command:
+
+```bash
+make install-nix
+```
+> Alternatively, you can install the metric-server by following the instructions at https://github.com/kubernetes-incubator/metrics-server
+
+## Running Cert-CSI
+
+To get information on how to use the program, you can use the built-in help. If you're using a UNIX-like system and enabled the _auto-completion feature_ while installing the tool, you can use the shell's built-in auto-completion to navigate through the program's subcommands and flags interactively by just pressing TAB.
+
+To run cert-csi, you have to point your environment to a kube cluster. This allows you to receive dynamically formatted suggestions from your cluster.
+For example, if you press TAB while passing the --storageclass (or --sc) argument, the tool will parse all existing Storage Classes from your cluster and suggest them as input.
+
+> To run in a docker container, your command should look something like this:
+> `docker run --rm -it -v ~/.kube/config:/root/.kube/config -v $(pwd):/app/cert-csi cert-csi <command>`
+
+## Driver Certification
+
+You can use cert-csi to launch a certification test run against multiple storage classes to check if the driver adheres to advertised capabilities.
+
+### Preparing Config
+
+To run the certification test, you need to provide a `.yaml` config with storage classes and their capabilities. You can use `example-certify-config.yaml` as an example.
+
+Example:
+```yaml
+storageClasses:
+ - name: # storage-class-name (ex. powerstore)
+ minSize: # minimal size for your sc (ex. 1Gi)
+ rawBlock: # is Raw Block supported (true or false)
+ expansion: # is volume expansion supported (true or false)
+ clone: # is volume cloning supported (true or false)
+ snapshot: # is volume snapshotting supported (true or false)
+ RWX: # is ReadWriteMany volume access mode supported for non RawBlock volumes (true or false)
+ ephemeral: # if exists, then run EphemeralVolumeSuite
+ driver: # driver name for EphemeralVolumeSuite
+ fstype: # fstype for EphemeralVolumeSuite
+ volumeAttributes: # volume attrs for EphemeralVolumeSuite.
+ attr1: # volume attr for EphemeralVolumeSuite
+ attr2: # volume attr for EphemeralVolumeSuite
+```
+
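+For instance, a filled-in config for a hypothetical `powerstore` storage class (all capability values below are illustrative and must match what your driver and array actually support) might look like:
+```yaml
+storageClasses:
+  - name: powerstore
+    minSize: 1Gi
+    rawBlock: true
+    expansion: true
+    clone: true
+    snapshot: true
+    RWX: false
+```
+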
+### Launching Certification Test Run
+
+After preparing a certification configuration file, you can launch certification by running
+```bash
+cert-csi certify --cert-config <path-to-config>
+Optional Params:
+ --vsc: volume snapshot class, required if you specified snapshot capability
+ --timeout: set the timeout value for certification suites
+ --no-metrics: disables metrics aggregation (set if you encounter k8s performance issues)
+ --path: path to folder where reports will be created (if not specified ~/.cert-csi/ will be used)
+```
+
+## Functional Tests
+
+### Running Individual Suites
+#### Volume/PVC Creation
+
+To run volume or PVC creation test suite, run the command:
+```bash
+cert-csi functional-test volume-creation --sc <storage-class> -n 5
+Optional Params:
+--custom-name : To give custom name for PVC while creating only 1 PVC
+--size : To give custom size, possible values for size in Gi/Mi
+--access-mode : To set custom access-modes, possible values - ReadWriteOnce,ReadOnlyMany and ReadWriteMany
+--block : To create raw block volumes
+```
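+
+For example, an illustrative invocation combining these options (the storage class name is an assumption) is:
+```bash
+cert-csi functional-test volume-creation --sc powerstore -n 3 --size 10Gi --access-mode ReadWriteOnce
+```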
+
+#### Provisioning/Pod creation
+
+To run volume provisioning or pod creation test suite, run the command:
+```bash
+cert-csi functional-test provisioning --sc <storage-class>
+Optional Params:
+--volumeNumber : number of volumes to attach to each pod
+--podNumber : number of pods to create
+--podName : To give custom name for pod while creating only 1 pod
+--block : To create raw block volumes and attach it to pods
+--vol-access-mode: To set volume access modes
+```
+
+#### Running Volume Deletion suite
+
+To run volume delete test suite, run the command:
+```bash
+cert-csi functional-test volume-deletion
+--pvc-name value : PVC name to delete
+--pvc-namespace : PVC namespace where PVC is present
+```
+
+#### Running Pod Deletion suite
+
+To run pod deletion test suite, run the command:
+```bash
+cert-csi functional-test pod-deletion
+--pod-name : Pod name to delete
+--pod-namespace : Pod namespace where pod is present
+```
+
+#### Running Cloned Volume deletion suite
+
+To run cloned volume deletion test suite, run the command:
+```bash
+cert-csi functional-test clone-volume-deletion
+--clone-volume-name : Volume name to delete
+```
+
+#### Multi Attach Volume Tests
+
+To run multi-attach volume test suite, run the command:
+```bash
+cert-csi functional-test multi-attach-vol --sc <storage-class>
+--pods : Number of pods to create
+--block : To create raw block volume
+```
+
+#### Ephemeral volumes suite
+
+To run ephemeral volume test suite, run the command:
+```bash
+cert-csi functional-test ephemeral-volume --driver <driver-name> --attr ephemeral-config.properties
+--pods : Number of pods to create
+--pod-name : To create pods with custom name
+--attr : CSI volume attributes file name
+--fs-type: FS Type can be specified
+
+Sample ephemeral-config.properties (key/value pair)
+arrayId=arr1
+protocol=iSCSI
+size=5Gi
+```
+
+#### Storage Capacity Tracking Suite
+
+To run storage capacity tracking test suite, run the command:
+```bash
+cert-csi functional-test capacity-tracking --sc <storage-class> --drns <driver-namespace> --pi <poll-interval>
+Optional Params:
+--vs : volume size to be created
+```
+
+### Other Options
+
+#### Generating tabular report from DB
+
+To generate tabular report from the database, run the command:
+```bash
+cert-csi -db <db-path> functional-report -tabular
+Example: cert-csi -db ./test.db functional-report -tabular
+```
+> Note: DB is mandatory parameter
+
+#### Generating XML report from DB
+
+To generate XML report from the database, run the command:
+```bash
+cert-csi -db <db-path> functional-report -xml
+Example: cert-csi -db ./test.db functional-report -xml
+```
+> Note: DB is mandatory parameter
+
+#### Including Array configuration file
+
+```bash
+# Array properties sample (array-config.properties)
+arrayIPs: 192.168.1.44
+name: Unity
+user: root
+password: test-password
+arrayIds: arr-1
+```
+
+### Screenshots
+
+Tabular Report example
+
+![img9](../img/tabularReport.png)
+
+## Kubernetes End-To-End Tests
+All Kubernetes end-to-end tests require that you provide the driver config based on the storage class you want to test and the version of Kubernetes you want to test against. These are mandatory parameters and are passed on the command line, for example:
+`--driver-config <driver-config-file> --version "v1.25.0"`
+
+### Running kubernetes end-to-end tests
+
+To run kubernetes end-to-end tests, run the command:
+```bash
+cert-csi k8s-e2e --config <kubeconfig-path> --driver-config <driver-config-file> --focus <focus-regex> --timeout <timeout> --version <k8s-version, e.g. "v1.25.0"> --skip-tests <ignore-tests-file> --skip <skip-regex>
+```
+
+### Kubernetes end-to-end reporting
+
+- All the reports generated by Kubernetes end-to-end tests will be under the `$HOME/reports` directory by default if the user doesn't specify a report path.
+- The Kubernetes end-to-end test execution log file will be placed under `$HOME/reports/execution_[storage class name].log`
+- Cert-CSI logs will be present in the execution directory as `info.log` and `error.log`
+
+### Test config files format
+- #### [driver-config](https://github.com/dell/cert-csi/blob/main/pkg/utils/testdata/config-nfs.yaml)
+- #### [ignore-tests](https://github.com/dell/cert-csi/blob/main/pkg/utils/ignore.yaml)
+
+### Example Commands
+- ```bash
+ cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-nfs.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --skip-tests "/root/e2e_config/ignore.yaml"
+ ```
+- ```bash
+ ./cert-csi k8s-e2e --config "/root/.kube/config" --driver-config "/root/e2e_config/config-iscsi.yaml" --focus "External.Storage.*" --timeout "2h" --version "v1.25.0" --focus-file "capacity.go"
+ ```
+
+## Performance Tests
+
+All performance tests require that you provide a storage class that you want to test. You can provide multiple storage classes in one command. For example, `... --sc <storage-class-1> --sc <storage-class-2> ...`
+
+### Running Individual Suites
+#### Running Volume Creation test suite
+
+To run volume creation test suite, run the command:
+```bash
+cert-csi test volume-creation --sc <storage-class> -n 25
+```
+
+#### Running Provisioning test suite
+
+To run volume provisioning test suite, run the command:
+```bash
+cert-csi test provisioning --sc <storage-class> --podNum 1 --volNum 10
+```
+
+#### Running Scalability test suite
+
+To run scalability test suite, run the command:
+```bash
+cert-csi test scaling --sc <storage-class> --replicas 5
+```
+
+#### Running VolumeIO test suite
+
+To run volumeIO test suite, run the command:
+```bash
+cert-csi test vio --sc <storage-class> --chainNumber 5 --chainLength 20
+```
+
+#### Running Snap test suite
+
+To run volume snapshot test suite, run the command:
+```bash
+cert-csi test snap --sc <storage-class> --vsc <volume-snapshot-class>
+```
+
+#### Running Multi-attach volume suite
+
+To run multi-attach volume test suite, run the command:
+```bash
+cert-csi test multi-attach-vol --sc <storage-class> --podNum 3
+```
+```bash
+cert-csi test multi-attach-vol --sc <storage-class> --podNum 3 --block # to use raw block volumes
+```
+
+#### Running Replication test suite
+
+To run replication test suite, run the command:
+```bash
+cert-csi test replication --sc <storage-class> --pn 1 --vn 5 --vsc <volume-snapshot-class>
+```
+
+#### Running Volume Cloning test suite
+
+To run volume cloning test suite, run the command:
+```bash
+cert-csi test clone-volume --sc <storage-class> --pn 1 --vn 5
+```
+
+#### Running Volume Expansion test suite
+
+To run volume expansion test, run the command:
+```bash
+cert-csi test expansion --sc <storage-class> --pn 1 --vn 5 --iSize 8Gi --expSize 16Gi
+
+cert-csi test expansion --sc <storage-class> --pn 1 --vn 5 # `iSize` and `expSize` default to 3Gi and 6Gi respectively
+
+cert-csi test expansion --sc <storage-class> --pn 1 --vn 5 --block # to create block volumes
+```
+
+#### Running Blocksnap suite
+
+To run block snapshot test suite, run the command:
+```bash
+cert-csi test blocksnap --sc <storage-class> --vsc <volume-snapshot-class>
+```
+
+### Running Longevity mode
+
+To run longevity test suite, run the command:
+```bash
+cert-csi test --sc <storage-class> --longevity
+```
+
+### Interacting with DB
+
+#### Generating report from runs without running tests
+
+To generate test report from the database, run the command:
+```bash
+cert-csi --db <db-path> report --testrun <test-run-name> --html --txt
+Report types:
+--html: performance html report
+--txt: performance txt report
+--xml: junit compatible xml report, contains basic run information
+--tabular: tidy html report with basic run information
+```
+
+#### Customizing report folder
+
+To specify test report folder path, use --path option as follows:
+```bash
+cert-csi --db <db-path> report --testrun <test-run-name> --path <report-path>
+Options:
+--path: path to folder where reports will be created (if not specified ~/.cert-csi/ will be used)
+```
+
+#### Generating report from multiple databases and test runs
+
+To generate report from multiple databases, run the command:
+```bash
+cert-csi report --tr <db-path>:<test-run-name> --tr <db-path>:<test-run-name> ... --tabular --xml
+Supported report types:
+--xml
+--tabular
+```
+
+#### Listing all known test runs
+
+To list all test runs, run the command:
+```bash
+cert-csi --db <db-path> list test-runs
+```
+
+### Other options
+
+#### Customizing report folder
+
+To specify test report folder path, use --path option as follows:
+```bash
+cert-csi --path <report-path>
+Commands:
+ test
+ certify
+ report
+```
+
+#### Running with enabled driver resource usage metrics
+
+To run tests with driver resource usage metrics enabled, run the command:
+```bash
+cert-csi test --sc <...> --ns <driver-namespace>
+```
+
+#### Running custom hooks from program
+
+To run tests with custom hooks, run the command:
+```bash
+cert-csi test --sc <...> --sh ./hooks/start.sh --rh ./hooks/ready.sh --fh ./hooks/finish.sh
+```
+
+## Screenshots
+
+### Running provisioning test
+
+![img1](../img/unifiedTest.png)
+
+You can interrupt the application by sending an interruption signal (for example, pressing Ctrl + C).
+It will stop polling and try to clean up resources.
+
+![img2](../img/interruptTest.png)
+
+### Running scaling test
+
+![img3](../img/scaling.PNG)
+
+### Listing available test runs
+
+![img4](../img/listRuns.png)
+
+### Running longevity mode
+
+![img5](../img/longevity.png)
+
+### Multi DB Tabular report example
+
+![img6](../img/multiDBTabularReport.png)
+
+Text report example
+
+![img7](../img/textReport.png)
+
+### HTML report example
+
+![img8](../img/HTMLReport.png)
+
+### Resource usage example chart
+
+![img9](../img/resourceUsage.png)
\ No newline at end of file
diff --git a/content/docs/csidriver/installation/test/img/HTMLReport.png b/content/docs/csidriver/installation/test/img/HTMLReport.png
new file mode 100644
index 0000000000..a2cd5b94d6
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/HTMLReport.png differ
diff --git a/content/docs/csidriver/installation/test/img/interruptTest.png b/content/docs/csidriver/installation/test/img/interruptTest.png
new file mode 100644
index 0000000000..fc1b79f230
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/interruptTest.png differ
diff --git a/content/docs/csidriver/installation/test/img/listRuns.png b/content/docs/csidriver/installation/test/img/listRuns.png
new file mode 100644
index 0000000000..9cf977f608
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/listRuns.png differ
diff --git a/content/docs/csidriver/installation/test/img/longevity.png b/content/docs/csidriver/installation/test/img/longevity.png
new file mode 100644
index 0000000000..490a7b90db
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/longevity.png differ
diff --git a/content/docs/csidriver/installation/test/img/multiDBTabularReport.png b/content/docs/csidriver/installation/test/img/multiDBTabularReport.png
new file mode 100644
index 0000000000..595f1ca58c
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/multiDBTabularReport.png differ
diff --git a/content/docs/csidriver/installation/test/img/resourceUsage.png b/content/docs/csidriver/installation/test/img/resourceUsage.png
new file mode 100644
index 0000000000..bd64d0608b
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/resourceUsage.png differ
diff --git a/content/docs/csidriver/installation/test/img/scaling.PNG b/content/docs/csidriver/installation/test/img/scaling.PNG
new file mode 100644
index 0000000000..d747381d59
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/scaling.PNG differ
diff --git a/content/docs/csidriver/installation/test/img/tabularReport.png b/content/docs/csidriver/installation/test/img/tabularReport.png
new file mode 100644
index 0000000000..da67d8ff86
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/tabularReport.png differ
diff --git a/content/docs/csidriver/installation/test/img/textReport.png b/content/docs/csidriver/installation/test/img/textReport.png
new file mode 100644
index 0000000000..9e56b867bc
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/textReport.png differ
diff --git a/content/docs/csidriver/installation/test/img/unifiedTest.png b/content/docs/csidriver/installation/test/img/unifiedTest.png
new file mode 100644
index 0000000000..9d00df9a22
Binary files /dev/null and b/content/docs/csidriver/installation/test/img/unifiedTest.png differ
diff --git a/content/docs/csidriver/installation/test/powermax.md b/content/docs/csidriver/installation/test/powermax.md
index f1350305ce..3e177c3630 100644
--- a/content/docs/csidriver/installation/test/powermax.md
+++ b/content/docs/csidriver/installation/test/powermax.md
@@ -40,8 +40,6 @@ This script does the following:
- After that, it uses that PVC as the data source to create a new PVC and mounts it on the same container. It checks if the file that existed in the source PVC also exists in the new PVC, calculates its checksum, and compares it to the checksum previously calculated.
- Finally, it cleans up all the resources that are created as part of the test.
-> This is not supported for replicated volumes.
-
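+For reference, creating a PVC from an existing PVC as its data source (which is what the volume clone test above exercises) uses the standard Kubernetes `dataSource` field; a minimal sketch with illustrative names, storage class, and size is:
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: cloned-pvc
+spec:
+  storageClassName: powermax
+  dataSource:
+    name: source-pvc
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+```
+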
#### Snapshot test
Use this procedure to perform a snapshot test.
diff --git a/content/docs/csidriver/partners/rancher.md b/content/docs/csidriver/partners/rancher.md
index a818009ab1..d509db9522 100644
--- a/content/docs/csidriver/partners/rancher.md
+++ b/content/docs/csidriver/partners/rancher.md
@@ -3,10 +3,10 @@ title: "RKE"
Description: "About Rancher Kubernetes Engine"
---
-The Dell CSI Drivers support Rancher Kubernetes Engine (RKE) v1.2.8.
+The Dell CSI Drivers support Rancher Kubernetes Engine (RKE) v1.4.1.
The installation process for the drivers on such clusters remains the same as the installation process on regular Kubernetes clusters. Installation on this cluster is done using helm and via Operator has not been qualified.
## RKE Examples
-![](../rancher1.PNG)
\ No newline at end of file
+![](../rancher1.PNG)
diff --git a/content/docs/csidriver/partners/tanzu.md b/content/docs/csidriver/partners/tanzu.md
index 33c7aafeaa..41c96e03ed 100644
--- a/content/docs/csidriver/partners/tanzu.md
+++ b/content/docs/csidriver/partners/tanzu.md
@@ -5,7 +5,7 @@ Description: "About VMware Tanzu basic"
The CSI Driver for Dell Unity XT, PowerScale and PowerStore supports VMware Tanzu. The deployment of these Tanzu clusters is done using the VMware Tanzu supervisor cluster and the supervisor namespace.
-Currently, VMware Tanzu with normal configuration(without NAT) supports Kubernetes 1.20 and higher.
+Currently, VMware Tanzu 7.0 with normal configuration (without NAT) supports Kubernetes 1.22.
The CSI driver can be installed on this cluster using Helm. Installation of CSI drivers in Tanzu via Operator has not been qualified.
To login to the Tanzu cluster, download kubectl and kubectl vsphere binaries to any of the system
diff --git a/content/docs/csidriver/release/operator.md b/content/docs/csidriver/release/operator.md
index 0583c4272f..b04deff352 100644
--- a/content/docs/csidriver/release/operator.md
+++ b/content/docs/csidriver/release/operator.md
@@ -3,22 +3,24 @@ title: Operator
description: Release notes for Dell CSI Operator
---
-## Release Notes - Dell CSI Operator 1.10.0
+## Release Notes - Dell CSI Operator 1.11.0
### New Features/Changes
-- [Added support to Kubernetes 1.25](https://github.com/dell/csm/issues/478)
-- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+- [Added support to Kubernetes 1.26](https://github.com/dell/csm/issues/597)
+- [Added pre-approved GUIDs support for PowerFlex](https://github.com/dell/csm/issues/402)
+- [Updated Go version from 1.19 to 1.20](https://github.com/dell/csm/issues/658)
->**Note:** There will be a delay in certification of Dell CSI Operator 1.10.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.10.0 release.
+>**Note:** There will be a delay in certification of Dell CSI Operator 1.11.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.11.0 release.
### Fixed Issues
-- [Fix for secrets getting regenerated on apply of CSM driver manifest](https://github.com/dell/csm/issues/485)
+- [Updated PowerMax environment variable names for consistency](https://github.com/dell/csm/issues/584)
+- [Updated PowerMax vCenter to use secrets for its credentials](https://github.com/dell/csm/issues/686)
### Known Issues
There are no known issues in this release.
### Support
-The Dell CSI Operator image is available on Dockerhub and is officially supported by Dell.
+The Dell CSI Operator image is available on Docker Hub and is officially supported by Dell.
For any CSI operator and driver issues, questions or feedback, please follow our [support process](../../../support/).
diff --git a/content/docs/csidriver/release/powerflex.md b/content/docs/csidriver/release/powerflex.md
index 4c82574ead..05ac5bd98a 100644
--- a/content/docs/csidriver/release/powerflex.md
+++ b/content/docs/csidriver/release/powerflex.md
@@ -3,20 +3,16 @@ title: PowerFlex
description: Release notes for PowerFlex CSI driver
---
-## Release Notes - CSI PowerFlex v2.5.0
+## Release Notes - CSI PowerFlex v2.6.0
### New Features/Changes
-- [Read Only Block support](https://github.com/dell/csm/issues/509)
-- [Added support for setting QoS limits by CSI-PowerFLex driver](https://github.com/dell/csm/issues/533)
-- [Added support for standardizing helm installation for CSI-PowerFlex driver](https://github.com/dell/csm/issues/494)
-- [Automated SDC deployment on RHEL 7.9 and 8.x](https://github.com/dell/csm/issues/494)
-- [SLES 15 SP4 support added](https://github.com/dell/csm/issues/539)
-- [OCP 4.11 support added](https://github.com/dell/csm/issues/480)
-- [K8 1.25 support added](https://github.com/dell/csm/issues/478)
-- [Added support for PowerFlex storage system v4.0](https://github.com/dell/csm/issues/476)
-
-### Fixed Issues
-- [Fix for volume RO mount option](https://github.com/dell/csm/issues/503)
+- [PowerFlex pre-approved GUIDs support added.](https://github.com/dell/csm/issues/402)
+- [Rename SDC support added.](https://github.com/dell/csm/issues/402)
+- [K8 1.26 support added.](https://github.com/dell/csm/issues/597)
+- [RKE 1.4.1 support added.](https://github.com/dell/csm/issues/670)
+- [MKE 3.6.0 support added.](https://github.com/dell/csm/issues/672)
+
+### Fixed Issues
### Known Issues
@@ -24,6 +20,8 @@ description: Release notes for PowerFlex CSI driver
|-------|------------|
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation.| Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
+| sdc:3.6.0.6 is causing issues while installing the csi-powerflex driver on Ubuntu and RHEL 8.3 | Workaround: Change the powerflexSdc to sdc:3.6 in values.yaml https://github.com/dell/csi-powerflex/blob/72b27acee7553006cc09df97f85405f58478d2e4/helm/csi-vxflexos/values.yaml#L13 |
+
### Note:
diff --git a/content/docs/csidriver/release/powermax.md b/content/docs/csidriver/release/powermax.md
index 4659f4b5de..9fec415c0b 100644
--- a/content/docs/csidriver/release/powermax.md
+++ b/content/docs/csidriver/release/powermax.md
@@ -3,19 +3,20 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.5.0
+## Release Notes - CSI PowerMax v2.6.0
> Note: Starting from CSI v2.4.0, Only Unisphere 10.0 REST endpoints are supported. It is mandatory that Unisphere should be updated to 10.0. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
### New Features/Changes
-- [Added support for Kubernetes 1.25.](https://github.com/dell/csm/issues/478)
-- [csi-reverseproxy is mandated along with the driver](https://github.com/dell/csm/issues/495)
-- [Added support for auto RDM for vSphere over FC](https://github.com/dell/csm/issues/528)
-- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
-- [SLES 15 SP4 support added](https://github.com/dell/csm/issues/539)
-
->Note: Replication for PowerMax is supported in Kubernetes 1.25.
->Replication is not supported with VMware/Vsphere virtualization support.
+- [Added support for RKE 1.4.2.](https://github.com/dell/csm/issues/670)
+- [Added support to cleanup powerpath dead paths](https://github.com/dell/csm/issues/669)
+- [Added support for Kubernetes 1.26](https://github.com/dell/csm/issues/597)
+- [Added support to clone the replicated volumes](https://github.com/dell/csm/issues/646)
+- [Added support to restore the snapshot of metro volumes](https://github.com/dell/csm/issues/652)
+- [Added support for MKE 3.6.1](https://github.com/dell/csm/issues/672)
+- [Added support for user array migration between arrays](https://github.com/dell/csm/issues/267)
+- [Added support for Observability](https://github.com/dell/csm/issues/586)
+- [Added support for generating manifest file via CSM Installation wizard](https://github.com/dell/csm/issues/591)
### Fixed Issues
There are no fixed issues in this release.
diff --git a/content/docs/csidriver/release/powerscale.md b/content/docs/csidriver/release/powerscale.md
index 7732968b83..3403f542ea 100644
--- a/content/docs/csidriver/release/powerscale.md
+++ b/content/docs/csidriver/release/powerscale.md
@@ -3,14 +3,14 @@ title: PowerScale
description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.5.0
+## Release Notes - CSI Driver for PowerScale v2.6.0
### New Features/Changes
-- [Add support for Standalone Helm charts.](https://github.com/dell/csm/issues/506)
-- [Add an option to the CSI driver force the client list to be updated even if there are unresolvable host.](https://github.com/dell/csm/issues/534)
-- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
-- [Added support for Kubernetes 1.25](https://github.com/dell/csm/issues/478)
+- [Added support for Kubernetes 1.26](https://github.com/dell/csm/issues/597)
+- [Added support for Ubuntu 22.04](https://github.com/dell/csm/issues/671)
+- [Added support for MKE 3.6.x](https://github.com/dell/csm/issues/672)
+- [Added support for RKE 1.4.1](https://github.com/dell/csm/issues/670)
### Fixed Issues
diff --git a/content/docs/csidriver/release/powerstore.md b/content/docs/csidriver/release/powerstore.md
index ebf09d1f0d..28ebee50b1 100644
--- a/content/docs/csidriver/release/powerstore.md
+++ b/content/docs/csidriver/release/powerstore.md
@@ -3,11 +3,12 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.5.1
+## Release Notes - CSI PowerStore v2.6.0
### New Features/Changes
-There are no features/changes in this release.
+- [Added support for Resiliency](https://github.com/dell/csm/issues/587)
+- [Added support for Kubernetes 1.26](https://github.com/dell/csm/issues/597)
### Fixed Issues
diff --git a/content/docs/csidriver/release/unity.md b/content/docs/csidriver/release/unity.md
index 12801433ba..bcc7979e32 100644
--- a/content/docs/csidriver/release/unity.md
+++ b/content/docs/csidriver/release/unity.md
@@ -3,12 +3,19 @@ title: Unity XT
description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity XT v2.5.0
+## Release Notes - CSI Unity XT v2.6.0
### New Features/Changes
-- [Added support to Kubernetes 1.25](https://github.com/dell/csm/issues/478)
-- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+- [Added support to Kubernetes 1.26](https://github.com/dell/csm/issues/597)
+- [Added support for MKE 3.6](https://github.com/dell/csm/issues/672)
+- [Added support for RKE 1.4.1](https://github.com/dell/csm/issues/670)
+- [Added support for SLES SP4](https://github.com/dell/csm/issues/695)
+
+### Fixed Issues
+
+- [PVC fails to resize with message "Invalid value 0; must be greater than zero"](https://github.com/dell/csm/issues/507)
+
### Known Issues
diff --git a/content/docs/csidriver/troubleshooting/powerflex.md b/content/docs/csidriver/troubleshooting/powerflex.md
index 3a93f5bed6..7a395a95a8 100644
--- a/content/docs/csidriver/troubleshooting/powerflex.md
+++ b/content/docs/csidriver/troubleshooting/powerflex.md
@@ -14,11 +14,11 @@ description: Troubleshooting PowerFlex Driver
|CreateVolume error System is not configured in the driver | Powerflex name if used for systemID in StorageClass ensure same name is also used in array config systemID |
|Defcontext mount option seems to be ignored, volumes still are not being labeled correctly.|Ensure SElinux is enabled on a worker node, and ensure your container run time manager is properly configured to be utilized with SElinux.|
|Mount options that interact with SElinux are not working (like defcontext).|Check that your container orchestrator is properly configured to work with SElinux.|
-|Installation of the driver on Kubernetes v1.23/v1.24/v1.25 fails with the following error: ```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.23/v1.24/v1.25 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)|
+|Installation of the driver on Kubernetes v1.24/v1.25/v1.26 fails with the following error: ```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.24/v1.25/v1.26 requires the v1 version of snapshot CRDs to be created in the cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)|
| The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self assigned certificate is used for PowerFlex array. See [certificate validation for PowerFlex Gateway](../../installation/helm/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)|
| When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. |
| The controller pod is stuck and producing errors such as" `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.26.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| CSI-PowerFlex volumes cannot mount; are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: 1. Remove any multipath mapping involving a powerflex volume with `multipath -f ` 2. Blacklist CSI-PowerFlex volumes in multipath config file |
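If you need to blacklist PowerFlex volumes as described in the last row above, a minimal sketch follows; it assumes PowerFlex/SDC volumes appear as `/dev/scini*` block devices on the worker node (verify the device naming on your nodes before applying):

```console
# Append a blacklist stanza for PowerFlex (SDC) devices, then restart multipathd
cat <<'EOF' >> /etc/multipath.conf
blacklist {
    devnode "^scini[a-z]+"
}
EOF
systemctl restart multipathd
```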
diff --git a/content/docs/csidriver/troubleshooting/powermax.md b/content/docs/csidriver/troubleshooting/powermax.md
index ba6db41fbf..960491a02a 100644
--- a/content/docs/csidriver/troubleshooting/powermax.md
+++ b/content/docs/csidriver/troubleshooting/powermax.md
@@ -5,10 +5,15 @@ description: Troubleshooting PowerMax Driver
---
| Symptoms | Prevention, Resolution or Workaround |
|------------|--------------|
-| Warning about feature gates | Double check that you have applied all the features to the indicated processes. Restart kubelet when remediated.|
| `kubectl describe pod powermax-controller- –n ` indicates that the driver image could not be loaded | You may need to put an insecure-registries entry in `/etc/docker/daemon.json` or log in to the docker registry |
| `kubectl logs powermax-controller- –n driver` logs show that the driver cannot authenticate | Check your secret’s username and password |
| `kubectl logs powermax-controller- –n driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty or it has the valid certificates|
-|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.23.0 < 1.27.0 which is incompatible with Kubernetes V1.23.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which are not supported.|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powermax) for more details |
+| After the migration group reaches the “migrated” state, it is unable to move to the “commit ready” state because the new paths are not being discovered on the cluster nodes.| Run the following commands manually on the cluster nodes: `rescan-scsi-bus.sh -i` and `rescan-scsi-bus.sh -a`|
+| `Failed to fetch details for array: 000000000000. [Unauthorized]` | Make sure that the correct encrypted username and password are used in the secret files, and ensure that RBAC is enabled for the user |
+| `Error looking up volume for idempotence check: Not Found` or `Get Volume step fails for: (000000000000) symID with error (Invalid Response from API)`| Make sure that the Unisphere endpoint does not end with a trailing slash |
+|`FailedPrecondition desc = no topology keys could be generate`| Make sure that FC or iSCSI connectivity to the arrays is properly configured |
+| CreateHost failed with error `initiator is already part of different host.` | Set `modifyHostName` to true in values.yaml, or remove the initiator from the existing host |
+| `kubectl logs powermax-controller- –n ` driver logs say connection refused and the reverseproxy logs say "Failed to setup server.(secrets \"secret-name\" not found)" | Make sure the given secret exists on the cluster |
diff --git a/content/docs/csidriver/troubleshooting/unity.md b/content/docs/csidriver/troubleshooting/unity.md
index 6933aa5630..1fcd86fc2c 100644
--- a/content/docs/csidriver/troubleshooting/unity.md
+++ b/content/docs/csidriver/troubleshooting/unity.md
@@ -12,6 +12,5 @@ description: Troubleshooting Unity XT Driver
| Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically|
| If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. |
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.26.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.24.0 < 1.27.0 which is incompatible with Kubernetes 1.24.6-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
-| Volume attachments are not removed after deleting the pods | If you are using Kubernetes version < 1.24, assign the volume name prefix such that the total length of volume name created in array should be more than 68 bytes. From Kubernetes version >= 1.24, this issue is taken care. Please refer the kubernetes issue https://github.com/kubernetes/kubernetes/issues/97230 which has detailed explanation. |
diff --git a/content/docs/csidriver/upgradation/drivers/isilon.md b/content/docs/csidriver/upgradation/drivers/isilon.md
index 84d5dccee1..7e45e79755 100644
--- a/content/docs/csidriver/upgradation/drivers/isilon.md
+++ b/content/docs/csidriver/upgradation/drivers/isilon.md
@@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
-## Upgrade Driver from version 2.4.0 to 2.5.0 using Helm
+## Upgrade Driver from version 2.5.0 to 2.6.0 using Helm
**Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Clone the repository using `git clone -b v2.5.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
+1. Clone the repository using `git clone -b v2.6.0 https://github.com/dell/csi-powerscale.git`, then copy helm/csi-isilon/values.yaml to a new location with a custom name, say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer`
3. Upgrade the CSI Driver for Dell PowerScale using the following command:
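   A typical invocation, mirroring the upgrade steps for the other drivers in this documentation (the `isilon` namespace and the settings file name below are assumptions; adjust them to your deployment):

   ```console
   ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade
   ```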
diff --git a/content/docs/csidriver/upgradation/drivers/operator.md b/content/docs/csidriver/upgradation/drivers/operator.md
index 5d317b2a1e..a2fe295dec 100644
--- a/content/docs/csidriver/upgradation/drivers/operator.md
+++ b/content/docs/csidriver/upgradation/drivers/operator.md
@@ -13,7 +13,7 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the
### Using Installation Script
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.11.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator.
@@ -24,5 +24,5 @@ The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role whi
- If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version.
- If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process.
-**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.10.0`.
+**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.11.0`.
diff --git a/content/docs/csidriver/upgradation/drivers/powerflex.md b/content/docs/csidriver/upgradation/drivers/powerflex.md
index 7c2eb2e59d..87af52fa75 100644
--- a/content/docs/csidriver/upgradation/drivers/powerflex.md
+++ b/content/docs/csidriver/upgradation/drivers/powerflex.md
@@ -10,9 +10,9 @@ Description: Upgrade PowerFlex CSI driver
You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator.
-## Update Driver from v2.4 to v2.5 using Helm
+## Update Driver from v2.5 to v2.6 using Helm
**Steps**
-1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.3.0 driver.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.6.0 driver.
2. You need to create config.yaml with the configuration of your system.
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver)
3. Update values file as needed.
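   For reference, the upgrade is then typically run the same way as for the other drivers in this documentation (the `vxflexos` namespace and values file name below are assumptions; adjust them to your deployment):

   ```console
   cd ../dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade
   ```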
diff --git a/content/docs/csidriver/upgradation/drivers/powermax.md b/content/docs/csidriver/upgradation/drivers/powermax.md
index d64909ba08..b7f06e89b1 100644
--- a/content/docs/csidriver/upgradation/drivers/powermax.md
+++ b/content/docs/csidriver/upgradation/drivers/powermax.md
@@ -16,10 +16,10 @@ You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
1. Upgrade Unisphere to have 10.0 endpoint support. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
2. Update the `my-powermax-settings.yaml` to have endpoint with 10.0 support.
-## Update Driver from v2.4 to v2.5 using Helm
+## Update Driver from v2.5 to v2.6 using Helm
**Steps**
-1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
2. Update the values file as needed.
2. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`.
diff --git a/content/docs/csidriver/upgradation/drivers/powerstore.md b/content/docs/csidriver/upgradation/drivers/powerstore.md
index 311751629e..f44e02d6ee 100644
--- a/content/docs/csidriver/upgradation/drivers/powerstore.md
+++ b/content/docs/csidriver/upgradation/drivers/powerstore.md
@@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver
You can upgrade the CSI Driver for Dell PowerStore using Helm.
-## Update Driver from v2.5 to v2.5.1 using Helm
+## Update Driver from v2.5 to v2.6 using Helm
Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.5.1 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.6.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
2. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
@@ -28,7 +28,7 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues.
Add more blocks similar to above for each PowerStore array if necessary.
3. (optional) create new storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
- >Storage classes created by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 driver will not be deleted, v2.5.1 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 driver and make it default in the `secret.yaml` file.
+   >Storage classes created by the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 driver will not be deleted; the v2.6 driver will use the default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 in your cluster, be sure to include the same array you used for the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 driver and make it the default in the `secret.yaml` file.
4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
5. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml` and update parameters as per the requirement.
6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`.
diff --git a/content/docs/csidriver/upgradation/drivers/unity.md b/content/docs/csidriver/upgradation/drivers/unity.md
index d328a4d21a..db3b279b4d 100644
--- a/content/docs/csidriver/upgradation/drivers/unity.md
+++ b/content/docs/csidriver/upgradation/drivers/unity.md
@@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.4.0 to csi-unity v2.5.0
+To upgrade the driver from csi-unity v2.5.0 to csi-unity v2.6.0
-1. Get the latest csi-unity v2.5.0 code from Github using `git clone -b v2.5.0 https://github.com/dell/csi-unity.git`.
+1. Get the latest csi-unity v2.6.0 code from GitHub using `git clone -b v2.6.0 https://github.com/dell/csi-unity.git`.
2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
3. Navigate to csi-unity/dell-csi-helm-installer folder and execute this command:
`./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade`
diff --git a/content/docs/csm_hexagon.png b/content/docs/csm_hexagon.png
index f2d5eecfd1..bba9f9e0a1 100644
Binary files a/content/docs/csm_hexagon.png and b/content/docs/csm_hexagon.png differ
diff --git a/content/docs/deployment/_index.md b/content/docs/deployment/_index.md
index 5c698bdce4..0c84e03328 100644
--- a/content/docs/deployment/_index.md
+++ b/content/docs/deployment/_index.md
@@ -9,7 +9,7 @@ The Container Storage Modules along with the required CSI Drivers can each be de
{{< cardpane >}}
{{< card header="[**CSM Operator**](csmoperator/)"
- footer="Supports driver [PowerScale](csmoperator/drivers/powerscale/), modules [Authorization](csmoperator/modules/authorization/) [Replication](csmoperator/modules/replication/)">}}
+    footer="Supported drivers: [PowerScale](csmoperator/drivers/powerscale/), [PowerStore](csmoperator/drivers/powerstore/), [PowerFlex](csmoperator/drivers/powerflex/); Supported modules: [Authorization](csmoperator/modules/authorization/), [Replication](csmoperator/modules/replication/), [Observability](csmoperator/modules/observability/)">}}
Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
[...More on installation instructions](csmoperator/)
{{< /card >}}
@@ -22,6 +22,11 @@ The Container Storage Modules and the required CSI Drivers can each be deployed
footer="Installs [PowerStore](../csidriver/installation/helm/powerstore/) [PowerMax](../csidriver/installation/helm/powermax/) [PowerScale](../csidriver/installation/helm/isilon/) [PowerFlex](../csidriver/installation/helm/powerflex/) [Unity](../csidriver/installation/helm/unity/)">}}
Dell CSI Helm installer installs the CSI Driver components using the provided Helm charts.
[...More on installation instructions](../csidriver/installation/helm)
+ {{< /card >}}
+ {{< card header="[CSM Installation Wizard](csminstallationwizard/)"
+ footer="Generates manifest file for installation">}}
+ CSM Installation Wizard generates manifest files to install Dell CSI Drivers and supported modules.
+ [...More on installation instructions](csminstallationwizard)
{{< /card >}}
{{< card header="[Dell CSI Drivers Installation via offline installer](../csidriver/installation/offline)"
footer="[Offline installation for all drivers](../csidriver/installation/offline)">}}
diff --git a/content/docs/deployment/csminstallationwizard/_index.md b/content/docs/deployment/csminstallationwizard/_index.md
new file mode 100644
index 0000000000..d05ddad471
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/_index.md
@@ -0,0 +1,82 @@
+---
+title: "CSM Installation Wizard"
+linkTitle: "CSM Installation Wizard"
+description: Container Storage Modules Installation Wizard
+weight: 1
+---
+
+The [Dell Container Storage Modules Installation Wizard](./src/index.html) is a webpage that generates a single manifest file for installing Dell CSI Drivers and their supported CSM Modules, based on input from the user. This eliminates the need to download individual Helm charts for drivers and modules. The user can enable or disable the necessary modules through the UI, and the manifest file is generated accordingly without manually editing the Helm charts.
+
+>NOTE: The CSM Installation Wizard currently supports Helm-based manifest file generation only.
+
+## Supported Dell CSI Drivers
+
+| CSI Driver | Version |
+| ------------------ | --------- |
+| CSI PowerStore | 2.6.0 |
+| CSI PowerMax | 2.6.0 |
+
+## Supported Dell CSM Modules
+
+| CSM Modules | Version |
+| ---------------------| --------- |
+| Application Mobility | 0.3.0 |
+| CSM Observability | 1.5.0 |
+| CSM Replication | 1.4.0 |
+| CSM Resiliency | 1.5.0 |
+
+## Installation
+
+1. Open the [CSM Installation Wizard](./src/index.html).
+2. Select the `Installation Type` as `Helm`.
+3. Select the `Array`.
+4. Enter the `Image Repository`. The default value is `dellemc`.
+5. Select the `CSM Version`.
+6. Select the modules for installation. If there are module specific inputs, enter their values.
+7. If needed, modify the `Controller Pods Count`.
+8. If needed, select `Install Controller Pods on Control Plane` and/or `Install Node Pods on Control Plane`.
+9. Select `Single Namespace` if the Dell CSI Driver and Modules should be installed in the same namespace.
+10. Enter the `Driver Namespace`. The default value is `csi-`.
+11. Enter the `Module Namespace`. The default value is `csm-module`.
+12. Click on `Generate YAML`.
+13. A manifest file, `values.yaml`, will be generated and downloaded.
+14. A section `Run the following commands to install` will be displayed.
+15. Run the commands displayed to install Dell CSI Driver and Modules using the generated manifest file.
+
+## Install Helm Chart
+
+**Steps**
+
+>> NOTE: Ensure that the namespaces and secrets are created before installing the Helm chart.
+
+1. Add the Dell Helm Charts repository.
+
+ On your terminal, run each of the commands below:
+
+ ```terminal
+ helm repo add dell https://dell.github.io/helm-charts
+ helm repo update
+ ```
+
+2. Copy the downloaded values.yaml file.
+
+3. Look over all the fields in the generated `values.yaml` and fill in/adjust any as needed.
+
+4. For the Observability module, please refer [Observability](../../observability/deployment/#post-installation-dependencies) to install the post installation dependencies.
+
+5. If Authorization is enabled, please refer to [Authorization](../../authorization/deployment/helm/) for the installation and configuration of the Proxy Server.
+
+>> NOTE: Only the Authorization sidecar is enabled by the CSM Installation Wizard. The Proxy Server has to be installed and configured separately.
+
+6. If the Volume Snapshot feature is enabled, please refer to [Volume Snapshot for PowerStore](../../csidriver/installation/helm/powerstore/#optional-volume-snapshot-requirements) and [Volume Snapshot for PowerMax](../../csidriver/installation/helm/powermax/#optional-volume-snapshot-requirements) to install the Volume Snapshot CRDs and the default snapshot controller.
+
+>> NOTE: The CSM Installation Wizard generates values.yaml with the minimal inputs required to install the CSM. To configure additional parameters in values.yaml, please follow the steps outlined in [PowerStore](../../csidriver/installation/helm/powerstore/#install-the-driver), [PowerMax](../../csidriver/installation/helm/powermax/#install-the-driver), [Observability](../../observability/), [Replication](../../replication/), [Resiliency](../../resiliency/), and [Application Mobility](../../applicationmobility/).
+
+7. Install the Helm chart.
+
+ On your terminal, run this command:
+
+ ```terminal
+   helm install [RELEASE_NAME] dell/container-storage-modules -f [VALUES_FILE]
+ Example: helm install powerstore dell/container-storage-modules -f values.yaml
+ ```
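+
+   As an optional check after the install, confirm that the release deployed and its pods are running (the `csi-powerstore` namespace below is an assumption; use the namespaces you entered in the wizard):
+
+   ```terminal
+   helm list -n csi-powerstore
+   kubectl get pods -n csi-powerstore
+   ```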
diff --git a/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.4.0.properties b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.4.0.properties
new file mode 100644
index 0000000000..2e9afb6e8c
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.4.0.properties
@@ -0,0 +1,6 @@
+driverVersion=v2.4.0
+vgsnapshotImage=csi-volumegroup-snapshotter:v1.1.0
+replicationImage=dell-csi-replicator:v1.3.0
+authorizationImage=csm-authorization-sidecar:v1.4.0
+migrationImage=dell-csi-migrator:v1.0.0
+powermaxCSIReverseProxyImage=csipowermax-reverseproxy:v2.3.0
diff --git a/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.5.0.properties b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.5.0.properties
new file mode 100644
index 0000000000..3fad6a924e
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.5.0.properties
@@ -0,0 +1,6 @@
+driverVersion=v2.5.0
+vgsnapshotImage=csi-volumegroup-snapshotter:v1.1.0
+replicationImage=dell-csi-replicator:v1.3.0
+authorizationImage=csm-authorization-sidecar:v1.5.0
+migrationImage=dell-csi-migrator:v1.0.0
+powermaxCSIReverseProxyImage=csipowermax-reverseproxy:v2.4.0
\ No newline at end of file
diff --git a/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.6.0.properties b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.6.0.properties
new file mode 100644
index 0000000000..d242f20dfd
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/src/csm-versions/csm-1.6.0.properties
@@ -0,0 +1,7 @@
+driverVersion=v2.6.0
+vgsnapshotImage=csi-volumegroup-snapshotter:v1.2.0
+replicationImage=dell-csi-replicator:v1.4.0
+podmonImage=podmon:v1.5.0
+authorizationImage=csm-authorization-sidecar:v1.6.0
+migrationImage=dell-csi-migrator:v1.1.0
+powermaxCSIReverseProxyImage=csipowermax-reverseproxy:v2.5.0
diff --git a/content/docs/deployment/csminstallationwizard/src/csm-versions/default-values.properties b/content/docs/deployment/csminstallationwizard/src/csm-versions/default-values.properties
new file mode 100644
index 0000000000..0f7bf197ab
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/src/csm-versions/default-values.properties
@@ -0,0 +1,4 @@
+csmVersion=1.6.0
+imageRepository=dellemc
+controllerCount=2
+nodeSelectorLabel=node-role.kubernetes.io/control-plane:
\ No newline at end of file
diff --git a/content/docs/deployment/csminstallationwizard/src/index.html b/content/docs/deployment/csminstallationwizard/src/index.html
new file mode 100644
index 0000000000..fcbf85ed90
--- /dev/null
+++ b/content/docs/deployment/csminstallationwizard/src/index.html
@@ -0,0 +1,499 @@
+<!-- CSM Installation Wizard | Dell Technologies: wizard page markup, bundled styles, and scripts (499 lines) omitted -->
}}
@@ -32,14 +33,14 @@ CSM for Observability provides the following capabilities:
{{
}}
| Capability | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
| - | :-: | :-: | :-: | :-: | :-: |
-| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | yes | yes |
+| Collect and expose Volume Metrics via the OpenTelemetry Collector | yes | yes | no | yes | yes |
| Collect and expose File System Metrics via the OpenTelemetry Collector | no | no | no | no | yes |
| Collect and expose export (k8s) node metrics via the OpenTelemetry Collector | no | yes | no | no | no |
-| Collect and expose block storage metrics via the OpenTelemetry Collector | no | yes | no | no | yes |
+| Collect and expose block storage metrics via the OpenTelemetry Collector | yes | yes | no | no | yes |
| Collect and expose file storage metrics via the OpenTelemetry Collector | no | no | no | yes | yes |
-| Non-disruptive config changes | no | yes | no | yes | yes |
-| Non-disruptive log level changes | no | yes | no | yes | yes |
-| Grafana Dashboards for displaying metrics and topology data | no | yes | no | yes | yes |
+| Non-disruptive config changes | yes | yes | no | yes | yes |
+| Non-disruptive log level changes | yes | yes | no | yes | yes |
+| Grafana Dashboards for displaying metrics and topology data | yes | yes | no | yes | yes |
{{
}}
## Supported Operating Systems/Container Orchestrator Platforms
@@ -47,7 +48,7 @@ CSM for Observability provides the following capabilities:
{{
}}
## Supported CSI Drivers
@@ -71,6 +72,7 @@ CSM for Observability supports the following CSI drivers and versions.
| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + |
| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + |
| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0 + |
+| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.5 + |
{{
}}
## Topology Data
diff --git a/content/docs/observability/deployment/_index.md b/content/docs/observability/deployment/_index.md
index 62b10741bb..a65985587a 100644
--- a/content/docs/observability/deployment/_index.md
+++ b/content/docs/observability/deployment/_index.md
@@ -286,6 +286,8 @@ Once Grafana is properly configured, you can import the pre-built observability
| [PowerScale: I/O Performance by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by cluster |
| [PowerScale: Capacity by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_capacity.json) | Provides visibility into the total, used, available capacity and directory quota capacity by cluster |
| [PowerScale: Capacity by Quota](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/volume_capacity.json) | Provides visibility into the subscribed, remaining capacity and usage by quota |
+| [PowerMax: PowerMax Capacity](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powermax/storage_consumption.json) | Provides visibility into the subscribed, used, available capacity for a storage class and associated underlying storage construct |
+| [PowerMax: PowerMax Performance](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powermax/performance.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by storage group and volume |
| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
## Dynamic Configuration
@@ -297,6 +299,7 @@ Some parameters can be configured/updated during runtime without restarting the
| karavi-metrics-powerflex-configmap | karavi-metrics-powerflex |
|
To update any of these settings, run the following command on the Kubernetes cluster then save the updated ConfigMap data.
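For example, a typical way to edit one of these ConfigMaps in place (using the PowerFlex metrics ConfigMap listed above; substitute your CSM namespace):

```console
kubectl edit configmap karavi-metrics-powerflex-configmap -n [CSM_NAMESPACE]
```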
@@ -402,7 +405,7 @@ In this case, all storage system requests made by CSM for Observability will be
2. Copy the `proxy-authz-tokens` Secret from the CSI Driver for Dell PowerFlex to the CSM namespace.
```console
- $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
##### CSI Driver for Dell PowerScale
@@ -414,7 +417,19 @@ In this case, all storage system requests made by CSM for Observability will be
2. Copy the `isilon-proxy-authz-tokens` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace.
```console
- $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/'| sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f
+   $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
+ ```
+
+##### CSI Driver for Dell PowerMax
+
+1. Delete the current `powermax-proxy-authz-tokens` Secret from the CSM namespace.
+ ```console
+ $ kubectl delete secret powermax-proxy-authz-tokens -n [CSM_NAMESPACE]
+ ```
+
+2. Copy the `powermax-proxy-authz-tokens` Secret from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ ```console
+   $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: powermax-proxy-authz-tokens/' | kubectl create -f -
```
#### Update Storage Systems
@@ -424,12 +439,12 @@ If the list of storage systems managed by a Dell CSI Driver have changed, the fo
1. Delete the current `karavi-authorization-config` Secret from the CSM namespace.
```console
- $ kubectl delete secret proxy-authz-tokens -n [CSM_NAMESPACE]
+ $ kubectl delete secret karavi-authorization-config -n [CSM_NAMESPACE]
```
2. Copy the `karavi-authorization-config` Secret from the CSI Driver for Dell PowerFlex namespace to CSM for Observability namespace.
```console
- $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
##### CSI Driver for Dell PowerScale
@@ -439,11 +454,23 @@ If the list of storage systems managed by a Dell CSI Driver have changed, the fo
$ kubectl delete secret isilon-karavi-authorization-config -n [CSM_NAMESPACE]
```
-2. Copy the isilon-karavi-authorization-config Secret from the CSI Driver for Dell PowerScale namespace to CSM for Observability namespace.
+2. Copy the `isilon-karavi-authorization-config` Secret from the CSI Driver for Dell PowerScale namespace to CSM for Observability namespace.
```console
- $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | kubectl create -f
+   $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | kubectl create -f -
```
+##### CSI Driver for Dell PowerMax
+
+1. Delete the current `powermax-karavi-authorization-config` secret from the CSM namespace.
+ ```console
+ $ kubectl delete secret powermax-karavi-authorization-config -n [CSM_NAMESPACE]
+ ```
+
+2. Copy the `powermax-karavi-authorization-config` Secret from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ ```console
+ $ kubectl get secret karavi-authorization-config proxy-server-root-certificate -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: powermax-karavi-authorization-config/' | kubectl create -f -
+ ```
+
### When CSM for Observability does not use the Authorization module
In this case all storage system requests made by CSM for Observability will not be routed through the Authorization module. The following must be performed:
@@ -460,7 +487,12 @@ In this case all storage system requests made by CSM for Observability will not
$ kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-### CSI Driver for Dell PowerStore
+   If the CSI driver secret name is not the default `vxflexos-config`, please use the following command to copy the secret:
+ ```console
+ $ kubectl get secret [VXFLEXOS-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG]/name: vxflexos-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+#### CSI Driver for Dell PowerStore
1. Delete the current `powerstore-config` Secret from the CSM namespace.
```console
@@ -472,7 +504,12 @@ In this case all storage system requests made by CSM for Observability will not
$ kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-### CSI Driver for Dell PowerScale
+   If the CSI driver secret name is not the default `powerstore-config`, please use the following command to copy the secret:
+ ```console
+ $ kubectl get secret [POWERSTORE-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERSTORE-CONFIG]/name: powerstore-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+#### CSI Driver for Dell PowerScale
1. Delete the current `isilon-creds` Secret from the CSM namespace.
```console
@@ -482,4 +519,52 @@ In this case all storage system requests made by CSM for Observability will not
2. Copy the `isilon-creds` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace.
```console
$ kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
- ```
\ No newline at end of file
+ ```
+
+   If the CSI driver secret name is not the default `isilon-creds`, please use the following command to copy the secret:
+ ```console
+ $ kubectl get secret [ISILON-CREDS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CREDS]/name: isilon-creds/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+#### CSI Driver for Dell PowerMax
+
+1. Delete the secrets in `powermax-reverseproxy-config` configmap from the CSM namespace.
+ ```console
+ for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSM_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl delete secret $secret -n [CSM_NAMESPACE]
+ done
+ ```
+
+2. Delete the current `powermax-reverseproxy-config` configmap from the CSM namespace.
+ ```console
+ $ kubectl delete configmap powermax-reverseproxy-config -n [CSM_NAMESPACE]
+ ```
+
+3. Copy the configmap `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ __Note:__ Observability for PowerMax works only with [CSI PowerMax driver with Proxy in StandAlone mode](../../csidriver/installation/helm/powermax/#csi-powermax-driver-with-proxy-in-standalone-mode).
+ ```console
+ $ kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the configmap:
+
+ ```console
+ $ kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-REVERSEPROXY-CONFIG]/name: powermax-reverseproxy-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+4. Copy the secrets in `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ ```console
+ for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the secrets:
+ ```console
+ for secret in $(kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
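+
+   As an optional check (not part of the original steps), confirm that the ConfigMap and the array credential secrets now exist in the CSM namespace:
+
+   ```console
+   $ kubectl get configmap powermax-reverseproxy-config -n [CSM_NAMESPACE]
+   $ kubectl get secrets -n [CSM_NAMESPACE]
+   ```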
diff --git a/content/docs/observability/deployment/helm.md b/content/docs/observability/deployment/helm.md
index cc8860f6e3..3353bbc9cb 100644
--- a/content/docs/observability/deployment/helm.md
+++ b/content/docs/observability/deployment/helm.md
@@ -27,13 +27,21 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
1. Copy the config Secret from the CSI PowerFlex namespace into the CSM for Observability namespace:
- `kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+ `kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver secret name is not the default `vxflexos-config`, please use the following command to copy the secret:
+
+ `kubectl get secret [VXFLEXOS-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG]/name: vxflexos-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform the following steps:
2. Copy the driver configuration parameters ConfigMap from the CSI PowerFlex namespace into the CSM for Observability namespace:
- `kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+ `kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver configmap name is not the default `vxflexos-config-params`, please use the following command to copy the configmap:
+
+ `kubectl get configmap [VXFLEXOS-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG-PARAMS]/name: vxflexos-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
3. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerFlex namespace into the CSM for Observability namespace:
@@ -43,24 +51,76 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
1. Copy the config Secret from the CSI PowerStore namespace into the CSM for Observability namespace:
- `kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+ `kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver secret name is not the default `powerstore-config`, please use the following command to copy the secret:
+ `kubectl get secret [POWERSTORE-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERSTORE-CONFIG]/name: powerstore-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
### PowerScale
1. Copy the config Secret from the CSI PowerScale namespace into the CSM for Observability namespace:
- `kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+ `kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver secret name is not the default `isilon-creds`, please use the following command to copy the secret:
+
+ `kubectl get secret [ISILON-CREDS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CREDS]/name: isilon-creds/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps:
2. Copy the driver configuration parameters ConfigMap from the CSI PowerScale namespace into the CSM for Observability namespace:
`kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver configmap name is not the default `isilon-config-params`, please use the following command to copy the configmap:
+
+ `kubectl get configmap [ISILON-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CONFIG-PARAMS]/name: isilon-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
3. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerScale namespace into the CSM for Observability namespace:
- `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -`
+ `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -`
+
+ ### PowerMax
+
+ 1. Copy the configmap `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ __Note:__ Observability for PowerMax works only with [CSI PowerMax driver with Proxy in StandAlone mode](../../../csidriver/installation/helm/powermax/#csi-powermax-driver-with-proxy-in-standalone-mode).
+
+ `kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the configmap:
+
+ `kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-REVERSEPROXY-CONFIG]/name: powermax-reverseproxy-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+ 2. Copy the secrets in `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ ```console
+ for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the secrets:
+ ```console
+ for secret in $(kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
+
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerMax, perform these steps:
+
+ 3. Copy the driver configuration parameters ConfigMap from the CSI PowerMax namespace into the CSM for Observability namespace:
+
+ `kubectl get configmap powermax-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+   If the CSI driver configmap name is not the default `powermax-config-params`, please use the following command to copy the configmap:
+
+ `kubectl get configmap [POWERMAX-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-CONFIG-PARAMS]/name: powermax-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+ 4. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerMax namespace into the CSM for Observability namespace:
+ `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: powermax-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: powermax-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: powermax-proxy-authz-tokens/' | kubectl create -f -`
5. Configure the [parameters](#configuration) and install the CSM for Observability Helm Chart
@@ -71,6 +131,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in your values file for CSM Observability.
- If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in your values file for CSM Observability.
+ - If CSM for Authorization is enabled for CSI PowerMax, the `karaviMetricsPowermax.authorization` parameters must be properly configured in your values file for CSM Observability.
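+
+   For example, a minimal sketch of a `myvalues.yaml` snippet enabling Authorization for PowerMax metrics might look like the following (the proxy hostname below is a placeholder; the parameter names are listed in the configuration table later on this page):
+   ```yaml
+   karaviMetricsPowermax:
+     authorization:
+       enabled: true
+       # Placeholder hostname; point this at your csm-authorization proxy server
+       proxyHost: csm-authorization.example.com
+       # Set according to whether the csm-authorization server certificate should be validated
+       skipCertificateValidation: true
+   ```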
```console
$ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] -f myvalues.yaml
@@ -144,7 +205,7 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerscale.clusterCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather cluster capacity metrics | `30` |
| `karaviMetricsPowerscale.clusterPerformancePollFrequencySeconds` | The polling frequency (in seconds) to gather cluster performance metrics | `20` |
| `karaviMetricsPowerscale.quotaCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather volume capacity metrics | `30` |
-| `karaviMetricsPowerscale.concurrentPowerscaleQueries` | The number of simultaneous metrics queries to make to PowerScale(MUST be less than 10; otherwise, several request errors from PowerScale will ensue. | `10` |
+| `karaviMetricsPowerscale.concurrentPowerscaleQueries` | The number of simultaneous metrics queries to make to PowerScale (MUST be less than 10; otherwise, several request errors from PowerScale will ensue.) | `10` |
| `karaviMetricsPowerscale.endpoint` | Endpoint for pod leader election | `karavi-metrics-powerscale` |
| `karaviMetricsPowerscale.service.type` | Kubernetes service type | `ClusterIP` |
| `karaviMetricsPowerscale.logLevel` | Output logs that are at or above the given log level severity (Valid values: TRACE, DEBUG, INFO, WARN, ERROR, FATAL, PANIC) | `INFO`|
@@ -155,3 +216,11 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerscale.authorization.enabled` | [Authorization](../../../authorization) is an optional feature to apply credential shielding of the backend PowerScale. | `false` |
| `karaviMetricsPowerscale.authorization.proxyHost` | Hostname of the csm-authorization server. | |
| `karaviMetricsPowerscale.authorization.skipCertificateValidation` | A boolean that enables/disables certificate validation of the csm-authorization server. | |
+| `karaviMetricsPowermax.capacityMetricsEnabled` | Enable PowerMax capacity metric Collection | `true` |
+| `karaviMetricsPowermax.performanceMetricsEnabled` | Enable PowerMax performance metric Collection | `true` |
+| `karaviMetricsPowermax.capacityPollFrequencySeconds` | The polling frequency (in seconds) to gather capacity metrics | `20` |
+| `karaviMetricsPowermax.performancePollFrequencySeconds` | The polling frequency (in seconds) to gather performance metrics | `20` |
+| `karaviMetricsPowermax.concurrentPowermaxQueries` | The number of simultaneous metrics queries to make to PowerMax (MUST be less than 10; otherwise, several request errors from PowerMax will ensue.) | `10` |
+| `karaviMetricsPowermax.authorization.enabled` | [Authorization](../../../authorization) is an optional feature to apply credential shielding of the backend PowerMax. | `false` |
+| `karaviMetricsPowermax.authorization.proxyHost` | Hostname of the csm-authorization server. | |
+| `karaviMetricsPowermax.authorization.skipCertificateValidation` | A boolean that enables/disables certificate validation of the csm-authorization server. | |
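+
+As an illustrative sketch (assuming the release name `karavi-observability` and the `dell` chart repository from the install example earlier on this page), individual PowerMax parameters can also be overridden at upgrade time with `--set`:
+```console
+$ helm upgrade karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] \
+    --reuse-values \
+    --set karaviMetricsPowermax.capacityPollFrequencySeconds=30 \
+    --set karaviMetricsPowermax.concurrentPowermaxQueries=8
+```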
diff --git a/content/docs/observability/deployment/offline.md b/content/docs/observability/deployment/offline.md
index d8e7a6539f..7065310325 100644
--- a/content/docs/observability/deployment/offline.md
+++ b/content/docs/observability/deployment/offline.md
@@ -72,10 +72,11 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Downloading and saving Docker images
- dellemc/csm-topology:v1.4.0
- dellemc/csm-metrics-powerflex:v1.4.0
- dellemc/csm-metrics-powerstore:v1.4.0
- dellemc/csm-metrics-powerscale:v1.1.0
+ dellemc/csm-topology:v1.5.0
+ dellemc/csm-metrics-powerflex:v1.5.0
+ dellemc/csm-metrics-powerstore:v1.5.0
+ dellemc/csm-metrics-powerscale:v1.2.0
+ dellemc/csm-metrics-powermax:v1.0.0
otel/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20
@@ -105,10 +106,11 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Loading, tagging, and pushing Docker images to registry :5000/
- dellemc/csm-topology:v1.4.0 -> :5000/csm-topology:v1.4.0
- dellemc/csm-metrics-powerflex:v1.4.0 -> :5000/csm-metrics-powerflex:v1.4.0
- dellemc/csm-metrics-powerstore:v1.4.0 -> :5000/csm-metrics-powerstore:v1.4.0
- dellemc/csm-metrics-powerscale:v1.1.0 -> :5000/csm-metrics-powerscale:v1.1.0
+ dellemc/csm-topology:v1.5.0 -> :5000/csm-topology:v1.5.0
+ dellemc/csm-metrics-powerflex:v1.5.0 -> :5000/csm-metrics-powerflex:v1.5.0
+ dellemc/csm-metrics-powerstore:v1.5.0 -> :5000/csm-metrics-powerstore:v1.5.0
+ dellemc/csm-metrics-powerscale:v1.2.0 -> :5000/csm-metrics-powerscale:v1.2.0
+   dellemc/csm-metrics-powermax:v1.0.0 -> :5000/csm-metrics-powermax:v1.0.0
otel/opentelemetry-collector:0.42.0 -> :5000/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20 -> :5000/nginx-unprivileged:1.20
```
@@ -129,41 +131,114 @@ To perform an offline installation of a Helm chart, the following steps should b
Copy the CSI Driver Secret from the namespace where CSI Driver is installed to the namespace where CSM for Observability is to be installed.
- CSI Driver for PowerFlex:
+ __CSI Driver for PowerFlex:__
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret vxflexos-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+
+   If the CSI driver secret name is not the default `vxflexos-config`, please use the following command to copy the secret:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret [VXFLEXOS-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG]/name: vxflexos-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerFlex, perform these steps:
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap vxflexos-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+
+   If the CSI driver configmap name is not the default `vxflexos-config-params`, please use the following command to copy the configmap:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap [VXFLEXOS-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [VXFLEXOS-CONFIG-PARAMS]/name: vxflexos-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
- CSI Driver for PowerStore
+ __CSI Driver for PowerStore:__
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
- CSI Driver for PowerScale:
- ```
- [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
- ```
+   If the CSI driver secret name is not the default `powerstore-config`, please use the following command to copy the secret:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret [POWERSTORE-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERSTORE-CONFIG]/name: powerstore-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+ __CSI Driver for PowerScale:__
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+   If the CSI driver secret name is not the default `isilon-creds`, please use the following command to copy the secret:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret [ISILON-CREDS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CREDS]/name: isilon-creds/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps:
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+
+   If the CSI driver configmap name is not the default `isilon-config-params`, please use the following command to copy the configmap:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap [ISILON-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [ISILON-CONFIG-PARAMS]/name: isilon-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
```
+ __CSI Driver for PowerMax:__
+
+   Copy the configmap `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ __Note:__ Observability for PowerMax works only with [CSI PowerMax driver with Proxy in StandAlone mode](../../../csidriver/installation/helm/powermax/#csi-powermax-driver-with-proxy-in-standalone-mode).
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the configmap:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-REVERSEPROXY-CONFIG]/name: powermax-reverseproxy-config/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+   Copy the secrets referenced in `powermax-reverseproxy-config` from the CSI Driver for Dell PowerMax namespace to the CSM namespace.
+ ```
+ for secret in $(kubectl get configmap powermax-reverseproxy-config -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-reverseproxy-config`, please use the following command to copy the secrets:
+ ```console
+ for secret in $(kubectl get configmap [POWERMAX-REVERSEPROXY-CONFIG] -n [CSI_DRIVER_NAMESPACE] -o jsonpath="{.data.config\.yaml}" | grep arrayCredentialSecret | awk 'BEGIN{FS=":"}{print $2}' | uniq)
+ do
+ kubectl get secret $secret -n [CSI_DRIVER_NAMESPACE] -o yaml | sed "s/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/" | kubectl create -f -
+ done
+ ```
+
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerMax, perform these steps:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap powermax-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+   If the CSI driver configmap name is not the default `powermax-config-params`, please use the following command to copy the configmap:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap [POWERMAX-CONFIG-PARAMS] -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/name: [POWERMAX-CONFIG-PARAMS]/name: powermax-config-params/' | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: powermax-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: powermax-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: powermax-proxy-authz-tokens/' | kubectl create -f -
+ ```
+
4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
**Note:**
@@ -171,6 +246,7 @@ To perform an offline installation of a Helm chart, the following steps should b
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured.
- If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured.
+ - If CSM for Authorization is enabled for CSI PowerMax, the `karaviMetricsPowermax.authorization` parameters must be properly configured.
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm install -n install-namespace app-name karavi-observability
diff --git a/content/docs/observability/deployment/online.md b/content/docs/observability/deployment/online.md
index 82524a658c..aef13db401 100644
--- a/content/docs/observability/deployment/online.md
+++ b/content/docs/observability/deployment/online.md
@@ -40,7 +40,8 @@ If the Authorization module is enabled for the CSI drivers installed in the same
## Online Installer
-The following instructions can be followed to install CSM for Observability in an environment that has an internet connection and is capable of downloading the required Helm chart and Docker images.
+The following instructions can be followed to install CSM for Observability in an environment that has an internet connection and is capable of downloading the required Helm chart and Docker images.
+The installer expects that the CSI drivers use the default secret and configmap names.
### Dependencies
@@ -71,6 +72,7 @@ Options:
--csi-powerflex-namespace[=] Namespace where CSI PowerFlex is installed, default is 'vxflexos'
--csi-powerstore-namespace[=] Namespace where CSI PowerStore is installed, default is 'csi-powerstore'
--csi-powerscale-namespace[=] Namespace where CSI PowerScale is installed, default is 'isilon'
+ --csi-powermax-namespace[=] Namespace where CSI PowerMax is installed, default is 'powermax'
--set-file Set values from files used during helm installation (can be specified multiple times)
--skip-verify Skip verification of the environment
--values[=] Values file, which defines configuration values
@@ -104,6 +106,7 @@ To perform an online installation of CSM for Observability, the following steps
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
- If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
+ - If CSM for Authorization is enabled for CSI PowerMax, the `karaviMetricsPowermax.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
```
[user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh install --namespace [CSM_NAMESPACE] --values myvalues.yaml
@@ -139,6 +142,16 @@ To perform an online installation of CSM for Observability, the following steps
|
|- Copying Secret from powerstore to karavi Success
|
+ |- CSI Driver for PowerScale is installed Success
+ |
+ |- Copying Secret from isilon to karavi Success
+ |
+ |- CSI Driver for PowerMax is installed Success
+ |
+ |- Copying ConfigMap from powermax to karavi Success
+ |
+ |- Copying Secret from powermax to karavi Success
+ |
|- Installing CertManager CRDs Success
|
|- Enabling Karavi Authorization for Karavi Observability
@@ -146,6 +159,14 @@ To perform an online installation of CSM for Observability, the following steps
|--> Copying ConfigMap from vxflexos to karavi Success
|
|--> Copying Karavi Authorization Secrets from vxflexos to karavi Success
+ |
+ |--> Copying ConfigMap from isilon to karavi Success
+ |
+ |--> Copying Karavi Authorization Secrets from isilon to karavi Success
+ |
+ |--> Copying ConfigMap from powermax to karavi Success
+ |
+ |--> Copying Karavi Authorization Secrets from powermax to karavi Success
|
|- Installing Karavi Observability helm chart Success
|
diff --git a/content/docs/observability/design/_index.md b/content/docs/observability/design/_index.md
index adb56abcb8..cb7616fb00 100644
--- a/content/docs/observability/design/_index.md
+++ b/content/docs/observability/design/_index.md
@@ -23,6 +23,7 @@ The following prerequisites must be deployed into the namespace where CSM for Ob
- CSI PowerFlex driver uses the 'vxflexos-config' secret.
- CSI PowerStore driver uses the 'powerstore-config' secret.
- CSI PowerScale driver uses the 'isilon-creds' secret.
+  - CSI PowerMax driver uses the secrets referenced in the 'powermax-reverseproxy-config' configmap.
## Deployment Architectures
diff --git a/content/docs/observability/metrics/powermax.md b/content/docs/observability/metrics/powermax.md
new file mode 100644
index 0000000000..2ed3b1c9b9
--- /dev/null
+++ b/content/docs/observability/metrics/powermax.md
@@ -0,0 +1,63 @@
+---
+title: PowerMax Metrics
+linktitle: PowerMax Metrics
+weight: 1
+description: >
+ Dell Container Storage Modules (CSM) for Observability PowerMax Metrics
+---
+
+This section outlines the metrics collected by the Container Storage Modules (CSM) Observability module for PowerMax. The [Grafana reference dashboards](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powermax) for PowerMax metrics can be uploaded to your Grafana instance.
+
+## Prerequisites
+- Unisphere user credentials must have PERF_MONITOR permissions.
+- Ensure time synchronization between the Kubernetes cluster and PowerMax Unisphere by using Network Time Protocol (NTP).
+
+## I/O Performance Metrics
+
+Storage system I/O performance metrics (IOPS, bandwidth, latency) are available by default and broken down by storage group and volume.
+
+To disable these metrics, set the ```performanceMetricsEnabled``` field under ```karaviMetricsPowermax``` to false in helm/values.yaml.
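+
+For example, a minimal values snippet that disables the performance metrics (a sketch; the capacity metrics can be disabled the same way via `capacityMetricsEnabled`) might look like:
+```yaml
+karaviMetricsPowermax:
+  performanceMetricsEnabled: false
+```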
+
+The following I/O performance metrics are available from the OpenTelemetry collector endpoint. Please see the [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|-------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
+| powermax_storage_group_read_bw_megabytes_per_second | The storage group read bandwidth (MB/s) |
+| powermax_storage_group_write_bw_megabytes_per_second | The storage group write bandwidth (MB/s) |
+| powermax_storage_group_read_latency_milliseconds | The time (in ms) to complete read operations within PowerMax system by the storage group |
+| powermax_storage_group_write_latency_milliseconds | The time (in ms) to complete write operations within PowerMax system by the storage group |
+| powermax_storage_group_read_iops_per_second | The number of read operations performed by a storage group (per second) |
+| powermax_storage_group_write_iops_per_second | The number of write operations performed by a storage group (per second) |
+| powermax_storage_group_average_io_size_megabytes_per_second | The storage group average IO sizes (MB/s) |
+| powermax_volume_read_bw_megabytes_per_second | The volume read bandwidth (MB/s) |
+| powermax_volume_write_bw_megabytes_per_second | The volume write bandwidth (MB/s) |
+| powermax_volume_read_latency_milliseconds | The time (in ms) to complete read operations to a volume |
+| powermax_volume_write_latency_milliseconds | The time (in ms) to complete write operations to a volume |
+| powermax_volume_read_iops_per_second | The number of read operations performed against a volume (per second) |
+| powermax_volume_write_iops_per_second | The number of write operations performed against a volume (per second) |
+
+## Storage Capacity Metrics
+
+Provides visibility into the total, used, and available capacity for a storage class and associated underlying storage construct.
+
+To disable these metrics, set the ```capacityMetricsEnabled``` field under ```karaviMetricsPowermax``` to false in helm/values.yaml.
+
+The following storage capacity metrics are available from the OpenTelemetry collector endpoint. Please see the [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|-------------------------------------------------|-------------------------------------------------------------------------|
+| powermax_storage_class_total_capacity_gigabytes | Total capacity for a given storage class (GB) |
+| powermax_storage_class_used_capacity_gigabytes | Total used capacity for a given storage class (GB) |
+| powermax_storage_class_used_capacity_percentage | Used capacity of a storage class in percent |
+| powermax_array_total_capacity_gigabytes | Total capacity on a given array managed by CSI driver (GB) |
+| powermax_array_used_capacity_gigabytes | Total used capacity on a given array managed by CSI driver (GB) |
+| powermax_array_used_capacity_percentage | Total used capacity on a given array managed by CSI driver in percent |
+| powermax_storage_group_total_capacity_gigabytes | Total capacity for a given storage group (GB) |
+| powermax_storage_group_used_capacity_gigabytes | Total used capacity for a given storage group (GB) |
+| powermax_storage_group_used_capacity_percentage | Used capacity of a storage group in percent |
+| powermax_srp_total_capacity_gigabytes | Total capacity of the storage resource pool in GB managed by CSI driver |
+| powermax_srp_used_capacity_gigabytes | Used capacity of a storage resource pool in GB managed by CSI driver |
+| powermax_srp_used_capacity_percentage | Used capacity of a storage resource pool in percent |
+| powermax_volume_total_capacity_gigabytes | Total capacity of the volume in GB |
+| powermax_volume_used_capacity_gigabytes | Used capacity of a volume in GB |
+| powermax_volume_used_capacity_percentage | Used capacity of a volume in percent |
diff --git a/content/docs/observability/release/_index.md b/content/docs/observability/release/_index.md
index b3c3ee9fce..4bc32fe60c 100644
--- a/content/docs/observability/release/_index.md
+++ b/content/docs/observability/release/_index.md
@@ -6,15 +6,13 @@ Description: >
Dell Container Storage Modules (CSM) release notes for observability
---
-## Release Notes - CSM Observability 1.4.0
+## Release Notes - CSM Observability 1.5.0
### New Features/Changes
-- [CSM support for Kubernetes 1.25](https://github.com/dell/csm/issues/478)
-- [CSM support for Openshift 4.11](https://github.com/dell/csm/issues/480)
-- [CSM support for PowerFlex 4.0](https://github.com/dell/csm/issues/476)
-- [Observability - Improve Grafana dashboard](https://github.com/dell/csm/issues/519)
+- [CSM support for Kubernetes 1.26](https://github.com/dell/csm/issues/597)
+- [Support PowerMax in CSM Observability](https://github.com/dell/csm/issues/586)
### Fixed Issues
-- [step_error: command not found in karavi-observability-install.sh](https://github.com/dell/csm/issues/479)
+- [Observability - Improve Grafana dashboards for PowerFlex/PowerStore](https://github.com/dell/csm/issues/640)
### Known Issues
\ No newline at end of file
diff --git a/content/docs/observability/troubleshooting/_index.md b/content/docs/observability/troubleshooting/_index.md
index 7a5fbac6d7..fb928ade46 100644
--- a/content/docs/observability/troubleshooting/_index.md
+++ b/content/docs/observability/troubleshooting/_index.md
@@ -14,6 +14,8 @@ Description: >
5. [How can I troubleshoot latency problems with CSM for Observability?](#how-can-i-troubleshoot-latency-problems-with-csm-for-observability)
6. [Why does the Observability installation timeout with pods stuck in 'ContainerCreating'/'CrashLoopBackOff'/'Error' stage?](#why-does-the-observability-installation-timeout-with-pods-stuck-in-containercreatingcrashloopbackofferror-stage)
7. [Why do I see FailedMount warnings when describing pods in my cluster?](#why-do-i-see-failedmount-warnings-when-describing-pods-in-my-cluster)
+8. [Why do I see 'Failed calling webhook' error when reinstalling CSM for Observability?](#why-do-i-see-failed-calling-webhook-error-when-reinstalling-csm-for-observability)
+
### Why do I see a certificate problem when accessing the topology service outside of my Kubernetes cluster?
@@ -242,4 +244,17 @@ The warning can arise when a self-signed certificate for otel-collector is issue
[root@:~]$ kubectl describe pod -n $namespace $pod
MountVolume.SetUp failed for volume "tls-secret" : secret "otel-collector-tls" not found
Unable to attach or mount volumes: unmounted volumes=[tls-secret], unattached volumes=[vxflexos-config-params vxflexos-config tls-secret karavi-metrics-powerflex-configmap kube-api-access-4fqgl karavi-authorization-config proxy-server-root-certificate]: timed out waiting for the condition
-```
\ No newline at end of file
+```
+
+### Why do I see 'Failed calling webhook' error when reinstalling CSM for Observability?
+This warning can occur when a user uninstalls Observability by deleting the Kubernetes namespace instead of first running `helm delete` on the Observability Helm installation. As a result, cert-manager fails to integrate properly with Observability on future installations, and the following error may appear upon reinstallation:
+
+```console
+Error: INSTALLATION FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://karavi-observability-cert-manager-webhook.karavi-observability.svc:443/mutate?timeout=10s": dial tcp 10.106.44.80:443: connect: connection refused
+```
+
+To resolve this, leave the CSM namespace in place after the failed installation and run the following command:
+
+ `helm delete karavi-observability --namespace [CSM_NAMESPACE]`
+
+Then delete the namespace with `kubectl delete ns [CSM_NAMESPACE]`. Wait until the namespace is fully deleted, recreate it, and reinstall Observability.
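+
+A sketch of the full recovery sequence, using the namespace placeholder and the release name from the install instructions:
+
+```console
+helm delete karavi-observability --namespace [CSM_NAMESPACE]
+kubectl delete ns [CSM_NAMESPACE]
+kubectl create ns [CSM_NAMESPACE]
+helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] -f myvalues.yaml
+```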
diff --git a/content/docs/observability/upgrade/_index.md b/content/docs/observability/upgrade/_index.md
index 8812473a64..a98aa10762 100644
--- a/content/docs/observability/upgrade/_index.md
+++ b/content/docs/observability/upgrade/_index.md
@@ -26,7 +26,7 @@ Check if the latest Helm chart version is available:
```
helm search repo dell
NAME CHART VERSION APP VERSION DESCRIPTION
-dell/karavi-observability 1.4.0 1.4.0 CSM for Observability is part of the [Container...
+dell/karavi-observability 1.5.0 1.5.0 CSM for Observability is part of the [Container...
```
>Note: If using cert-manager CustomResourceDefinitions older than v1.5.3, delete the old CRDs and install v1.5.3 of the CRDs prior to upgrade. See [Prerequisites](../deployment/helm#prerequisites) for location of CRDs.
diff --git a/content/docs/references/FAQ/_index.md b/content/docs/references/FAQ/_index.md
index 7613fe8cb0..4d2eba1713 100644
--- a/content/docs/references/FAQ/_index.md
+++ b/content/docs/references/FAQ/_index.md
@@ -95,7 +95,8 @@ It is advised to comply with the support matrices (links below) and not deviate
- [Dell CSI Drivers](../../csidriver/#supported-operating-systemscontainer-orchestrator-platforms).
### Can I run Container Storage Modules in a production environment?
-As of CSM 1.5, the Container Storage Modules Authorization, Observability, Replication and Resiliency are GA and ready for production systems. The modules Encryption and Application Mobility are launched for Tech Preview Release and it is not intended to use in the Production systems.
+
+Currently, the Container Storage Modules Authorization, Observability, Replication, and Resiliency are GA and ready for production systems. The Encryption and Application Mobility modules are released as Tech Previews and are not intended for use in production systems.
### Is Dell Container Storage Modules (CSM) supported by Dell Technologies?
Yes!
diff --git a/content/docs/references/cli/_index.md b/content/docs/references/cli/_index.md
index d631a60a35..84ba623159 100644
--- a/content/docs/references/cli/_index.md
+++ b/content/docs/references/cli/_index.md
@@ -35,7 +35,7 @@ This document outlines all dellctl commands, their intended use, options that ca
## Installation instructions
-1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.5.1).
+1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.6.0).
2. chmod +x dellctl
3. Move `dellctl` to `/usr/local/bin` or add `dellctl`'s containing directory path to PATH environment variable.
4. Run `dellctl --help` to know available commands or run `dellctl command --help` to know more about a specific command.
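+
+A sketch of steps 2-4 on a Linux host (assuming `dellctl` was downloaded to the current working directory and root privileges are available for the move):
+```console
+chmod +x dellctl
+sudo mv dellctl /usr/local/bin/
+dellctl --help
+```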
diff --git a/content/docs/release/_index.md b/content/docs/release/_index.md
index ffb7d086c3..c4cb2d1e7a 100644
--- a/content/docs/release/_index.md
+++ b/content/docs/release/_index.md
@@ -21,3 +21,5 @@ Release notes for Container Storage Modules:
[CSM for Encryption](../secure/encryption/release)
[CSM for Application Mobility](../applicationmobility/release)
+
+[CSM Operator](../deployment/csmoperator/release)
diff --git a/content/docs/replication/_index.md b/content/docs/replication/_index.md
index 2c45ef2807..42bafa104a 100644
--- a/content/docs/replication/_index.md
+++ b/content/docs/replication/_index.md
@@ -18,40 +18,40 @@ CSM for Replication provides the following capabilities:
{{
}}
| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
| ----------------------------------------------------------------------------------------------------------------------------------- | :------: | :--------: | :--------: | :-------: | :---: |
-| Replicate data using native storage array based replication | yes | yes | yes | no | no |
+| Replicate data using native storage array based replication | yes | yes | yes | yes | no |
| Asynchronous file volume replication | no | no | yes | no | no |
-| Asynchronous block volume replication | yes | yes | n/a | no | no |
+| Asynchronous block volume replication | yes | yes | n/a | yes | no |
| Synchronous file volume replication | no | no | no | no | no |
| Synchronous block volume replication | yes | no | n/a | no | no |
| Active-Active (Metro) block volume replication | yes | no | n/a | no | no |
| Active-Active (Metro) file volume replication | no | no | no | no | no |
-| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
-| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
-| Failover & Reprotect applications using the replicated volumes | yes | yes | no | no | no |
-| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
-| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
+| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | yes | no |
+| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | yes | no |
+| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | yes | no |
+| Online Volume Expansion for replicated volumes | yes | no | no | yes | no |
+| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | yes | no |
{{
}}
## Supported CSI Drivers
@@ -63,7 +63,9 @@ CSM for Replication supports the following CSI drivers and versions.
| CSI Driver for Dell PowerMax | [csi-powermax](https://github.com/dell/csi-powermax) | v2.0 + |
| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + |
| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.2 + |
+| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.6 + |
{{}}
+For compatibility with storage arrays, please refer to the corresponding [CSI drivers](../csidriver/#features-and-capabilities).
## Details
@@ -85,18 +87,6 @@ the objects still exist in pairs.
* Different namespaces cannot share the same RDF group for creating volumes with ASYNC mode for PowerMax.
* Same RDF group cannot be shared across different replication modes for PowerMax.
-### Supported Platforms
-
-The following matrix provides a list of all supported versions for each Dell Storage product.
-
-| Platforms | PowerMax | PowerStore | PowerScale |
-| ---------------- | ------------------------------ | ---------------- | ---------------- |
-| Kubernetes | 1.23, 1.24, 1.25 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
-| RedHat Openshift | 4.10, 4.11 | 4.9, 4.10 | 4.9, 4.10 |
-| CSI Driver | 2.x(k8s), 2.2+(OpenShift) | 2.x | 2.2+ |
-
-For compatibility with storage arrays please refer to corresponding [CSI drivers](../csidriver/#features-and-capabilities)
-
### QuickStart
1. Install all required components:
* Enable replication during CSI driver installation
diff --git a/content/docs/replication/architecture.md b/content/docs/replication/architecture/_index.md
similarity index 83%
rename from content/docs/replication/architecture.md
rename to content/docs/replication/architecture/_index.md
index a935607c44..648e4e0e8c 100644
--- a/content/docs/replication/architecture.md
+++ b/content/docs/replication/architecture/_index.md
@@ -11,23 +11,23 @@ description: >
Container Storage Modules (CSM) for Replication project consists of the following components:
-* DellCSIReplicationGroup - A Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
-* CSM Replication controller which replicates the resources across(or within) Kubernetes clusters.
-* CSM Replication sidecar container which is part of the CSI driver controller pod
-* repctl - Multi cluster Kubernetes client for managing replication related objects
+* `DellCSIReplicationGroup`, a Kubernetes [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+* CSM Replication controller which replicates the resources across (or within) Kubernetes clusters.
+* CSM Replication sidecar container which is part of each CSI driver controller pod.
+* repctl - Multi cluster Kubernetes client for managing replication related objects.
### DellCSIReplicationGroup
`DellCSIReplicationGroup` (RG) is a cluster scoped Custom Resource that represents a protection group on the backend storage array.
It is used to group volumes with the same replication related properties together.
`DellCSIReplicationGroup`'s spec contains an _action_ field which can be used to perform replication related operations on the backing protection groups on the storage arrays.
-This includes operations like _Failover_, _Reprotect_, _Suspend_, _Synchronize_ e.t.c.
+This includes operations like _Failover_, _Reprotect_, _Suspend_, _Synchronize_, etc.
Any replication related operation is always carried out on all the volumes present in the group.
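+
+As an illustrative sketch (using the example RG name from the specification below), an action can also be requested by patching the RG's spec directly; `repctl` is the recommended tool, and the action string shown here is an example value that must match one supported by your driver and CSM Replication version:
+```shell
+# Request a failover to the remote site on an example replication group
+kubectl patch dellcsireplicationgroup rg-e6be24c0-145d-4b62-8674-639282ebdd13 \
+  --type merge -p '{"spec":{"action":"FAILOVER_REMOTE"}}'
+```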
#### Specification
```yaml
kind: DellCSIReplicationGroup
-apiVersion: replication.storage.dell.com/v1alpha1
+apiVersion: replication.storage.dell.com/v1
metadata:
name: rg-e6be24c0-145d-4b62-8674-639282ebdd13
spec:
@@ -72,16 +72,16 @@ status:
state: Ready
```
-Here is a diagram representing how the _state_ of the CustomResource changes based on actions
+Here is a diagram representing how the _state_ of the CustomResource changes based on actions:
![state](../state.png)
-### CSM Replication sidecar
+### CSM Replication Sidecar
![sidecar](../sidecar.png)
-CSM Replication sidecar is deployed as sidecar container in the CSI driver controller pod. This container is similar to Kubernetes CSI Sidecar
+CSM Replication sidecar is deployed as a sidecar container in _each_ CSI driver's controller pod. This container is similar to Kubernetes CSI Sidecar
[containers](https://kubernetes-csi.github.io/docs/sidecar-containers.html) and runs a Controller Manager
-which manages the following controllers -
+which manages the following controllers:
* PersistentVolume(PV) Controller
* PersistentVolumeClaim(PVC) Controller
* DellCSIReplicationGroup(RG) Controller
@@ -103,7 +103,7 @@ It is primarily responsible for the following:
![common](../common.png)
CSM Replication Controller is a Kubernetes application deployed independently of CSI drivers and is responsible for
-the communication between Kubernetes clusters.
+the communication between Kubernetes clusters. _One_ CSM Replication Controller manages replication operations for _all_ CSI driver installations on the Kubernetes cluster.
The details about the clusters it needs to connect to are provided in the form of a ConfigMap with references to secrets
containing the details(KubeConfig/ServiceAccount tokens) required to connect to the respective clusters.
@@ -115,16 +115,16 @@ It consists of Controller Manager which manages the following controllers:
The PV controller is responsible for creating PV objects (representing the replicated volumes on the backend storage array) in the remote
Kubernetes cluster.
-This controller also enables deletion of the remote PV object in case it is desired by propagating the deletion request across clusters.
+This controller also enables deletion of the remote PV object, if enabled through the storage class' `RemotePVRetentionPolicy`, by propagating the deletion request across clusters.
Similarly, the RG controller is responsible for creating RG objects in the remote Kubernetes cluster. These RG objects represent the
-remote protection groups on the backend storage array. This controller can also propagate the deletion request of RG objects across clusters.
+remote protection groups on the backend storage array. This controller can also propagate the deletion request of RG objects across clusters, if enabled through the storage class' `RemoteRGRetentionPolicy`.
Both the PV & RG objects in the remote cluster have extra metadata associated with them in form of annotations & labels. This metadata includes
information about the respective objects in the source cluster.
The PVC objects are never replicated across the clusters. Instead, the remote PV objects have annotations related to the
-source PVC objects. This information can be easily used to create the PVCs whenever required using `repctl` or even `kubectl`
+source PVC objects. This information can be easily used to create the PVCs whenever required using `repctl` or `kubectl`.
### Supported Cluster Topologies
Click [here](../cluster-topologies) for details for the various types of supported cluster topologies
diff --git a/content/docs/replication/architecture/powerscale.md b/content/docs/replication/architecture/powerscale.md
new file mode 100644
index 0000000000..439d8ce03c
--- /dev/null
+++ b/content/docs/replication/architecture/powerscale.md
@@ -0,0 +1,65 @@
+---
+title: PowerScale
+linktitle: PowerScale
+weight: 2
+description: >
+ Platform-Specific Architecture for CSI PowerScale
+---
+
+### SyncIQ Policy Architecture
+When creating `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, a SyncIQ policy to facilitate this replication is created *only* on the source PowerScale storage array.
+
+This singular SyncIQ policy on the source storage array and its matching Local Target policy on the target storage array provide information for the RGs to determine their status. Upon creation, the SyncIQ policy is set to a schedule of `When source is modified`. The SyncIQ policy is `Enabled` when the RG is created. The directory that is being replicated is *read-write accessible* on the source storage array, and is restricted to *read-only* on the target.
+
+### Replication Group Deletion
+When deleting `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, deletion should only be performed on an empty RG. If there is any user-created or Kubernetes PV-generated data left inside of the replication group, the RG object will be held in a `Deleting` state until all user data has been cleared out on **both** source and target storage arrays.
+
+If the RG's folder on both source and target storage arrays is empty and the RG is given a delete command, it will perform a sync, remove its SyncIQ policy from the source storage array, and then delete the RG object on both source and target Kubernetes clusters.
+
+If irregular Kubernetes cluster/storage array behavior causes the source and target to fall out-of-sync (ex: one of the sides is down), the RG deletion will become stuck. If forced removal of the RG is necessary, the finalizers can be removed manually to allow for deletion, but data and SyncIQ policies may remain on the storage arrays and require manual deletion. See [this Knowledge Base Article](https://www.dell.com/support/kbdoc/en-us/000206294/dell-csm-replication-powerscale-replication-artifacts-remain-after-deletion) for further information on manual deletion.
+
+### Performing Failover/Failback/Reprotect on PowerScale
+
+Failover, Failback, and Reprotect one-step operations are not natively supported on PowerScale, and are performed as a series of steps in CSM replication. When any of these operations are triggered, through the use of `repctl` or by editing the RG, the steps below are performed on the PowerScale storage arrays.
+
+#### Failover - Halt Replication and Allow Writes on Target
+
+Steps for performing Failover can be found in the Tools page under [Executing Actions.](https://dell.github.io/csm-docs/docs/replication/tools/#executing-actions) There are some PowerScale-specific considerations to keep in mind:
+- Failover on PowerScale does NOT halt writes on the source side. It is recommended that the storage administrator or end user manually **stop writes** to ensure no data is lost on the source side in the event of future failback.
+- In the case of unplanned failover, the SyncIQ policy on the source PowerScale array will be left enabled and set to its previously defined `When source is modified` sync schedule. Storage admins **must** manually disable this SyncIQ policy when bringing the failed-over source array back online, or unexpected behavior may occur.
+
+The below steps are performed by CSM replication to perform a failover.
+
+1. Syncing data from source to target one final time before transition. *(planned failover only)*
+2. Disabling the SyncIQ policy on the source PowerScale storage array. *(planned failover only)*
+3. Enabling writes on the target PowerScale array's Local Target policy.
+
+#### Failback - Discard Target
+
+Performing a failback while discarding changes made to the target simply resumes synchronization from the source. The steps CSM replication follows to perform this operation are:
+
+1. Editing the SyncIQ policy on the source PowerScale array's schedule from `When source is modified` to `Manual`.
+2. Performing `Actions > Disallow writes` on the target PowerScale array's Local Target policy that matches the SyncIQ policy undergoing failback.
+3. Editing the SyncIQ policy's schedule from `Manual` to `When source is modified` and setting the time delay for synchronization as appropriate.
+4. Enabling the source PowerScale array's SyncIQ policy.
+
+
+#### Failback - Discard Source
+
+Information on the methodology for performing a failback while keeping changes made to the original target can be found in the relevant PowerScale SyncIQ documentation. The steps CSM replication follows to perform this operation are:
+
+1. Editing the SyncIQ policy on the source PowerScale array's schedule from `When source is modified` to `Manual`.
+2. Enabling the SyncIQ policy that is undergoing failback, if it isn't already enabled.
+3. Performing the `Resync-prep` action on the SyncIQ policy. This will create a new SyncIQ policy on the target PowerScale array, matching the original SyncIQ policy with an appended *_mirror* to its name.
+4. Starting a synchronization job on the target PowerScale array's newly created *_mirror* policy.
+5. Running the `Allow writes` operation on the Local Target on the source PowerScale array that was created by the *_mirror* policy.
+6. Performing the `Resync-prep` action on the target PowerScale array's *_mirror* policy.
+7. Deleting the *_mirror* SyncIQ policy.
+8. Editing the SyncIQ policy on the source PowerScale array's schedule from `Manual` to `When source is modified` and setting the time delay for synchronization as appropriate.
+
+#### Reprotect - Set Original Target as New Source
+
+A reprotect operation is, in essence, doing away with the original source-target relationship and establishing a new one in the reverse direction. This is done **only after** failing over to the original target array is complete, and the original source array is up and ready to be made into a new replication destination. To accomplish this, CSM replication performs the following steps:
+
+1. Deleting the SyncIQ policy on the original source PowerScale array.
+2. Creating a new SyncIQ policy on the original target PowerScale array. This policy establishes the original target as a new *source*, and sets its replication destination to the original source (which can be considered the new *target*.)
\ No newline at end of file
diff --git a/content/docs/replication/cluster-topologies.md b/content/docs/replication/cluster-topologies.md
index adba46d212..2a51c9f0ab 100644
--- a/content/docs/replication/cluster-topologies.md
+++ b/content/docs/replication/cluster-topologies.md
@@ -16,7 +16,7 @@ Each cluster should be assigned the unique identifier `clusterId`. The rules for
* must begin and end with an alphanumeric character ([a-z, 0-9, A-Z])
* could contain dashes (-), underscores (_), dots (.), and alphanumerics between
* must be unique across clusters
-``
+
### Single Cluster Replication
#### Cluster Configuration
@@ -38,7 +38,7 @@ Note that the `targets` parameter is left empty since we don't require any targe
This also means that you don't need to create any Secrets that contain connection information to such clusters, since in this use case, we
are limited to a single cluster.
-You can find more info about configs and secrets for cluster communication in [configmaps-secrets](../deployment/configmap-secrets/)
+You can find more info about configs and secrets for cluster communication in [configmaps-secrets](../deployment/configmap-secrets/).
#### Storage Class Configuration
@@ -48,14 +48,12 @@ be set to `self` to indicate that we want to replicate the volume inside the cur
Also, you would need to create another storage class in the same cluster that would serve as a `target` storage class. This means that all replicated volumes would be derived from it. Its `replication.storage.dell.com/remoteClusterID` parameter should be also set to `self`.
-You can find out more about replication StorageClasses and replication specific parameters in [storageclasses](../deployment/storageclasses)
+You can find out more about replication StorageClasses and replication specific parameters in [storageclasses](../deployment/storageclasses).
#### Replicated Resources
When creating PersistentVolumeClaims using StorageClass for a single cluster replication, replicated resources (PersistentVolumes,
-ReplicationGroups) would be created in the same cluster with the `replicated-` prefix added to them.
-
-Example:
+ReplicationGroups) would be created in the same cluster with the `replicated-` prefix added to them. For example:
```shell
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS STORAGECLASS AGE
@@ -72,7 +70,7 @@ rg-240721b0-12fb-4151-8dd8-94794ae2493e 34s Ready SYNCHRONIZED
#### Cluster Configuration
Similar to a single cluster scenario, you need to create ConfigMap, but this time you need to provide at least one target
-cluster. You can provide as many as you like but be mindful that a single volume can be replicated to only one of them.
+cluster. You can provide as many as you like, but be mindful that a single volume can be replicated to only one of them.
For example:
```yaml
@@ -91,7 +89,7 @@ metadata:
```
Note that target cluster information contains a field called `secretRef`. This field points to a secret available in the current cluster that contains connection information of `cluster-B` in the form of a kubeconfig file.
-You can find more information about how to create such secrets in [configmaps-secrets](../deployment/configmap-secrets/#communication-between-clusters)
+You can find more information about how to create such secrets in [configmaps-secrets](../deployment/configmap-secrets/#communication-between-clusters).
#### Storage Class Configuration
@@ -102,14 +100,12 @@ want to replicate your volumes.
For multi-cluster replication, we can choose one of the target cluster ids we specified in
ConfigMap. In our example replication parameter, the target cluster id should be equal to `cluster-B`.
-You can find more information about other replication parameters available in storage classes [here](../deployment/storageclasses/#common-parameters)
+You can find more information about other replication parameters available in storage classes [here](../deployment/storageclasses/#common-parameters).
#### Replicated Resources
When creating PersistentVolumeClaims using StorageClass for a multi-cluster replication, replicated resources would be
-created in both `source` and `target` clusters under the same names.
-
-Example:
+created in both `source` and `target` clusters under the same names. For example:
```shell
[CLUSTER-A]
diff --git a/content/docs/replication/deployment/configmap-secrets.md b/content/docs/replication/deployment/configmap-secrets.md
index b93d82e71b..9bebc22f87 100644
--- a/content/docs/replication/deployment/configmap-secrets.md
+++ b/content/docs/replication/deployment/configmap-secrets.md
@@ -7,7 +7,7 @@ description: >
---
## Communication between clusters
-Container Storage Modules (CSM) for Replication Controller requires access to remote clusters for replicating various objects. There are two ways to set up this communication -
+Container Storage Modules (CSM) for Replication Controller requires access to remote clusters for replicating various objects. There are two ways to set up this communication:
1. Using Normal Kubernetes users
2. Using ServiceAccount token
@@ -16,7 +16,7 @@ the respective CSM Replication Controllers.
>Important: Direct network visibility between clusters required for CSM-Replication to work.
> Cluster-1's API URL has to be pingable from cluster-2 pods and vice versa. If private networks are used and/or DNS is not set up properly - you may need to modify `/etc/hosts` file from within controller's pod.
-> This can be achieved by using helm installation method. Refer to the [link](../installation/#using-the-installation-script)
+> This can be achieved by using the helm installation method. Refer to this [link](../installation/#using-the-installation-script).
>Note: If you are using a single stretched cluster, then you can skip all the following steps
@@ -27,17 +27,17 @@ This is the simplest way to configure CSM Replication Controller.
#### Recommended method
Use `repctl` to create secrets using service account tokens and update ConfigMaps in multiple clusters in one command.
-Run the following command -
+Run the following command:
```shell
repctl cluster inject --use-sa
```
This will create secrets using the token for the `dell-replication-controller-sa` ServiceAccount and update the ConfigMap in all the clusters
-which have been configured for `repctl`
+which have been configured for `repctl`.
#### Inject KubeConfigs from repctl configuration
`repctl` is usually configured to communicate with multiple Kubernetes clusters and is provided with a set of KubeConfig files for each cluster.
You can use `repctl` to inject secrets created using these files in each of the configured cluster.
-Run the following command -
+Run the following command:
```shell
repctl cluster inject
```
@@ -45,7 +45,7 @@ repctl cluster inject
>Note: For a detailed walkthrough of the simplified installation process using `repctl`, please refer to this [link](../install-repctl).
### Understanding the Config file
-If you are setting up replication between two clusters - Cluster A & Cluster B, then the configuration file (deploy/config.yaml) should look like this:
+If you are setting up replication between two clusters (for example, Cluster A and Cluster B), a suitable configuration file (deploy/config.yaml) should look like this:
#### Cluster A
```yaml
@@ -86,7 +86,7 @@ Kubernetes cluster and use it for inter cluster communication. The process of c
Once you have the user created, you can provide it the RBAC privileges required by the controller.
##### Example
-Continuing from our earlier example with Cluster A & Cluster B -
+Continuing from our earlier example with Cluster A & Cluster B:
1. Create a user in _Cluster B_ & generate a kubeconfig file for it using the helper script
2. Create a ClusterRole in _Cluster B_ using the following command:
```shell
@@ -96,10 +96,10 @@ Continuing from our earlier example with Cluster A & Cluster B -
```shell
kubectl create clusterrolebinding --clusterrole=dell-replication-manager-role --user=
```
-4. Create a secret in _Cluster A_ using the kubeconfig file for this user
-```shell
-kubectl create secret generic --from-file=data= --namespace dell-replication-controller
-```
+4. Create a secret in _Cluster A_ using the kubeconfig file for this user:
+ ```shell
+ kubectl create secret generic --from-file=data= --namespace dell-replication-controller
+ ```
#### Secrets using ServiceAccount tokens
You can use service account tokens to establish communication between various clusters.
@@ -107,7 +107,7 @@ We recommend using the token for the `dell-replication-controller-sa` service ac
already has all the required RBAC privileges.
##### Example
-Use the following command to first create a KubeConfig file using the helper script in _Cluster B_ -
+Use the following command to first create a KubeConfig file using the helper script in _Cluster B_:
```shell
./gen-kubeconfig.sh -s dell-replication-controller-sa -n dell-replication-controller
```
diff --git a/content/docs/replication/deployment/install-repctl.md b/content/docs/replication/deployment/install-repctl.md
index dba7a88ef2..56fb24f1fd 100644
--- a/content/docs/replication/deployment/install-repctl.md
+++ b/content/docs/replication/deployment/install-repctl.md
@@ -10,11 +10,11 @@ description: Installation of CSM for Replication using repctl
You can start using Container Storage Modules (CSM) for Replication with help from `repctl` using these simple steps:
1. Prepare admin Kubernetes clusters configs
-2. Add admin configs as clusters to `repctl`
+2. Add admin configs as clusters to `repctl`:
```shell
./repctl cluster add -f "/root/.kube/config-1","/root/.kube/config-2" -n "cluster-1","cluster-2"
```
-3. Install replication controller and CRDs
+3. Install replication controller and CRDs:
```shell
./repctl create -f ../deploy/replicationcrds.all.yaml
./repctl create -f ../deploy/controller.yaml
@@ -22,26 +22,26 @@ You can start using Container Storage Modules (CSM) for Replication with help fr
> **_NOTE:_** The controller will report that configmap is invalid. This is expected behavior.
> The message should disappear once you inject the kubeconfigs (next step).
4. (Choose one)
- 1. (More secure) Inject service accounts' configs into clusters
+ 1. (More secure) Inject service accounts' configs into clusters:
```shell
./repctl cluster inject --use-sa
```
- 2. (Less secure) Inject admin configs into clusters
+ 2. (Less secure) Inject admin configs into clusters:
```shell
./repctl cluster inject
```
5. Modify `examples/_example_values.yaml` config with replication
- information
+ information:
> **_NOTE:_** `clusterID` should match names you gave to clusters in step 2
-6. Create replication storage classes using config
+6. Create replication storage classes using config:
```shell
./repctl create sc --from-config ./examples/_example_values.yaml
```
7. Install CSI driver for your chosen storage in source cluster and provision replicated volumes
-8. (optional) Create PVCs on target cluster from Replication Group
+8. (optional) Create PVCs on target cluster from Replication Group:
```shell
./repctl create pvc --rg -t --dry-run=false
```
-> Note: all `repctl` output is saved alongside with `repctl` binary in the `repctl.log` file and can be attached to any installation troubleshooting requests
+> Note: All `repctl` output is saved alongside the `repctl` binary in a `repctl.log` file and can be attached to any installation troubleshooting requests.
diff --git a/content/docs/replication/deployment/installation.md b/content/docs/replication/deployment/installation.md
index 6bbabeee29..dc7d9e8cc2 100644
--- a/content/docs/replication/deployment/installation.md
+++ b/content/docs/replication/deployment/installation.md
@@ -17,7 +17,7 @@ Please read this [document](../configmap-secrets) before proceeding with the ins
clusters which will be required during or after the installation.
### Install repctl
-You can download pre-built repctl binary from our [Releases](https://github.com/dell/csm-replication/releases) page.
+You can download a pre-built repctl binary from our [Releases](https://github.com/dell/csm-replication/releases) page.
Alternately, if you want to build the binary yourself, you can follow these steps:
```shell
git clone https://github.com/dell/csm-replication.git
@@ -26,25 +26,25 @@ make build
```
### Installing CSM Replication Controller
-You can use one of the following methods to install CSM Replication Controller
+You can use one of the following methods to install CSM Replication Controller:
* Using repctl
* Installation script (Helm chart)
-We recommend using repctl for the installation as it simplifies the installation workflow. This process also helps configure `repctl`
+We recommend using repctl for the installation, as it simplifies the installation workflow. This process also helps configure `repctl`
for future use during management operations.
#### Using repctl
Please follow the steps [here](../install-repctl) to install & configure Dell Replication Controller
#### Using the installation script
-Repeat the following steps on all clusters where you want to configure replication
+Repeat the following steps on all clusters where you want to configure replication:
```shell
git clone https://github.com/dell/csm-replication.git
cd csm-replication
kubectl create ns dell-replication-controller
# Copy and modify values.yaml file if you wish to customize your deployment in any way
-cp ../helm/csm-replication/values.yaml ./myvalues.yaml
+cp ./helm/csm-replication/values.yaml ./myvalues.yaml
bash scripts/install.sh --values ./myvalues.yaml
```
>Note: The current installation method allows you to specify custom `:` entries to be appended to the controller's `/etc/hosts` file. This can be useful if the controller is deployed in a private environment where DNS is not set up properly, but the Kubernetes clusters use an FQDN as the API server's address.
@@ -75,15 +75,15 @@ The following CSI drivers support replication:
1. CSI driver for PowerMax
2. CSI driver for PowerStore
3. CSI driver for PowerScale
-4. CSI driver for Unity XT
+4. CSI driver for PowerFlex
-Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore), [PowerScale](../powerscale) or [Unity](../unity) pages during the driver installation.
+Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore), [PowerScale](../powerscale), or [PowerFlex](../powerflex) pages during the driver installation.
>Note: Please ensure that replication CRDs are installed in the clusters where you are installing the CSI drivers. These CRDs are generally installed as part of the CSM Replication controller installation process.
### Dynamic Log Level Change
CSM Replication Controller can dynamically change its logs' verbosity level.
-To set log level in runtime you need to edit the controllers ConfigMap:
+To set the log level at runtime, you need to edit the controller's ConfigMap:
```shell
kubectl edit cm dell-replication-controller-config -n dell-replication-controller
```
diff --git a/content/docs/replication/deployment/powerflex.md b/content/docs/replication/deployment/powerflex.md
new file mode 100644
index 0000000000..9da76724d4
--- /dev/null
+++ b/content/docs/replication/deployment/powerflex.md
@@ -0,0 +1,268 @@
+---
+title: PowerFlex
+linktitle: PowerFlex
+weight: 6
+description: Enabling Replication feature for CSI PowerFlex
+---
+## Enabling Replication In CSI PowerFlex
+
+Container Storage Modules (CSM) Replication sidecar is a helper container that
+is installed alongside a CSI driver to facilitate replication functionality.
+Such CSI drivers must implement `dell-csi-extensions` calls.
+
+The CSI driver for Dell PowerFlex supports the necessary extension calls from
+`dell-csi-extensions`. To be able to provision replicated volumes, you need to
+follow the steps described in the following sections.
+
+### Before Installation
+
+#### On Storage Array
+
+Be sure to configure replication between multiple PowerFlex instances using the instructions provided in the PowerFlex documentation.
+
+Ensure that the remote systems are configured by navigating to the `Protection` tab and choosing `Peer Systems` in the UI of the PowerFlex instance.
+
+There should be a list of remote systems with the `State` fields set to `Connected`.
+
+#### In Kubernetes
+Ensure that you have installed the replication CRDs and the replication controller in your clusters.
+
+Run the following commands to verify that everything is installed correctly:
+
+* Check controller pods:
+ ```shell
+ kubectl get pods -n dell-replication-controller
+ ```
+  Pods should be `READY` and `RUNNING`.
+* Check that the controller config map is properly populated:
+ ```shell
+ kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
+ ```
+  The `data` field should be properly populated with a cluster ID of your choosing
+  and, if you are using a multi-cluster installation, the `targets:` parameter should be
+  populated with a list of target cluster IDs.
+
+
+If anything is missing or out of place, please refer to the installation
+instructions in [installation-repctl](../install-repctl) or
+[installation](../installation).
+
+### Installing Driver With Replication Module
+
+To install the driver with replication enabled, you need to ensure you have set the
+Helm parameter `replication.enabled` in your copy of the example `values.yaml` file
+(usually called `my-powerflex-settings.yaml`, `myvalues.yaml`, etc.).
+
+Here is an example of how that would look:
+```yaml
+...
+# Set this to true to enable replication
+replication:
+ enabled: true
+ image: dellemc/dell-csi-replicator:v1.2.0
+ replicationContextPrefix: "powerflex"
+ replicationPrefix: "replication.storage.dell.com"
+...
+```
+You can leave other parameters like `image`, `replicationContextPrefix`, and
+`replicationPrefix` as they are.
+
+After enabling the replication module, you can continue to install the CSI driver
+for PowerFlex following the usual installation procedure; just ensure you've added
+the necessary array connection information to the array connection secret.
+
+> **_NOTE:_** You need to install your driver at least on the source cluster,
+> but it is recommended to install drivers on all clusters you will use for
+> replication.
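+
+For reference, a typical installation using the driver's Helm installer might look
+like the following (the directory layout and values file name are illustrative; follow
+the CSI PowerFlex installation documentation for the authoritative steps):
+```shell
+cd csi-powerflex/dell-csi-helm-installer
+# myvalues.yaml is the copy of values.yaml with replication.enabled set to true
+./csi-install.sh --namespace vxflexos --values ../myvalues.yaml
+```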
+
+### Creating Storage Classes
+
+To be able to provision replicated volumes you need to create properly
+configured storage classes on both source and target clusters.
+
+A pair of storage classes on the source and target clusters would be essentially
+`mirrored` copies of one another. You can create them manually or with help from
+`repctl`.
+
+#### Manual Storage Class Creation
+
+You can find a sample of a replication enabled storage class in the driver repository
+[here](https://github.com/dell/csi-powerflex/blob/main/samples/storageclass/vxflexos-replication.yaml).
+
+It will look like this:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: vxflexos-replication
+provisioner: csi-vxflexos.dellemc.com
+reclaimPolicy: Retain
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+parameters:
+ replication.storage.dell.com/isReplicationEnabled: "true"
+ replication.storage.dell.com/remoteStorageClassName: "vxflexos-replication"
+ replication.storage.dell.com/remoteClusterID:
+ replication.storage.dell.com/remoteSystem:
+ replication.storage.dell.com/remoteStoragePool:
+ replication.storage.dell.com/rpo: 60
+ replication.storage.dell.com/volumeGroupPrefix: "csi"
+ replication.storage.dell.com/consistencyGroupName:
+ replication.storage.dell.com/protectionDomain:
+ systemID:
+ storagepool:
+ protectiondomain:
+```
+
+Let's go through each parameter and what it means:
+* `replication.storage.dell.com/isReplicationEnabled` if set to `true` will mark
+ this storage class as replication enabled, just leave it as `true`.
+* `replication.storage.dell.com/remoteStorageClassName` points to the name of
+ the remote storage class. If you are using replication with the multi-cluster
+ configuration you can make it the same as the current storage class name.
+* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote
+ cluster. It is the same id you put in the replication controller config map.
+* `replication.storage.dell.com/remoteSystem` is the name of the remote system
+ as seen from the current PowerFlex instance. This parameter is the systemID of
+ the array.
+* `replication.storage.dell.com/remoteStoragePool` is the name of the storage
+ pool on the remote system to be used for creating the remote volumes.
+* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is
+ measured in units of time, that may be lost due to a failure.
+* `replication.storage.dell.com/volumeGroupPrefix` represents what string would
+ be appended to the volume group name to differentiate it from other volume groups.
+* `replication.storage.dell.com/consistencyGroupName` represents the desired
+ name to give the consistency group on the PowerFlex array. If omitted, the
+ driver will generate a name for the consistency group.
+* `replication.storage.dell.com/protectionDomain` represents the remote array's
+ protection domain to use.
+* `systemID` represents the systemID of the PowerFlex array.
+* `storagepool` represents the name of the storage pool to be used on the
+ PowerFlex array.
+* `protectiondomain` represents the array's protection domain to be used.
+
+Let's follow that up with an example. Let's assume we have two Kubernetes
+clusters and two PowerFlex storage arrays:
+* Clusters have IDs of `cluster-1` and `cluster-2`
+* Cluster `cluster-1` connected to array `000000000001`
+* Cluster `cluster-2` connected to array `000000000002`
+* For `cluster-1` we plan to use storage pool `pool1` and protection domain `domain1`
+* For `cluster-2` we plan to use storage pool `pool1` and protection domain `domain1`
+
+And this is how our pair of storage classes would look:
+
+StorageClass to be created in `cluster-1`:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: "vxflexos-replication"
+provisioner: "csi-vxflexos.dellemc.com"
+reclaimPolicy: Retain
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+parameters:
+ replication.storage.dell.com/isReplicationEnabled: "true"
+ replication.storage.dell.com/remoteStorageClassName: "vxflexos-replication"
+ replication.storage.dell.com/remoteClusterID: "cluster-2"
+ replication.storage.dell.com/remoteSystem: "000000000002"
+ replication.storage.dell.com/remoteStoragePool: pool1
+ replication.storage.dell.com/protectionDomain: domain1
+ replication.storage.dell.com/rpo: 60
+ replication.storage.dell.com/volumeGroupPrefix: "csi"
+ arrayID: "000000000001"
+ storagepool: "pool1"
+ protectiondomain: "domain1"
+```
+
+StorageClass to be created in `cluster-2`:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: "vxflexos-replication"
+provisioner: "csi-vxflexos.dellemc.com"
+reclaimPolicy: Retain
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+parameters:
+ replication.storage.dell.com/isReplicationEnabled: "true"
+ replication.storage.dell.com/remoteStorageClassName: "vxflexos-replication"
+ replication.storage.dell.com/remoteClusterID: "cluster-1"
+ replication.storage.dell.com/remoteSystem: "000000000001"
+ replication.storage.dell.com/remoteStoragePool: pool1
+ replication.storage.dell.com/protectionDomain: domain1
+ replication.storage.dell.com/rpo: 60
+ replication.storage.dell.com/volumeGroupPrefix: "csi"
+ arrayID: "000000000002"
+ storagepool: "pool1"
+ protectiondomain: "domain1"
+```
+
+After creating storage class YAML files, they must be applied to your Kubernetes
+clusters with `kubectl`.
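+
+For example, assuming each storage class was saved to its own file and your
+kubeconfig contains contexts named after the clusters (file and context names are
+illustrative):
+```shell
+kubectl --context cluster-1 apply -f vxflexos-replication-cluster-1.yaml
+kubectl --context cluster-2 apply -f vxflexos-replication-cluster-2.yaml
+```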
+
+#### Storage Class Creation With repctl
+
+`repctl` can simplify storage class creation by creating a pair of mirrored
+storage classes in both clusters (using a single storage class configuration) in
+one command.
+
+To create storage classes with `repctl` you need to fill the config with
+the necessary information. You can find an example
+[here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powerflex_example_values.yaml),
+copy it, and modify it to your needs.
+
+If you open this example, you can see fields and parameters similar to those used
+in manual storage class creation.
+
+Let's use the same example from the manual installation and see what its repctl
+config file would look like:
+```yaml
+sourceClusterID: "cluster-1"
+targetClusterID: "cluster-2"
+name: "vxflexos-replication"
+driver: "vxflexos"
+reclaimPolicy: "Retain"
+replicationPrefix: "replication.storage.dell.com"
+parameters:
+ storagePool: # populate with storage pool to use of arrays
+ source: "pool1"
+ target: "pool1"
+ protectionDomain: # populate with protection domain to use of arrays
+ source: "domain1"
+ target: "domain1"
+ arrayID: # populate with unique ids of storage arrays
+ source: "0000000000000001"
+ target: "0000000000000002"
+ rpo: "60"
+ volumeGroupPrefix: "csi"
+ consistencyGroupName: "" # optional name to be given to the rcg
+```
+
+After preparing the config you can apply it to both clusters with repctl. Just
+make sure you've added your clusters to repctl via the `add` command beforehand.
+
+To create storage classes, run `./repctl create sc --from-config ` and storage classes will be applied to both clusters.
+
+After creating storage classes you can make sure they are in place by using the
+`./repctl get storageclasses` command.
+
+### Provisioning Replicated Volumes
+
+After installing the driver and creating storage classes you are good to create
+volumes using the newly created storage classes.
+
+On your source cluster, create a PersistentVolumeClaim using one of the
+replication enabled Storage Classes. The CSI PowerFlex driver will create a
+volume on the array, add it to a VolumeGroup and configure replication using the
+parameters provided in the replication enabled Storage Class.
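+
+A minimal PersistentVolumeClaim using the storage class from the earlier example
+might look like this (the name, namespace, and size are illustrative):
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: replicated-pvc
+  namespace: my-app
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+  storageClassName: vxflexos-replication
+```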
+
+### Supported Replication Actions
+The CSI PowerFlex driver supports the following list of replication actions:
+- FAILOVER_REMOTE
+- UNPLANNED_FAILOVER_LOCAL
+- REPROTECT_LOCAL
+- SUSPEND
+- RESUME
+- SYNC
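+
+Replication actions are executed by setting the `action` field in the `spec` of the
+corresponding `DellCSIReplicationGroup` (`rg`) object. For example, a planned failover
+to the remote site could be triggered with a patch similar to the one below (the
+replication group name is a placeholder; see the replication Tools documentation for
+the full action workflows):
+```shell
+kubectl patch rg <rg-name> --type merge -p '{"spec":{"action":"FAILOVER_REMOTE"}}'
+```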
diff --git a/content/docs/replication/deployment/powermax.md b/content/docs/replication/deployment/powermax.md
index 06dc2ec149..b06ddf71ff 100644
--- a/content/docs/replication/deployment/powermax.md
+++ b/content/docs/replication/deployment/powermax.md
@@ -17,18 +17,18 @@ Configure SRDF connection between multiple PowerMax instances. Follow instructio
You can ensure that you configured remote arrays by navigating to the `Data Protection` tab and choosing `SRDF Groups` on the managing Unisphere of your array. You should see a list of remote systems with the SRDF Group number that is configured and the Online field set to a green tick.
-While using any SRDF groups, ensure that they are for exclusive use by the CSI PowerMax driver -
+While using any SRDF groups, ensure that they are for exclusive use by the CSI PowerMax driver:
* Any SRDF group which will be used by the driver is not in use by any other application
* If an SRDF group is already in use by a CSI driver, don't use it for provisioning replicated volumes outside CSI provisioning workflows.
-There are some important limitations that apply to how CSI PowerMax driver uses SRDF groups -
+There are some important limitations that apply to how CSI PowerMax driver uses SRDF groups:
* One replicated storage group using Async/Sync __always__ contains volumes provisioned from a single namespace.
* While using SRDF mode Async, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels.
But all these storage classes will be restricted to provisioning volumes within a single namespace.
* When using SRDF mode Sync/Metro, a single SRDF group can be used to provision volumes from multiple namespaces.
#### Automatic creation of SRDF Groups
-CSI Driver for Powermax supports automatic creation of SRDF Groups starting **v2.4.0** with help of **10.0** REST endpoints.
+CSI Driver for PowerMax supports automatic creation of SRDF Groups as of **v2.4.0** with the help of **10.0** REST endpoints.
To use this feature:
* Remove _replication.storage.dell.com/RemoteRDFGroup_ and _replication.storage.dell.com/RDFGroup_ params from the storage classes before creating first replicated volume.
* Driver will check next available RDF pair and use them to create volumes.
@@ -47,8 +47,8 @@ To verify you have everything in order you can execute the following commands:
```shell
kubectl get pods -n dell-replication-controller
```
- Pods should be `READY` and `RUNNING`
-* Check that controller config map is properly populated
+ Pods should be `READY` and `RUNNING`.
+* Check that controller config map is properly populated:
```shell
kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
```
@@ -62,10 +62,10 @@ in [installation-repctl](../install-repctl) or [installation](../installation).
### Installing Driver With Replication Module
To install the driver with replication enabled you need to ensure you have set
-helm parameter `replication.enabled` in your copy of example `values.yaml` file
+Helm parameter `replication.enabled` in your copy of example `values.yaml` file
(usually called `my-powermax-settings.yaml`, `myvalues.yaml` etc.).
-Here is an example of how that would look like
+Here is an example of what that would look like:
```yaml
...
# Set this to true to enable replication
@@ -81,7 +81,7 @@ You can leave other parameters like `image`, `replicationContextPrefix`, and `re
After enabling the replication module you can continue to install the CSI driver for PowerMax following
usual installation procedure, just ensure you've added necessary array connection information to secret.
-> **_NOTE:_** you need to install your driver at least on the source cluster, but it is recommended to install
+> **_NOTE:_** You need to install your driver at least on the source cluster, but it is recommended to install
> drivers on all clusters you will use for replication.
@@ -90,7 +90,7 @@ usual installation procedure, just ensure you've added necessary array connectio
To be able to provision replicated volumes you need to create properly configured storage
classes on both source and target clusters.
-Pair of storage classes on the source and target clusters would be essentially `mirrored` copies of one another.
+A pair of storage classes on the source and target clusters would be essentially `mirrored` copies of one another.
You can create them manually or with help from `repctl`.
#### Manual Storage Class Creation
@@ -126,8 +126,8 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/isReplicationEnabled` if set to `true`, will mark this storage class as replication enabled,
just leave it as `true`.
* `replication.storage.dell.com/RemoteStorageClassName` points to the name of the remote storage class, if you are using replication with the multi-cluster configuration you can make it the same as the current storage class name.
-* `replication.storage.dell.com/RemoteClusterID` represents the ID of a remote cluster, it is the same id you put in the replication controller config map.
-* `replication.storage.dell.com/RemoteSYMID` is the Symmetrix id of the remote array.
+* `replication.storage.dell.com/RemoteClusterID` represents the ID of a remote cluster, it is the same ID you put in the replication controller config map.
+* `replication.storage.dell.com/RemoteSYMID` is the Symmetrix ID of the remote array.
* `replication.storage.dell.com/RemoteSRP` is the storage pool of the remote array.
* `replication.storage.dell.com/RemoteServiceLevel` is the service level that will be assigned to remote volumes.
* `replication.storage.dell.com/RdfMode` points to the RDF mode you want to use. It should be one out of "ASYNC", "METRO" and "SYNC". If mode is set to
@@ -198,20 +198,19 @@ parameters:
replication.storage.dell.com/remoteClusterID: "cluster-1"
```
-After figuring out how storage classes would look like you just need to go and apply them to
-your Kubernetes clusters with `kubectl`.
+After creating storage class YAML files, they must be applied to your Kubernetes clusters with `kubectl`.
#### Storage Class Creation With repctl
`repctl` can simplify storage class creation by creating a pair of mirrored storage classes in both clusters
(using a single storage class configuration) in one command.
-To create storage classes with `repctl` you need to fill up the config with necessary information.
+To create storage classes with `repctl` you need to fill the config with necessary information.
You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powermax_example_values.yaml), copy it, and modify it to your needs.
-If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.
+If you open this example, you can see fields and parameters similar to those used in manual storage class creation.
-Let's use the same example from manual installation and see how config would look like
+Let's use the same example from the manual installation and see what its repctl config file would look like:
```yaml
sourceClusterID: "cluster-1"
targetClusterID: "cluster-2"
@@ -239,13 +238,13 @@ After preparing the config you can apply it to both clusters with repctl, just m
added your clusters to repctl via the `add` command before.
To create storage classes just run `./repctl create sc --from-config ` and storage classes
-would be applied to both clusters.
+will be applied to both clusters.
After creating storage classes you can make sure they are in place by using `./repctl get storageclasses` command.
### Provisioning Replicated Volumes
-After installing the driver and creating storage classes you are good to create volumes using newly
+After installing the driver and creating storage classes you are good to create volumes using the newly
created storage classes.
On your source cluster, create a PersistentVolumeClaim using one of the replication enabled Storage Classes.
@@ -254,7 +253,7 @@ using the parameters provided in the replication-enabled Storage Class.
#### Provisioning Metro Volumes
-Here is an example of a storage class configured for Metro mode,
+Here is an example of a storage class configured for Metro mode:
```yaml
apiVersion: storage.k8s.io/v1
@@ -285,7 +284,7 @@ On your cluster, create a PersistentVolumeClaim using this storage class. The CS
### Supported Replication Actions
The CSI PowerMax driver supports the following list of replication actions:
-#### Basic Site Specific Actions -
+#### Basic Site Specific Actions
- FAILOVER_LOCAL
- FAILOVER_REMOTE
- UNPLANNED_FAILOVER_LOCAL
@@ -293,8 +292,8 @@ The CSI PowerMax driver supports the following list of replication actions:
- REPROTECT_LOCAL
- REPROTECT_REMOTE
-#### Advanced Site Specific Actions -
-In this section we are going to refer to "Site A" as the original source site & "Site B" as the original target site.
+#### Advanced Site Specific Actions
+In this section, we are going to refer to "Site A" as the original source site & "Site B" as the original target site.
Any action with the LOCAL suffix means, do this action for the local site. Any action with the REMOTE suffix means do this action for the remote site.
- FAILOVER_WITHOUT_SWAP_LOCAL
- You can use this action to do a failover when you are at Site B, and don't want to swap the replication direction.
@@ -321,7 +320,7 @@ Any action with the LOCAL suffix means, do this action for the local site. Any a
- On Site B, run `kubectl edit rg ` and edit the 'action' in `spec` with `SWAP_REMOTE`.
- After receiving this request the CSI driver will attempt to do SWAP at Site B which is the remote site.
-#### Maintenance Actions -
+#### Maintenance Actions
- SUSPEND
- RESUME
- ESTABLISH
diff --git a/content/docs/replication/deployment/powerscale.md b/content/docs/replication/deployment/powerscale.md
index 1d8c61c44f..86fe3ef03c 100644
--- a/content/docs/replication/deployment/powerscale.md
+++ b/content/docs/replication/deployment/powerscale.md
@@ -33,12 +33,12 @@ Ensure you installed CRDs and replication controller in your clusters.
To verify you have everything in order you can execute the following commands:
-* Check controller pods
+* Check controller pods:
```shell
kubectl get pods -n dell-replication-controller
```
- Pods should be `READY` and `RUNNING`
-* Check that controller config map is properly populated
+ Pods should be `READY` and `RUNNING`.
+* Check that controller config map is properly populated:
```shell
kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
```
@@ -70,13 +70,13 @@ controller:
```
You can leave other parameters like `image`, `replicationContextPrefix`, and `replicationPrefix` as they are.
-After enabling the replication module, you can continue to install the CSI driver for PowerScale following the usual installation procedure. Just ensure you've added the necessary array connection information to secret.
+After enabling the replication module, you can continue to install the CSI driver for PowerScale following the usual installation procedure. Just ensure you've added the necessary array connection information to the Kubernetes secret for the PowerScale driver.
##### SyncIQ encryption
If you plan to use encryption, you need to set `replicationCertificateID` in the array connection secret. To check the ID of the certificate for the cluster, you can navigate to `Data protection->SyncIQ->Settings`, find your certificate in the `Server Certificates` section, and then push the `View/Edit` button. It will open a dialog that should contain the `Id` field. Use the value of that field to set `replicationCertificateID`.
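+
+An abbreviated `isilon-creds` entry with the certificate ID set might look like the following (all values are placeholders and the field set is abbreviated; refer to the CSI PowerScale secret sample for the authoritative format):
+```yaml
+isilonClusters:
+  - clusterName: "cluster2"
+    username: "admin"
+    password: "password"
+    endpoint: "1.2.3.4"
+    replicationCertificateID: "a1b2c3"
+```
+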
-> **_NOTE:_** you need to install your driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster.
+> **_NOTE:_** You need to install your driver on ALL clusters where you want to use replication. Both arrays must be accessible from each cluster.
### Creating Storage Classes
@@ -122,25 +122,26 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/isReplicationEnabled` if set to `true`, will mark this storage class as replication enabled,
just leave it as `true`.
* `replication.storage.dell.com/remoteStorageClassName` points to the name of the remote storage class. If you are using replication with the multi-cluster configuration you can make it the same as the current storage class name.
-* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote cluster. It is the same id you put in the replication controller config map.
+* `replication.storage.dell.com/remoteClusterID` represents the ID of a remote cluster. It is the same ID you put in the replication controller config map.
* `replication.storage.dell.com/remoteSystem` is the name of the remote system that should match whatever `clusterName` you called it in `isilon-creds` secret.
* `replication.storage.dell.com/remoteAccessZone` is the name of the access zone a remote volume can be created in.
* `replication.storage.dell.com/remoteAzServiceIP` AccessZone groupnet service IP. It is optional and can be provided if different than the remote system endpoint.
* `replication.storage.dell.com/remoteRootClientEnabled` determines whether the driver should enable root squashing or not for the remote volume.
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure.
-> NOTE: Available RPO values "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", "One_Day"
+> **_NOTE_**: Available RPO values are "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", and "One_Day".
* `replication.storage.dell.com/ignoreNamespaces`, if set to `true`, makes the PowerScale driver ignore in which namespace volumes are created and put every volume created using this storage class into a single volume group.
-* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them.
+* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them. It is important not to use the same prefix for different Kubernetes clusters; otherwise, any action on a replication group in one Kubernetes cluster will impact the other.
-> NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\\' cannot be more than 63 characters.
+> **_NOTE_**: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\\' cannot be more than 63 characters.
* `Accesszone` is the name of the access zone a volume can be created in.
* `AzServiceIP` AccessZone groupnet service IP. It is optional and can be provided if different than the PowerScale cluster endpoint.
-* `IsiPath` is the base path for the volumes to be created on the PowerScale cluster.
+* `IsiPath` is the base path for the volumes to be created on the PowerScale cluster. If not specified in the storage class, the IsiPath defined in the storage array's secret will be used. If that is not specified either, the IsiPath defined in the values.yaml file used for driver installation is used as the lowest-priority default. The IsiPath between source and target Replication Groups **must** be consistent.
* `RootClientEnabled` determines whether the driver should enable root squashing or not.
* `ClusterName` name of PowerScale cluster, where PV will be provisioned, specified as it was listed in `isilon-creds` secret.
-After figuring out how storage classes would look, you just need to go and apply them to your Kubernetes clusters with `kubectl`.
+After creating storage class YAML files, they must be applied to
+your Kubernetes clusters with `kubectl`.
#### Storage Class creation with `repctl`
@@ -150,9 +151,9 @@ After figuring out how storage classes would look, you just need to go and apply
To create storage classes with `repctl` you need to fill up the config with necessary information.
You can find an example [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powerscale_example_values.yaml), copy it, and modify it to your needs.
-If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.
+If you open this example, you can see fields and parameters similar to those used in manual storage class creation.
-Let's use the same example from manual installation and see what config would look like:
+Let's use the same example from the manual installation and see what its repctl config file would look like:
```yaml
sourceClusterID: "source"
targetClusterID: "target"
@@ -184,66 +185,26 @@ parameters:
After preparing the config, you can apply it to both clusters with `repctl`. Before you do this, ensure you've added your clusters to `repctl` via the `add` command.
-To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters.
+To create storage classes just run `./repctl create sc --from-config ` and storage classes will be applied to both clusters.
After creating storage classes you can make sure they are in place by using `./repctl get storageclasses` command.
### Provisioning Replicated Volumes
-After installing the driver and creating storage classes, you are good to create volumes using newly
+After installing the driver and creating storage classes, you are good to create volumes using the newly
created storage classes.
On your source cluster, create a PersistentVolumeClaim using one of the replication-enabled Storage Classes.
The CSI PowerScale driver will create a volume on the array, add it to a VolumeGroup and configure replication
using the parameters provided in the replication enabled Storage Class.
-### SyncIQ Policy Architecture
-When creating `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, matching SyncIQ policies are created on *both* the source and target PowerScale storage arrays.
-
-This is done so that the RG objects can communicate with a relative 'local' and 'remote' set of policies to query for current synchronization status and perform replication actions; on the *source* Kubernetes cluster's RG, the *source* PowerScale array is seen as 'local' and the *target* PowerScale array is seen as remote. The inverse relationship exists on the *target* Kubernetes cluster's RG, which sees the *target* PowerScale array as 'local' and the *source* PowerScale array as 'remote'.
-
-Upon creation, both SyncIQ policies (source and target) are set to a schedule of `When source is modified`. The source PowerScale array's SyncIQ policy is `Enabled` when the RG is created, and the target array's policy is `Disabled`. Similarly, the directory that is being replicated is *read-write accessible* on the source storage array, and is restricted to *read-only* on the target.
-
-### Performing Failover on PowerScale
-
-Steps for performing Failover can be found in the Tools page under [Executing Actions.](https://dell.github.io/csm-docs/docs/replication/tools/#executing-actions) There are some PowerScale-specific considerations to keep in mind:
-- Failover on PowerScale does NOT halt writes on the source side. It is recommended that the storage administrator or end user manually stop writes to ensure no data is lost on the source side in the event of future failback.
-- In the case of unplanned failover, the source-side SyncIQ policy will be left enabled and set to its previously defined `When source is modified` sync schedule. It is recommended for storage admins to manually disable the source-side SyncIQ policy when bringing the failed-over source array back online.
-
-### Performing Failback on PowerScale
-
-Failback operations are not presently supported for PowerScale. In the event of a failover, failback can be performed manually using the below methodologies.
-#### Failback - Discard Target
-
-Performing failback and discarding changes made to the target is to simply resume synchronization from the source. The steps to perform this operation are as follows:
-1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
-3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Local targets` tab.
-4. Perform `Actions > Disallow writes` on the target-side Local Target policy that matches the SyncIQ policy undergoing failback.
-5. Return to the source array. Enable the source-side SyncIQ policy. Edit its schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
-#### Failback - Discard Source
-
-Information on the methodology for performing a failback while taking changes made to the original target can be found in relevant PowerScale SyncIQ documentation. The detailed steps are as follows:
-
-1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
-3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-4. Delete the target-side SyncIQ policy that has a name matching the SyncIQ policy undergoing failback. This is necessary to prevent conflicts when running resync-prep in the next step.
-5. On the source PowerScale array, enable the SyncIQ policy that is undergoing failback. On this policy, perform `Actions > Resync-prep`. This will create a new SyncIQ policy on the target PowerScale array, matching the original SyncIQ policy with an appended *_mirror* to its name. Wait until the policy being acted on is disabled by the resync-prep operation before continuing.
-6. On the target PowerScale array's `Policies` tab, perform `Actions > Start job` on the *_mirror* policy. Wait for this synchronization to complete.
-7. On the source PowerScale array, switch from the `Policies` tab to the `Local targets` tab. Find the local target policy that matches the SyncIQ policy undergoing failback and perform `Actions > Allow writes`.
-8. On the target PowerScale array, perform `Actions > Resync-prep` on the *_mirror* policy. Wait until the policy on the source side is re-enabled by the resync-prep operation before continuing.
-9. On the target PowerScale array, delete the *_mirror* SyncIQ policy.
-10. On the target PowerScale array, manually recreate the original SyncIQ policy that was deleted in step 4. This will require filepaths, RPO, and other details that can be obtained from the source-side SyncIQ policy. Its name **must** match the source-side SyncIQ policy. Its source directory will be the source-side policy's *target* directory, and vice-versa. Its target host will be the source PowerScale array endpoint.
-11. Ensure that the target-side SyncIQ policy that was just created is **Enabled.** This will create a Local Target policy on the source side. If it was not created as Enabled, enable it now.
-12. On the source PowerScale array, select the `Local targets` tab. Perform `Actions > Allow writes` on the source-side Local Target policy that matches the SyncIQ policy undergoing failback.
-13. Disable the target-side SyncIQ policy.
-14. On the source PowerScale array, edit the SyncIQ policy's schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
-
### Supported Replication Actions
The CSI PowerScale driver supports the following list of replication actions:
- FAILOVER_REMOTE
- UNPLANNED_FAILOVER_LOCAL
+- FAILBACK_LOCAL
+- ACTION_FAILBACK_DISCARD_CHANGES_LOCAL
+- REPROTECT_LOCAL
- SUSPEND
- RESUME
- SYNC
diff --git a/content/docs/replication/deployment/powerstore.md b/content/docs/replication/deployment/powerstore.md
index dfde098928..1f0dc63424 100644
--- a/content/docs/replication/deployment/powerstore.md
+++ b/content/docs/replication/deployment/powerstore.md
@@ -27,12 +27,12 @@ Ensure you installed CRDs and replication controller in your clusters.
To verify you have everything in order you can execute the following commands:
-* Check controller pods
+* Check controller pods:
```shell
kubectl get pods -n dell-replication-controller
```
- Pods should be `READY` and `RUNNING`
-* Check that controller config map is properly populated
+ Pods should be `READY` and `RUNNING`.
+* Check that controller config map is properly populated:
```shell
kubectl get cm -n dell-replication-controller dell-replication-controller-config -o yaml
```
@@ -45,10 +45,10 @@ If you don't have something installed or something is out-of-place, please refer
### Installing Driver With Replication Module
To install the driver with replication enabled you need to ensure you have set
-helm parameter `controller.replication.enabled` in your copy of example `values.yaml` file
+Helm parameter `controller.replication.enabled` in your copy of example `values.yaml` file
(usually called `my-powerstore-settings.yaml`, `myvalues.yaml` etc.).
-Here is an example of what that would look like
+Here is an example of what that would look like:
```yaml
...
# controller: configure controller specific parameters
@@ -108,15 +108,14 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/isReplicationEnabled` if set to `true` will mark this storage class as replication enabled,
just leave it as `true`.
* `replication.storage.dell.com/remoteStorageClassName` points to the name of the remote storage class. If you are using replication with the multi-cluster configuration you can make it the same as the current storage class name.
-* `replication.storage.dell.com/remoteClusterID` represents ID of a remote cluster. It is the same id you put in the replication controller config map.
+* `replication.storage.dell.com/remoteClusterID` represents ID of a remote cluster. It is the same ID you put in the replication controller config map.
* `replication.storage.dell.com/remoteSystem` is the name of the remote system as seen from the current PowerStore instance.
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time,
that may be lost due to a failure.
* `replication.storage.dell.com/ignoreNamespaces`, if set to `true`, makes the PowerStore driver ignore in which namespace volumes are created and put every volume created using this storage class into a single volume group.
-* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name
- to differentiate them.
+* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them. It is important not to use the same prefix for different Kubernetes clusters; otherwise, any action on a replication group in one Kubernetes cluster will impact the other.
->NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.
+> _**NOTE**_: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.
* `arrayID` is a unique identifier of the storage array you specified in array connection secret.
@@ -171,20 +170,19 @@ parameters:
arrayID: "PS000000002"
```
-After figuring out how storage classes would look, you just need to go and apply them to
-your Kubernetes clusters with `kubectl`.
+After creating storage class YAML files, they must be applied to your Kubernetes clusters with `kubectl`.
#### Storage Class Creation With repctl
`repctl` can simplify storage class creation by creating a pair of mirrored storage classes in both clusters
(using a single storage class configuration) in one command.
-To create storage classes with `repctl` you need to fill up the config with necessary information.
+To create storage classes with `repctl` you need to fill the config with necessary information.
You can find an example in [here](https://github.com/dell/csm-replication/blob/main/repctl/examples/powerstore_example_values.yaml), copy it, and modify it to your needs.
-If you open this example you can see a lot of similar fields and parameters you can modify in the storage class.
+If you open this example, you can see fields and parameters similar to those used in manual storage class creation.
-Let's use the same example from manual installation and see how config would look like
+Let's use the same example from the manual installation and see what its repctl config file would look like:
```yaml
sourceClusterID: "cluster-1"
targetClusterID: "cluster-2"
@@ -208,13 +206,13 @@ After preparing the config you can apply it to both clusters with repctl. Just m
added your clusters to repctl via the `add` command before.
To create storage classes just run `./repctl create sc --from-config ` and storage classes
-would be applied to both clusters.
+will be applied to both clusters.
After creating storage classes you can make sure they are in place by using `./repctl get storageclasses` command.
### Provisioning Replicated Volumes
-After installing the driver and creating storage classes you are good to create volumes using newly
+After installing the driver and creating storage classes you are good to create volumes using the newly
created storage classes.
On your source cluster, create a PersistentVolumeClaim using one of the replication enabled Storage Classes.
diff --git a/content/docs/replication/deployment/storageclasses.md b/content/docs/replication/deployment/storageclasses.md
index 042d351d72..6421c01abe 100644
--- a/content/docs/replication/deployment/storageclasses.md
+++ b/content/docs/replication/deployment/storageclasses.md
@@ -13,7 +13,7 @@ Replication enabled storage classes are always created in pairs within/across cl
Before provisioning replicated volumes, make sure that these pairs of storage classes are created properly.
### Common Parameters
-There are 3 mandatory key/value pairs which should always be present in the storage class parameters -
+There are 3 mandatory key/value pairs which should always be present in the storage class parameters:
```yaml
replication.storage.dell.com/isReplicationEnabled: 'true'
replication.storage.dell.com/remoteClusterID:
@@ -22,20 +22,20 @@ replication.storage.dell.com/remoteStorageClassName:
#### remoteClusterID
This should contain the Cluster ID of the remote cluster where the replicated volume is going to be created.
-In case of a single stretched cluster, it should be always set to `self`
+In the case of a single stretched cluster, it should be always set to `self`.
#### remoteStorageClassName
This should contain the name of the storage class on the remote cluster which is used to create the remote `PersistentVolume`.
->Note: You still need to create a pair of storage classes even while using a single stretched cluster
+>**_NOTE_**: You still need to create a pair of storage classes even while using a single stretched cluster.
### Driver specific parameters
-Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), [PowerScale](../powerscale/#creating-storage-classes) or [Unity](../unity/#creating-storage-classes) for a detailed list of parameters.
+Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), [PowerScale](../powerscale/#creating-storage-classes), or [PowerFlex](../powerflex/#creating-storage-classes) for a detailed list of parameters.
### PV sync Deletion
The dell-csm-replicator supports 'sync deletion' of replicated PV resources, i.e., when a replication enabled PV is deleted, its corresponding source or target PV can also be deleted.
-The decision to whether or not sync delete the corresponding PV depends on a Storage Class parameter which can be configured by the user.
+The decision whether or not to sync delete the corresponding PV depends on a Storage Class parameter which can be configured by the user:
```
replication.storage.dell.com/remotePVRetentionPolicy: 'delete' | 'retain'
@@ -66,7 +66,7 @@ By default, if the remoteRGRetentionPolicy is not specified in the Storage Class
### Example
If you are setting up replication between two clusters with ClusterID set to Cluster A & Cluster B,
-then the storage class definitions in both the clusters would look like -
+then the storage class definitions in both the clusters would look like:
#### Cluster A
```yaml
diff --git a/content/docs/replication/disaster-recovery.md b/content/docs/replication/disaster-recovery.md
index e66b5d9b9a..98d0d43d17 100644
--- a/content/docs/replication/disaster-recovery.md
+++ b/content/docs/replication/disaster-recovery.md
@@ -8,10 +8,10 @@ description: >
## Disaster Recovery Workflows
-Once the DellCSIReplicationGroup & PersistentVolume objects have been replicated across clusters (or within the same cluster), users can exercise the general Disaster Recovery workflows.
+Once the `DellCSIReplicationGroup` & `PersistentVolume` objects have been replicated across clusters (or within the same cluster), users can exercise the general Disaster Recovery workflows.
### Planned Migration to the target cluster/array
-This scenario is the choice when you want to try your disaster recovery plan or you need to switch activities from one site to another.
+This scenario is the typical choice when you want to try your disaster recovery plan or you need to switch activities from one site to another:
a. Execute "failover" action on selected ReplicationGroup using the cluster name
@@ -24,7 +24,7 @@ This scenario is the choice when you want to try your disaster recovery plan or
![state_changes1](../state_changes1.png)
### Unplanned Migration to the target cluster/array
-This scenario is the choice when you lost a site.
+This scenario is the typical choice when a site goes down:
a. Execute "failover" action on selected ReplicationGroup using the cluster name
@@ -43,6 +43,6 @@ This scenario is the choice when you lost a site.
![state_changes2](../state_changes2.png)
->Note: When users do Failover and Failback, the tests pods on the source cluster may go "CrashLoopOff" state since it will try to remount the same volume which is already mounted. To get around this problem bring down the number of replicas to 0 and then after that is done, bring it up to 1.
+> _**NOTE**_: When users perform Failover and Failback, the test pods on the source cluster may go into a "CrashLoopBackOff" state since they will try to remount the same volume which is already mounted. To get around this problem, bring the number of replicas down to 0 and, once that is done, bring it back up to 1.
diff --git a/content/docs/replication/high-availability.md b/content/docs/replication/high-availability.md
index 9a4f8f3b37..15210a538f 100644
--- a/content/docs/replication/high-availability.md
+++ b/content/docs/replication/high-availability.md
@@ -5,7 +5,7 @@ weight: 5
description: >
High Availability support for CSI PowerMax
---
-One of the goals of high availability is to eliminate single points of failure in a storage system. In Kubernetes, this can mean that a single PV represents multiple read/write enabled volumes on different arrays, located at reasonable distances with both the volumes in sync with each other. If one of the volumes goes down, there will still be another volume available for read and write. This kind of high availability can be achieved by using SRDF Metro replication mode supported only by Powermax arrays.
+One of the goals of high availability is to eliminate single points of failure in a storage system. In Kubernetes, this can mean that a single PV represents multiple read/write enabled volumes on different arrays, located at reasonable distances with both the volumes in sync with each other. If one of the volumes goes down, there will still be another volume available for read and write. This kind of high availability can be achieved by using SRDF Metro replication mode, supported only by PowerMax arrays.
## SRDF Metro Architecture
@@ -20,7 +20,7 @@ In SRDF metro configurations:
With respect to Kubernetes, the SRDF metro mode works in single cluster scenarios. In the metro, both the arrays—[arrays with SRDF metro link setup between them](../deployment/powermax/#on-storage-array)—involved in the replication are managed by the same `csi-powermax` driver. The replication is triggered by creating a volume using a `StorageClass` with metro-related parameters.
The driver on receiving the metro-related parameters in the `CreateVolume` call creates a metro replicated volume and the details about both the volumes are returned in the volume context to the Kubernetes cluster. So, the `PV` created in the process represents a pair of metro replicated volumes. When a `PV`, representing a pair of metro replicated volumes, is claimed by a pod, the host treats each of the volumes represented by the single `PV` as a separate data path. The switching between the paths, to read and write the data, is managed by the multipath driver. The switching happens automatically, as configured by the user—in round-robin fashion or otherwise—or it can happen if one of the paths goes down. For details on Linux multipath driver setup, [click here](../../csidriver/installation/helm/powermax/#linux-multipathing-requirements).
-The creation of volumes in SRDF metro mode doesn't involve the replication sidecar or the common controller, nor does it cause the creation of any replication related custom resources; it just needs a `csi-powermax` driver that implements the `CreateVolume` grpc endpoint with SRDF metro capability for it to work.
+The creation of volumes in SRDF metro mode doesn't involve the replication sidecar or the common controller, nor does it cause the creation of any replication related custom resources; it just needs a `csi-powermax` driver that implements the `CreateVolume` gRPC endpoint with SRDF metro capability for it to work.
### Usage
The metro replicated volumes are created just like the normal volumes, but the `StorageClass` contains some
@@ -46,12 +46,12 @@ reclaimPolicy: Delete
volumeBindingMode: Immediate
```
-> Note: Different namespaces can share the same RDF group for creating volumes.
+> _**NOTE**_: Different namespaces can share the same RDF group for creating volumes.
### Snapshots on SRDF Metro volumes
A snapshot can be created on either of the volumes in the metro volume pair depending on the parameters in the `VolumeSnapshotClass`.
-The snapshots are by default created on the volumes on the R1 side of the SRDF metro pair, but if a Symmetrix id is specified in the `VolumeSnapshotClass` parameters, the driver creates the snapshot on the specified array; the specified array can either be the R1 or the R2 array. A `VolumeSnapshotClass` with symmetrix id specified in parameters may look as follows:
+The snapshots are by default created on the volumes on the R1 side of the SRDF metro pair, but if a Symmetrix ID is specified in the `VolumeSnapshotClass` parameters, the driver creates the snapshot on the specified array; the specified array can either be the R1 or the R2 array. A `VolumeSnapshotClass` with the Symmetrix ID specified in the parameters may look as follows:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
@@ -63,5 +63,3 @@ deletionPolicy: Delete
parameters:
SYMID: '000000000001'
```
-
->Note: Restoring snapshots to a metro volume is currently not supported.
diff --git a/content/docs/replication/migration/_index.md b/content/docs/replication/migration/_index.md
new file mode 100644
index 0000000000..a365dad51f
--- /dev/null
+++ b/content/docs/replication/migration/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Migration"
+linkTitle: "Migration"
+weight: 6
+Description: >
+ Support for Array Migration of Volumes
+---
\ No newline at end of file
diff --git a/content/docs/replication/migration/migrating-volumes-diff-array.md b/content/docs/replication/migration/migrating-volumes-diff-array.md
new file mode 100644
index 0000000000..6d23a32c06
--- /dev/null
+++ b/content/docs/replication/migration/migrating-volumes-diff-array.md
@@ -0,0 +1,130 @@
+---
+title: Between Storage Arrays
+linktitle: Between Storage Arrays
+weight: 1
+description: >
+ Support for Array Migration of Volumes between arrays
+---
+
+Users can migrate existing pre-provisioned volumes to another storage array by using the array migration feature.
+
+> _**NOTE**_: Currently only migration of standalone volumes is supported.
+
+## Prerequisites
+
+This feature needs to be planned in a controlled host environment.
+
+If the user has native multipathing, the user has to run multipath list commands on all nodes to ensure that there are no faulty paths on the host. If any faulty paths exist, the user has to flush those paths and have a clean setup before migration is triggered, using the following command:
+
+`rescan-scsi-bus.sh --remove`
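+
+As a minimal pre-check sketch, assuming the native `multipath` tooling is installed on each node (the device name `mpatha` is purely illustrative):
+
+```shell
+# List all multipath devices and verify that no path is reported as failed or faulty
+multipath -ll
+
+# If a stale device still shows faulty paths, flush its map before triggering migration
+multipath -f mpatha
+
+# Remove stale SCSI devices from the host
+rescan-scsi-bus.sh --remove
+```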
+
+#### On Storage Array
+
+The user has to configure a physical SRDF connection between the source array (where the volumes are currently provisioned) and the target array (where the volumes should be migrated).
+
+#### In Kubernetes
+
+The user needs to ensure that the migration group CRD is installed.
+
+To install the CRD, the user can run the following command:
+
+`kubectl create -f deploy/replicationcrds.all.yaml`
+
+## Support Matrix
+
+| PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| - | - | - | - | - |
+| Yes | No | No | No | No |
+
+## Installing Driver With sidecars
+
+The dell-csi-migrator and dell-csi-node-rescanner sidecars are installed alongside the driver; the user can enable them in the driver's myvalues.yaml file.
+
+#### Sample:
+
+```yaml
+# CSM module attributes
+# Set this to true to enable migration
+# Allowed values:
+# "true" - migration is enabled
+# "false" - migration is disabled
+# Default value: "false"
+migration:
+ enabled: true
+ # Change this to use any specific version of the dell-csi-migrator sidecar
+ # Default value: None
+ nodeRescanSidecarImage: dellemc/dell-csi-node-rescanner:v1.0.0
+ image: dellemc/dell-csi-migrator:v1.1.0
+ # migrationPrefix: Prefix of the annotations used to trigger migration
+ # Default value: "migration.storage.dell.com"
+ # Examples: "migration.storage.dell.com"
+ migrationPrefix: "migration.storage.dell.com"
+```
+
+Target array configuration and endpoint needs to be updated in the driver's [myvalues.yaml](../../../csidriver/installation/helm/powermax/#csi-powermax-driver-with-proxy-in-standalone-mode) file as shown below:
+
+```yaml
+ ##########################
+ # PLATFORM ATTRIBUTES
+ ##########################
+ # The CSI PowerMax ReverseProxy section to fill out the required configuration
+ defaultCredentialsSecret: powermax-creds
+ storageArrays:
+ - storageArrayId: "000000000000"
+ endpoint: https://00.000.000.00:0000
+# backupEndpoint: https://backup-1.unisphe.re:8443
+ - storageArrayId: "000120001178"
+ endpoint: https://00.000.000.00:0000
+# backupEndpoint: https://backup-2.unisphe.re:8443
+```
+
+After enabling the migration module, the user can continue to install the CSI driver following the usual installation procedure.
+
+## PowerMax Support
+
+CSM for PowerMax supports the following migrations:
+
+- From a VMAX3 array to a VMAX All Flash or PowerMax array.
+
+- From a PowerMax array to another PowerMax array.
+
+#### Basic Usage
+
+To trigger the array migration procedure, the user needs to create a migration group for the required source and target arrays.
+
+Creating the migration group triggers a reconcile action on the migrator sidecar, which calls ArrayMigrate() on the CSI driver with the actions `migrate` or `commit`. After the migrated state is reached, the migration group triggers a reconcile on the node-rescanner sidecar.
+
+#### Manual Migration Group Creation
+
+Users can find a sample migration group manifest in the driver repository [here](https://github.com/dell/csi-powermax/tree/main/samples/migrationgroup). A sample is provided below for convenience:
+
+``` yaml
+apiVersion: "replication.storage.dell.com/v1"
+kind: DellCSIMigrationGroup
+metadata:
+ # custom name of the migration group
+ # Default value: pmax-migration
+ name: pmax-migration
+spec:
+ # driverName: exact name of CSI Powermax driver
+ driverName: "csi-powermax.dellemc.com"
+ # sourceID: source ArrayID
+ sourceID: "000000001234"
+ # targetID: target ArrayID
+ targetID: "000000005678"
+ migrationGroupAttributes:
+ action: "migrate"
+```
+
+To create the migration group, use the below command:
+
+`kubectl create -f `
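+
+After the group is created, its progress can be followed with a standard `kubectl get` on the custom resource. A sketch, assuming the CRD's plural resource name follows the same pattern as the replication group CRD (`dellcsimigrationgroups`):
+
+```shell
+# Watch the migration group until it reaches the migrated and, later, deleting state
+kubectl get dellcsimigrationgroups.replication.storage.dell.com pmax-migration -w
+```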
+
+After migration completes, the migration group moves to a deleting state, after which the admin can manually delete the migration group with the below command:
+
+`kubectl delete -f `
+
+## Post migration
+
+After migration, the PV/PVCs will be mounted and available, and the pod can continue service as before.
+
+> _**LIMITATION**_: Any control operations like expansion, snapshot creation, replication workflows on migrated PV/PVCs will not be supported.
diff --git a/content/docs/replication/migration/migrating-volumes-same-array.md b/content/docs/replication/migration/migrating-volumes-same-array.md
new file mode 100644
index 0000000000..534698ce24
--- /dev/null
+++ b/content/docs/replication/migration/migrating-volumes-same-array.md
@@ -0,0 +1,145 @@
+---
+title: Between Storage classes
+linktitle: Between Storage classes
+weight: 2
+description: >
+ Support for Array Migration of Volumes between storage classes
+---
+
+You can migrate existing pre-provisioned volumes to another storage class by using the volume migration feature.
+
+Currently, two types of migration are supported:
+- From a non-replicated storage class to a replicated one.
+- From a replicated storage class to a non-replicated one.
+
+## Prerequisites
+- Original volume is from one of the currently supported CSI drivers (see Support Matrix)
+- The migrator sidecar is installed alongside the driver; you can enable it in your `myvalues.yaml` file:
+```yaml
+migration:
+ enabled: true
+```
+
+## Support Matrix
+| Migration Type | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| - | - | - | - | - | - |
+| NON_REPL_TO_REPL | Yes | No | No | No | No |
+| REPL_TO_NON_REPL | Yes | No | No | No | No |
+
+
+## Basic Usage
+
+To trigger the migration procedure, you need to patch the existing PersistentVolume with the migration annotation (by default `migration.storage.dell.com/migrate-to`) and set the value of that annotation to the name of the StorageClass you want to migrate to.
+
+For example, if we have a PV named `test-pv` already provisioned and we want to migrate it to a replicated storage class named `powermax-replication`, we can run:
+
+```shell
+kubectl patch pv test-pv -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}'
+```
+
+Patching the PV resource will trigger the migration sidecar, which will issue a `VolumeMigrate` call to the CSI driver. After the migration is finished, a new PersistentVolume will be created in the cluster, named after the original PV with `-to-` and the target storage class name appended to it.
+
+In our example, we will see this when running `kubectl get pv`:
+```shell
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+test-pv 1Gi RWO Retain Bound default/test-pvc powermax 5m
+test-pv-to-powermax-replication 1Gi RWO Retain Available powermax-replication 10s
+
+```
+
+When Volume Migration is finished, the source PV will be updated with an `EVENT` that denotes that this has taken place.
+
+The newly created PV (`test-pv-to-powermax-replication` in our example) is available for consumption via static provisioning by any PVC that requests it.
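+
+As a sketch of consuming the migrated PV via static provisioning, a claim like the following could be created; the claim name and namespace here are illustrative, and the requested size must match the migrated PV:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: test-pvc-replicated   # illustrative name
+  namespace: default
+spec:
+  storageClassName: powermax-replication
+  volumeName: test-pv-to-powermax-replication   # bind explicitly to the migrated PV
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```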
+
+
+## Namespace Considerations For Replication
+
+Replication Groups in CSM Replication can be made namespaced, meaning that one SC will generate one Replication Group per namespace. This is also important when migrating volumes from/to a replication-enabled storage class.
+
+"When just setting one annotation migration.storage.dell.com/migrate-to migrated volume is assumed to be used in same namespace as original PV and it’s PVC. In the case of being migrated to replication enabled storage class will be inserted in namespaced Replication Group inside PVC namespace."
+
+However, you can define in which namespace the migrated volume must be used after migration by setting `migration.storage.dell.com/namespace`. You can use the same annotation in a scenario where you only have a statically provisioned PV that is not bound to any PVC, and you want to migrate it to another storage class.
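+
+For example, to migrate `test-pv` and mark the migrated volume for use in a different namespace (`prod` here is purely illustrative), both annotations can be set in a single patch:
+
+```shell
+kubectl patch pv test-pv -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication","migration.storage.dell.com/namespace":"prod"}}}'
+```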
+
+
+## Non Disruptive Migration
+
+You can migrate your PVs without disrupting workflows if you use a StatefulSet with multiple replicas to deploy your application.
+
+Instructions (you can also use `repctl` for convenience):
+
+1. Find every PV for your StatefulSet and patch each of them with the `migration.storage.dell.com/migrate-to` annotation that points to the new storage class:
+ ```shell
+ kubectl patch pv -p '{"metadata": {"annotations":{"migration.storage.dell.com/migrate-to":"powermax-replication"}}}'
+ ```
+
+2. Ensure you have a copy of the StatefulSet manifest ready; we will need it later. If you don't have it, you can get it from the cluster:
+ ```shell
+ kubectl get sts -n -o yaml > sts-manifest.yaml
+ ```
+
+3. To avoid disrupting any workflows, we need to delete the StatefulSet without deleting any pods; to do so, use the `--cascade` flag:
+ ```shell
+ kubectl delete sts -n --cascade=orphan
+ ```
+
+4. Change the StorageClass in your StatefulSet manifest to point to the new storage class (see the sketch after this list), then apply it to the cluster:
+ ```shell
+ kubectl apply -f sts-manifest.yaml
+ ```
+
+5. Find the PVC and pod of one replica of the StatefulSet; delete the PVC first and the Pod after it:
+ ```shell
+ kubectl delete pvc -n
+ ```
+ ```shell
+ kubectl delete pod -n
+ ```
+
+ Wait for a new pod to be created by the StatefulSet; it should create a new PVC that will use the migrated PV.
+
+6. Repeat step 5 until all replicas use new PVCs.
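+
+For step 4, the change amounts to pointing the claim template at the new storage class. A sketch of the relevant excerpt from `sts-manifest.yaml`, assuming the claim template name and size shown here (all names are illustrative):
+
+```yaml
+  volumeClaimTemplates:
+    - metadata:
+        name: data
+      spec:
+        accessModes:
+          - ReadWriteOnce
+        storageClassName: powermax-replication   # was: powermax
+        resources:
+          requests:
+            storage: 1Gi
+```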
+
+
+## Using repctl
+
+You can use the `repctl` CLI tool to simplify running migration-specific commands.
+
+### Single PV
+
+In its simplest usage, `repctl` can do the same operations as `kubectl`. For example, migrating the single PV `test-pv` from our example looks like:
+
+```shell
+./repctl migrate pv test-pv --to-sc powermax-replication
+```
+
+`repctl` will go and patch the resource for you. You can also provide the `--wait` flag for it to wait until the migrated PV is created in the cluster.
+`repctl` can also set `migration.storage.dell.com/namespace` for you if you provide the `--target-ns` flag.
+
+
+Aside from migrating single PVs, `repctl` can migrate PVCs and StatefulSets.
+
+### PVC
+
+`repctl` can find the PV for any given PVC and patch it.
+This can be done with a command similar to single PV migration:
+
+```shell
+./repctl migrate pvc test-pvc --to-sc powermax-replication -n default
+```
+
+Notice that we provide the original namespace (`default` in our example) for this command because PVCs are namespaced resources and we need the namespace to be able to find the PVC.
+
+
+### StatefulSet
+
+
+`repctl` can help you migrate an entire StatefulSet by automating the migration process.
+
+You can use this command to do so:
+```shell
+./repctl migrate sts test-sts --to-sc powermax-replication -n default
+```
+
+By default, it will find every Pod, PVC, and PV for the provided StatefulSet and patch every PV with the migration annotation.
+
+You can also optionally provide the `--ndu` flag; with this flag, `repctl` will automatically perform the steps described in the [Non Disruptive Migration](#non-disruptive-migration) section.
diff --git a/content/docs/replication/monitoring.md b/content/docs/replication/monitoring.md
index 7dd38f6a5a..b0cfb4954e 100644
--- a/content/docs/replication/monitoring.md
+++ b/content/docs/replication/monitoring.md
@@ -6,14 +6,14 @@ description: >
DellCSIReplicationGroup Monitoring
---
-The dell-csm-replicator supports monitoring of DellCSIReplicationGroup Custom Resources (CRs).
+The dell-csm-replicator supports monitoring of `DellCSIReplicationGroup` Custom Resources (CRs).
Each RG is polled at a pre-defined interval and for each RG, a gRPC call is made to the driver which returns the status of
the protection group on the array.
If an RG doesn't have any PVs associated with it, the driver will not receive any monitoring request for that RG.
-This status can be obtained from the RG as under:
+This status can be obtained from the RG using a standard `kubectl get` call on the resource name:
```
NAME AGE STATE LINK STATE LAST LINKSTATE UPDATE
diff --git a/content/docs/replication/release/_index.md b/content/docs/replication/release/_index.md
index 33d56c7cf5..a27503d49a 100644
--- a/content/docs/replication/release/_index.md
+++ b/content/docs/replication/release/_index.md
@@ -6,16 +6,22 @@ Description: >
Dell Container Storage Modules (CSM) release notes for replication
---
-## Release Notes - CSM Replication 1.3.1
+## Release Notes - CSM Replication 1.4.0
### New Features/Changes
-There are no new features in this release.
+
+ - [PowerScale - Implement Failback functionality](https://github.com/dell/csm/issues/558)
+ - [PowerScale - Implement Reprotect functionality](https://github.com/dell/csm/issues/532)
+ - [PowerScale - SyncIQ policy improvements](https://github.com/dell/csm/issues/573)
+ - [PowerFlex - Initial Replication Support](https://github.com/dell/csm/issues/618)
+ - [Replication APIs to be moved from alpha phase](https://github.com/dell/csm/issues/432)
### Fixed Issues
-- [PowerScale Replication - Replicated PV has the wrong AzServiceIP](https://github.com/dell/csm/issues/514)
-- ["repctl cluster inject --use-sa" doesn't work for Kubernetes 1.24 and above](https://github.com/dell/csm/issues/463)
+
+| Github ID | Description |
+| --------------------------------------------- | ------------------------------------------------------------------ |
+| [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. |
### Known Issues
-| Github ID | Description |
-| --------------------------------------------- | --------------------------------------------------------------------------------------- |
-| [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. |
+
+There are no known issues at this time.
diff --git a/content/docs/replication/replication-actions.md b/content/docs/replication/replication-actions.md
index 00a31ab560..6af98b6540 100644
--- a/content/docs/replication/replication-actions.md
+++ b/content/docs/replication/replication-actions.md
@@ -6,9 +6,9 @@ description: >
DellCSIReplicationGroup Actions
---
-You can exercise native replication control operations from Dell storage arrays by performing "Actions" on the replicated group of volumes using the DellCSIReplicationGroup object.
+You can exercise native replication control operations from Dell storage arrays by performing "Actions" on the replicated group of volumes using the `DellCSIReplicationGroup` (RG) object.
-You can patch the DellCSIReplicationGroup Custom Resource and set the action field in the spec to one of the allowed values (refer to tables in this document).
+You can patch the `DellCSIReplicationGroup` Custom Resource (CR) and set the action field in the spec to one of the allowed values (refer to tables in this document).
When you set the action field in the Custom Resource object, the following happens:
@@ -17,47 +17,47 @@ When you set the action field in the Custom Resource object, the following happe
* Once the CSI driver has completed the operation, State of the RG CR goes back to Ready
While the action is in progress, you shouldn't update the action field. Any attempt to change the action field will be rejected and it will be reset to empty.
-There are certain pre-requisites that have to be fulfilled before any action can be done on the RG CR. For e.g. - you can't perform a Reprotect without doing a Failover first. There are some "Workflows" defined in Section 2 of this document which provide a sequence of operations for some common use-cases. An important exception to these rules is the action UNPLANNED_FAILOVER which can be run at any time.
+There are certain pre-requisites that have to be fulfilled before any action can be done on the RG CR. For example, you can't perform a Reprotect without doing a Failover first. There are some "Workflows" defined in [Disaster Recovery](../disaster-recovery) which provide a sequence of operations for some common use-cases. An important exception to these rules is the action UNPLANNED_FAILOVER, which can be run at any time.
->Note - Throughout this document, we are going to refer to "Hopkinton" as the original source site & "Durham" as the original target site.
+> _**NOTE**_: Throughout this document, we are going to refer to "Site A" as the original source site & "Site B" as the original target site.
### Site Specific Actions
These actions can be run at any site, but they have some site-specific context included.
Any action with the __LOCAL__ suffix means, do this action for the local site. Any action with the __REMOTE__ suffix means do this action for the remote site.
-For e.g. -
-* If the CR at `Hopkinton` is patched with action FAILOVER_REMOTE, it means that the driver will attempt to `Fail Over` to __Durham__ which is the remote site.
-* If the CR at `Durham` is patched with action FAILOVER_LOCAL, it means that the driver will attempt to `Fail Over` to __Durham__ which is the local site.
-* If the CR at `Durham` is patched with REPROTECT_LOCAL, it means that the driver will `Re-protect` the volumes at __Durham__ which is the local site.
+For example:
+* If the CR at `Site A` is patched with action FAILOVER_REMOTE, it means that the driver will attempt to `Fail Over` to __Site B__ which is the remote site.
+* If the CR at `Site B` is patched with action FAILOVER_LOCAL, it means that the driver will attempt to `Fail Over` to __Site B__ which is the local site.
+* If the CR at `Site B` is patched with REPROTECT_LOCAL, it means that the driver will `Re-protect` the volumes at __Site B__ which is the local site.
The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:
{{
}}
### Maintenance Actions
These actions can be run at any site and are used to change the replication link state for maintenance activities.
-The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays
+The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays:
{{
}}
### How to perform actions
-We strongly recommend using `repctl` to perform any actions on `DellCSIReplicationGroup` objects. You can find detailed steps [here](../tools/#executing-actions)
+We strongly recommend using `repctl` to perform any actions on `DellCSIReplicationGroup` objects. You can find detailed steps [here](../tools/#executing-actions).
If you wish to use `kubectl` to perform actions, then use kubectl edit/patch operations and set the `action` field in the Custom Resource.
While performing site-specific actions, please consult each driver's documentation to get an exhaustive list of all the supported actions.
-For a brief guide on using actions for various DR workflows, please refer to this [document](../disaster-recovery)
+For a brief guide on using actions for various DR workflows, please refer to this [document](../disaster-recovery).
diff --git a/content/docs/replication/tools.md b/content/docs/replication/tools.md
index 9fde9bb07b..f2a0d34c5f 100644
--- a/content/docs/replication/tools.md
+++ b/content/docs/replication/tools.md
@@ -16,32 +16,34 @@ and managing replicated resources between multiple Kubernetes clusters.
### Managing Clusters
To begin managing replication with `repctl` you need to add your Kubernetes
-clusters, you can do that using `cluster add` command
+clusters; you can do that using the `cluster add` command:
```shell
./repctl cluster add -f -n
```
You can view clusters that are currently being managed by `repctl`
-by running `cluster get` command
+by running the `cluster get` command:
+
```shell
./repctl cluster get
```
-Or, alternatively, using `get cluster` command
+Or, alternatively, by using the `get cluster` command:
+
```shell
./repctl get cluster
```
Also, you can inject information about all of your current clusters as
-config maps into the same clusters, so it can be used by `dell-csi-replicator`
+config maps into the same clusters, so it can be used by `dell-csi-replicator`:
```shell
./repctl cluster inject
```
-You can also generate kubeconfigs from existing replication service accounts and inject them in config maps by providing `--use-sa` flag
+You can also generate kubeconfigs from existing replication service accounts and inject them into config maps by providing the `--use-sa` flag:
```shell
./repctl cluster inject --use-sa
@@ -53,14 +55,14 @@ After adding clusters you want to manage with `repctl` you can query
resources from multiple clusters at once using `get` command.
For example, this command will list all storage classes in all clusters
-that currently are being managed by `repctl`
+that currently are being managed by `repctl`:
```shell
./repctl get storageclasses --all
```
-If you want to query some particular clusters you can do that by specifying
-`clusters` flag
+If you want to query particular clusters, you can do that by specifying them with the
+`clusters` flag:
```shell
./repctl get pv --clusters cluster-1,cluster-3
@@ -73,7 +75,7 @@ included into the tool help flag `-h`.
#### Generic
Generic `create` command allows you to apply provided config file into
-multiple clusters at once
+multiple clusters at once:
```shell
/repctl create -f
@@ -81,7 +83,7 @@ multiple clusters at once
#### PersistentVolumeClaims
You can use `repctl` to create PVCs from Replication Group's PVs
-on the target cluster
+on the target cluster:
```shell
./repctl create pvc --rg -t --dry-run=false
@@ -93,7 +95,7 @@ re-run the command by turning off the dry-run flag to false.
#### Storage Classes
`repctl` can create special `replication enabled` storage classes from
-provided config, you can find example configs in `examples` folder
+provided config; you can find example configs in the `examples` folder. The command would look similar to below:
```shell
./repctl create sc --from-config `
@@ -110,9 +112,9 @@ so you can easily differentiate them.
You can also differentiate between single cluster replication configured StorageClasses and ReplicationGroups and multi-cluster ones
by checking `remoteClusterID` field, for a single cluster the field would be set to `self`.
-To create replication enabled storage classes for single cluster replication using `create sc` command
+To create replication enabled storage classes for single cluster replication using the `create sc` command,
be sure to set both `sourceClusterID` and `targetClusterID` to the same `clusterID` and continue as usual with executing the command.
-Name of StorageClass resource that created as "target" will be appended with `-tgt`.
+The name of the StorageClass resource that is created as the "target" will be appended with `-tgt`.
### Executing Actions
`repctl` can be used to execute various replication actions on ReplicationGroups.
@@ -121,33 +123,33 @@ Name of StorageClass resource that created as "target" will be appended with `-t
This command will perform a planned `failover` to a cluster or an RG.
-When working with multiple clusters, you can perform failover by specifying the target _cluster ID_. To do that use `--target ` parameter.
+When working with multiple clusters, you can perform failover by specifying the target _cluster ID_. To do that, use `--target ` parameter:
```shell
./repctl --rg failover --target
```
-When working with replication within a single cluster, you can perform failover by specifying the target _replication group ID_. To do that use `--target ` parameter.
+When working with replication within a single cluster, you can perform failover by specifying the target _replication group ID_. To do that, use `--target ` parameter:
```shell
./repctl --rg failover --target
```
-In both scenarios `repctl` will patch the CR at the source site with action **FAILOVER_REMOTE**.
+In both scenarios, `repctl` will patch the CR at the source site with action **FAILOVER_REMOTE**.
-You can also provide `--unplanned` parameter, then `repctl` will perform an unplanned failover to a given cluster or an RG, instead of **FAILOVER_REMOTE** `repctl` will patch CR at target cluster with action **UNPLANNED_FAILOVER_LOCAL**.
+You can also provide the `--unplanned` parameter; then `repctl` will perform an unplanned failover to a given cluster or an RG. Instead of **FAILOVER_REMOTE** on the source cluster's CR, `repctl` will patch the CR at the target cluster with action **UNPLANNED_FAILOVER_LOCAL**.
#### Reprotect
This command will perform a `reprotect` at the specified cluster or the RG.
-When working with multiple clusters, you can perform reprotect by specifying the _cluster ID_. To do that use `--at ` parameter.
+When working with multiple clusters, you can perform reprotect by specifying the _cluster ID_. To do that, use `--at ` parameter:
```shell
./repctl --rg reprotect --at
```
-When working with replication within a single cluster, you can perform reprotect by specifying the _replication group ID_. To do that use `--rg ` parameter.
+When working with replication within a single cluster, you can perform reprotect by specifying the _replication group ID_. To do that, use `--rg ` parameter:
```shell
./repctl --rg reprotect
@@ -159,19 +161,19 @@ In both scenarios `repctl` will patch the CR at the source site with action **RE
This command will perform a planned `failback` to a cluster or an RG.
-When working with multiple clusters, you can perform failback by specifying the _cluster ID_, to do that use `--target ` parameter.
+When working with multiple clusters, you can perform failback by specifying the _cluster ID_. To do that, use `--target ` parameter:
```shell
./repctl --rg failback --target
```
-When working with replication within a single cluster, you can perform failback by specifying the _replication group ID_. To do that use `--target ` parameter.
+When working with replication within a single cluster, you can perform failback by specifying the _replication group ID_. To do that, use `--target ` parameter:
```shell
./repctl --rg failback --target
```
-In both scenarios `repctl` will patch the CR at the source site with action **FAILBACK_LOCAL**.
+In both scenarios, `repctl` will patch the CR at the source site with action **FAILBACK_LOCAL**.
You can also provide `--discard` parameter, then `repctl` will perform a failback but discard any writes at target, instead of **FAILBACK_LOCAL** `repctl` will patch CR at target cluster with action **ACTION_FAILBACK_DISCARD_CHANGES_LOCAL**.
@@ -179,26 +181,27 @@ You can also provide `--discard` parameter, then `repctl` will perform a failbac
This command will perform a `swap` at a specified cluster or an RG.
-When working with multiple clusters, you can perform swap by specifying the _cluster ID_. To do that use `--at ` parameter.
+When working with multiple clusters, you can perform swap by specifying the _cluster ID_. To do that, use `--at ` parameter:
```shell
./repctl --rg swap --at
```
-When working with replication within a single cluster, you can perform swap by specifying the _replication group ID_. To do that use `--rg ` parameter.
+When working with replication within a single cluster, you can perform swap by specifying the _replication group ID_. To do that, use `--rg ` parameter:
```shell
./repctl --rg swap
```
-repctl will patch CR at the source cluster with action `SWAP_LOCAL`.
+`repctl` will patch CR at the source cluster with action `SWAP_LOCAL`.
#### Wait For Completion
When executing actions you can provide `--wait` argument to make `repctl` wait for completion of specified action.
-For example when executing `failover`:
+For example when executing `failover`:
+
```shell
./repctl --rg failover --target --wait
```
@@ -213,7 +216,7 @@ For single or multi-cluster config:
```
Where `` can be one of the following:
-* `suspend` will suspend replication, changes will no longer be synced between replication sites
-* `resume` will resume replication, canceling the effect of `suspend` action
-* `sync` will force synchronization of change between replication sites
+* `suspend` will suspend replication, changes will no longer be synced between replication sites.
+* `resume` will resume replication, canceling the effect of `suspend` action.
+* `sync` will force synchronization of change between replication sites.
diff --git a/content/docs/replication/troubleshooting.md b/content/docs/replication/troubleshooting.md
index 2f98d03009..8dbb425d28 100644
--- a/content/docs/replication/troubleshooting.md
+++ b/content/docs/replication/troubleshooting.md
@@ -8,10 +8,11 @@ description: >
| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
-| Persistent volumes don't get created on the target cluster. | Run `kubectl describe` on one of the pods of replication controller and see if event says `Config update won't be applied because of invalid configmap/secrets. Please fix the invalid configuration`. If it does then ensure you correctly populated replication ConfigMap. You can check the current status by running `kubectl describe cm -n dell-replication-controller dell-replication-controller-config`. If ConfigMap is empty please edit it yourself or use `repctl cluster inject` command. |
+| Persistent volumes don't get created on the target cluster. | Run `kubectl describe` on one of the pods of replication controller and see if event says `Config update won't be applied because of invalid configmap/secrets. Please fix the invalid configuration`. If it does, then ensure you correctly populated replication ConfigMap. You can check the current status by running `kubectl describe cm -n dell-replication-controller dell-replication-controller-config`. If ConfigMap is empty, please edit it yourself or use `repctl cluster inject` command. |
| Persistent volumes don't get created on the target cluster. You don't see any events on the replication-controller pod. | Check logs of replication controller by running `kubectl logs -n dell-replication-controller dell-replication-controller-manager-`. If you see `clusterId - not found` errors then be sure to check if you specified the same clusterIDs in both your ConfigMap and replication enabled StorageClass. |
-| You apply replication action by manually editing ReplicationGroup resource field `spec.action` and don't see any change of ReplicationGroup state after a while. | Check events of the replication-controller pod, if it says `Cannot proceed with action . [unsupported action]` then check spelling of your action and consult [replication-actions](../replication-actions) page. Alternatively, you can use `repctl` instead of manually editing ReplicationGroup resources. |
-| You execute failover action using `repctl failover` command and see `failover: error executing failover to source site`. | This means you tried to failover to a cluster that is already marked source. If you still want to execute failover for RG just choose another cluster. |
+| You apply replication action by manually editing ReplicationGroup resource field `spec.action` and don't see any change of ReplicationGroup state after a while. | Check events of the replication-controller pod, if it says `Cannot proceed with action . [unsupported action]` then check spelling of your action and consult the [Replication Actions](../replication-actions) page. Alternatively, you can use `repctl` instead of manually editing ReplicationGroup resources. |
+| You execute failover action using `repctl failover` command and see `failover: error executing failover to source site`. | This means you tried to failover to a cluster that is already marked source. If you still want to execute failover for RG, just choose another cluster. |
| You've created PersistentVolumeClaim using replication enabled StorageClass but don't see any RGs created in the source cluster. | Check annotations of created PersistentVolumeClaim. If it doesn't have `annotations` that start with `replication.storage.dell.com` then please wait for a couple of minutes for them to be added and RG to be created. |
| When installing common replication controller using helm you see an error that states `invalid ownership metadata` and `missing key "app.kubernetes.io/managed-by": must be set to "Helm"` | This means that you haven't fully deleted the previous release, you can fix it by either deleting entire manifest by using `kubectl delete -f deploy/controller.yaml` or manually deleting conflicting resources (ClusterRoles, ClusterRoleBinding, etc.) |
-| PV and/or PVCs are not being created at the source/target cluster. If you check the controller's logs you can see `no such host` errors| Make sure cluster-1's API is pingable from cluster-2 and vice versa. If one of your clusters is OpenShift located in a private network and needs records in /etc/hosts - `exec` into controller pod and modify `/etc/hosts` manually. |
+| PV and/or PVCs are not being created at the source/target cluster. If you check the controller's logs you can see `no such host` errors| Make sure cluster-1's API is pingable from cluster-2 and vice versa. If one of your clusters is OpenShift located in a private network and needs records in /etc/hosts, `exec` into controller pod and modify `/etc/hosts` manually. |
+| After upgrading to Replication v1.4.0, if `kubectl get rg` returns an error `Unable to list "replication.storage.dell.com/v1alpha1, Resource=dellcsireplicationgroups"`| This means `kubectl` still doesn't recognize the new version of CRD `dellcsireplicationgroups.replication.storage.dell.com` after upgrade. Running the command `kubectl get DellCSIReplicationGroup.v1.replication.storage.dell.com/ -o yaml` will resolve the issue. |
diff --git a/content/docs/replication/uninstall.md b/content/docs/replication/uninstall.md
index 7ea7935157..8c1ef4f067 100644
--- a/content/docs/replication/uninstall.md
+++ b/content/docs/replication/uninstall.md
@@ -11,28 +11,35 @@ This section outlines the uninstallation steps for Container Storage Modules (CS
## Uninstalling common replication controller
-To uninstall the common replication controller you can use script `uninstall.sh` located in `scripts` folder:
+To uninstall the common replication controller, you can use the script `uninstall.sh` located in the `scripts` folder:
```shell
./uninstall.sh
```
-This script will automatically detect how current version is installed (with repctl or with helm) and use the correct method to delete it.
+This script will automatically detect how the current version was installed (repctl or Helm) and use the correct method to delete it.
-You can also manually uninstall replication controller using method that depends on how you installed replication controller.
+You can also manually uninstall the replication controller using a method that depends on how you installed it.
+
+If replication controller was installed using `helm`, use this command:
-If replication controller was installed using `helm` use this command:
```shell
helm delete -n dell-replication-controller replication
```
-If you used `controller.yaml` manifest with either `kubectl` or `repctl` use this:
+If you used `controller.yaml` manifest with either `kubectl` or `repctl`, use this:
+
```shell
kubectl delete -f deploy/controller.yaml
```
-> NOTE: Be sure to run chosen command on all clusters where you want to uninstall replication controller.
+To delete the replication CRD, you can run the command:
-## Uninstalling the replication sidecar
+```shell
+kubectl delete crd dellcsireplicationgroups.replication.storage.dell.com
+```
+
+> _**NOTE**_: Be sure to run the chosen command on all clusters where you want to uninstall the replication controller/CRD.
+## Uninstalling the replication sidecar
-To uninstall the replication sidecar you need to uninstall the CSI Driver, please view the [uninstall](../../csidriver/uninstall) page of the driver.
+To uninstall the replication sidecar, you need to uninstall the CSI Driver. Please view the [uninstall](../../csidriver/uninstall) page for the driver itself.
diff --git a/content/docs/replication/upgrade.md b/content/docs/replication/upgrade.md
index c422759b2e..9ab279cafb 100644
--- a/content/docs/replication/upgrade.md
+++ b/content/docs/replication/upgrade.md
@@ -13,39 +13,70 @@ CSM Replication module consists of two components:
Those two components should be upgraded separately. When upgrading them ensure that you use the same versions for both sidecar and
controller, because different versions could be incompatible with each other.
-> Note: While upgrading the module via helm, the `replicas` variable in `myvalues.yaml` can be at most one less than the number of worker nodes.
+> _**Note**_: While upgrading the module via helm, the `replicas` variable in `myvalues.yaml` can be at most one less than the number of worker nodes.
## Updating CSM Replication sidecar
To upgrade the CSM Replication sidecar that is installed along with the driver, the following steps are required.
->Note: These steps refer to the values file and `csi-install.sh` script that was used during the initial installation of the Dell CSI driver.
+> _**Note**_: These steps refer to the values file and `csi-install.sh` script that was used during the initial installation of the Dell CSI driver.
+
**Steps**
1. Update the `controller.replication.image` value in the values files to reference the new CSM Replication sidecar image.
-2. Run the csi-install script with the option `--upgrade` by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace --values ./myvalues.yaml --upgrade`
+2. Run the csi-install script with the option `--upgrade` by running:
+`cd ../dell-csi-helm-installer && ./csi-install.sh --namespace --values ./myvalues.yaml --upgrade`
3. Run the same command on the second Kubernetes cluster if you use multi-cluster replication topology
+>For more information on upgrading the CSI driver, please visit the [CSI driver upgrade page](../../csidriver/upgradation).
+
+### PowerScale
+
+On PowerScale systems, an additional step is needed when upgrading to CSM Replication v1.4.0 or later. Because the SyncIQ policy created on the target-side storage array is no longer used, it must be deleted for any existing `DellCSIReplicationGroup` objects after performing the upgrade to the CSM Replication sidecar and PowerScale CSI driver. These steps should be performed before the `DellCSIReplicationGroup` objects are used with the new version of the CSI driver. Until this step is performed, existing `DellCSIReplicationGroup` objects will display an UNKNOWN link state.
+
+1. Log in to the target PowerScale array.
+2. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
+3. Delete disabled, target-side SyncIQ policies that are used for CSM Replication. Such policies will be distinguished by their names, of the format `---`.
## Updating CSM Replication controller
+Make sure the appropriate release branch is available on the machine performing the upgrade by running:
+
+`git clone -b https://github.com/dell/csm-replication.git`
+
### Upgrading with Helm
-This option will only work if you have previously installed replication with helm chart available since version 1.1. If you used simple manifest or `repctl` please use [upgrading with repctl](#upgrading-with-repctl)
+This option will only work if you have previously installed replication via the Helm chart, available since version 1.1. If you used a simple manifest or `repctl`, please use [upgrading with repctl](#upgrading-with-repctl).
**Steps**
-1. Update the `image` value in the values files to reference the new CSM Replication sidecar image or use a new version of the csm-replication helm chart
-2. Run the install script with the option `--upgrade` by running: `cd ./scripts && ./install.sh --values ./myvalues.yaml --upgrade`
-3. Run the same command on the second Kubernetes cluster if you use multi-cluster replication topology
+1. Update the `image` value in the values files to reference the new CSM Replication controller image or use a new version of the csm-replication Helm chart.
+2. Run the install script with the option `--upgrade` by running:
+
+ `cd ./scripts && ./install.sh --values ./myvalues.yaml --upgrade`
-> Note: Upgrade won't override currently existing ConfigMap, even if you change templated values in myvalues.yaml file. If you want to change the logLevel - edit ConfigMap from within the cluster using `kubectl edit cm -n dell-replication-controller dell-replication-controller-config`
+3. Run the same command on the second Kubernetes cluster if you use multi-cluster replication topology.
+
+> _**Note**_: Upgrade won't override the currently existing ConfigMap, even if you change templated values in the myvalues.yaml file. If you want to change the logLevel, edit the ConfigMap from within the cluster using `kubectl edit cm -n dell-replication-controller dell-replication-controller-config`.
### Upgrading with repctl
-> Note: These steps assume that you already have `repctl` configured to use correct clusters, if you don't know how to do that please refer to [installing with repctl](../deployment/install-repctl)
+> _**Note**_: These steps assume that you already have `repctl` configured to use the correct clusters; if you don't know how to do that, please refer to [installing with repctl](../deployment/install-repctl)
**Steps**
-1. Find a new version of deployment manifest that can be found in `deploy/controller.yaml`, with newer `image` pointing to the version of CSM Replication controller you want to upgrade to
-2. Apply said manifest using the usual `repctl create` command like so
-`./repctl create -f ./deploy/controller.yaml`. The output should have this line `Successfully updated existing deployment: dell-replication-controller-manager`
-3. Check if everything is OK by querying your Kubernetes clusters using `kubectl` like this `kubectl get pods -n dell-replication-controller`, your pods should READY and RUNNING
+1. Find the new version of the deployment manifest in `deploy/controller.yaml`, with the newer `image` pointing to the version of CSM Replication controller you want to upgrade to.
+2. Apply said manifest using the usual `repctl create` command like so:
+
+ `./repctl create -f ../deploy/controller.yaml`.
+
+ The output should have this line `Successfully updated existing deployment: dell-replication-controller-manager`
+3. Check if everything is OK by querying your Kubernetes clusters with `kubectl get`:
+
+ `kubectl get pods -n dell-replication-controller`
+
+ Your pods should be `READY` and `RUNNING`.
+
+### Replication CRD version update
+
+CRD `dellcsireplicationgroups.replication.storage.dell.com` has been updated to version `v1` in CSM Replication v1.4.0. To facilitate the continued use of existing `DellCSIReplicationGroup` CR objects after upgrading to CSM Replication v1.4.0 or later, an `init container` will be deployed during upgrade. The `init container` updates the existing CRs as needed for their continued use.
+
+> _**Note**_: Do not update the CRD as part of upgrade. An `init container` included in the replication controller pod takes care of updating existing CRD and CR versions.
diff --git a/content/docs/replication/volume_expansion.md b/content/docs/replication/volume_expansion.md
index 464811d519..712ae378f4 100644
--- a/content/docs/replication/volume_expansion.md
+++ b/content/docs/replication/volume_expansion.md
@@ -9,16 +9,16 @@ description: >
Starting in v2.4.0, the CSI PowerMax driver supports the expansion of Replicated Persistent Volumes (PVs). This expansion is done online, which is when the PVC is attached to any node.
## Prerequisites
-- To use this feature, enable resizer in values.yaml.
+- To use this feature, enable resizer in values.yaml:
```yaml
resizer:
enabled: true
```
-- To use this feature, the storage class that is used to create the PVC must have the attribute allowVolumeExpansion set to true.
+- To use this feature, the storage class that is used to create the PVC must have the attribute allowVolumeExpansion set to `true`, as shown in the sketch below.
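+
+As a minimal sketch of such a storage class (the class name and settings shown here are illustrative, not prescriptive):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: powermax-expandable   # illustrative name
+provisioner: csi-powermax.dellemc.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+```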
## Basic Usage
-To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC - pmax-pvc-demo of size 5 Gi, then you can resize it to 10 Gi by updating the PVC.
+To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC `pmax-pvc-demo` of size 5 Gi, then you can resize it to 10 Gi by updating the PVC:
```yaml
kind: PersistentVolumeClaim
@@ -37,8 +37,8 @@ spec:
```
Update remote PVC with expanded size:
-1. Update the remote PVC size with the same size as on local PVC
+1. Update the remote PVC size with the same size as on local PVC.
2. After sync with remote CSI driver, volume size will be updated to show new size.
-*NOTE*: The Kubernetes Volume Expansion feature can only be used to increase the size of the volume, it cannot be used to shrink a volume.
+> _**NOTE**_: The Kubernetes Volume Expansion feature can only be used to increase the size of the volume, it cannot be used to shrink a volume.
diff --git a/content/docs/resiliency/_index.md b/content/docs/resiliency/_index.md
index e945bea855..a3fba04dfc 100644
--- a/content/docs/resiliency/_index.md
+++ b/content/docs/resiliency/_index.md
@@ -14,6 +14,8 @@ For the complete discussion and rationale, you can read the [pod-safety design p
For more background on the forced deletion of Pods in a StatefulSet, please visit [Force Delete StatefulSet Pods](https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#:~:text=In%20normal%20operation%20of%20a,1%20are%20alive%20and%20ready).
+CSM for Resiliency and [Non graceful node shutdown](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown) are mutually exclusive. Use either CSM for Resiliency or the Non graceful node shutdown feature provided by Kubernetes, but not both.
+
## CSM for Resiliency High-Level Description
CSM for Resiliency is designed to make Kubernetes Applications, including those that utilize persistent storage, more resilient to various failures. The first component of the Resiliency module is a pod monitor that is specifically designed to protect stateful applications from various failures. It is not a standalone application, but rather is deployed as a _sidecar_ to CSI (Container Storage Interface) drivers, in both the driver's controller pods and the driver's node pods. Deploying CSM for Resiliency as a sidecar allows it to make direct requests to the driver through the Unix domain socket that Kubernetes sidecars use to make CSI requests.
@@ -29,9 +31,9 @@ CSM for Resiliency provides the following capabilities:
{{
}}
| Capability | PowerScale | Unity XT | PowerStore | PowerFlex | PowerMax |
| --------------------------------------- | :--------: | :------: | :--------: | :-------: | :------: |
-| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | yes | yes | no | yes | no |
-| Cleanup pod artifacts from failed nodes | yes | yes | no | yes | no |
-| Revoke PV access from failed nodes | yes | yes | no | yes | no |
+| Detect pod failures when: Node failure, K8S Control Plane Network failure, K8S Control Plane failure, Array I/O Network failure | yes | yes | yes | yes | no |
+| Cleanup pod artifacts from failed nodes | yes | yes | yes | yes | no |
+| Revoke PV access from failed nodes | yes | yes | yes | yes | no |
{{
}}
## Supported Operating Systems/Container Orchestrator Platforms
@@ -39,8 +41,8 @@ CSM for Resiliency provides the following capabilities:
{{
}}
## Supported CSI Drivers
@@ -62,6 +64,7 @@ CSM for Resiliency supports the following CSI drivers and versions.
| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0.0 + |
| CSI Driver for Dell Unity XT | [csi-unity](https://github.com/dell/csi-unity) | v2.0.0 + |
| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.3.0 + |
+| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.6.0 + |
{{}}
### PowerFlex Support
@@ -93,6 +96,14 @@ PowerScale is a highly scalable NFS array that is very well suited to Kubernetes
* A robust mechanism to detect if Nodes are actively doing I/O to volumes.
* Low latency REST API supports fast CSI provisioning and de-provisioning operations.
+### PowerStore Support
+
+PowerStore is a highly scalable array that is very well suited to Kubernetes deployments. The CSM for Resiliency support for PowerStore leverages the following PowerStore features:
+
+* Detection of Array I/O Network Connectivity status changes.
+* A robust mechanism to detect if Nodes are actively doing I/O to volumes.
+* Low latency REST API supports fast CSI provisioning and de-provisioning operations.
+
## Limitations and Exclusions
This file contains information on Limitations and Exclusions that users should be aware of. Additionally, there are driver specific limitations and exclusions that may be called out in the [Deploying CSM for Resiliency](deployment) page.
diff --git a/content/docs/resiliency/deployment.md b/content/docs/resiliency/deployment.md
index ec7c5fb61b..6505d7ec4c 100644
--- a/content/docs/resiliency/deployment.md
+++ b/content/docs/resiliency/deployment.md
@@ -14,6 +14,8 @@ For information on the Unity XT CSI driver, see [Unity XT CSI Driver](https://gi
For information on the PowerScale CSI driver, see [PowerScale CSI Driver](https://github.com/dell/csi-powerscale).
+For information on the PowerStore CSI driver, see [PowerStore CSI Driver](https://github.com/dell/csi-powerstore).
+
Configure all the helm chart parameters described below before installing the drivers.
## Helm Chart Installation
@@ -165,6 +167,39 @@ podmon:
- "--ignoreVolumelessPods=false"
```
+## PowerStore Specific Recommendations
+
+Here is a typical installation used for testing:
+
+```yaml
+podmon:
+ enabled: true
+ image: dellemc/podmon
+ controller:
+ args:
+ - "--csisock=unix:/var/run/csi/csi.sock"
+ - "--labelvalue=csi-powerstore"
+ - "--arrayConnectivityPollRate=60"
+ - "--driverPath=csi-powerstore.dellemc.com"
+ - "--mode=controller"
+ - "--skipArrayConnectionValidation=false"
+ - "--driver-config-params=/powerstore-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
+
+ node:
+ args:
+ - "--csisock=unix:/var/lib/kubelet/plugins/csi-powerstore.dellemc.com/csi_sock"
+ - "--labelvalue=csi-powerstore"
+ - "--arrayConnectivityPollRate=60"
+ - "--driverPath=csi-powerstore.dellemc.com"
+ - "--mode=node"
+ - "--leaderelection=false"
+ - "--driver-config-params=/powerstore-config-params/driver-config-params.yaml"
+ - "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
+```
+
## Dynamic parameters
CSM for Resiliency has configuration parameters that can be updated dynamically, such as the logging level and format. This can be
diff --git a/content/docs/resiliency/release/_index.md b/content/docs/resiliency/release/_index.md
index 96d9a62f47..eefb31ddfa 100644
--- a/content/docs/resiliency/release/_index.md
+++ b/content/docs/resiliency/release/_index.md
@@ -6,13 +6,13 @@ Description: >
Dell Container Storage Modules (CSM) release notes for resiliency
---
-## Release Notes - CSM Resiliency 1.3.0
+## Release Notes - CSM Resiliency 1.5.0
### New Features/Changes
-
+- Add CSM Resiliency support for PowerStore. ([#587](https://github.com/dell/csm/issues/587))
+- Update to the latest UBI/UBI Minimal images for CSM. ([#612](https://github.com/dell/csm/issues/612))
+- CSM 1.6 release specific changes. ([#583](https://github.com/dell/csm/issues/583))
### Fixed Issues
-- Documentation improvement to identify all requirements when building the service and running unit tests for CSM Authorization and CSM Resiliency repository (https://github.com/dell/karavi-resiliency/pull/131).
-
### Known Issues
\ No newline at end of file
diff --git a/content/docs/resiliency/upgrade.md b/content/docs/resiliency/upgrade.md
index a8cc56a9c2..a258208ed3 100644
--- a/content/docs/resiliency/upgrade.md
+++ b/content/docs/resiliency/upgrade.md
@@ -14,6 +14,8 @@ For information on the Unity XT CSI driver upgrade process, see [Unity XT CSI Dr
For information on the PowerScale CSI driver upgrade process, see [PowerScale CSI Driver](../../csidriver/upgradation/drivers/isilon).
+For information on the PowerStore CSI driver upgrade process, see [PowerStore CSI Driver](../../csidriver/upgradation/drivers/powerstore).
+
## Helm Chart Upgrade
To upgrade CSM for Resiliency with the driver, the following steps are required.
diff --git a/content/docs/secure/encryption/_index.md b/content/docs/secure/encryption/_index.md
index f206e7e453..61358dc195 100644
--- a/content/docs/secure/encryption/_index.md
+++ b/content/docs/secure/encryption/_index.md
@@ -68,7 +68,7 @@ the CSI driver must be restarted to pick up the change.
{{
}}
| COP/OS | Supported Versions |
|-|-|
-| Kubernetes | 1.22, 1.23, 1.24, 1.25 |
+| Kubernetes | 1.24, 1.25, 1.26 |
| Red Hat OpenShift | 4.10, 4.11 |
| RHEL | 7.9, 8.4 |
| Ubuntu | 18.04, 20.04 |
@@ -79,7 +79,7 @@ the CSI driver must be restarted to pick up the change.
{{
}}
diff --git a/content/v1/applicationmobility/deployment.md b/content/v1/applicationmobility/deployment.md
index d5ffb3e8fd..7950919f77 100644
--- a/content/v1/applicationmobility/deployment.md
+++ b/content/v1/applicationmobility/deployment.md
@@ -39,7 +39,7 @@ This table lists the configurable parameters of the Application Mobility Helm ch
| - | - | - | - |
| `replicaCount` | Number of replicas for the Application Mobility controllers | Yes | `1` |
| `image.pullPolicy` | Image pull policy for the Application Mobility controller images | Yes | `IfNotPresent` |
-| `controller.image` | Location of the Application Mobility Docker image | Yes | `dell/csm-application-mobility-controller:v0.1.0` |
+| `controller.image` | Location of the Application Mobility Docker image | Yes | `dellemc/csm-application-mobility-controller:v0.2.0` |
| `cert-manager.enabled` | If set to true, cert-manager will be installed during Application Mobility installation | Yes | `false` |
| `veleroNamespace` | If Velero is already installed, set to the namespace where Velero is installed | No | `velero` |
| `licenseName` | Name of the Secret that contains the License for Application Mobility | Yes | `license` |
@@ -57,6 +57,7 @@ This table lists the configurable parameters of the Application Mobility Helm ch
| `velero.configuration.backupStorageLocation.config` | Additional provider-specific configuration. See https://velero.io/docs/v1.9/api-types/backupstoragelocation/ for specific details. | Yes | ` ` |
| `velero.initContainers` | List of plugins used by Velero. Dell Velero plugin is required and plugins for other providers can be added. | Yes | ` ` |
| `velero.initContainers[0].name` | Name of the Dell Velero plugin. | Yes | `dell-custom-velero-plugin` |
-| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.1.0` |
+| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.2.0` |
| `velero.initContainers[0].volumeMounts[0].mountPath` | Mount path of the volume mount. | Yes | `/target` |
-| `velero.initContainers[0].volumeMounts[0].name` | Name of the volume mount. | Yes | `plugins` |
\ No newline at end of file
+| `velero.initContainers[0].volumeMounts[0].name` | Name of the volume mount. | Yes | `plugins` |
+| `velero.restic.privileged` | If set to true, Restic Pods will be run in privileged mode. Note: Set to true when using Red Hat OpenShift | No | `false` |
diff --git a/content/v1/applicationmobility/release.md b/content/v1/applicationmobility/release.md
index f9076b4b80..5eefd6b36a 100644
--- a/content/v1/applicationmobility/release.md
+++ b/content/v1/applicationmobility/release.md
@@ -7,6 +7,20 @@ Description: >
---
+
+## Release Notes - CSM Application Mobility 0.2.0
+### New Features/Changes
+
+- [Scheduled Backups for Application Mobility](https://github.com/dell/csm/issues/551)
+
+### Fixed Issues
+
+There are no fixed issues in this release.
+
+### Known Issues
+
+There are no known issues in this release.
+
## Release Notes - CSM Application Mobility 0.1.0
### New Features/Changes
diff --git a/content/v1/authorization/_index.md b/content/v1/authorization/_index.md
index 33a7425eda..12808a5a7e 100644
--- a/content/v1/authorization/_index.md
+++ b/content/v1/authorization/_index.md
@@ -70,6 +70,8 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t
| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 |
| dellemc/csm-authorization-sidecar:v1.3.0 | v1.1.0, v1.2.0, v1.3.0 |
| dellemc/csm-authorization-sidecar:v1.4.0 | v1.1.0, v1.2.0, v1.3.0, v1.4.0 |
+| dellemc/csm-authorization-sidecar:v1.5.0 | v1.1.0, v1.2.0, v1.3.0, v1.4.0, v1.5.0 |
+| dellemc/csm-authorization-sidecar:v1.5.1 | v1.1.0, v1.2.0, v1.3.0, v1.4.0, v1.5.0, v1.5.1 |
{{}}
## Roles and Responsibilities
diff --git a/content/v1/authorization/cli.md b/content/v1/authorization/cli.md
index cb0b5242fc..eee82e73bd 100644
--- a/content/v1/authorization/cli.md
+++ b/content/v1/authorization/cli.md
@@ -21,7 +21,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl role get](#karavictl-role-get) | Get role |
| [karavictl role list](#karavictl-role-list) | List roles |
| [karavictl role create](#karavictl-role-create) | Create one or more CSM roles |
-| [karavictl role update](#karavictl-role-update) | Update one or more CSM roles |
+| [karavictl role update](#karavictl-role-update) | Update the quota of one or more CSM roles |
| [karavictl role delete](#karavictl-role-delete ) | Delete role |
| [karavictl rolebinding](#karavictl-rolebinding) | Manage role bindings |
| [karavictl rolebinding create](#karavictl-rolebinding-create) | Create a rolebinding between role and tenant |
@@ -402,11 +402,11 @@ $ karavictl role create --role=role-name=system-type=000000000001=mypool=2000000
### karavictl role update
-Update one or more CSM roles
+Update the quota of one or more CSM roles
##### Synopsis
-Updates one or more CSM roles
+Updates the quota of one or more CSM roles
```
karavictl role update [flags]
diff --git a/content/v1/authorization/configuration/_index.md b/content/v1/authorization/configuration/_index.md
new file mode 100644
index 0000000000..ce03f60cec
--- /dev/null
+++ b/content/v1/authorization/configuration/_index.md
@@ -0,0 +1,8 @@
+---
+title: Configuration
+linktitle: Configuration
+weight: 2
+description: Configure CSM Authorization
+---
+
+This section provides the details and instructions on how to configure CSM Authorization.
\ No newline at end of file
diff --git a/content/v1/authorization/configuration/powerflex/_index.md b/content/v1/authorization/configuration/powerflex/_index.md
new file mode 100644
index 0000000000..d3f122dd68
--- /dev/null
+++ b/content/v1/authorization/configuration/powerflex/_index.md
@@ -0,0 +1,63 @@
+---
+title: PowerFlex
+linktitle: PowerFlex
+description: >
+ Enabling CSM Authorization for PowerFlex CSI Driver
+---
+
+## Configuring PowerFlex CSI Driver with CSM for Authorization
+
+Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow these steps to configure the CSI Drivers to work with the Authorization sidecar:
+
+1. Apply the secret containing the token data into the driver namespace. It's assumed that the Kubernetes administrator has the token secret manifest saved in `/tmp/token.yaml`.
+
+ ```console
+ # It is assumed that array type powerflex has the namespace "vxflexos".
+ kubectl apply -f /tmp/token.yaml -n vxflexos
+ ```
+
+2. Edit these parameters in the `samples/secret/karavi-authorization-config.json` file in the [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples) driver and update/add connection information for one or more backend storage arrays. If multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
+
+ | Parameter | Description | Required | Default |
+ | --------- | ----------- | -------- |-------- |
+ | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
+ | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
+ | systemID | System ID of the backend storage array. | Yes | " " |
+ | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+ | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
+
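+ As an illustrative sketch only (the system ID and intendedEndpoint values below are placeholders, not values taken from this guide), an entry in `karavi-authorization-config.json` combining these parameters could look like:
+
+ ```json
+ [
+   {
+     "username": "ignored",
+     "password": "ignored",
+     "intendedEndpoint": "https://10.0.0.1",
+     "endpoint": "https://localhost:9400",
+     "systemID": "2b11bb111111bb1b",
+     "skipCertificateValidation": true,
+     "isDefault": true
+   }
+ ]
+ ```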
+
+Create the karavi-authorization-config secret using this command:
+
+`kubectl -n vxflexos create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__:
+> - Create the driver secret as you would normally, except update/add the connection information for communicating with the sidecar instead of the backend storage array, and scrub the username and password.
+
+3. Create the proxy-server-root-certificate secret.
+
+ If running in *insecure* mode, create the secret with empty data:
+
+ `kubectl -n vxflexos create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
+
+ Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
+
+ `kubectl -n vxflexos create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
+
+4. Please refer to step 4 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex/#install-the-driver) to edit the parameters in the `samples/config.yaml` file to communicate with the sidecar.
+
+ Update *endpoint* to match the endpoint in `samples/secret/karavi-authorization-config.json`
+
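+ A minimal sketch, assuming the array entry in `config.yaml` uses the same field names as the driver's sample file (the system ID below is a placeholder), with *endpoint* pointed at the sidecar:
+
+ ```yaml
+ - username: "ignored"
+   password: "ignored"
+   systemID: "2b11bb111111bb1b"
+   # endpoint matches the sidecar endpoint in karavi-authorization-config.json
+   endpoint: "https://localhost:9400"
+   skipCertificateValidation: true
+   isDefault: true
+ ```
+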
+5. Create the vxflexos-config secret using this command:
+
+ `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -`
+
+6. Please refer to step 9 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex/#install-the-driver) to edit the parameters in the *myvalues.yaml* file to communicate with the sidecar.
+
+ Enable CSM for Authorization and provide the *proxyHost* address
+
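+ A hedged sketch of the relevant *myvalues.yaml* section is shown below; the key names under `authorization` are assumptions about the chart layout and should be verified against the values file shipped with your driver version:
+
+ ```yaml
+ authorization:
+   # enable the CSM Authorization sidecar for this driver
+   enabled: true
+   # hostname of the CSM Authorization proxy server
+   proxyHost: csm-authorization.com
+   skipCertificateValidation: true
+ ```
+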
+7. Install the CSI PowerFlex driver
+
+8. (Optional) Install [dellctl](../../../references/cli) to perform Kubernetes administrator commands for additional capabilities (e.g., list volumes). Please refer to the [dellctl documentation page](../../../references/cli) for the installation steps and command list.
\ No newline at end of file
diff --git a/content/v1/authorization/configuration/powermax/_index.md b/content/v1/authorization/configuration/powermax/_index.md
new file mode 100644
index 0000000000..254651da72
--- /dev/null
+++ b/content/v1/authorization/configuration/powermax/_index.md
@@ -0,0 +1,55 @@
+---
+title: PowerMax
+linktitle: PowerMax
+description: >
+ Enabling CSM Authorization for PowerMax CSI Driver
+---
+
+## Configuring PowerMax CSI Driver with CSM for Authorization
+
+Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow these steps to configure the CSI Drivers to work with the Authorization sidecar:
+
+1. Apply the secret containing the token data into the driver namespace. It's assumed that the Kubernetes administrator has the token secret manifest saved in `/tmp/token.yaml`.
+
+ ```console
+ # It is assumed that array type powermax has the namespace "powermax".
+ kubectl apply -f /tmp/token.yaml -n powermax
+ ```
+
+2. Edit these parameters in the `samples/secret/karavi-authorization-config.json` file in the [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. If multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
+
+ | Parameter | Description | Required | Default |
+ | --------- | ----------- | -------- |-------- |
+ | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
+ | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
+ | systemID | System ID of the backend storage array. | Yes | " " |
+ | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+ | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
+
+
+Create the karavi-authorization-config secret using this command:
+
+`kubectl -n powermax create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__:
+> - Create the driver secret as you would normally, except update/add the connection information for communicating with the sidecar instead of the backend storage array, and scrub the username and password.
+
+3. Create the proxy-server-root-certificate secret.
+
+ If running in *insecure* mode, create the secret with empty data:
+
+ `kubectl -n powermax create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
+
+ Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
+
+ `kubectl -n powermax create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
+
+4. Please refer to step 8 in the [installation steps for PowerMax](../../../csidriver/installation/helm/powermax/#install-the-driver) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar.
+
+ Update *endpoint* to match the endpoint in `samples/secret/karavi-authorization-config.json`.
+
+5. Enable CSM for Authorization and provide the *proxyHost* address
+
+6. Install the CSI PowerMax driver
\ No newline at end of file
diff --git a/content/v1/authorization/configuration/powerscale/_index.md b/content/v1/authorization/configuration/powerscale/_index.md
new file mode 100644
index 0000000000..f98b44f3a2
--- /dev/null
+++ b/content/v1/authorization/configuration/powerscale/_index.md
@@ -0,0 +1,72 @@
+---
+title: PowerScale
+linktitle: PowerScale
+description: >
+ Enabling CSM Authorization for PowerScale CSI Driver
+---
+
+## Configuring PowerScale CSI Driver with CSM for Authorization
+
+Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow these steps to configure the CSI Drivers to work with the Authorization sidecar:
+
+1. Apply the secret containing the token data into the driver namespace. It's assumed that the Kubernetes administrator has the token secret manifest saved in `/tmp/token.yaml`.
+
+ ```console
+ # It is assumed that array type powerscale has the namespace "isilon".
+ kubectl apply -f /tmp/token.yaml -n isilon
+ ```
+
+2. Edit these parameters in the `samples/secret/karavi-authorization-config.json` file in the [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. If multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
+
+ | Parameter | Description | Required | Default |
+ | --------- | ----------- | -------- |-------- |
+ | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
+ | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
+ | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
+ | systemID | System ID of the backend storage array. | Yes | " " |
+ | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+ | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
+
+
+Create the karavi-authorization-config secret using this command:
+
+`kubectl -n isilon create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
+
+>__Note__:
+> - Create the driver secret as you would normally, except update/add the connection information for communicating with the sidecar instead of the backend storage array, and scrub the username and password.
+> - The *systemID* will be the *clusterName* of the array.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
+
+3. Create the proxy-server-root-certificate secret.
+
+ If running in *insecure* mode, create the secret with empty data:
+
+ `kubectl -n isilon create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
+
+ Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
+
+ `kubectl -n isilon create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
+
+4. Please refer to step 5 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon/#install-the-driver) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar.
+
+ Update *endpointPort* to match the endpoint port number in `samples/secret/karavi-authorization-config.json`
+
+*Notes:*
+> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of `samples/secret/secret.yaml`.
+
+5. Enable CSM for Authorization and provide the *proxyHost* address
+
+6. Please refer to step 6 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon/#install-the-driver) to edit the parameters in the `samples/secret/secret.yaml` file to communicate with the sidecar.
+
+ Update *endpoint* to match the endpoint in `samples/secret/karavi-authorization-config.json`
+
+*Notes:*
+> - Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*.
+> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
+
+7. Create the isilon-creds secret using this command:
+
+ `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
+
+8. Install the CSI PowerScale driver
\ No newline at end of file
diff --git a/content/v1/authorization/configuration/proxy-server/_index.md b/content/v1/authorization/configuration/proxy-server/_index.md
new file mode 100644
index 0000000000..5b41203732
--- /dev/null
+++ b/content/v1/authorization/configuration/proxy-server/_index.md
@@ -0,0 +1,128 @@
+---
+title: Proxy Server
+linktitle: Proxy Server
+description: >
+ Configuring the CSM for Authorization Proxy Server
+---
+
+## Configuring the CSM for Authorization Proxy Server
+
+The storage administrator must first configure the proxy server with the following:
+- Storage systems
+- Tenants
+- Roles
+- Bind roles to tenants
+
+>__Note__:
+> - The `RPM deployment` will use the address and port of the server (for example, `grpc.DNS-hostname:443`).
+> - The `Helm deployment` will use the address and port of the Ingress hosts for the storage, tenant, and role services.
+
+### Configuring Storage
+
+A `storage` entity in CSM Authorization consists of the storage type (PowerFlex, PowerMax, PowerScale), the system ID, the API endpoint, and the credentials. For example, to create PowerFlex storage:
+
+
+```shell
+# RPM Deployment
+karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --array-insecure
+
+# Helm Deployment
+karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --insecure --array-insecure --addr storage.csm-authorization.com:
+```
+
+>__Note__:
+> - The `insecure` flag specifies to skip certificate validation when connecting to the CSM Authorization storage service.
+> - The `array-insecure` flag specifies to skip certificate validation when proxy-service connects to the backend storage array. Run `karavictl storage create --help` for help.
+
+### Configuring Tenants
+
+A `tenant` is a Kubernetes cluster that a role will be bound to. For example, to create a tenant named `Finance`:
+
+```shell
+# RPM Deployment
+karavictl tenant create --name Finance --insecure --addr grpc.DNS-hostname:443
+
+# Helm Deployment
+karavictl tenant create --name Finance --insecure --addr tenant.csm-authorization.com:
+```
+
+>__Note__:
+> - The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl tenant create --help` for help.
+
+### Configuring Roles
+
+A `role` consists of a name, the storage to use, and the quota limit for the storage pool to be used. For example, to create a role named `FinanceRole` using the PowerFlex storage created above with a quota limit of 100GB in storage pool `myStoragePool`:
+
+```shell
+# RPM Deployment
+karavictl role create --role=FinanceRole=powerflex=${systemID}=myStoragePool=100GB
+
+# Helm Deployment
+karavictl role create --insecure --addr role.csm-authorization.com:30016 --role=FinanceRole=powerflex=${systemID}=myStoragePool=100GB
+```
+
+>__Note__:
+> - The `insecure` flag specifies to skip certificate validation when connecting to the role service. Run `karavictl role create --help` for help.
+
+### Configuring Role Bindings
+
+A `role binding` binds a role to a tenant. For example, to bind the `FinanceRole` to the `Finance` tenant:
+
+```shell
+# RPM Deployment
+karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr grpc.DNS-hostname:443
+
+# Helm Deployment
+karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr tenant.csm-authorization.com:
+```
+
+>__Note__:
+> - The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl rolebinding create --help` for help.
+
+### Generate a Token
+
+- [RPM Deployment](#rpm)
+- [Helm Deployment](#helm)
+
+#### RPM
+After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
+
+>__Note__:
+> - The `--insecure` flag is required if certificates were not provided in `$HOME/.karavi/config.json`.
+> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
+
+ ```
+ echo === Generating token ===
+ karavictl generate token --tenant ${tenantName} --insecure --addr grpc.DNS-hostname:443 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml
+
+ echo === Copy token to Driver Host ===
+ sshpass -p ${DriverHostPassword} scp token.yaml ${DriverHostVMUser}@${DriverHostVMIP}:/tmp/token.yaml
+ ```
+
+#### Helm
+
+Now that the tenant is bound to a role, a JSON Web Token can be generated for the tenant. For example, to generate a token for the `Finance` tenant:
+
+```
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:
+
+{
+ "Token": "\napiVersion: v1\nkind: Secret\nmetadata:\n name: proxy-authz-tokens\ntype: Opaque\ndata:\n access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUXhPRFlzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLmJIODN1TldmaHoxc1FVaDcweVlfMlF3N1NTVnEyRzRKeGlyVHFMWVlEMkU=\n refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWXhNallzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkxNbWVUSkZlX2dveXR0V0lUUDc5QWVaTy1kdmN5SHAwNUwyNXAtUm9ZZnM=\n"
+}
+```
+
+Process the above response to filter the secret manifest. For example using sed you can run the following:
+
+```
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com: | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g'
+apiVersion: v1
+kind: Secret
+metadata:
+ name: proxy-authz-tokens
+type: Opaque
+data:
+ access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUTFOekVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLk4tNE42Q1pPbUptcVQtRDF5ZkNGdEZqSmRDRjcxNlh1SXlNVFVyckNOS1U=
+ refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWTFNVEVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkVxb3lXNld5ZEFLdU9mSmtkMkZaMk9TVThZMzlKUFc0YmhfNHc5R05ZNmM=
+```
+
+This secret must be applied in the driver namespace.
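+
+For example, assuming the manifest above is saved as `token.yaml` and the PowerFlex driver namespace `vxflexos` is the target (both are placeholders for your environment):
+
+```
+kubectl apply -f token.yaml -n vxflexos
+```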
diff --git a/content/v1/authorization/deployment/helm/_index.md b/content/v1/authorization/deployment/helm/_index.md
index d1dd59e0de..cae4f14c66 100644
--- a/content/v1/authorization/deployment/helm/_index.md
+++ b/content/v1/authorization/deployment/helm/_index.md
@@ -87,8 +87,8 @@ The following third-party components are optionally installed in the specified n
| redis.images.commander | The image to use for Redis Commander. | Yes | rediscommander/redis-commander:latest |
| redis.storageClass | The storage class for Redis to use for persistence. If not supplied, the default storage class is used. | No | - |
- *NOTE*:
-- The tenant, role, and storage services use GRPC. If the Ingress Controller requires annotations to support GRPC, they must be supplied.
+>__Note__:
+> - The tenant, role, and storage services use GRPC. If the Ingress Controller requires annotations to support GRPC, they must be supplied.
6. Install the driver using `helm`:
@@ -133,13 +133,9 @@ Karavictl commands and intended use can be found [here](../../cli/).
## Configuring the CSM Authorization Proxy Server
-The storage administrator must first configure the proxy server with the following:
-- Storage systems
-- Tenants
-- Roles
-- Role bindings
+The first part of CSM for Authorization deployment is to configure the proxy server. This is controlled by the Storage Administrator.
-This is done using `karavictl` to connect to the storage, tenant, and role services. In this example, we will be referencing an installation using `csm-authorization.com` as the authorization.hostname value and the NGINX Ingress Controller accessed via the cluster's master node.
+Configuration is achieved by using `karavictl` to connect to the storage, tenant, and role services. In this example, we will be referencing an installation using `csm-authorization.com` as the authorization.hostname value and the NGINX Ingress Controller accessed via the cluster's master node.
Run `kubectl -n authorization get ingress` and `kubectl -n authorization get service` to see the Ingress rules for these services and the exposed port for accessing these services via the LoadBalancer. For example:
@@ -175,177 +171,14 @@ On the machine running `karavictl`, the `/etc/hosts` file needs to be updated wi
The port that exposes these services is `30016`.
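As an illustrative sketch (the IP address below is a placeholder for the node fronting the NGINX Ingress Controller), the `/etc/hosts` entries could look like:

```
10.0.0.2 storage.csm-authorization.com
10.0.0.2 tenant.csm-authorization.com
10.0.0.2 role.csm-authorization.com
```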
-
-### Configure Storage
-
-A `storage` entity in CSM Authorization consists of the storage type (PowerFlex, PowerMax, PowerScale), the system ID, the API endpoint, and the credentials. For example, to create PowerFlex storage:
-
-```
-karavictl storage create --type powerflex --endpoint https://10.0.0.1 --system-id ${systemID} --user ${user} --password ${password} --insecure --array-insecure --addr storage.csm-authorization.com:30016
-```
-
- *NOTE*:
-- The `insecure` flag specifies to skip certificate validation when connecting to the CSM Authorization storage service. The `array-insecure` flag specifies to skip certificate validation when proxy-service connects to the backend storage array. Run `karavictl storage create --help` for help.
-
-### Configuring Tenants
-
-A `tenant` is a Kubernetes cluster that a role will be bound to. For example, to create a tenant named `Finance`:
-
-```
-karavictl tenant create --name Finance --insecure --addr tenant.csm-authorization.com:30016
-```
-
- *NOTE*:
-- The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl tenant create --help` for help.
-
-### Configuring Roles
-
-A `role` consists of a name, the storage to use, and the quota limit for the storage pool to be used. For example, to create a role named `FinanceRole` using the PowerFlex storage created above with a quota limit of 100GB in storage pool `myStoragePool`:
-
-```
-karavictl role create --insecure --addr role.csm-authorization.com:30016 --role=FinanceRole=powerflex=${systemID}=myStoragePool=100GB
-```
-
- *NOTE*:
-- The `insecure` flag specifies to skip certificate validation when connecting to the role service. Run `karavictl role create --help` for help.
-
-### Configuring Role Bindings
-
-A `role binding` binds a role to a tenant. For example, to bind the `FinanceRole` to the `Finance` tenant:
-
-```
-karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --addr tenant.csm-authorization.com:30016
-```
-
- *NOTE*:
-- The `insecure` flag specifies to skip certificate validation when connecting to the tenant service. Run `karavictl rolebinding create --help` for help.
-
-### Generating a Token
-
-Now that the tenant is bound to a role, a JSON Web Token can be generated for the tenant. For example, to generate a token for the `Finance` tenant:
-
-```
-karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016
-
-{
- "Token": "\napiVersion: v1\nkind: Secret\nmetadata:\n name: proxy-authz-tokens\ntype: Opaque\ndata:\n access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUXhPRFlzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLmJIODN1TldmaHoxc1FVaDcweVlfMlF3N1NTVnEyRzRKeGlyVHFMWVlEMkU=\n refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWXhNallzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkxNbWVUSkZlX2dveXR0V0lUUDc5QWVaTy1kdmN5SHAwNUwyNXAtUm9ZZnM=\n"
-}
-```
-
-Process the above response to filter the secret manifest. For example using sed you can run the following:
-
-```
-karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g'
-apiVersion: v1
-kind: Secret
-metadata:
- name: proxy-authz-tokens
-type: Opaque
-data:
- access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUTFOekVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLk4tNE42Q1pPbUptcVQtRDF5ZkNGdEZqSmRDRjcxNlh1SXlNVFVyckNOS1U=
- refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWTFNVEVzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkVxb3lXNld5ZEFLdU9mSmtkMkZaMk9TVThZMzlKUFc0YmhfNHc5R05ZNmM=
-```
-
-This secret must be applied in the driver namespace. Continue reading the next section for configuring the driver to use CSM Authorization.
+Please follow the steps outlined in the [proxy server](../../configuration/proxy-server) configuration.
## Configuring a Dell CSI Driver with CSM for Authorization
-The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-
-### Configuring a Dell CSI Driver
+The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
+Please follow the steps outlined in [PowerFlex](../../configuration/powerflex), [PowerMax](../../configuration/powermax), or [PowerScale](../../configuration/powerscale) to configure the CSI Driver to work with the Authorization sidecar.
-1. Apply the secret containing the token data into the driver namespace. It's assumed that the Kubernetes administrator has the token secret manifest saved in `/tmp/token.yaml`.
-
- ```console
- # It is assumed that array type powermax has the namespace "powermax", powerflex has the namepace "vxflexos", and powerscale has the namespace "isilon".
- kubectl apply -f /tmp/token.yaml -n powermax
- kubectl apply -f /tmp/token.yaml -n vxflexos
- kubectl apply -f /tmp/token.yaml -n isilon
- ```
-
-2. Edit the following parameters in samples/secret/karavi-authorization-config.json file in [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. In an instance where multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
- | password | Password for connecting to to the backend storage array. This parameter is ignored. | No | - |
- | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
- | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
- | systemID | System ID of the backend storage array. | Yes | " " |
- | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
- | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
-
-
-Create the karavi-authorization-config secret using the following command:
-
-`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
-
->__Note__:
-> - Create the driver secret as you would normally except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password
-> - For PowerScale, the *systemID* will be the *clusterName* of the array.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-3. Create the proxy-server-root-certificate secret.
-
- If running in *insecure* mode, create the secret with empty data:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
-
- Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
-
-
->__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers.
-#### PowerFlex
-
-Please refer to step 5 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in samples/config.yaml file to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Create vxflexos-config secret using the following command:
-
- `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-Please refer to step 9 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in *myvalues.yaml* file to communicate with the sidecar.
-
-3. Enable CSM for Authorization and provide *proxyHost* address
-
-4. Install the CSI PowerFlex driver
-#### PowerMax
-
-Please refer to step 7 in the [installation steps for PowerMax](../../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-3. Install the CSI PowerMax driver
-
-#### PowerScale
-
-Please refer to step 5 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json
-
-*Notes:*
-> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-Please refer to step 6 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in samples/secret/secret.yaml file to communicate with the sidecar.
-
-3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
->__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*.
-
-4. Create the isilon-creds secret using the following command:
-
- `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-5. Install the CSI PowerScale driver
## Updating CSM for Authorization Proxy Server Configuration
CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
diff --git a/content/v1/authorization/deployment/rpm/_index.md b/content/v1/authorization/deployment/rpm/_index.md
index 9e9d413db2..ddddc0ed30 100644
--- a/content/v1/authorization/deployment/rpm/_index.md
+++ b/content/v1/authorization/deployment/rpm/_index.md
@@ -17,54 +17,47 @@ The CSM for Authorization proxy server requires a Linux host with the following
- 4 CPU
- 200 GB local storage
-These packages need to be installed on the Linux host:
+The following package needs to be installed on the Linux host:
- container-selinux
-- k3s-selinux-0.4-1
-Use the appropriate package manager on the machine to install the packages.
+Use the appropriate package manager on the machine to install the package.
### Using yum on CentOS/RedHat 7:
yum install -y container-selinux
-yum install -y https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm
-
### Using yum on CentOS/RedHat 8:
yum install -y container-selinux
-yum install -y https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm
-
### Dark Sites
For environments where `yum` will not work, obtain the supported version of container-selinux for your OS version and install it.
The container-selinux RPMs for CentOS/RedHat 7 and 8 can be downloaded from [https://centos.pkgs.org/7/centos-extras-x86_64/](https://centos.pkgs.org/7/centos-extras-x86_64/) and [https://centos.pkgs.org/8/centos-appstream-x86_64/](https://centos.pkgs.org/8/centos-appstream-x86_64/), respectively.
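Once the RPM has been copied to the host, one possible way to install it locally (the file name below is a placeholder for the version you downloaded) is:

```
rpm -ivh container-selinux-<version>.noarch.rpm
```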
-The k3s-selinux-0.4-1 RPM can be obtained from [https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm) or [https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm) for CentOS/RedHat 7 and 8, respectively. Download the supported version of k3s-selinux-0.4-1 for your OS version and install it.
-
## Deploying the CSM Authorization Proxy Server
The first part of deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator.
-The CSM for Authorization proxy server is installed using a single binary installer.
+The CSM for Authorization proxy server is installed using a shell script that is extracted from a tar archive.
If CSM for Authorization is being installed on a system where SELinux is enabled, you must ensure the proper SELinux policies have been installed.
-### Single Binary Installer
+### Shell Script Installer
-The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
+The easiest way to obtain the tar archive with the shell script installer is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
-Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
+Alternatively, the tar archive can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
```
-make dist build-installer rpm
+make dist build-installer rpm package
```
-The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`.
+The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`. The `package` step bundles the install script, the authorization package, the pre-downloaded k3s-selinux packages, and the policies folder into the `packages/` directory for installation.
This allows CSM for Authorization to be installed in network-restricted environments.
-A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
+A Storage Administrator can execute the shell script, `install_karavi_auth.sh`, as a root user or via `sudo`.
### Installing the RPM
@@ -118,212 +111,35 @@ A Storage Administrator can execute the installer or rpm package as a root user
$ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.com.crt -days 365 -sha256
```
-3. To install the rpm package on the system, run the below command:
+3. To install the rpm package on the system, you must first extract the contents of the tar file with the command:
+
+ ```shell
+ tar -xvf karavi_authorization_
+ ```
+
+4. Afterwards, change into the extracted directory and run the shell script:
```shell
- rpm -ivh
+ cd karavi_authorization_
+ sh install_karavi_auth.sh
```
-4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
+5. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
If errors occur during installation, review the [Troubleshooting](../../troubleshooting) section.
## Configuring the CSM for Authorization Proxy Server
-The storage administrator must first configure the proxy server with the following:
-- Storage systems
-- Tenants
-- Roles
-- Bind roles to tenants
-
-Run the following commands on the Authorization proxy server:
->__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-
- ```console
- # Specify any desired name
- export RoleName=""
- export RoleQuota=""
- export TenantName=""
-
- # Specify info about Array1
- export Array1Type=""
- export Array1SystemID=""
- export Array1User=""
- export Array1Password=""
- export Array1Pool=""
- export Array1Endpoint=""
-
- # Specify info about Array2
- export Array2Type=""
- export Array2SystemID=""
- export Array2User=""
- export Array2Password=""
- export Array2Pool=""
- export Array2Endpoint=""
-
- # Specify IPs
- export DriverHostVMIP=""
- export DriverHostVMPassword=""
- export DriverHostVMUser=""
-
- # Specify Authorization proxy host address. NOTE: this is not the same as IP
- export AuthorizationProxyHost=""
-
- echo === Creating Storage(s) ===
- # Add array1 to authorization
- karavictl storage create \
- --type ${Array1Type} \
- --endpoint ${Array1Endpoint} \
- --system-id ${Array1SystemID} \
- --user ${Array1User} \
- --password ${Array1Password} \
- --array-insecure
-
- # Add array2 to authorization
- karavictl storage create \
- --type ${Array2Type} \
- --endpoint ${Array2Endpoint} \
- --system-id ${Array2SystemID} \
- --user ${Array2User} \
- --password ${Array2Password} \
- --array-insecure
-
- echo === Creating Tenant ===
- karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}"
-
- echo === Creating Role ===
- karavictl role create \
- --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \
- --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota}
-
- echo === === Binding Role ===
- karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationProxyHost}"
- ```
-
-### Generate a Token
-
-After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
-
->__Note__:
-> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
-
- ```
- echo === Generating token ===
- karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml
-
- echo === Copy token to Driver Host ===
- sshpass -p ${DriverHostPassword} scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml
- ```
-
-### Copy the karavictl Binary to the Kubernetes Master Node
-
-The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node for Kubernetes tenant admins so the Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization.
-
-```
-sshpass -p ${DriverHostPassword} scp bin/karavictl root@{DriverHostVMIP}:/tmp/karavictl
-```
+The first part of CSM for Authorization deployment is to configure the proxy server. This is controlled by the Storage Administrator.
->__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
+Please follow the steps outlined in the [proxy server](../../configuration/proxy-server) configuration.
## Configuring a Dell CSI Driver with CSM for Authorization
-The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-
-### Configuring a Dell CSI Driver
-
-Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
-
-1. Create the secret token in the namespace of the driver.
-
- ```console
- # It is assumed that array type powermax has the namespace "powermax", powerflex has the namepace "vxflexos", and powerscale has the namespace "isilon".
- kubectl apply -f /tmp/token.yaml -n powermax
- kubectl apply -f /tmp/token.yaml -n vxflexos
- kubectl apply -f /tmp/token.yaml -n isilon
- ```
-
-2. Edit the following parameters in samples/secret/karavi-authorization-config.json file in [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. In an instance where multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
- | password | Password for connecting to to the backend storage array. This parameter is ignored. | No | - |
- | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
- | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
- | systemID | System ID of the backend storage array. | Yes | " " |
- | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
- | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
-
-
-Create the karavi-authorization-config secret using the following command:
-
-`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
-
->__Note__:
-> - Create the driver secret as you would normally except update/add the connection information for communicating with the sidecar instead of the backend storage array and scrub the username and password
-> - For PowerScale, the *systemID* will be the *clusterName* of the array.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-3. Create the proxy-server-root-certificate secret.
-
- If running in *insecure* mode, create the secret with empty data:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
-
- Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
-
-
->__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers.
-#### PowerFlex
-
-Please refer to step 5 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in samples/config.yaml file to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Create vxflexos-config secret using the following command:
-
- `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-Please refer to step 9 in the [installation steps for PowerFlex](../../../csidriver/installation/helm/powerflex) to edit the parameters in *myvalues.yaml* file to communicate with the sidecar.
-
-3. Enable CSM for Authorization and provide *proxyHost* address
-
-4. Install the CSI PowerFlex driver
-#### PowerMax
-
-Please refer to step 7 in the [installation steps for PowerMax](../../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-3. Install the CSI PowerMax driver
-
-#### PowerScale
-
-Please refer to step 5 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json
-
-*Notes:*
-> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-Please refer to step 6 in the [installation steps for PowerScale](../../../csidriver/installation/helm/isilon) to edit the parameters in samples/secret/secret.yaml file to communicate with the sidecar.
-
-3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
->__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*.
+The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant administrator.
-4. Create the isilon-creds secret using the following command:
+Please follow the steps outlined in [PowerFlex](../../configuration/powerflex), [PowerMax](../../configuration/powermax), or [PowerScale](../../configuration/powerscale) to configure the CSI Driver to work with the Authorization sidecar.
- `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-5. Install the CSI PowerScale driver
## Updating CSM for Authorization Proxy Server Configuration
CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
@@ -350,9 +166,9 @@ Copy the new, encoded data and edit the `karavi-config-secret` with the new data
Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
->__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`
+>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command, as shown below. The `--insecure` flag is required if certificates were not provided in `$HOME/.karavi/config.json`.
-`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' | kubectl -n $namespace apply -f -`
+`karavictl generate token --tenant $TenantName --insecure --addr grpc.DNS-hostname:443 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' | kubectl -n $namespace apply -f -`
## CSM for Authorization Proxy Server Dynamic Configuration Settings
diff --git a/content/v1/authorization/release/_index.md b/content/v1/authorization/release/_index.md
index 4352059fbe..16ff0da3bb 100644
--- a/content/v1/authorization/release/_index.md
+++ b/content/v1/authorization/release/_index.md
@@ -6,22 +6,13 @@ Description: >
Dell Container Storage Modules (CSM) release notes for authorization
---
-## Release Notes - CSM Authorization 1.4.0
+## Release Notes - CSM Authorization 1.5.1
### New Features/Changes
-- CSM 1.4 Release specific changes. ([#350](https://github.com/dell/csm/issues/350))
-- CSM Authorization insecure related entities are renamed to skipCertificateValidation. ([#368](https://github.com/dell/csm/issues/368))
+- Show volumes associated with the tenant from the k8s server. ([#408](https://github.com/dell/csm/issues/408))
+- CSM 1.5.1 release specific changes. ([#582](https://github.com/dell/csm/issues/582))
-### Bugs
+### Bugs
-- PowerScale volumes unable to be created with Helm deployment of CSM Authorization. ([#419](https://github.com/dell/csm/issues/419))
-- Authorization CLI documentation does not mention --array-insecure flag when creating or updating storage systems. ([#416](https://github.com/dell/csm/issues/416))
-- Authorization: Add documentation for backing up and restoring redis data. ([#410](https://github.com/dell/csm/issues/410))
-- CSM Authorization doesn't recognize storage with capital letters. ([#398](https://github.com/dell/csm/issues/398))
-- Update Authorization documentation with supported versions of k3s-selinux and container-selinux packages. ([#393](https://github.com/dell/csm/issues/393))
-- Using Authorization without dependency on jq. ([#390](https://github.com/dell/csm/issues/390))
-- Authorization Documentation Improvement. ([#384](https://github.com/dell/csm/issues/384))
-- Unit test failing for csm-authorization. ([#382](https://github.com/dell/csm/issues/382))
-- Karavictl has incorrect permissions after download. ([#360](https://github.com/dell/csm/issues/360))
-- Helm deployment of Authorization denies a valid request path from csi-powerflex. ([#353](https://github.com/dell/csm/issues/353))
\ No newline at end of file
+- CSM Authorization installation fails due to a PATH not looking in /usr/local/bin. ([#580](https://github.com/dell/csm/issues/580))
diff --git a/content/v1/authorization/troubleshooting.md b/content/v1/authorization/troubleshooting.md
index 4792dc36ac..24f50402b4 100644
--- a/content/v1/authorization/troubleshooting.md
+++ b/content/v1/authorization/troubleshooting.md
@@ -7,6 +7,7 @@ Description: >
---
## RPM Deployment
+- [The Failure of Building an Authorization RPM](#the-failure-of-building-an-authorization-rpm)
- [Running `karavictl tenant` commands result in an HTTP 504 error](#running-karavictl-tenant-commands-result-in-an-http-504-error)
- [Installation fails to install policies](#installation-fails-to-install-policies)
- [After installation, the create-pvc Pod is in an Error state](#after-installation-the-create-pvc-pod-is-in-an-error-state)
@@ -16,6 +17,27 @@ Description: >
---
+### The Failure of Building an Authorization RPM
+This error occurs when running `make rpm` without the proper permissions or with an incorrect path to the Authorization repository.
+
+```
+Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/root/karavi-authorization/bin/deploy" to rootfs at "/home/builder/rpm/deploy": mount /root/karavi-authorization/bin/deploy:/home/builder/rpm/deploy (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.ERRO[0001] error waiting for container: context canceled
+```
+
+__Resolution__
+
+1. Ensure the cloned repository is in its own folder rather than directly under the root or home directory, for example:
+
+```
+/root/myrepos/karavi-authorization
+```
+
+2. Grant the appropriate permissions on the RPM folder (this is where the Authorization RPM is placed after it is built):
+
+```
+chmod o+rwx deploy/rpm
+```
+
### Retrieve CSM Authorization Server Logs
To retrieve logs from services on the CSM Authorization Server, run the following command (e.g proxy-server logs):
@@ -53,15 +75,7 @@ error: failed to install policies (see /tmp/policy-install-for-karavi3163047435)
__Resolution__
-View the contents /tmp/policy-install-for-karavi* file listed in the error message. If there is a Permission denied error while running the policy-install.sh script, manually run the script to install policies.
-
-```
-$ cat /tmp/policy-install-for-karavi3163047435
-
-# find the location of the policy-install.sh script located in the file and manually run the script
-
-$ /tmp/karavi-installer-2908017483/policy-install.sh
-```
+This issue should only occur with older versions of CSM Authorization. If your system is encountering this issue, upgrade to version 1.5.0 or above.
### After installation, the create-pvc Pod is in an Error state
If SELinux is enabled, the create-pvc Pod may be in an Error state:
@@ -164,4 +178,4 @@ kubectl -n rollout restart deploy/proxy-server
```
kubectl -n rollout restart deploy/vxflexos-controller
kubectl -n rollout restart daemonSet/vxflexos-node
-```
+```
\ No newline at end of file
diff --git a/content/v1/authorization/uninstallation.md b/content/v1/authorization/uninstallation.md
index fcbcb37aa2..65525879eb 100644
--- a/content/v1/authorization/uninstallation.md
+++ b/content/v1/authorization/uninstallation.md
@@ -10,7 +10,13 @@ This section outlines the uninstallation steps for Container Storage Modules (CS
## Uninstalling the RPM
-To uninstall the rpm package on the system, run the below command:
+If SELinux is enabled, the K3s SELinux package must be uninstalled before the rpm package. To uninstall the K3s SELinux package, run:
+
+```
+rpm -e k3s-selinux
+```
+
+To uninstall the CSM Authorization rpm package on the system, run:
```
rpm -e
diff --git a/content/v1/authorization/upgrade.md b/content/v1/authorization/upgrade.md
index 4c31e3a926..8be889ac83 100644
--- a/content/v1/authorization/upgrade.md
+++ b/content/v1/authorization/upgrade.md
@@ -14,10 +14,10 @@ This section outlines the upgrade steps for Container Storage Modules (CSM) for
Obtain the latest single binary installer RPM by following one of our two options [here](../deployment/#single-binary-installer).
-To update the rpm package on the system, run the below command:
+To update the rpm package on the system, run the below command from within the extracted folder:
```
-rpm -Uvh karavi-authorization-.x86_64.rpm --nopreun --nopostun
+sh install_karavi_auth.sh --upgrade
```
To verify that the new version of the rpm is installed and K3s has been updated, run the below commands:
diff --git a/content/v1/csidriver/Architecture_Diagram.png b/content/v1/csidriver/Architecture_Diagram.png
index a2496f9ce4..05454d6919 100644
Binary files a/content/v1/csidriver/Architecture_Diagram.png and b/content/v1/csidriver/Architecture_Diagram.png differ
diff --git a/content/v1/csidriver/_index.md b/content/v1/csidriver/_index.md
index 774e3e0762..8f3b093ee9 100644
--- a/content/v1/csidriver/_index.md
+++ b/content/v1/csidriver/_index.md
@@ -16,43 +16,44 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
{{
}}
diff --git a/content/v1/csidriver/features/powerflex.md b/content/v1/csidriver/features/powerflex.md
index f39abd8d26..cba623fc29 100644
--- a/content/v1/csidriver/features/powerflex.md
+++ b/content/v1/csidriver/features/powerflex.md
@@ -377,7 +377,7 @@ For configuring Controller HA on the Dell CSI Operator, please refer to the [Del
## SDC Deployment
-The CSI PowerFlex driver version 1.3 and later support the automatic deployment of the PowerFlex SDC on Kubernetes nodes which run the node portion of the CSI driver. The deployment of the SDC kernel module occurs on these nodes with OS platforms which support automatic SDC deployment: currently only Red Hat CoreOS (RHCOS). On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps below. Refer https://hub.docker.com/r/dellemc/sdc for your OS versions.
+CSI PowerFlex driver versions 1.3 and later support the automatic deployment of the PowerFlex SDC on Kubernetes nodes which run the node portion of the CSI driver. The SDC kernel module is deployed on nodes whose OS platform supports automatic SDC deployment: currently Red Hat CoreOS (RHCOS), RHEL 8.x, and RHEL 7.9. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps below. Refer to https://hub.docker.com/r/dellemc/sdc for your OS versions.
- On Kubernetes nodes which run the node portion of the CSI driver, the SDC init container runs prior to the driver being installed. It installs the SDC kernel module on the nodes with OS version which supports automatic SDC deployment. If there is an SDC kernel module installed then the version is checked and updated.
- Optionally, if the SDC monitor is enabled, another container is started and runs as the monitor. Follow PowerFlex SDC documentation to get monitor metrics.
@@ -631,3 +631,34 @@ Events:
Warning VolumeConditionAbnormal 35s (x9 over 12m) kubelet Volume vol4: volPath: /var/.../rhel-705f0dcbf1/mount is not mounted:
Warning VolumeConditionAbnormal 5s kubelet Volume vol2: Volume is not found by node driver at 2021-11-11 02:04:49
```
+
+## Set QoS Limits
+Starting in version 2.5, the CSI Driver for PowerFlex supports setting limits on the bandwidth and IOPS that one SDC generates for the specified volume. This enables the CSI driver to control the quality of service (QoS).
+In this release this is supported at the StorageClass level, so once a volume is created its QoS settings cannot be adjusted later.
+To accomplish this, two new parameters are introduced in the storage class: bandwidthLimitInKbps and iopsLimit.
+> Ensure that the proper values are enabled in your storage class yaml files. Refer to the [sample storage class yamls](https://github.com/dell/csi-powerflex/tree/main/samples/storageclass) for more details.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: vxflexos
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: csi-vxflexos.dellemc.com
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+parameters:
+  storagepool: # Insert Storage pool
+  systemID: # Insert System ID
+  bandwidthLimitInKbps: # Insert bandwidth limit in Kbps
+  iopsLimit: # Insert iops limit
+  csi.storage.k8s.io/fstype: ext4
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+  - key: csi-vxflexos.dellemc.com/ # Insert System ID
+    values:
+    - csi-vxflexos.dellemc.com
+```
+Once the volume is created, ControllerPublishVolume sets the QoS limits for the volumes mapped to the SDC.
\ No newline at end of file
diff --git a/content/v1/csidriver/features/powermax.md b/content/v1/csidriver/features/powermax.md
index 53ddcc5f1a..40dce2261e 100644
--- a/content/v1/csidriver/features/powermax.md
+++ b/content/v1/csidriver/features/powermax.md
@@ -103,7 +103,7 @@ spec:
## iSCSI CHAP
-Starting from v1.3.0, the unidirectional Challenge Handshake Authentication Protocol (CHAP) for iSCSI has been supported.
+Starting with version 1.3.0, the driver supports the unidirectional Challenge Handshake Authentication Protocol (CHAP) for iSCSI.
To enable CHAP authentication:
1. Create secret `powermax-creds` with the key `chapsecret` set to the iSCSI CHAP secret. If the secret exists, delete and re-create the secret with this newly added key.
2. Set the parameter `enableCHAP` in `my-powermax-settings.yaml` to true.
@@ -126,7 +126,7 @@ When challenged, the host initiator transmits a CHAP credential and CHAP secret
## Custom Driver Name
-Starting from version 1.3.0 driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster.
+Starting from version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster.
To use this feature, set the following values under `customDriverName` in `my-powermax-settings.yaml`.
- Value: Set this to the custom name of the driver.
@@ -253,7 +253,7 @@ For additional information, see the website: [Kubernetes](https://kubernetes.io/
## CSI PowerMax Reverse Proxy
-To get the maximum performance out of the CSI driver for PowerMax and Unisphere for PowerMax REST APIs, starting with v1.4 of the driver, you can deploy the optional CSI PowerMax Reverse Proxy application.
+The CSI PowerMax Reverse Proxy application is deployed along with the driver to get the maximum performance out of the CSI driver for PowerMax and the Unisphere for PowerMax REST APIs.
CSI PowerMax Reverse Proxy is a (go) HTTPS server that acts as a reverse proxy for the Unisphere for PowerMax RESTAPI interface. Any RESTAPI request sent from the driver to the reverse proxy is forwarded to the Unisphere server and the response is routed back to the driver.
@@ -287,9 +287,9 @@ key=tls.key
### Using Helm installer
-In the `my-powermax-settings.yaml` file, the csireverseproxy section can be used to deploy and configure the CSI PowerMax Reverse Proxy.
+In the `my-powermax-settings.yaml` file, the csireverseproxy section can be used to configure the CSI PowerMax Reverse Proxy.
-The new Helm chart is configured as a sub chart for the CSI PowerMax helm chart. If it is enabled (using the `enabled` parameter in the csireverseproxy section of the `my-powermax-settings.yaml` file), the install script automatically installs the CSI PowerMax Reverse Proxy and configures the CSI PowerMax driver to use this service.
+The new Helm chart is configured as a sub chart for the CSI PowerMax helm chart. The install script automatically installs the CSI PowerMax Reverse Proxy and configures the CSI PowerMax driver to use this service.
### Using Dell CSI Operator
@@ -565,3 +565,44 @@ spec:
When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume.
To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes).
+
+## Support for auto RDM for vSphere over FC
+
+CSI Driver for Dell PowerMax 2.5.0 and above supports auto RDM for vSphere over FC.
+
+This feature supports volume provisioning on Kubernetes clusters running on vSphere (VMware hypervisor) via the RDM mechanism. It enables users to use the PowerMax CSI driver with VMs on the vSphere hypervisor, with the same features and functionality as on bare-metal servers, when only FC ports are available on the PowerMax storage.
+
+It is supported only on newly installed clusters that are deployed exclusively in a virtualized vSphere environment. Hybrid topologies such as iSCSI or FC (in pass-through) are not supported.
+
+To use this feature, set `vSphere.enabled` to true:
+
+```
+# VMware/vSphere virtualization support
+# set enabled to true, if you want to enable VMware virtualized environment support via RDM
+# Allowed Values:
+# "true" - vSphere volumes are enabled
+# "false" - vSphere volumes are disabled
+# Default value: "false"
+vSphere:
+ enabled: false
+ # fcPortGroup: an existing portGroup that driver will use for vSphere
+ # recommended format: csi-x-VC-PG, x can be anything of user choice
+ fcPortGroup: "csi-vsphere-VC-PG"
+ # fcHostGroup: an existing host(initiator group) that driver will use for vSphere
+ # this hostGroup should contain initiators from all the ESXs/ESXi host
+ # where the cluster is deployed
+ # recommended format: csi-x-VC-HG, x can be anything of user choice
+ fcHostGroup: "csi-vsphere-VC-HG"
+ # vCenterHost: URL/endpoint of the vCenter where all the ESX are present
+ vCenterHost: "00.000.000.01"
+ # vCenterUserName: username from the vCenter credentials
+ vCenterUserName: "user"
+ # vCenterPassword: password from the vCenter credentials
+ vCenterPassword: "pwd"
+
+```
+
+>Note: Replication is not supported with this feature.
+>The limitations of RDM can be found [here](https://configmax.esp.vmware.com/home).
+>The supported number of RDM volumes per VM is 60, as per these limitations.
+>RDMs should not be added/removed manually from vCenter on any of the cluster VMs.
diff --git a/content/v1/csidriver/features/powerstore.md b/content/v1/csidriver/features/powerstore.md
index df8ab6544e..82a5fe4751 100644
--- a/content/v1/csidriver/features/powerstore.md
+++ b/content/v1/csidriver/features/powerstore.md
@@ -614,7 +614,7 @@ The user will be able to install the driver and able to create pods.
## PV/PVC Metrics
-CSI Driver for Dell Powerstore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set controller.healthMonitor.enabled and node.healthMonitor.enabled to true. To change the monitor interval, set controller.healthMonitor.volumeHealthMonitorInterval parameter.
+CSI Driver for Dell PowerStore 2.1.0 and above supports volume health monitoring. To enable Volume Health Monitoring from the node side, the alpha feature gate CSIVolumeHealth needs to be enabled. To use this feature, set `controller.healthMonitor.enabled` and `node.healthMonitor.enabled` to true. To change the monitor interval, set the `controller.healthMonitor.interval` parameter, as shown in the sketch below.
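+
+A minimal sketch of the corresponding Helm values (key names follow the parameters above; the values are examples, adjust them for your environment):
+
+```yaml
+controller:
+  healthMonitor:
+    enabled: true
+    interval: 60s
+node:
+  healthMonitor:
+    enabled: true
+```
+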
## Single Pod Access Mode for PersistentVolumes
@@ -717,7 +717,7 @@ spec:
>Note: Default description value is `pvcName-pvcNamespace`.
-The following is the list of all the attribtues supported by PowerStore CSI driver:
+This is the list of all the attributes supported by PowerStore CSI driver:
| Block Volume | NFS Volume |
| --- | --- |
@@ -730,3 +730,17 @@ The following is the list of all the attribtues supported by PowerStore CSI driv
>Make sure that the attributes specified are supported by the version of PowerStore array used.
>Configurable Volume Attributes feature is supported with Helm.
+
+## Storage Capacity Tracking
+CSI PowerStore driver version 2.5.0 and above supports Storage Capacity Tracking.
+
+This feature helps the scheduler to make more informed choices about where to start pods which depend on unbound volumes with late binding (aka "wait for first consumer"). Pods will be scheduled on a node (satisfying the topology constraints) only if the requested capacity is available on the storage array.
+If such a node is not available, the pods stay in Pending state. This means they are not scheduled.
+
+Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.
+
+The attribute `storageCapacity.enabled` in `my-powerstore-settings.yaml` can be used to enable/disable the feature during driver installation.
+To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. If the driver is installed via the operator, this interval can be configured in the sample files provided [here](https://github.com/dell/dell-csi-operator/tree/master/samples) by editing the `capacity-poll-interval` argument in the `provisioner` sidecar.
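+
+A minimal sketch of the relevant settings in `my-powerstore-settings.yaml` (the poll interval value is an example, not a recommendation):
+
+```yaml
+storageCapacity:
+  enabled: true
+  pollInterval: 5m
+```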
+
+**Note:**
+>This feature requires Kubernetes v1.24 or above and is automatically disabled on lower versions of Kubernetes.
\ No newline at end of file
diff --git a/content/v1/csidriver/features/unity.md b/content/v1/csidriver/features/unity.md
index 4cac022944..7681fb2548 100644
--- a/content/v1/csidriver/features/unity.md
+++ b/content/v1/csidriver/features/unity.md
@@ -507,7 +507,7 @@ kubectl edit configmap -n unity unity-config-params
## Tenancy support for Unity XT NFS
-The CSI Unity XT driver version v2.1.0 (and later versions) supports the Tenancy feature of Unity XT such that the user will be able to associate specific worker nodes (in the cluster) and NFS storage volumes with Tenant.
+The CSI Unity XT driver supports the Tenancy feature of Unity XT, which allows the user to associate specific worker nodes (in the cluster) and NFS storage volumes with a Tenant.
Prerequisites (to be manually created in Unity XT Array) before the driver installation:
* Create Tenants
diff --git a/content/v1/csidriver/installation/helm/isilon.md b/content/v1/csidriver/installation/helm/isilon.md
index 3488f66182..fc1f9975bf 100644
--- a/content/v1/csidriver/installation/helm/isilon.md
+++ b/content/v1/csidriver/installation/helm/isilon.md
@@ -22,6 +22,7 @@ The following are requirements to be met before installing the CSI Driver for De
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
- Mount propagation is enabled on container runtime that is being used
+- `nfs-utils` package must be installed on nodes that will mount volumes
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first
@@ -47,27 +48,43 @@ controller:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
-## Volume Health Monitoring
+#### Installation example
+You can install CRDs and the default snapshot controller by running these commands:
+```bash
+git clone https://github.com/kubernetes-csi/external-snapshotter/
+cd ./external-snapshotter
+git checkout release-
+kubectl kustomize client/config/crd | kubectl create -f -
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
+```
+
+*NOTE:*
+- It is recommended to use 6.1.x version of snapshotter/snapshot-controller.
+
+### (Optional) Volume Health Monitoring
Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via helm.
+
+If enabled, capacity metrics (used & free capacity, used & free inodes) for PowerScale PVs will be exposed in the Kubernetes metrics API.
+
To enable this feature, add the below block to the driver manifest before installing the driver. This ensures that the external health monitor sidecar is installed. To get the volume health state, the value under controller should be set to true as seen below. To get the volume stats, the value under node should be set to true.
- ```yaml
+```yaml
controller:
healthMonitor:
# enabled: Enable/Disable health monitor of CSI volumes
@@ -89,30 +106,16 @@ node:
# false: disable checking of health condition of CSI volumes
# Default value: None
enabled: false
- ```
-
-#### Installation example
-
-You can install CRDs and the default snapshot controller by running the following commands:
-```bash
-git clone https://github.com/kubernetes-csi/external-snapshotter/
-cd ./external-snapshotter
-git checkout release-
-kubectl kustomize client/config/crd | kubectl create -f -
-kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
```
-*NOTE:*
-- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
-
### (Optional) Replication feature Requirements
-
Applicable only if you decided to enable the Replication feature in `values.yaml`
```yaml
replication:
enabled: true
```
+
#### Replication CRD's
The CRDs for replication can be obtained and installed from the csm-replication project on Github. Use `csm-replication/deploy/replicationcrds.all.yaml` located in the csm-replication git repo for the installation.
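+
+For example, a hedged sketch assuming the csm-replication repository has been cloned locally:
+
+```bash
+kubectl create -f csm-replication/deploy/replicationcrds.all.yaml
+```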
@@ -122,7 +125,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy *helm/csi-isilon/values.yaml* into a new location with a name such as *my-isilon-settings.yaml*, to customize settings for installation.
@@ -132,6 +135,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
+| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
| logLevel | CSI driver log level | No | "debug" |
| certSecretCount | Defines the number of certificate secrets, which the user is going to create for SSL authentication. (isilon-cert-0..isilon-cert-(n-1)); Minimum value should be 1.| Yes | 1 |
| [allowedNetworks](../../../features/powerscale/#support-custom-networks-for-nfs-io-traffic) | Defines the list of networks that can be used for NFS I/O traffic, CIDR format must be used. | No | [ ] |
@@ -168,6 +172,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| isiAccessZone | Define the name of the access zone a volume can be created in. If storageclass is missing with AccessZone parameter, then value of isiAccessZone is used for the same. | No | System |
| enableQuota | Indicates whether the provisioner should attempt to set (later unset) quota on a newly provisioned volume. This requires SmartQuotas to be enabled.| No | true |
| isiPath | Define the base path for the volumes to be created on PowerScale cluster. This value acts as a default value for isiPath, if not specified for a cluster config in secret| No | /ifs/data/csi |
+ | ignoreUnresolvableHosts | Allows adding a new host to an existing export list even if any of the existing hosts in the same export are unresolvable/no longer exist. | No | false |
| noProbeOnStart | Define whether the controller/node plugin should probe all the PowerScale clusters during driver initialization | No | false |
| autoProbe | Specify if automatically probe the PowerScale cluster if not done already during CSI calls | No | true |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerScale. | - | - |
@@ -187,6 +192,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
- ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state till new nodes are available for scheduling. The installer exits with a WARNING on the same.
- Whenever the *certSecretCount* parameter changes in *my-isilon-setting.yaml*, the user needs to reinstall the driver.
- In order to enable authorization, there should be an authorization proxy server already installed.
+ - If you are using a custom image, check the *version* and *driverRepository* fields in *my-isilon-setting.yaml* to make sure that they are pointing to the correct image repository and driver version. These two fields are spliced together to form the image name, as shown here: `<driverRepository>/csi-isilon:v<version>` (see the sketch after this list).
+
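+A minimal sketch of these two fields in *my-isilon-settings.yaml* (the values shown are assumptions; use your own registry and driver release):
+
+```yaml
+# resulting image name: dellemc/csi-isilon:v2.5.0
+driverRepository: dellemc
+version: "2.5.0"
+```
+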
6. Edit following parameters in samples/secret/secret.yaml file and update/add connection/authentication information for one or more PowerScale clusters.
@@ -199,22 +206,24 @@ CRDs should be configured during replication prepare stage with repctl as descri
| isDefault | Indicates if this is a default cluster (would be used by storage classes without ClusterName parameter). Only one of the cluster config should be marked as default. | No | false |
| ***Optional parameters*** | Following parameters are Optional. If specified will override default values from values.yaml. |
| skipCertificateValidation | Specify whether the PowerScale OneFS API server's certificate chain and hostname must be verified. | No | default value from values.yaml |
+ | ignoreUnresolvableHosts | Allows adding a new host to an existing export list even if any of the existing hosts in the same export are unresolvable/no longer exist. | No | default value from values.yaml |
| endpointPort | Specify the HTTPs port number of the PowerScale OneFS API server | No | default value from values.yaml |
| isiPath | The base path for the volumes to be created on PowerScale cluster. Note: IsiPath parameter in storageclass, if present will override this attribute. | No | default value from values.yaml |
| mountEndpoint | Endpoint of the PowerScale OneFS API server, for example, 10.0.0.1. This must be specified if [CSM-Authorization](https://github.com/dell/karavi-authorization) is enabled. | No | - |
+### User privileges
The username specified in *secret.yaml* must be from the authentication providers of PowerScale. The user must have enough privileges to perform the actions. The suggested privileges are as follows:
-
-
- | Privilege | Type |
- | --------- | ----- |
- | ISI_PRIV_LOGIN_PAPI | Read Only |
- | ISI_PRIV_NFS | Read Write |
- | ISI_PRIV_QUOTA | Read Write |
- | ISI_PRIV_SNAPSHOT | Read Write |
- | ISI_PRIV_IFS_RESTORE | Read Only |
- | ISI_PRIV_NS_IFS_ACCESS | Read Only |
- | ISI_PRIV_IFS_BACKUP | Read Only |
+
+ | Privilege | Type |
+ | ---------------------- | ---------- |
+ | ISI_PRIV_LOGIN_PAPI | Read Only |
+ | ISI_PRIV_NFS | Read Write |
+ | ISI_PRIV_QUOTA | Read Write |
+ | ISI_PRIV_SNAPSHOT | Read Write |
+ | ISI_PRIV_IFS_RESTORE | Read Only |
+ | ISI_PRIV_NS_IFS_ACCESS | Read Only |
+ | ISI_PRIV_IFS_BACKUP | Read Only |
+ | ISI_PRIV_SYNCIQ | Read Write |
Create isilon-creds secret using the following command:
`kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
diff --git a/content/v1/csidriver/installation/helm/powerflex.md b/content/v1/csidriver/installation/helm/powerflex.md
index af80f767db..7d219de6b2 100644
--- a/content/v1/csidriver/installation/helm/powerflex.md
+++ b/content/v1/csidriver/installation/helm/powerflex.md
@@ -47,11 +47,13 @@ Verify that zero padding is enabled on the PowerFlex storage pools that will be
### Install PowerFlex Storage Data Client
The CSI Driver for PowerFlex requires you to have installed the PowerFlex Storage Data Client (SDC) on all Kubernetes nodes which run the node portion of the CSI driver.
-SDC could be installed automatically by CSI driver install on Kubernetes nodes with OS platform which support automatic SDC deployment;
-currently only Red Hat CoreOS (RHCOS).
-On Kubernetes nodes with OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
+SDC can be installed automatically by the CSI driver install on Kubernetes nodes with an OS platform that supports automatic SDC deployment: currently Red Hat CoreOS (RHCOS), RHEL 7.9, and RHEL 8.x. On Kubernetes nodes with an OS version not supported by automatic install, you must perform the Manual SDC Deployment steps [below](#manual-sdc-deployment).
Refer to https://hub.docker.com/r/dellemc/sdc for supported OS versions.
+*NOTE:* To install the CSI driver for PowerFlex with automated SDC deployment, the below two packages are required on the worker nodes (an example install command follows the list).
+1. libaio
+2. numactl-libs
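+
+A hedged sketch for RHEL-family worker nodes (the package manager may differ on your distribution):
+
+```bash
+yum install -y libaio numactl-libs
+```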
+
**Optional:** For a typical install, you will pull SDC kernel modules from the Dell FTP site, which is set up by default. Some users might want to mirror this repository to a local location. The [PowerFlex KB article](https://www.dell.com/support/kbdoc/en-us/000184206/how-to-use-a-private-repository-for) has instructions on how to do this.
#### Manual SDC Deployment
@@ -78,14 +80,14 @@ controller:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -104,13 +106,13 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- When using Kubernetes it is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+- When using Kubernetes it is recommended to use 6.1.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
## Install the Driver
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
+1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerflex.git` to clone the git repository.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace vxflexos` to create a new one.
@@ -124,7 +126,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
| password | Password for accessing PowerFlex system. If authorization is enabled, password will be ignored. | true | - |
| systemID | System name/ID of PowerFlex system. | true | - |
| allSystemNames | List of previous names of powerflex array if used for PV create | false | - |
- | endpoint | REST API gateway HTTPS endpoint for PowerFlex system. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | true | - |
+ | endpoint | REST API gateway HTTPS endpoint/PowerFlex Manager public IP for PowerFlex system. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | true | - |
| skipCertificateValidation | Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface. | true | true |
| isDefault | An array having isDefault=true is for backward compatibility. This parameter should occur once in the list. | false | false |
| mdm | mdm defines the MDM(s) that SDC should register with on start. This should be a list of MDM IP addresses or hostnames separated by comma. | true | - |
@@ -158,7 +160,7 @@ Use the below command to replace or update the secret:
- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using a complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check in the helm/csi-vxflexos/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.25.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
@@ -174,7 +176,7 @@ Use the below command to replace or update the secret:
| Parameter | Description | Required | Default |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- |
-| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.0.0 |
+| version | Set to verify the values file version matches driver version and used to pull the image as part of the image name. | Yes | 2.5.0 |
| driverRepository | Set to give the repository containing the driver image (used as part of the image name). | Yes | dellemc |
| powerflexSdc | Set to give the location of the SDC image used if automatic SDC deployment is being utilized. | No | dellemc/sdc:3.6 |
| certSecretCount | Represents the number of certificate secrets, which the user is going to create for SSL authentication. | No | 0 |
@@ -196,7 +198,7 @@ Use the below command to replace or update the secret:
| tolerations | Defines tolerations that would be applied to controller deployment. Leave as blank to install the controller on worker nodes only. If deploying on master nodes is desired, uncomment out this section. | Yes | " " |
| **healthMonitor** | This section configures the optional deployment of the external health monitor sidecar, for controller side volume health monitoring. | - | - |
| enabled | Enable/Disable deployment of external health monitor sidecar. | No | false |
-| volumeHealthMonitorInterval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h)| No | 60s |
+| interval | Interval of monitoring volume health condition. Allowed values: Number followed by unit (s,m,h)| No | 60s |
| **node** | This section allows the configuration of node-specific parameters. | - | - |
| healthMonitor.enabled | Enable/Disable health monitor of CSI volumes- volume usage, volume condition | No | false |
| nodeSelector | Defines what nodes would be selected for pods of node daemonset. Leave as blank to use all nodes. | Yes | " " |
diff --git a/content/v1/csidriver/installation/helm/powermax.md b/content/v1/csidriver/installation/helm/powermax.md
index 383eac559b..3b6dd65f86 100644
--- a/content/v1/csidriver/installation/helm/powermax.md
+++ b/content/v1/csidriver/installation/helm/powermax.md
@@ -11,10 +11,10 @@ The controller section of the Helm chart installs the following components in a
- CSI Driver for Dell PowerMax
- Kubernetes External Provisioner, which provisions the volumes
- Kubernetes External Attacher, which attaches the volumes to the containers
-- Kubernetes External Snapshotter, which provides snapshot support
+- Kubernetes External Snapshotter, which provides snapshot support
+- CSI PowerMax ReverseProxy, which maximizes CSI driver and Unisphere performance
- Kubernetes External Resizer, which resizes the volume
- (optional) Kubernetes External health monitor, which provides volume health status
-- (optional) CSI PowerMax ReverseProxy, which maximizes CSI driver and Unisphere performance
- (optional) Dell CSI Replicator, which provides Replication capability.
The node section of the Helm chart installs the following component in a _DaemonSet_ in the specified namespace:
@@ -28,6 +28,7 @@ The following requirements must be met before installing CSI Driver for Dell Pow
- Install Helm 3
- Fibre Channel requirements
- iSCSI requirements
+- Auto RDM for vSphere over FC requirements
- Certificate validation for Unisphere REST API calls
- Mount propagation is enabled on container runtime that is being used
- Linux multipathing requirements
@@ -35,6 +36,7 @@ The following requirements must be met before installing CSI Driver for Dell Pow
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
- If using PowerPath, install the PowerPath for Linux requirements
+
### Install Helm 3
Install Helm 3 on the master node before you install CSI Driver for Dell PowerMax.
@@ -64,6 +66,20 @@ Set up the iSCSI initiators as follows:
For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+### Auto RDM for vSphere over FC requirements
+
+The CSI Driver for Dell PowerMax supports auto RDM for vSphere over FC. These requirements apply to clusters deployed on ESX/ESXi in a virtualized environment.
+
+Set up the environment as follows:
+
+- Requires VMware vCenter management software to manage all ESX/ESXis where the cluster is hosted.
+
+- Add all FC array ports zoned to the ESX/ESXis to a port group where the cluster is hosted.
+
+- Add initiators from all ESX/ESXis to a host (initiator group) where the cluster is hosted.
+
+>Note: Initiators from all ESX/ESXi should be part of a single host (initiator group) and not a host group (cascaded initiator group).
+
### Certificate validation for Unisphere REST API calls
As part of the CSI driver installation, the CSI driver requires a secret with the name _powermax-certs_ present in the namespace _powermax_. This secret contains the X509 certificates of the CA which signed the Unisphere SSL certificate in PEM format. This secret is mounted as a volume in the driver container. In earlier releases, if the install script did not find the secret, it created an empty secret with the same name. From the 1.2.0 release, the secret volume has been made optional. The install script no longer attempts to create an empty secret.
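+
+If the secret does not already exist, a hedged sketch of creating it (the certificate file name is an assumption, and the expected secret layout should be checked against the driver documentation):
+
+```bash
+kubectl create secret generic powermax-certs --namespace powermax --from-file=ca_cert.pem
+```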
@@ -125,7 +141,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots.
@@ -133,7 +149,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -152,7 +168,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.1.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
### (Optional) Replication feature Requirements
@@ -173,7 +189,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
@@ -183,7 +199,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
```
where *myusername* and *mypassword* are credentials for a user with PowerMax privileges.
4. Create the secret by running `kubectl create -f samples/secret/secret.yaml`.
-5. If you are going to install the new CSI PowerMax ReverseProxy service, create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver.
+5. Create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver.
6. Copy the default values.yaml file `cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml`
7. Ensure the unisphere have 10.0 REST endpoint support by clicking on Unisphere -> Help (?) -> About in Unisphere for PowerMax GUI.
8. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml`
@@ -195,10 +211,10 @@ CRDs should be configured during replication prepare stage with repctl as descri
| storageArrays| This section refers to the list of arrays managed by the driver and Reverse Proxy in StandAlone mode.| - | - |
| storageArrayId | This refers to PowerMax Symmetrix ID.| Yes | 000000000001|
| endpoint | This refers to the URL of the Unisphere server managing _storageArrayId_. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes if Reverse Proxy mode is _StandAlone_ | https://primary-1.unisphe.re:8443 |
-| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| No | https://backup-1.unisphe.re:8443 |
+| backupEndpoint | This refers to the URL of the backup Unisphere server managing _storageArrayId_, if Reverse Proxy is installed in _StandAlone_ mode. If authorization is enabled, backupEndpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on| Yes | https://backup-1.unisphe.re:8443 |
| managementServers | This section refers to the list of configurations for Unisphere servers managing powermax arrays.| - | - |
| endpoint | This refers to the URL of the Unisphere server. If authorization is enabled, endpoint should be the HTTPS localhost endpoint that the authorization sidecar will listen on | Yes | https://primary-1.unisphe.re:8443 |
-| credentialsSecret| This refers to the user credentials for _endpoint_ | No| primary-1-secret|
+| credentialsSecret| This refers to the user credentials for _endpoint_ | Yes| primary-1-secret|
| skipCertificateValidation | This parameter should be set to false if you want to do client-side TLS verification of Unisphere for PowerMax SSL certificates.| No | "True" |
| certSecret | The name of the secret in the same namespace containing the CA certificates of the Unisphere server | Yes, if skipCertificateValidation is set to false | Empty|
| limits | This refers to various limits for Reverse Proxy | No | - |
@@ -221,7 +237,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| powerMaxDebug | Enables low level and http traffic logging between the CSI driver and Unisphere. Don't enable this unless asked to do so by the support team. | No | false |
| enableCHAP | Determine if the driver is going to configure SCSI node databases on the nodes with the CHAP credentials. If enabled, the CHAP secret must be provided in the credentials secret and set to the key "chapsecret" | No | false |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
-| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.3.1 |
+| version | Current version of the driver. Don't modify this value as this value will be used by the install script. | Yes | v2.3.0 |
| images | Defines the container images used by the driver. | - | - |
| driverRepository | Defines the registry of the container image used for the driver. | Yes | dellemc |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
@@ -240,8 +256,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
| topologyControl.enabled | Allows to enable/disable topology control to filter topology keys | No | false |
| **csireverseproxy**| This section refers to the configuration options for CSI PowerMax Reverse Proxy | - | - |
-| enabled | Boolean parameter which indicates if CSI PowerMax Reverse Proxy is going to be configured and installed. **NOTE:** If not enabled, then there is no requirement to configure any of the following values. | No | "False" |
-| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v2.1.0 |
+| image | This refers to the image of the CSI Powermax Reverse Proxy container. | Yes | dellemc/csipowermax-reverseproxy:v2.4.0 |
| tlsSecret | This refers to the TLS secret of the Reverse Proxy Server.| Yes | csirevproxy-tls-secret |
| deployAsSidecar | If set to _true_, the Reverse Proxy is installed as a sidecar to the driver's controller pod otherwise it is installed as a separate deployment.| Yes | "True" |
| port | Specify the port number that is used by the NodePort service created by the CSI PowerMax Reverse Proxy installation| Yes | 2222 |
@@ -260,6 +275,14 @@ CRDs should be configured during replication prepare stage with repctl as descri
| image | Image for dell-csi-replicator sidecar. | No | " " |
| replicationContextPrefix | enables side cars to read required information from the volume context | No | powermax |
| replicationPrefix | Determine if replication is enabled | No | replication.storage.dell.com |
+| **vSphere**| This section refers to the configuration options for VMware virtualized environment support via RDM | - | - |
+| enabled | A boolean that enables/disables VMware virtualized environment support. | No | false |
+| fcPortGroup | Existing portGroup that driver will use for vSphere. | Yes | "" |
+| fcHostGroup | Existing host(initiator group) that driver will use for vSphere. | Yes | "" |
+| vCenterHost | URL/endpoint of the vCenter where all the ESX are present | Yes | "" |
+| vCenterUserName | Username from the vCenter credentials. | Yes | "" |
+| vCenterPassword | Password from the vCenter credentials. | Yes | "" |
+
8. Install the driver using `csi-install.sh` bash script by running `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ../helm/my-powermax-settings.yaml`
9. Or you can also install the driver using standalone helm chart using the command `helm install --values my-powermax-settings.yaml --namespace powermax powermax ./csi-powermax`
@@ -292,22 +315,6 @@ Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any
## Sample values file
The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
-### CSI PowerMax driver without Proxy
-In this mode, the CSI PowerMax driver can only connect to a single `Unisphere` server. So, you just specify a list of storage arrays
-and the address of the `Unisphere` server
-
-```yaml
-global:
- defaultCredentialsSecret: powermax-creds
- storageArrays:
- - storageArrayId: "000000000001"
- - storageArrayId: "000000000002"
- managementServers:
- - endpoint: https://unisphere-address:8443
-```
-
->Note: If you provide multiple endpoints in the list of management servers, the installer will only use the first server in the list
-
### CSI PowerMax driver with Proxy in Linked mode
In this mode, the CSI PowerMax ReverseProxy acts as a `passthrough` for the RESTAPI calls and only provides limited functionality
such as rate limiting, backup Unisphere server. The CSI PowerMax driver is still responsible for the authentication with the Unisphere server.
@@ -330,13 +337,12 @@ global:
maxActiveWrite: 4
maxOutStandingRead: 50
maxOutStandingWrite: 50
- - endpoint: https://backup-unisphere:8443 #Optional
+ - endpoint: https://backup-unisphere:8443
# "csireverseproxy" refers to the subchart csireverseproxy
csireverseproxy:
# Set enabled to true if you want to use proxy
- enabled: true
- image: dellemc/csipowermax-reverseproxy:v2.3.0
+ image: dellemc/csipowermax-reverseproxy:v2.4.0
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
@@ -382,9 +388,7 @@ global:
# "csireverseproxy" refers to the subchart csireverseproxy
csireverseproxy:
- # Set enabled to true if you want to use proxy
- enabled: true
- image: dellemc/csipowermax-reverseproxy:v2.3.0
+ image: dellemc/csipowermax-reverseproxy:v2.4.0
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
diff --git a/content/v1/csidriver/installation/helm/powerstore.md b/content/v1/csidriver/installation/helm/powerstore.md
index 974db4a545..1726937351 100644
--- a/content/v1/csidriver/installation/helm/powerstore.md
+++ b/content/v1/csidriver/installation/helm/powerstore.md
@@ -102,7 +102,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
@@ -110,7 +110,7 @@ The CSI external-snapshotter sidecar is split into two controllers:
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available:
-Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller) for the installation.
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -128,7 +128,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 6.1.x version of the snapshotter/snapshot-controller.
### Volume Health Monitoring
@@ -145,12 +145,11 @@ controller:
# false: disable checking of health condition of CSI volumes
# Default value: None
enabled: false
-
- # volumeHealthMonitorInterval: Interval of monitoring volume health condition
+ # interval: Interval of monitoring volume health condition
# Allowed values: Number followed by unit (s,m,h)
# Examples: 60s, 5m, 1h
# Default value: 60s
- volumeHealthMonitorInterval: 60s
+ interval: 60s
node:
healthMonitor:
@@ -178,7 +177,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.5.1 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example; you can choose any name for the namespace,
but make sure to use the same namespace throughout the installation.
3. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
@@ -215,7 +214,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
| controller.snapshot.snapNamePrefix | Defines prefix to apply to the names of a created snapshots | No | "csisnap" |
| controller.resizer.enabled | Allows to enable/disable resizer sidecar with driver installation for volume expansion feature | No | "true" |
| controller.healthMonitor.enabled | Allows to enable/disable volume health monitor | No | false |
-| controller.healthMonitor.volumeHealthMonitorInterval | Interval of monitoring volume health condition | No | 60s |
+| controller.healthMonitor.interval | Interval of monitoring volume health condition | No | 60s |
| controller.nodeSelector | Defines what nodes would be selected for pods of controller deployment | Yes | " " |
| controller.tolerations | Defines tolerations that would be applied to controller deployment | Yes | " " |
| node.nodeNamePrefix | Defines the string added to each node that the CSI driver registers | No | "csi-node" |
@@ -228,6 +227,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
| images.driverRepository | To use an image from custom repository | No | dockerhub |
| version | To use any driver version | No | Latest driver version |
| allowAutoRoundOffFilesystemSize | Allows the controller to round off filesystem to 3Gi which is the minimum supported value | No | false |
+| storageCapacity.enabled | Enable/Disable storage capacity tracking (see the values snippet after these steps) | No | true |
+| storageCapacity.pollInterval | Configure how often the driver checks for changed capacity | No | 5m |
8. Install the driver using `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
- After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n csi-powerstore`
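For reference, a minimal sketch of how the storage capacity tracking parameters from the table above could be set in `my-powerstore-settings.yaml` (parameter names follow the table; the values shown are the documented defaults):

```yaml
# Sketch only: storage capacity tracking settings.
storageCapacity:
  enabled: true       # enable/disable storage capacity tracking
  pollInterval: 5m    # how often the driver checks for changed capacity
```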
diff --git a/content/v1/csidriver/installation/helm/unity.md b/content/v1/csidriver/installation/helm/unity.md
index 9f666f7ca5..bd46ea332a 100644
--- a/content/v1/csidriver/installation/helm/unity.md
+++ b/content/v1/csidriver/installation/helm/unity.md
@@ -88,7 +88,7 @@ Install CSI Driver for Unity XT using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.4.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.5.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -102,7 +102,7 @@ Procedure
* ArrayId corresponds to the serial number of Unity XT array.
* Unity XT Array username must have role as Storage Administrator to be able to perform CRUD operations.
* If the user is using complex K8s version like "v1.21.3-mirantis-1", use below kubeVersion check in helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.25.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
2. Copy the `helm/csi-unity/values.yaml` into a file named `myvalues.yaml` in the same directory of `csi-install.sh`, to customize settings for installation.
@@ -185,7 +185,7 @@ Procedure
| storageArrayList.endpoint | REST API gateway HTTPS endpoint Unity XT system| true | - |
| storageArrayList.arrayId | ArrayID for Unity XT system | true | - |
| storageArrayList.skipCertificateValidation | "skipCertificateValidation " determines if the driver is going to validate unisphere certs while connecting to the Unisphere REST API interface. If it is set to false, then a secret unity-certs has to be created with an X.509 certificate of CA which signed the Unisphere certificate. | true | true |
- | storageArrayList.isDefault| An array having isDefault=true or isDefaultArray=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
+ | storageArrayList.isDefault| An array having isDefault=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
Example: secret.yaml
@@ -252,14 +252,14 @@ Procedure
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster
#### Volume Snapshot CRD's
- The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
- Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+ Use [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller) for the installation.
#### Installation example
@@ -273,7 +273,7 @@ Procedure
```
**Note**:
- - It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+ - It is recommended to use the 6.1.x version of the snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
@@ -283,7 +283,7 @@ Procedure
A successful installation must display messages that look similar to the following samples:
```
------------------------------------------------------
- > Installing CSI Driver: csi-unity on 1.22
+ > Installing CSI Driver: csi-unity on 1.25
------------------------------------------------------
------------------------------------------------------
> Checking to see if CSI Driver is already installed
@@ -291,56 +291,57 @@ Procedure
------------------------------------------------------
> Verifying Kubernetes and driver configuration
------------------------------------------------------
- |- Kubernetes Version: 1.22
+ |- Kubernetes Version: 1.25
|
|- Driver: csi-unity
|
|- Verifying Kubernetes version
+ |
+ |--> Verifying minimum Kubernetes version Success
+ |
+ |--> Verifying maximum Kubernetes version Success
|
- |--> Verifying minimum Kubernetes version Success
+ |- Verifying that required namespaces have been created Success
|
- |--> Verifying maximum Kubernetes version Success
+ |- Verifying that required secrets have been created Success
|
- |- Verifying that required namespaces have been created Success
- |
- |- Verifying that required secrets have been created Success
- |
- |- Verifying that optional secrets have been created Success
- |
- |- Verifying alpha snapshot resources
- |
- |--> Verifying that alpha snapshot CRDs are not installed Success
+ |- Verifying that optional secrets have been created Success
+ |
+ |- Verifying alpha snapshot resources
+ |
+ |--> Verifying that alpha snapshot CRDs are not installed Success
|
|- Verifying sshpass installation.. |
- |- Verifying iSCSI installation
+ |- Verifying iSCSI installation
Enter the root password of 10.**.**.**:
Enter the root password of 10.**.**.**:
Success
|
- |- Verifying snapshot support
- |
- |--> Verifying that snapshot CRDs are available Success
- |
- |--> Verifying that the snapshot controller is available Success
+ |- Verifying snapshot support
+ |
+ |--> Verifying that snapshot CRDs are available Success
+ |
+ |--> Verifying that the snapshot controller is available Success
|
- |- Verifying helm version Success
+ |- Verifying helm version Success
|
- |- Verifying helm values version Success
-
+ |- Verifying helm values version Success
+
------------------------------------------------------
> Verification Complete - Success
------------------------------------------------------
|
- |- Installing Driver Success
- |
- |--> Waiting for Deployment unity-controller to be ready Success
- |
- |--> Waiting for DaemonSet unity-node to be ready Success
+ |- Installing Driver Success
+ |
+ |--> Waiting for Deployment unity-controller to be ready Success
+ |
+ |--> Waiting for DaemonSet unity-node to be ready Success
------------------------------------------------------
> Operation complete
------------------------------------------------------
```
+
Results:
At the end of the script, the unity-controller Deployment and the unity-node DaemonSet will be ready. Execute the command `kubectl get pods -n unity` to get the status of the pods and you will see the following:
diff --git a/content/v1/csidriver/installation/offline/_index.md b/content/v1/csidriver/installation/offline/_index.md
index 4d15df3b06..8707ec4051 100644
--- a/content/v1/csidriver/installation/offline/_index.md
+++ b/content/v1/csidriver/installation/offline/_index.md
@@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git
+git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git
```
```
cd dell-csi-operator/scripts
@@ -76,22 +76,26 @@ cd dell-csi-operator/scripts
*
* Pulling and saving container images
- dellemc/csi-isilon:v2.0.0
- dellemc/csi-isilon:v2.1.0
- dellemc/csipowermax-reverseproxy:v2.3.0
+ dellemc/csi-isilon:v2.3.0
+ dellemc/csi-isilon:v2.4.0
+ dellemc/csi-isilon:v2.5.0
+ dellemc/csipowermax-reverseproxy:v2.4.0
dellemc/csi-powermax:v2.3.1
dellemc/csi-powermax:v2.4.0
- dellemc/csi-powerstore:v2.0.0
- dellemc/csi-powerstore:v2.1.0
- dellemc/csi-unity:v2.0.0
- dellemc/csi-unity:v2.1.0
- localregistry:5028/csi-unity/csi-unity:20220303110841
- dellemc/csi-vxflexos:v2.0.0
- dellemc/csi-vxflexos:v2.1.0
- localregistry:5035/csi-operator/dell-csi-operator:v1.7.0
- dellemc/sdc:3.5.1.1
+ dellemc/csi-powermax:v2.5.0
+ dellemc/csi-powerstore:v2.3.0
+ dellemc/csi-powerstore:v2.4.0
+ dellemc/csi-powerstore:v2.5.0
+ dellemc/csi-unity:v2.3.0
+ dellemc/csi-unity:v2.4.0
+ dellemc/csi-unity:v2.5.0
+ dellemc/csi-vxflexos:v2.3.0
+ dellemc/csi-vxflexos:v2.4.0
+ dellemc/csi-vxflexos:v2.5.0
+ dellemc/dell-csi-operator:v1.10.0
dellemc/sdc:3.5.1.1-1
dellemc/sdc:3.6
+ dellemc/sdc:3.6.0.6
docker.io/busybox:1.32.0
...
...
@@ -113,17 +117,18 @@ cd dell-csi-operator/scripts
dell-csi-operator-bundle/
dell-csi-operator-bundle/driverconfig/
dell-csi-operator-bundle/driverconfig/config.yaml
- dell-csi-operator-bundle/driverconfig/isilon_v200_v119.json
- dell-csi-operator-bundle/driverconfig/isilon_v200_v120.json
- dell-csi-operator-bundle/driverconfig/isilon_v200_v121.json
- dell-csi-operator-bundle/driverconfig/isilon_v200_v122.json
- dell-csi-operator-bundle/driverconfig/isilon_v210_v120.json
- dell-csi-operator-bundle/driverconfig/isilon_v210_v121.json
- dell-csi-operator-bundle/driverconfig/isilon_v210_v122.json
- dell-csi-operator-bundle/driverconfig/isilon_v220_v121.json
- dell-csi-operator-bundle/driverconfig/isilon_v220_v122.json
- dell-csi-operator-bundle/driverconfig/isilon_v220_v123.json
- dell-csi-operator-bundle/driverconfig/powermax_v200_v119.json
+ dell-csi-operator-bundle/driverconfig/isilon_v230_v121.json
+ dell-csi-operator-bundle/driverconfig/isilon_v230_v122.json
+ dell-csi-operator-bundle/driverconfig/isilon_v230_v123.json
+ dell-csi-operator-bundle/driverconfig/isilon_v230_v124.json
+ dell-csi-operator-bundle/driverconfig/isilon_v240_v121.json
+ dell-csi-operator-bundle/driverconfig/isilon_v240_v122.json
+ dell-csi-operator-bundle/driverconfig/isilon_v240_v123.json
+ dell-csi-operator-bundle/driverconfig/isilon_v240_v124.json
+ dell-csi-operator-bundle/driverconfig/isilon_v250_v123.json
+ dell-csi-operator-bundle/driverconfig/isilon_v250_v124.json
+ dell-csi-operator-bundle/driverconfig/isilon_v250_v125.json
+ dell-csi-operator-bundle/driverconfig/powermax_v230_v121.json
...
...
@@ -173,47 +178,51 @@ Preparing a offline bundle for installation
5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
e20ed4c73206: Loading layer [==================================================>] 17.22MB/17.22MB
- Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
+ Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.0
d72a74c56330: Loading layer [==================================================>] 3.031MB/3.031MB
f2d2ab12e2a7: Loading layer [==================================================>] 48.08MB/48.08MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.2
+ Loaded image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.1.0
417cb9b79ade: Loading layer [==================================================>] 3.062MB/3.062MB
61fefb35ccee: Loading layer [==================================================>] 16.88MB/16.88MB
- Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
+ Loaded image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
7a5b9c0b4b14: Loading layer [==================================================>] 3.031MB/3.031MB
1555ad6e2d44: Loading layer [==================================================>] 49.86MB/49.86MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
+ Loaded image: k8s.gcr.io/sig-storage/csi-attacher:v4.0.0
2de1422d5d2d: Loading layer [==================================================>] 54.56MB/54.56MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1
+ Loaded image: k8s.gcr.io/sig-storage/csi-resizer:v1.6.0
25a1c1010608: Loading layer [==================================================>] 54.54MB/54.54MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
+ Loaded image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
07363fa84210: Loading layer [==================================================>] 3.062MB/3.062MB
5227e51ea570: Loading layer [==================================================>] 54.92MB/54.92MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0
+ Loaded image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
cfb5cbeabdb2: Loading layer [==================================================>] 55.38MB/55.38MB
- Loaded image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ Loaded image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
...
...
*
* Tagging and pushing images
- localregistry:5035/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0
- dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0
- dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0
- dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0
- dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0
- dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0
- dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0
- dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0
- dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly
- dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0
- dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0
- dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0
- dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0
- dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1
+ dellemc/dell-csi-operator:v1.10.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.10.0
+ dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
+ dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
+ dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
+ dellemc/csipowermax-reverseproxy:v2.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v2.4.0
+ dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
+ dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
+ dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
+ dellemc/csi-powerstore:v2.3.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.3.0
+ dellemc/csi-powerstore:v2.4.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.4.0
+ dellemc/csi-powerstore:v2.5.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.5.0
+ dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
+ dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
+ dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
+ dellemc/csi-vxflexos:v2.3.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.3.0
+ dellemc/csi-vxflexos:v2.4.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.4.0
+ dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
+ dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
...
...
@@ -221,22 +230,26 @@ Preparing a offline bundle for installation
*
* Preparing operator files within /root/dell-csi-operator-bundle
- changing: localregistry:5000/csi-operator/dell-csi-operator:v1.7.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.7.0
- changing: dellemc/csi-isilon:v2.0.0 -> localregistry:5000/csi-operator/csi-isilon:v2.0.0
- changing: dellemc/csi-isilon:v2.1.0 -> localregistry:5000/csi-operator/csi-isilon:v2.1.0
- changing: dellemc/csipowermax-reverseproxy:v1.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v1.4.0
- changing: dellemc/csi-powermax:v2.0.0 -> localregistry:5000/csi-operator/csi-powermax:v2.0.0
- changing: dellemc/csi-powermax:v2.1.0 -> localregistry:5000/csi-operator/csi-powermax:v2.1.0
- changing: dellemc/csi-powerstore:v2.0.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.0.0
- changing: dellemc/csi-powerstore:v2.1.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.1.0
- changing: dellemc/csi-unity:nightly -> localregistry:5000/csi-operator/csi-unity:nightly
- changing: dellemc/csi-unity:v2.0.0 -> localregistry:5000/csi-operator/csi-unity:v2.0.0
- changing: dellemc/csi-unity:v2.1.0 -> localregistry:5000/csi-operator/csi-unity:v2.1.0
- changing: dellemc/csi-vxflexos:v2.0.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.0.0
- changing: dellemc/csi-vxflexos:v2.1.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.1.0
- changing: dellemc/sdc:3.5.1.1 -> localregistry:5000/csi-operator/sdc:3.5.1.1
+ changing: dellemc/dell-csi-operator:v1.10.0 -> localregistry:5000/csi-operator/dell-csi-operator:v1.10.0
+ changing: dellemc/csi-isilon:v2.3.0 -> localregistry:5000/csi-operator/csi-isilon:v2.3.0
+ changing: dellemc/csi-isilon:v2.4.0 -> localregistry:5000/csi-operator/csi-isilon:v2.4.0
+ changing: dellemc/csi-isilon:v2.5.0 -> localregistry:5000/csi-operator/csi-isilon:v2.5.0
+ changing: dellemc/csipowermax-reverseproxy:v2.4.0 -> localregistry:5000/csi-operator/csipowermax-reverseproxy:v2.4.0
+ changing: dellemc/csi-powermax:v2.3.1 -> localregistry:5000/csi-operator/csi-powermax:v2.3.1
+ changing: dellemc/csi-powermax:v2.4.0 -> localregistry:5000/csi-operator/csi-powermax:v2.4.0
+ changing: dellemc/csi-powermax:v2.5.0 -> localregistry:5000/csi-operator/csi-powermax:v2.5.0
+ changing: dellemc/csi-powerstore:v2.3.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.3.0
+ changing: dellemc/csi-powerstore:v2.4.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.4.0
+ changing: dellemc/csi-powerstore:v2.5.0 -> localregistry:5000/csi-operator/csi-powerstore:v2.5.0
+ changing: dellemc/csi-unity:v2.3.0 -> localregistry:5000/csi-operator/csi-unity:v2.3.0
+ changing: dellemc/csi-unity:v2.4.0 -> localregistry:5000/csi-operator/csi-unity:v2.4.0
+ changing: dellemc/csi-unity:v2.5.0 -> localregistry:5000/csi-operator/csi-unity:v2.5.0
+ changing: dellemc/csi-vxflexos:v2.3.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.3.0
+ changing: dellemc/csi-vxflexos:v2.4.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.4.0
+ changing: dellemc/csi-vxflexos:v2.5.0 -> localregistry:5000/csi-operator/csi-vxflexos:v2.5.0
changing: dellemc/sdc:3.5.1.1-1 -> localregistry:5000/csi-operator/sdc:3.5.1.1-1
changing: dellemc/sdc:3.6 -> localregistry:5000/csi-operator/sdc:3.6
+ changing: dellemc/sdc:3.6.0.6 -> localregistry:5000/csi-operator/sdc:3.6.0.6
changing: docker.io/busybox:1.32.0 -> localregistry:5000/csi-operator/busybox:1.32.0
...
...
diff --git a/content/v1/csidriver/installation/operator/_index.md b/content/v1/csidriver/installation/operator/_index.md
index 65bd661ba1..ed99acf458 100644
--- a/content/v1/csidriver/installation/operator/_index.md
+++ b/content/v1/csidriver/installation/operator/_index.md
@@ -11,18 +11,16 @@ The Dell CSI Operator is a Kubernetes Operator, which can be used to install and
## Prerequisites
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.1.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.1.0/deploy/kubernetes/snapshot-controller)
*NOTE:*
-- The manifests available on GitHub install the snapshotter image:
- - [quay.io/k8scsi/csi-snapshotter:v5.0.1](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v5.0.1&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
#### Installation example
@@ -37,7 +35,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use the 6.1.x version of the snapshotter/snapshot-controller.
## Installation
@@ -50,21 +48,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa
#### Full list of CSI Drivers and versions supported by the Dell CSI Operator
| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
-| CSI PowerMax | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerMax | 2.3.0 | v2.3.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerMax | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerMax | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerFlex | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerFlex | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerScale | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI Unity XT | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerScale | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI Unity XT | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
+| CSI Unity XT | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
| CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerStore | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerStore | 2.5.0 | v2.5.0 | 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
@@ -97,11 +95,9 @@ $ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Run `bash scripts/install.sh` to install the operator.
->NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
-Any existing installations of Dell CSI Operator (v1.2.0 or later) installed using `install.sh` to the 'default' or 'dell-csi-operator' namespace can be upgraded to the new version by running `install.sh --upgrade`.
{{< imgproc non-olm-1.jpg Resize "2500x" >}}{{< /imgproc >}}
@@ -126,7 +122,7 @@ For installation of the supported drivers, a `CustomResource` has to be created
### Pre-requisites for upstream Kubernetes Clusters
On upstream Kubernetes clusters, make sure to install
* VolumeSnapshot CRDs
- * On clusters running v1.22,v1.23 & v1.24, make sure to install v1 VolumeSnapshot CRDs
+ * On clusters running v1.23, v1.24 & v1.25, make sure to install v1 VolumeSnapshot CRDs
* External Volume Snapshot Controller with the correct version
### Pre-requisites for Red Hat OpenShift Clusters
@@ -220,8 +216,8 @@ Or
{driver name}_{driver version}_ops_{OpenShift version}.yaml
For e.g.
-* samples/powermax_v220_k8s_123.yaml* <- To install CSI PowerMax driver v2.2.0 on a Kubernetes 1.23 cluster
-* samples/powermax_v220_ops_49.yaml* <- To install CSI PowerMax driver v2.2.0 on an OpenShift 4.9 cluster
+* samples/powermax_v250_k8s_125.yaml* <- To install CSI PowerMax driver v2.5.0 on a Kubernetes 1.25 cluster
+* samples/powermax_v250_ops_411.yaml* <- To install CSI PowerMax driver v2.5.0 on an OpenShift 4.11 cluster
Copy the correct sample file and edit the mandatory & any optional parameters specific to your driver installation by following the instructions [here](#modify-the-driver-specification)
>NOTE: A detailed explanation of the various mandatory and optional fields in the CustomResource is available [here](#custom-resource-specification). Please make sure to read through and understand the various fields.
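As an illustration of this workflow, the following sketch copies a sample, edits it, and creates the CustomResource; the file name `my-powermax-cr.yaml` is hypothetical, and you should pick the sample that matches your driver, version, and platform.

```bash
# Hypothetical file name; substitute the sample that matches your setup.
cp samples/powermax_v250_k8s_125.yaml my-powermax-cr.yaml
# Edit the mandatory fields (array IDs, endpoints, and so on) before creating the CR.
kubectl create -f my-powermax-cr.yaml
```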
@@ -250,9 +246,6 @@ If the driver-namespace was set to _test-powermax_, and the name of the driver i
Note: If the _state_ of the `CustomResource` is _Running_ then all the driver pods have been successfully installed. If the _state_ is _SuccessFul_, then it means the driver deployment was successful but some driver pods may not be in a _Running_ state.
Please refer to the _Troubleshooting_ section [here](../../troubleshooting/operator) if you encounter any issues during installation.
-### Changes in installation for latest CSI drivers
-If you are installing the latest versions of the CSI drivers, the driver controller will be installed as a Kubernetes `Deployment` instead of a `Statefulset`. These installations can also run multiple replicas for the driver controller pods(not supported for StatefulSets) to support High Availability for the Controller.
-
## Update CSI Drivers
The CSI Drivers installed by the Dell CSI Operator can be updated like any Kubernetes resource. This can be achieved in various ways which include –
@@ -274,7 +267,7 @@ The below notes explain some of the general items to take care of.
1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
```yaml
driver:
- configVersion: v2.4.0
+ configVersion: v2.5.0
```
2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
To enable this feature, we will have to modify the below block while upgrading the driver.To get the volume health state add
@@ -298,26 +291,26 @@ The below notes explain some of the general items to take care of.
- args:
- --volume-name-prefix=csiunity
- --default-fstype=ext4
- image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.0
+ image: k8s.gcr.io/sig-storage/csi-provisioner:v3.3.0
imagePullPolicy: IfNotPresent
name: provisioner
- args:
- --snapshot-name-prefix=csiunitysnap
- image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
+ image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.1.0
imagePullPolicy: IfNotPresent
name: snapshotter
- args:
- --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.6.0
+ image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.7.0
imagePullPolicy: IfNotPresent
name: external-health-monitor
- - image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
+ - image: k8s.gcr.io/sig-storage/csi-attacher:v4.0.0
imagePullPolicy: IfNotPresent
name: attacher
- - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
+ - image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.0
imagePullPolicy: IfNotPresent
name: registrar
- - image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
+ - image: k8s.gcr.io/sig-storage/csi-resizer:v1.6.0
imagePullPolicy: IfNotPresent
name: resizer
```
@@ -412,7 +405,7 @@ spec:
You can set the field ***replicas*** to a higher number than `1` for the latest driver versions.
Note - The `image` field should point to the correct image tag for the version of the driver you are installing.
-For e.g. - If you wish to install v1.4 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v1.4.0.000R`
+For example, if you wish to install v2.5.0 of the CSI PowerMax driver, use the image tag `dellemc/csi-powermax:v2.5.0`.
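For instance, a fragment of the CustomResource with both fields set might look like the following sketch; the replica count is illustrative and the rest of the manifest is unchanged.

```yaml
spec:
  driver:
    # More than one replica enables Controller High Availability.
    replicas: 2
    common:
      # The image tag must match the driver version being installed.
      image: dellemc/csi-powermax:v2.5.0
```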
### SideCars
Although the sidecars field in the driver specification is optional, it is **strongly** recommended to not modify any details related to sidecars provided (if present) in the sample manifests. The only exception to this is modifications requested by the documentation, for example, filling in blank IPs or other such system-specific data. Any modifications not specifically requested by the documentation should be only done after consulting with Dell support.
diff --git a/content/v1/csidriver/installation/operator/powerflex.md b/content/v1/csidriver/installation/operator/powerflex.md
index 73350f7aa5..29b9b2693b 100644
--- a/content/v1/csidriver/installation/operator/powerflex.md
+++ b/content/v1/csidriver/installation/operator/powerflex.md
@@ -43,7 +43,7 @@ Kubernetes Operators make it easy to deploy and manage the entire lifecycle of c
- Optionally, enable sdc monitor by uncommenting the section for sidecar in manifest yaml. Please note the following:
- **If using sidecar**, you will need to edit the value fields under the HOST_PID and MDM fields by filling the empty quotes with host PID and the MDM IPs.
- **If not using sidecar**, please leave this commented out -- otherwise, the empty fields will cause errors.
-##### Example CR: [config/samples/vxflex_v220_ops_48.yaml](https://github.com/dell/dell-csi-operator/blob/master/samples/vxflex_v220_ops_48.yaml)
+##### Example CR: [config/samples/vxflex_v250_ops_411.yaml](https://github.com/dell/dell-csi-operator/blob/master/samples/vxflex_v250_ops_411.yaml)
```yaml
sideCars:
# Comment the following section if you don't want to run the monitoring sidecar
@@ -95,7 +95,7 @@ For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deploymen
# System name/ID of PowerFlex system.
# Required: true
systemID: "ID1"
- # REST API gateway HTTPS endpoint for PowerFlex system.
+ # REST API gateway HTTPS endpoint/PowerFlex Manager public IP for PowerFlex system.
# Required: true
endpoint: "https://127.0.0.1"
# Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface.
@@ -161,13 +161,13 @@ metadata:
namespace: test-vxflexos
spec:
driver:
- configVersion: v2.3.0
+ configVersion: v2.5.0
replicas: 1
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
fsGroupPolicy: File
common:
- image: "dellemc/csi-vxflexos:v2.3.0"
+ image: "dellemc/csi-vxflexos:v2.5.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT
diff --git a/content/v1/csidriver/installation/operator/powermax.md b/content/v1/csidriver/installation/operator/powermax.md
index 1290b00418..2dc6f9ca74 100644
--- a/content/v1/csidriver/installation/operator/powermax.md
+++ b/content/v1/csidriver/installation/operator/powermax.md
@@ -36,6 +36,18 @@ Set up the iSCSI initiators as follows:
For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+#### Auto RDM for vSphere over FC requirements
+
+The CSI Driver for Dell PowerMax supports auto RDM for vSphere over FC. These requirements apply to clusters deployed on ESX/ESXi hosts in a virtualized environment.
+
+Set up the environment as follows:
+
+- VMware vCenter management software is required to manage all the ESX/ESXi hosts where the cluster is hosted.
+
+- Add all FC array ports zoned to the ESX/ESXi hosts where the cluster is hosted to a port group.
+
+- Add initiators from all the ESX/ESXi hosts where the cluster is hosted to a host (initiator group).
+
#### Linux multipathing requirements
CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
@@ -114,15 +126,22 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
| X_CSI_TRANSPORT_PROTOCOL | Choose which transport protocol to use (ISCSI, FC, auto or None) | Yes | auto |
| X_CSI_POWERMAX_PORTGROUPS |List of comma-separated port groups (ISCSI only). Example: "PortGroup1,PortGroup2" | No | - |
| X_CSI_MANAGED_ARRAYS | List of comma-separated array ID(s) which will be managed by the driver | Yes | - |
- | X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. Leave blank if not using reverse proxy | No | - |
+ | X_CSI_POWERMAX_PROXY_SERVICE_NAME | Name of CSI PowerMax ReverseProxy service. | Yes | powermax-reverseproxy |
| X_CSI_GRPC_MAX_THREADS | Number of concurrent grpc requests allowed per client | No | 4 |
| X_CSI_IG_MODIFY_HOSTNAME | Change any existing host names. When nodenametemplate is set, it changes the name to the specified format else it uses driver default host name format. | No | false |
| X_CSI_IG_NODENAME_TEMPLATE | Provide a template for the CSI driver to use while creating the Host/IG on the array for the nodes in the cluster. It is of the format a-b-c-%foo%-xyz where foo will be replaced by host name of each node in the cluster. | No | - |
| X_CSI_POWERMAX_DRIVER_NAME | Set custom CSI driver name. For more details on this feature see the related [documentation](../../../features/powermax/#custom-driver-name) | No | - |
| X_CSI_HEALTH_MONITOR_ENABLED | Enable/Disable health monitor of CSI volumes from Controller and Node plugin. Provides details of volume status, usage and volume condition. As a prerequisite, external-health-monitor sidecar section should be uncommented in samples which would install the sidecar | No | false |
+ | X_CSI_VSPHERE_ENABLED | Enable VMware virtualized environment support via RDM | No | false |
+ | X_CSI_VSPHERE_PORTGROUP | Existing port group that the driver will use for vSphere | Yes | "" |
+ | X_CSI_VSPHERE_HOSTGROUP | Existing host (initiator group) that the driver will use for vSphere | Yes | "" |
+ | X_CSI_VCenter_HOST | URL/endpoint of the vCenter server where all the ESXi hosts are present | Yes | "" |
+ | X_CSI_VCenter_USERNAME | Username from the vCenter credentials | Yes | "" |
+ | X_CSI_VCenter_PWD | Password from the vCenter credentials | Yes | "" |
| ***Node parameters***|
| X_CSI_POWERMAX_ISCSI_ENABLE_CHAP | Enable ISCSI CHAP authentication. For more details on this feature see the related [documentation](../../../features/powermax/#iscsi-chap) | No | false |
| X_CSI_TOPOLOGY_CONTROL_ENABLED | Enable/Disable topology control. It filters out arrays, associated transport protocol available to each node and creates topology keys based on any such user input. | No | false |
+
5. Execute the following command to create the PowerMax custom resource: `kubectl create -f `. The above command will deploy the CSI PowerMax driver.
**Note** - If the CSI driver is being installed using the OCP UI, create these two configmaps manually using the command `oc create -f `
@@ -168,11 +187,9 @@ Create a secret named powermax-certs in the namespace where the CSI PowerMax dri
### CSI PowerMax ReverseProxy
-CSI PowerMax ReverseProxy is an optional component that can be installed with the CSI PowerMax driver. For more details on this feature see the related [documentation](../../../features/powermax#csi-powermax-reverse-proxy).
-
-When you install CSI PowerMax ReverseProxy, dell-csi-operator will create a Deployment and ClusterIP service as part of the installation
+CSI PowerMax ReverseProxy is a component that is installed along with the CSI PowerMax driver. For more details on this feature, see the related [documentation](../../../features/powermax#csi-powermax-reverse-proxy).
-**Note** - To use the ReverseProxy with the CSI PowerMax driver, the ReverseProxy service should be created before you install the CSIPowerMax driver.
+A Deployment and a ClusterIP service for the ReverseProxy will be created by dell-csi-operator as part of the driver installation.
#### Pre-requisites
Create a TLS secret that holds an SSL certificate and a private key which is required by the reverse proxy server.
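For example, assuming the certificate and private key are available locally as `tls.crt` and `tls.key`, the secret could be created as shown below; the file names and namespace are placeholders, and the secret name must match the one the ReverseProxy is configured to use (`csirevproxy-tls-secret` in the samples).

```bash
# Placeholder file names and namespace; adjust to your environment.
kubectl create secret tls csirevproxy-tls-secret \
  --cert=tls.crt --key=tls.key \
  --namespace test-powermax
```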
@@ -294,8 +311,8 @@ metadata:
namespace: test-powermax
spec:
driver:
- # Config version for CSI PowerMax v2.4.0 driver
- configVersion: v2.4.0
+ # Config version for CSI PowerMax v2.5.0 driver
+ configVersion: v2.5.0
# replica: Define the number of PowerMax controller nodes
# to deploy to the Kubernetes release
# Allowed values: n, where n > 0
@@ -304,8 +321,8 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- # Image for CSI PowerMax driver v2.4.0
- image: dellemc/csi-powermax:v2.4.0
+ # Image for CSI PowerMax driver v2.5.0
+ image: dellemc/csi-powermax:v2.5.0
# imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
# Allowed values:
# Always: Always pull the image.
@@ -351,11 +368,10 @@ spec:
- name: "X_CSI_TRANSPORT_PROTOCOL"
value: ""
# X_CSI_POWERMAX_PROXY_SERVICE_NAME: Refers to the name of the proxy service in kubernetes
- # Set this to "powermax-reverseproxy" if you are installing the proxy
# Allowed values: "powermax-reverseproxy"
- # default values: ""
+ # default values: "powermax-reverseproxy"
- name: "X_CSI_POWERMAX_PROXY_SERVICE_NAME"
- value: ""
+ value: "powermax-reverseproxy"
# X_CSI_GRPC_MAX_THREADS: Defines the maximum number of concurrent grpc requests.
# Set this value to a higher number (max 50) if you are using the proxy
# Allowed values: n, where n > 4
@@ -467,7 +483,6 @@ data:
Note:
- - `dell-csi-operator` does not support the installation of CSI PowerMax ReverseProxy as a sidecar to the controller Pod. This facility is only present with `dell-csi-helm-installer`.
- `Kubelet config dir path` is not yet configurable in case of Operator based driver installation.
- Also, the snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
@@ -535,3 +550,51 @@ X_CSI_TOPOLOGY_CONTROL_ENABLED provides a way to filter topology keys on a node
>Note: Name of the configmap should always be `node-topology-config`.
+
+## Support for auto RDM for vSphere over FC
+
+This feature is introduced in CSI Driver for PowerMax version 2.5.0.
+
+### Operator based installation
+Support for the auto RDM for vSphere over FC feature is optional and, by default, this feature is disabled when the driver is installed via the operator.
+
+To enable this feature, set `X_CSI_VSPHERE_ENABLED` to `true` in the driver manifest under the controller and node sections.
+
+```
+# VMware/vSphere virtualization support
+ # set X_CSI_VSPHERE_ENABLED to true, if you want to enable VMware virtualized environment support via RDM
+ # Allowed values:
+ # "true" - vSphere volumes are enabled
+ # "false" - vSphere volumes are disabled
+ # Default value: "false"
+ - name: "X_CSI_VSPHERE_ENABLED"
+ value: "false"
+ # X_CSI_VSPHERE_PORTGROUP: An existing portGroup that driver will use for vSphere
+ # recommended format: csi-x-VC-PG, x can be anything of user choice
+ # Allowed value: valid existing port group on the array
+ # Default value: ""
+ - name: "X_CSI_VSPHERE_PORTGROUP"
+ value: ""
+ # X_CSI_VSPHERE_HOSTGROUP: An existing host(initiator group) that driver will use for vSphere
+ # this hostGroup should contain initiators from all the ESXs/ESXi host where the cluster is deployed
+ # recommended format: csi-x-VC-HG, x can be anything of user choice
+ # Allowed value: valid existing host on the array
+ # Default value: ""
+ - name: "X_CSI_VSPHERE_HOSTGROUP"
+ value: ""
+ # X_CSI_VCenter_HOST: URL/endpoint of the vCenter where all the ESX are present
+ # Allowed value: valid vCenter host endpoint
+ # Default value: ""
+ - name: "X_CSI_VCenter_HOST"
+ value: ""
+ # X_CSI_VCenter_USERNAME: username from the vCenter credentials
+ # Allowed value: valid vCenter host username
+ # Default value: ""
+ - name: "X_CSI_VCenter_USERNAME"
+ value: ""
+ # X_CSI_VCenter_PWD: password from the vCenter credentials
+ # Allowed value: valid vCenter host password
+ # Default value: ""
+ - name: "X_CSI_VCenter_PWD"
+ value: ""
+```
\ No newline at end of file
diff --git a/content/v1/csidriver/installation/operator/powerstore.md b/content/v1/csidriver/installation/operator/powerstore.md
index 78c374f19c..110fbca777 100644
--- a/content/v1/csidriver/installation/operator/powerstore.md
+++ b/content/v1/csidriver/installation/operator/powerstore.md
@@ -69,13 +69,14 @@ metadata:
namespace: test-powerstore
spec:
driver:
- configVersion: v2.3.0
+ configVersion: v2.5.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
fsGroupPolicy: ReadWriteOnceWithFSType
+ storageCapacity: true
common:
- image: "dellemc/csi-powerstore:v2.3.0"
+ image: "dellemc/csi-powerstore:v2.5.0"
imagePullPolicy: IfNotPresent
envs:
- name: X_CSI_POWERSTORE_NODE_NAME_PREFIX
@@ -85,6 +86,8 @@ spec:
sideCars:
- name: external-health-monitor
args: ["--monitor-interval=60s"]
+ - name: provisioner
+ args: ["--capacity-poll-interval=5m"]
controller:
envs:
@@ -131,6 +134,7 @@ data:
| replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, the excess pods will be in Pending state until new nodes are available for scheduling. Default is 2, which allows for controller high availability. | Yes | 2 |
| namespace | Specifies the namespace where the driver will be installed | Yes | "test-powerstore" |
| fsGroupPolicy | Defines which FS Group policy mode to be used, Supported modes `None, File and ReadWriteOnceWithFSType` | No |"ReadWriteOnceWithFSType"|
+| storageCapacity | Enable/Disable storage capacity tracking feature | No | true |
| ***Common parameters for node and controller*** |
| X_CSI_POWERSTORE_NODE_NAME_PREFIX | Prefix to add to each node registered by the CSI driver | Yes | "csi-node"
| X_CSI_FC_PORTS_FILTER_FILE_PATH | To set path to the file which provides a list of WWPN which should be used by the driver for FC connection on this node | No | "/etc/fc-ports-filter" |
diff --git a/content/v1/csidriver/installation/operator/unity.md b/content/v1/csidriver/installation/operator/unity.md
index d728919dde..1f485b3070 100644
--- a/content/v1/csidriver/installation/operator/unity.md
+++ b/content/v1/csidriver/installation/operator/unity.md
@@ -17,9 +17,10 @@ The following table lists driver configuration parameters for multiple storage a
| --------- | ----------- | -------- |-------- |
| username | Username for accessing Unity XT system | true | - |
| password | Password for accessing Unity XT system | true | - |
-| restGateway | REST API gateway HTTPS endpoint Unity XT system| true | - |
+| endpoint | REST API gateway HTTPS endpoint for the Unity XT system | true | - |
| arrayId | ArrayID for Unity XT system | true | - |
-| isDefaultArray | An array having isDefaultArray=true is for backward compatibility. This parameter should occur once in the list. | true | - |
+| isDefault | An array having isDefault=true will be considered as the default array when arrayId is not specified in the storage class. This parameter should occur only once in the list. | true | - |
+| skipCertificateValidation | Determines if the driver is going to validate Unisphere certs while connecting to the Unisphere REST API interface | true | true |
Ex: secret.yaml
@@ -97,12 +98,12 @@ metadata:
namespace: test-unity
spec:
driver:
- configVersion: v2.4.0
+ configVersion: v2.5.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- image: "dellemc/csi-unity:v2.4.0"
+ image: "dellemc/csi-unity:v2.5.0"
imagePullPolicy: IfNotPresent
sideCars:
- name: provisioner
diff --git a/content/v1/csidriver/release/operator.md b/content/v1/csidriver/release/operator.md
index 924c939f57..0583c4272f 100644
--- a/content/v1/csidriver/release/operator.md
+++ b/content/v1/csidriver/release/operator.md
@@ -3,12 +3,18 @@ title: Operator
description: Release notes for Dell CSI Operator
---
-## Release Notes - Dell CSI Operator 1.9.0
+## Release Notes - Dell CSI Operator 1.10.0
->**Note:** There will be a delay in certification of Dell CSI Operator 1.9.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.9.0 release.
+### New Features/Changes
+
+- [Added support to Kubernetes 1.25](https://github.com/dell/csm/issues/478)
+- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+
+>**Note:** There will be a delay in certification of Dell CSI Operator 1.10.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.10.0 release.
### Fixed Issues
-There are no fixed issues in this release.
+
+- [Fix for secrets getting regenerated on apply of CSM driver manifest](https://github.com/dell/csm/issues/485)
### Known Issues
There are no known issues in this release.
diff --git a/content/v1/csidriver/release/powerflex.md b/content/v1/csidriver/release/powerflex.md
index 9a3b0cd0fa..4c82574ead 100644
--- a/content/v1/csidriver/release/powerflex.md
+++ b/content/v1/csidriver/release/powerflex.md
@@ -3,15 +3,20 @@ title: PowerFlex
description: Release notes for PowerFlex CSI driver
---
-## Release Notes - CSI PowerFlex v2.4.0
+## Release Notes - CSI PowerFlex v2.5.0
### New Features/Changes
-- [Added optional parameter protectionDomain to storageclass](https://github.com/dell/csm/issues/415)
-- [Added InstallationID annotation for volume attributes.](https://github.com/dell/csm/issues/434)
-- RHEL 8.6 support added
+- [Read Only Block support](https://github.com/dell/csm/issues/509)
+- [Added support for setting QoS limits by CSI-PowerFlex driver](https://github.com/dell/csm/issues/533)
+- [Added support for standardizing helm installation for CSI-PowerFlex driver](https://github.com/dell/csm/issues/494)
+- [Automated SDC deployment on RHEL 7.9 and 8.x](https://github.com/dell/csm/issues/494)
+- [SLES 15 SP4 support added](https://github.com/dell/csm/issues/539)
+- [OCP 4.11 support added](https://github.com/dell/csm/issues/480)
+- [K8 1.25 support added](https://github.com/dell/csm/issues/478)
+- [Added support for PowerFlex storage system v4.0](https://github.com/dell/csm/issues/476)
### Fixed Issues
-- [Enhancements and fixes to volume group snapshotter](https://github.com/dell/csm/issues/371)
+- [Fix for volume RO mount option](https://github.com/dell/csm/issues/503)
### Known Issues
diff --git a/content/v1/csidriver/release/powermax.md b/content/v1/csidriver/release/powermax.md
index 273de37a5a..4659f4b5de 100644
--- a/content/v1/csidriver/release/powermax.md
+++ b/content/v1/csidriver/release/powermax.md
@@ -3,17 +3,19 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.4.0
+## Release Notes - CSI PowerMax v2.5.0
> Note: Starting from CSI v2.4.0, only Unisphere 10.0 REST endpoints are supported. It is mandatory that Unisphere is updated to 10.0. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
### New Features/Changes
-- [Online volume expansion for replicated volumes.](https://github.com/dell/csm/issues/336)
-- [Added support for PowerMaxOS 10.](https://github.com/dell/csm/issues/389)
-- [Removed 9.x Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
-- [Added 10.0 Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
-- [Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above).](https://github.com/dell/csm/issues/411)
-- [Added PowerPath support.](https://github.com/dell/csm/issues/436)
+- [Added support for Kubernetes 1.25.](https://github.com/dell/csm/issues/478)
+- [csi-reverseproxy is mandated along with the driver](https://github.com/dell/csm/issues/495)
+- [Added support for auto RDM for vSphere over FC](https://github.com/dell/csm/issues/528)
+- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+- [SLES 15 SP4 support added](https://github.com/dell/csm/issues/539)
+
+>Note: Replication for PowerMax is supported in Kubernetes 1.25.
+>Replication is not supported with VMware/vSphere virtualization support.
### Fixed Issues
There are no fixed issues in this release.
@@ -22,10 +24,8 @@ There are no fixed issues in this release.
| Issue | Workaround |
|-------|------------|
-|[Volume Attachment failure due to WWN mismatch](https://github.com/dell/csm/issues/548)| Please upgrade the driver to 2.5.0+|
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node |
-| After expanding file system volume , new size is not getting reflected inside the container | This is a known issue and has been reported at https://github.com/dell/csm/issues/378 . Workaround : Remount the volumes 1. Edit the replica count as 0 in application StatefulSet 2. Change the replica count as 1 for same StatefulSet. |
### Note:
diff --git a/content/v1/csidriver/release/powerscale.md b/content/v1/csidriver/release/powerscale.md
index 01909ced74..7732968b83 100644
--- a/content/v1/csidriver/release/powerscale.md
+++ b/content/v1/csidriver/release/powerscale.md
@@ -3,11 +3,14 @@ title: PowerScale
description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.4.0
+## Release Notes - CSI Driver for PowerScale v2.5.0
### New Features/Changes
-- [Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true.](https://github.com/dell/csm/issues/362)
+- [Add support for Standalone Helm charts.](https://github.com/dell/csm/issues/506)
+- [Add an option to the CSI driver to force the client list to be updated even if there are unresolvable hosts.](https://github.com/dell/csm/issues/534)
+- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+- [Added support for Kubernetes 1.25](https://github.com/dell/csm/issues/478)
### Fixed Issues
diff --git a/content/v1/csidriver/release/powerstore.md b/content/v1/csidriver/release/powerstore.md
index b11c3b8d86..ebf09d1f0d 100644
--- a/content/v1/csidriver/release/powerstore.md
+++ b/content/v1/csidriver/release/powerstore.md
@@ -3,17 +3,15 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.4.0
+## Release Notes - CSI PowerStore v2.5.1
### New Features/Changes
-- [Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype](https://github.com/dell/csm/issues/188)
-- [Added support for iSCSI in TKG Qualification](https://github.com/dell/csm/issues/363)
-- [Added support for Stand alone Helm Chart](https://github.com/dell/csm/issues/355)
+There are no features/changes in this release.
### Fixed Issues
-There are no fixed issues in this release.
+- [Fixed issue where driver was not properly cleaning up resources when volumes were unmounted](https://github.com/dell/csm/issues/666)
### Known Issues
@@ -27,3 +25,4 @@ There are no fixed issues in this release.
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring and RWOP (ReadWriteOncePod) access mode will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
+- This release is only supported when the driver is installed via Helm.
\ No newline at end of file
diff --git a/content/v1/csidriver/release/unity.md b/content/v1/csidriver/release/unity.md
index 9a0668e3c3..12801433ba 100644
--- a/content/v1/csidriver/release/unity.md
+++ b/content/v1/csidriver/release/unity.md
@@ -3,11 +3,12 @@ title: Unity XT
description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity XT v2.4.0
+## Release Notes - CSI Unity XT v2.5.0
### New Features/Changes
-- [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361)
+- [Added support to Kubernetes 1.25](https://github.com/dell/csm/issues/478)
+- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
### Known Issues
diff --git a/content/v1/csidriver/troubleshooting/powerflex.md b/content/v1/csidriver/troubleshooting/powerflex.md
index f53deb66cd..3a93f5bed6 100644
--- a/content/v1/csidriver/troubleshooting/powerflex.md
+++ b/content/v1/csidriver/troubleshooting/powerflex.md
@@ -14,11 +14,11 @@ description: Troubleshooting PowerFlex Driver
|CreateVolume error "System is not configured in the driver" | If the PowerFlex name is used for systemID in the StorageClass, ensure the same name is also used for systemID in the array config |
|Defcontext mount option seems to be ignored, volumes still are not being labeled correctly.|Ensure SElinux is enabled on a worker node, and ensure your container run time manager is properly configured to be utilized with SElinux.|
|Mount options that interact with SElinux are not working (like defcontext).|Check that your container orchestrator is properly configured to work with SElinux.|
-|Installation of the driver on Kubernetes v1.21/v1.22/v1.23 fails with the following error: ```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.21/v1.22/v1.23 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)|
+|Installation of the driver on Kubernetes v1.23/v1.24/v1.25 fails with the following error: ```Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"```|Kubernetes v1.23/v1.24/v1.25 requires v1 version of snapshot CRDs to be created in cluster, see the [Volume Snapshot Requirements](../../installation/helm/powerflex/#optional-volume-snapshot-requirements)|
| The `kubectl logs -n vxflexos vxflexos-controller-* driver` logs show `x509: certificate signed by unknown authority` |A self-signed certificate is used for the PowerFlex array. See [certificate validation for PowerFlex Gateway](../../installation/helm/powerflex/#certificate-validation-for-powerflex-gateway-rest-api-calls)|
| When you run the command `kubectl apply -f snapclass-v1.yaml`, you get the error `error: unable to recognize "snapclass-v1.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"` | Check to make sure that the v1 snapshotter CRDs are installed, and not the v1beta1 CRDs, which are no longer supported. |
| The controller pod is stuck and producing errors such as `Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)` | Make sure that v1 snapshotter CRDs and v1 snapclass are installed, and not v1beta1, which is no longer supported. |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.23.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 <= 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-vxflexos/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node (see the command sketch after this table). |
| CSI-PowerFlex volumes cannot mount; are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: 1. Remove any multipath mapping involving a powerflex volume with `multipath -f ` 2. Blacklist CSI-PowerFlex volumes in multipath config file |
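
For the node-down workaround above, a hedged sketch of the commands involved (pod, namespace, node, and VolumeAttachment names are placeholders):

```
# Force delete the pod that was running on the failed node
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0
# Find and delete the VolumeAttachment still pointing at the failed node
kubectl get volumeattachment | grep <node-name>
kubectl delete volumeattachment <volumeattachment-name>
```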
diff --git a/content/v1/csidriver/troubleshooting/unity.md b/content/v1/csidriver/troubleshooting/unity.md
index cd398664b5..6933aa5630 100644
--- a/content/v1/csidriver/troubleshooting/unity.md
+++ b/content/v1/csidriver/troubleshooting/unity.md
@@ -12,6 +12,6 @@ description: Troubleshooting Unity XT Driver
| Dynamic array detection will not work in Topology based environment | Whenever a new array is added or removed, then the driver controller and node pod should be restarted with command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** when **topology-based storage classes are used**. For dynamic array addition without topology, the driver will detect the newly added or removed arrays automatically|
| If source PVC is deleted when cloned PVC exists, then source PVC will be deleted in the cluster but on array, it will still be present and marked for deletion. | All the cloned PVC should be deleted in order to delete the source PVC from the array. |
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the command **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
-| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
+| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.26.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
| Volume attachments are not removed after deleting the pods | If you are using Kubernetes version < 1.24, assign the volume name prefix such that the total length of volume name created in array should be more than 68 bytes. From Kubernetes version >= 1.24, this issue is taken care. Please refer the kubernetes issue https://github.com/kubernetes/kubernetes/issues/97230 which has detailed explanation. |
diff --git a/content/v1/csidriver/upgradation/drivers/isilon.md b/content/v1/csidriver/upgradation/drivers/isilon.md
index 5fcdd65f99..84d5dccee1 100644
--- a/content/v1/csidriver/upgradation/drivers/isilon.md
+++ b/content/v1/csidriver/upgradation/drivers/isilon.md
@@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
-## Upgrade Driver from version 2.3.0 to 2.4.0 using Helm
+## Upgrade Driver from version 2.4.0 to 2.5.0 using Helm
**Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Clone the repository using `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
+1. Clone the repository using `git clone -b v2.5.0 https://github.com/dell/csi-powerscale.git`, then copy helm/csi-isilon/values.yaml to a new location with a custom name, for example _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per your requirements.
2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer`
3. Upgrade the CSI Driver for Dell PowerScale using following command:
diff --git a/content/v1/csidriver/upgradation/drivers/operator.md b/content/v1/csidriver/upgradation/drivers/operator.md
index 51298cee83..5d317b2a1e 100644
--- a/content/v1/csidriver/upgradation/drivers/operator.md
+++ b/content/v1/csidriver/upgradation/drivers/operator.md
@@ -13,10 +13,9 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the
### Using Installation Script
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.10.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
-3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
->Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
+3. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator.
### Using OLM
The upgrade of the Dell CSI Operator is done via Operator Lifecycle Manager.
@@ -25,5 +24,5 @@ The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role whi
- If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version.
- If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process.
-**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.9.0`.
+**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.10.0`.
diff --git a/content/v1/csidriver/upgradation/drivers/powerflex.md b/content/v1/csidriver/upgradation/drivers/powerflex.md
index 75fbe21a34..7c2eb2e59d 100644
--- a/content/v1/csidriver/upgradation/drivers/powerflex.md
+++ b/content/v1/csidriver/upgradation/drivers/powerflex.md
@@ -10,9 +10,9 @@ Description: Upgrade PowerFlex CSI driver
You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operator.
-## Update Driver from v2.2 to v2.3 using Helm
+## Update Driver from v2.4 to v2.5 using Helm
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.3.0 driver.
+1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powerflex.git` to clone the git repository and get the v2.5.0 driver.
2. You need to create config.yaml with the configuration of your system.
Check this section in installation documentation: [Install the Driver](../../../installation/helm/powerflex#install-the-driver)
3. Update values file as needed.
diff --git a/content/v1/csidriver/upgradation/drivers/powermax.md b/content/v1/csidriver/upgradation/drivers/powermax.md
index de810ef264..d64909ba08 100644
--- a/content/v1/csidriver/upgradation/drivers/powermax.md
+++ b/content/v1/csidriver/upgradation/drivers/powermax.md
@@ -16,10 +16,10 @@ You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
1. Upgrade the Unisphere to have 10.0 endpoint support.Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
2. Update the `my-powermax-settings.yaml` to have endpoint with 10.0 support.
-## Update Driver from v2.3 to v2.4 using Helm
+## Update Driver from v2.4 to v2.5 using Helm
**Steps**
-1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.4 driver.
+1. Run `git clone -b v2.5.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the driver.
2. Update the values file as needed.
3. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`.
diff --git a/content/v1/csidriver/upgradation/drivers/powerstore.md b/content/v1/csidriver/upgradation/drivers/powerstore.md
index aa24207cef..311751629e 100644
--- a/content/v1/csidriver/upgradation/drivers/powerstore.md
+++ b/content/v1/csidriver/upgradation/drivers/powerstore.md
@@ -7,15 +7,15 @@ weight: 1
Description: Upgrade PowerStore CSI driver
---
-You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator.
+You can upgrade the CSI Driver for Dell PowerStore using Helm.
-## Update Driver from v2.3 to v2.4 using Helm
+## Update Driver from v2.5 to v2.5.1 using Helm
Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
-2. Edit `helm/config.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
+1. Run `git clone -b v2.5.1 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+2. Edit the `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays by changing the following parameters (a placeholder sketch follows this list):
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
- *username*, *password*: defines credentials for connecting to array.
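
   A minimal sketch of one array entry, assuming the `arrays` list layout of the sample `secret.yaml`; every value below is a placeholder:

   ```yaml
   arrays:
     - endpoint: "https://<array-ip>/api/rest"   # full URL path to the PowerStore API
       globalID: "<global-id>"                   # storage cluster the driver should use
       username: "<username>"                    # credentials for connecting to the array
       password: "<password>"
   ```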
@@ -28,18 +28,7 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues.
Add more blocks similar to above for each PowerStore array if necessary.
3. (optional) create new storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
- >Storage classes created by v1.4/v2.0/v2.1/v2.2/v2.3 driver will not be deleted, v2.4 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1/v2.2/v2.3 driver and make it default in the `config.yaml` file.
+    >Storage classes created by the v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 driver will not be deleted; the v2.5.1 driver will use the default array to manage volumes provisioned with old storage classes. If you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3/v2.4/v2.5 in your cluster, be sure to include the same array you used for that driver and make it the default in the `secret.yaml` file.
4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
5. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml` and update parameters as per the requirement.
6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`.
-
-## Upgrade using Dell CSI Operator:
-
-**Notes:**
-1. While upgrading the driver via operator, replicas count in sample CR yaml can be at most one less than the number of worker nodes.
-2. Upgrading the Operator does not upgrade the CSI Driver.
-
-
-1. Please upgrade the Dell CSI Operator by following [here](./../operator).
-2. Once the operator is upgraded, to upgrade the driver, refer [here](./../../../installation/operator/#update-csi-drivers).
-
diff --git a/content/v1/csidriver/upgradation/drivers/unity.md b/content/v1/csidriver/upgradation/drivers/unity.md
index a1bfe7a3cc..d328a4d21a 100644
--- a/content/v1/csidriver/upgradation/drivers/unity.md
+++ b/content/v1/csidriver/upgradation/drivers/unity.md
@@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.3.0 to csi-unity v2.4.0
+To upgrade the driver from csi-unity v2.4.0 to csi-unity v2.5.0
-1. Get the latest csi-unity v2.4.0 code from Github using using `git clone -b v2.4.0 https://github.com/dell/csi-unity.git`.
+1. Get the latest csi-unity v2.5.0 code from GitHub using `git clone -b v2.5.0 https://github.com/dell/csi-unity.git`.
2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
3. Navigate to the csi-unity/dell-csi-helm-installer folder and execute this command:
`./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade`
diff --git a/content/v1/csm_diagram.jpg b/content/v1/csm_diagram.jpg
deleted file mode 100644
index f84839f761..0000000000
Binary files a/content/v1/csm_diagram.jpg and /dev/null differ
diff --git a/content/v1/csm_hexagon.png b/content/v1/csm_hexagon.png
index 0c11d1ca02..f2d5eecfd1 100644
Binary files a/content/v1/csm_hexagon.png and b/content/v1/csm_hexagon.png differ
diff --git a/content/v1/deployment/_index.md b/content/v1/deployment/_index.md
index 8e6ecf2e58..5c698bdce4 100644
--- a/content/v1/deployment/_index.md
+++ b/content/v1/deployment/_index.md
@@ -7,8 +7,6 @@ weight: 1
---
The Container Storage Modules along with the required CSI Drivers can each be deployed using CSM operator.
->Note: Currently CSM operator is in tech preview and is not supported in production environments.
-
{{< cardpane >}}
{{< card header="[**CSM Operator**](csmoperator/)"
footer="Supports driver [PowerScale](csmoperator/drivers/powerscale/), modules [Authorization](csmoperator/modules/authorization/) [Replication](csmoperator/modules/replication/)">}}
diff --git a/content/v1/deployment/csmoperator/_index.md b/content/v1/deployment/csmoperator/_index.md
index 887c1abb50..e3f87f4acf 100644
--- a/content/v1/deployment/csmoperator/_index.md
+++ b/content/v1/deployment/csmoperator/_index.md
@@ -5,31 +5,29 @@ description: Container Storage Modules Operator
weight: 1
---
-{{% pageinfo color="primary" %}}
-The Dell Container Storage Modules Operator Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed.
-{{% /pageinfo %}}
-
The Dell Container Storage Modules Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
## Supported Platforms
Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenShift. Supported versions are listed below.
-| Kubernetes Version | OpenShift Version |
-| -------------------- | ------------------- |
-| 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| Kubernetes Version | OpenShift Version |
+| -------------------------- | ------------------- |
+| 1.23, 1.24, 1.25 | 4.10, 4.10 EUS, 4.11 |
## Supported CSI Drivers
| CSI Driver | Version | ConfigVersion |
| ------------------ | --------- | -------------- |
-| CSI PowerScale | 2.2.0 + | v2.2.0 + |
+| CSI PowerScale | 2.3.0 + | v2.3.0 + |
+| CSI PowerFlex | 2.3.0 + | v2.3.0 + |
## Supported CSM Modules
| CSM Modules | Version | ConfigVersion |
| ------------------ | --------- | -------------- |
| CSM Authorization | 1.2.0 + | v1.2.0 + |
-| CSM Authorization | 1.3.0 + | v1.3.0 + |
+| CSM Replication | 1.3.0 + | v1.3.0 + |
+| CSM Observability | 1.2.0 + | v1.2.0 + |
## Installation
Dell CSM Operator can be installed manually or via Operator Hub.
@@ -38,7 +36,7 @@ Dell CSM Operator can be installed manually or via Operator Hub.
#### Operator Installation on a cluster without OLM
-1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator).
+1. Clone and checkout the required csm-operator version using `git clone -b v1.0.0 https://github.com/dell/csm-operator.git`
2. `cd csm-operator`
3. (Optional) If using a local Docker image, edit the `deploy/operator.yaml` file and set the image name for the CSM Operator Deployment.
4. Run `bash scripts/install.sh` to install the operator.
@@ -52,7 +50,7 @@ Dell CSM Operator can be installed manually or via Operator Hub.
{{< imgproc install_pods.jpg Resize "2500x" >}}{{< /imgproc >}}
#### Operator Installation on a cluster with OLM
-1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator).
+1. Clone and checkout the required csm-operator version using `git clone -b v1.0.0 https://github.com/dell/csm-operator.git`
2. `cd csm-operator`
3. Run `bash scripts/install_olm.sh` to install the operator.
>NOTE: Dell CSM Operator will get installed in the `test-csm-operator-olm` namespace.
@@ -61,7 +59,7 @@ Dell CSM Operator can be installed manually or via Operator Hub.
4. Once installation completes, run the command `kubectl get pods -n test-csm-operator-olm` to validate the installation. If installed successfully, you should be able to see the operator pods and CSV in the `test-csm-operator-olm` namespace. The CSV phase will be in `Succeeded` state.
-{{< imgproc install_olm_pods.jpg Resize "2500x" >}}{{< /imgproc >}}
+{{< imgproc install_olm_pods.JPG Resize "2500x" >}}{{< /imgproc >}}
>**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
@@ -81,7 +79,7 @@ To uninstall a CSM operator, run `bash scripts/uninstall.sh`. This will uninstal
#### Operator uninstallation on a cluster with OLM
To uninstall a CSM operator installed with OLM run `bash scripts/uninstall_olm.sh`. This will uninstall the operator in `test-csm-operator-olm` namespace.
-{{< imgproc uninstall_olm.jpg Resize "2500x" >}}{{< /imgproc >}}
+{{< imgproc uninstall_olm.JPG Resize "2500x" >}}{{< /imgproc >}}
### To upgrade Dell CSM Operator, perform the following steps.
Dell CSM Operator can be upgraded in 2 ways:
@@ -91,10 +89,9 @@ Dell CSM Operator can be upgraded in 2 ways:
2.Using Operator Lifecycle Manager (OLM)
#### Using Installation Script
-1. Clone the [Dell CSM Operator repository](https://github.com/dell/csm-operator).
+1. Clone and checkout the required csm-operator version using `git clone -b v1.0.0 https://github.com/dell/csm-operator.git`
2. `cd csm-operator`
-3. git checkout -b 'csm-operator-version'
-4. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
+3. Execute `bash scripts/install.sh --upgrade`. This command will install the latest version of the operator.
>Note: Dell CSM Operator would install to the 'dell-csm-operator' namespace by default.
diff --git a/content/v1/deployment/csmoperator/drivers/powerflex.md b/content/v1/deployment/csmoperator/drivers/powerflex.md
new file mode 100644
index 0000000000..272dd1e221
--- /dev/null
+++ b/content/v1/deployment/csmoperator/drivers/powerflex.md
@@ -0,0 +1,163 @@
+---
+title: PowerFlex
+linkTitle: "PowerFlex"
+description: >
+ Installing Dell CSI Driver for PowerFlex via Dell CSM Operator
+---
+
+## Installing CSI Driver for PowerFlex via Dell CSM Operator
+
+The CSI Driver for Dell PowerFlex can be installed via the Dell CSM Operator.
+To deploy the Operator, follow the instructions available [here](../../#installation).
+
+Note that deploying the driver with the operator does not use any Helm charts, and the installation and configuration parameters differ slightly from those used by the Helm installer.
+
+**Note**: MKE (Mirantis Kubernetes Engine) does not support the installation of CSI-PowerFlex via Operator.
+
+### Listing installed drivers with the ContainerStorageModule CRD
+Users can query all installed Dell CSI drivers using this command:
+`kubectl get csm --all-namespaces`
+
+### Prerequisites
+- If multipath is configured, ensure CSI-PowerFlex volumes are blacklisted by multipathd. See [troubleshooting section](../../../../csidriver/troubleshooting/powerflex) for details
+
+#### SDC Deployment for Operator
+- This feature deploys the SDC kernel modules on all nodes with the help of an init container.
+- For unsupported OS versions, also perform the manual SDC deployment steps given below. Refer to https://hub.docker.com/r/dellemc/sdc for supported versions.
+- **Note:** When the driver is created, the MDM value for initContainers in the driver CR is set by the operator from the mdm attributes in the driver configuration file,
+  config.yaml. An example of config.yaml is given below in this document. Do not set the MDM value for initContainers in the driver CR file manually.
+- **Note:** To use an SDC binary module from the customer FTP site:
+  - Create a secret, sdc-repo-secret.yaml, to contain the credentials for the private repo. To generate the base64 encoding of a credential:
+    ```
+    echo -n <credential> | base64 -i
+    ```
+ secret sample to use:
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: sdc-repo-creds
+ namespace: vxflexos
+ type: Opaque
+ data:
+ # set username to the base64 encoded username, sdc default is
+ username:
+ # set password to the base64 encoded password, sdc default is
+ password:
+  ```
+  - Create the secret for the FTP site by using the command `kubectl create -f sdc-repo-secret.yaml`.
+ - Optionally, enable sdc monitor by setting the enable flag for the sdc-monitor to true. Please note:
+ - **If using sidecar**, you will need to edit the value fields under the HOST_PID and MDM fields by filling the empty quotes with host PID and the MDM IPs.
+ - **If not using sidecar**, leave the enabled field set to false.
+##### Example CR: [samples/storage_csm_powerflex_v240.yaml](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerflex_v240.yaml)
+```yaml
+ sideCars:
+ # sdc-monitor is disabled by default, due to high CPU usage
+ - name: sdc-monitor
+ enabled: false
+ image: dellemc/sdc:3.6
+ envs:
+ - name: HOST_PID
+ value: "1"
+ - name: MDM
+ value: "10.xx.xx.xx,10.xx.xx.xx" #provide MDM value
+```
+
+#### Manual SDC Deployment
+
+For detailed PowerFlex installation procedure, see the _Dell PowerFlex Deployment Guide_. Install the PowerFlex SDC using this procedure:
+
+**Steps**
+
+1. Download the PowerFlex SDC from [Dell Online support](https://www.dell.com/support). The filename is EMC-ScaleIO-sdc-*.rpm, where * is the SDC name corresponding to the PowerFlex installation version.
+2. Export the shell variable _MDM_IP_ in a comma-separated list using `export MDM_IP=xx.xxx.xx.xx,xx.xxx.xx.xx`, where xxx represents the actual IP address in your environment. This list contains the IP addresses of the MDMs.
+3. Install the SDC per the _Dell PowerFlex Deployment Guide_:
+ - For environments using RPM, run `rpm -iv ./EMC-ScaleIO-sdc-*.x86_64.rpm`, where * is the SDC name corresponding to the PowerFlex installation version.
+4. To add more MDM_IPs for multi-array support, run `/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.xx.xx.xx,10.xx.xx.xx`.
+
+#### Create Secret
+1. Create namespace:
+   Execute `kubectl create namespace test-vxflexos` to create the test-vxflexos namespace (if not already present). Note that the namespace can be any user-defined name; in this example, we assume that the namespace is 'test-vxflexos'.
+2. Prepare the config.yaml for driver configuration.
+
+ Example: config.yaml
+
+ ```yaml
+ # Username for accessing PowerFlex system.
+ # Required: true
+ - username: "admin"
+ # Password for accessing PowerFlex system.
+ # Required: true
+ password: "password"
+ # System name/ID of PowerFlex system.
+ # Required: true
+ systemID: "ID1"
+ # REST API gateway HTTPS endpoint/PowerFlex Manager public IP for PowerFlex system.
+ # Required: true
+ endpoint: "https://127.0.0.1"
+ # Determines if the driver is going to validate certs while connecting to PowerFlex REST API interface.
+ # Allowed values: true or false
+ # Required: true
+ # Default value: true
+ skipCertificateValidation: true
+ # indicates if this array is the default array
+ # needed for backwards compatibility
+ # only one array is allowed to have this set to true
+ # Required: false
+ # Default value: false
+ isDefault: true
+ # defines the MDM(s) that SDC should register with on start.
+ # Allowed values: a list of IP addresses or hostnames separated by comma.
+ # Required: true
+ # Default value: none
+ mdm: "10.0.0.1,10.0.0.2"
+ # Defines all system names used to create powerflex volumes
+ # Required: false
+ # Default value: none
+ AllSystemNames: "name1,name2"
+ - username: "admin"
+ password: "Password123"
+ systemID: "ID2"
+ endpoint: "https://127.0.0.2"
+ skipCertificateValidation: true
+ mdm: "10.0.0.3,10.0.0.4"
+ AllSystemNames: "name1,name2"
+ ```
+
+ After editing the file, run this command to create a secret called `test-vxflexos-config`. If you are using a different namespace/secret name, just substitute those into the command.
+ `kubectl create secret generic test-vxflexos-config -n test-vxflexos --from-file=config=config.yaml`
+
+ Use this command to replace or update the secret:
+
+ `kubectl create secret generic test-vxflexos-config -n test-vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl replace -f -`
+
+### Install Driver
+
+1. Follow all the [prerequisites](#prerequisites) above.
+
+2. Create a CR (Custom Resource) for PowerFlex using the sample files provided
+ [here](https://github.com/dell/csm-operator/tree/master/samples). This file can be modified to use custom parameters if needed.
+
+3. Configure the parameters in the CR. The following table lists the primary configurable parameters of the PowerFlex driver and their default values; an illustrative sketch follows the table:
+
+ | Parameter | Description | Required | Default |
+ | --------- | ----------- | -------- |-------- |
+ | dnsPolicy | Determines the DNS Policy of the Node service | Yes | ClusterFirstWithHostNet |
+   | fsGroupPolicy | Defines which FS Group policy mode to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
+   | replicas | Controls the number of controller pods you deploy. If the number of controller pods is greater than the number of available nodes, excess pods will remain in a pending state. The default is 2, which allows for controller high availability. | Yes | 2 |
+ | ***Common parameters for node and controller*** |
+ | X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT | Enable list volume operation to include snapshots (since creating a volume from a snap actually results in a new snap) | No | false |
+ | X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE | Enable this to automatically delete all snapshots in a consistency group when a snap in the group is deleted | No | false |
+ | X_CSI_DEBUG | To enable debug mode | No | true |
+   | X_CSI_ALLOW_RWO_MULTI_POD_ACCESS | Setting allowRWOMultiPodAccess to "true" will allow multiple pods on the same node to access the same RWO volume. This behavior conflicts with the CSI specification version 1.3 NodePublishVolume description, which requires an error to be returned in this case. However, some other CSI drivers support this behavior and some customers desire this behavior. Customers use this option at their own risk. | No | false |
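+
+   Purely for illustration, the sketch below shows where such parameters can live in a ContainerStorageModule CR. The exact layout should be taken from the sample CR; in particular, placing the X_CSI_* settings under `spec.driver.common.envs` is an assumption here.
+
+   ```yaml
+   # Illustrative fragment only - copy the real structure from the sample CR.
+   spec:
+     driver:
+       replicas: 2                               # controller pod count
+       dnsPolicy: ClusterFirstWithHostNet        # DNS policy of the node service
+       fsGroupPolicy: "ReadWriteOnceWithFSType"  # FS group policy mode
+       common:
+         envs:                                   # assumed location for X_CSI_* settings
+           - name: X_CSI_DEBUG
+             value: "true"
+   ```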
+
+4. Execute this command to create PowerFlex custom resource:
+ ```kubectl create -f ``` .
+ This command will deploy the CSI-PowerFlex driver in the namespace specified in the input YAML file.
+
+5. [Verify the CSI Driver installation](../#verifying-the-driver-installation)
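+
+   As a quick, hedged sanity check (the namespace name follows the examples above), the CSM resource and the driver pods can be listed:
+   ```
+   kubectl get csm -n test-vxflexos
+   kubectl get pods -n test-vxflexos
+   ```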
+
+**Note**:
+ 1. "Kubelet config dir path" is not yet configurable for Operator-based driver installation.
+ 2. The snapshotter and resizer sidecars are not optional; they are installed by default with the driver.
diff --git a/content/v1/deployment/csmoperator/drivers/powerscale.md b/content/v1/deployment/csmoperator/drivers/powerscale.md
index 261e0c1222..7a336a6994 100644
--- a/content/v1/deployment/csmoperator/drivers/powerscale.md
+++ b/content/v1/deployment/csmoperator/drivers/powerscale.md
@@ -137,7 +137,7 @@ User can query for all Dell CSI drivers using the following command:
```kubectl create -f ``` .
This command will deploy the CSI-PowerScale driver in the namespace specified in the input YAML file.
-7. [Verify the CSI Driver installation](../drivers/_index.md#verifying-the-driver-installation)
+5. [Verify the CSI Driver installation](../#verifying-the-driver-installation)
**Note** :
1. "Kubelet config dir path" is not yet configurable in case of Operator based driver installation.
diff --git a/content/v1/deployment/csmoperator/install.jpg b/content/v1/deployment/csmoperator/install.jpg
index 14b6362c45..9178a259e1 100644
Binary files a/content/v1/deployment/csmoperator/install.jpg and b/content/v1/deployment/csmoperator/install.jpg differ
diff --git a/content/v1/deployment/csmoperator/install_olm_pods.jpg b/content/v1/deployment/csmoperator/install_olm_pods.jpg
index fff68a99e0..99df2345ab 100644
Binary files a/content/v1/deployment/csmoperator/install_olm_pods.jpg and b/content/v1/deployment/csmoperator/install_olm_pods.jpg differ
diff --git a/content/v1/deployment/csmoperator/modules/_index.md b/content/v1/deployment/csmoperator/modules/_index.md
index 1ac79f9d15..db07b2c5d5 100644
--- a/content/v1/deployment/csmoperator/modules/_index.md
+++ b/content/v1/deployment/csmoperator/modules/_index.md
@@ -10,4 +10,4 @@ The steps include:
1. Deploy the Dell CSM Operator (if it is not already deployed). Please follow the instructions available [here](../../#installation).
2. Configure any pre-requisite for the desired module(s). See the specific module below for more information
-3. Follow the instructions available [here](../drivers/powerscale.md/#install-driver)) to install the Dell CSI Driver via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
+3. Follow the instructions to install the Dell CSI Driver (such as [PowerScale](../drivers/powerscale/#install-driver) or [PowerFlex](../drivers/powerflex/#install-driver)) via the CSM Operator. The module section in the ContainerStorageModule CR should be updated to enable the desired module(s). There are [sample manifests](https://github.com/dell/csm-operator/tree/main/samples) provided which can be edited to do an easy installation of the driver along with the module.
diff --git a/content/v1/deployment/csmoperator/modules/authorization.md b/content/v1/deployment/csmoperator/modules/authorization.md
index 4d1e2ca19b..286fc227ae 100644
--- a/content/v1/deployment/csmoperator/modules/authorization.md
+++ b/content/v1/deployment/csmoperator/modules/authorization.md
@@ -5,8 +5,182 @@ description: >
Pre-requisite for Installing Authorization via Dell CSM Operator
---
-The CSM Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. Please note, Dell CSM operator currently ONLY supports deploying CSM Authorization sidecar/container.
+## Install CSM Authorization via Dell CSM Operator
-## Pre-requisite
+The CSM Authorization module for supported Dell CSI Drivers can be installed via the Dell CSM Operator.
+To deploy the Operator, follow the instructions available [here](../../#installation).
-Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../authorization/deployment/_index.md/#configuring-a-dell-csi-driver).
\ No newline at end of file
+### Prerequisite
+
+1. Execute `kubectl create namespace authorization` to create the authorization namespace (if not already present). Note that the namespace can be any user-defined name; in this example, we assume that the namespace is 'authorization'.
+
+2. Install cert-manager CRDs `kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml`
+
+3. Prepare `samples/authorization/config.yaml` provided [here](https://github.com/dell/csm-operator/blob/main/samples/authorization/config.yaml) which contains the JWT signing secret. The following table lists the configuration parameters.
+
+ | Parameter | Description | Required | Default |
+ | --------- | ------------------------------------------------------------ | -------- | ------- |
+   | web.jwtsigningsecret | String used to sign JSON Web Tokens | true | secret |
+
+ Example:
+
+ ```yaml
+ web:
+ jwtsigningsecret: randomString123
+ ```
+
+ After editing the file, run this command to create a secret called `karavi-config-secret`:
+
+   `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/authorization/config.yaml`
+
+ Use this command to replace or update the secret:
+
+   `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
+
+4. Create the `karavi-storage-secret` using the file provided [here](https://github.com/dell/csm-operator/blob/main/samples/authorization/karavi-storage-secret.yaml) to store storage system credentials.
+
+ Use this command to create the secret:
+
+ `kubectl create -f samples/authorization/karavi-storage-secret.yaml`
+
+5. Prepare a storage class for Redis to use for persistence. If not supplied, the default storage class in your environment is used.
+
+ Example, if using CSM Authorization for PowerScale:
+
+ ```yaml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: isilon
+ provisioner: csi-isilon.dellemc.com
+ reclaimPolicy: Delete
+ allowVolumeExpansion: true
+ parameters:
+ # The name of the access zone a volume can be created in
+ # Optional: true
+ # Default value: default value specified in values.yaml
+ # Examples: System, zone1
+ AccessZone: System
+
+ # The base path for the volumes to be created on PowerScale cluster.
+ # Ensure that this path exists on PowerScale cluster.
+ # Allowed values: unix absolute path
+ # Optional: true
+ # Default value: value specified in values.yaml for isiPath
+ # Examples: /ifs/data/csi, /ifs/engineering
+ IsiPath: /ifs/data/csi
+
+ # The permissions for isi volume directory path
+ # This value overrides the isiVolumePathPermissions attribute of corresponding cluster config in secret, if present
+ # Allowed values: valid octal mode number
+ # Default value: "0777"
+ # Examples: "0777", "777", "0755"
+ #IsiVolumePathPermissions: "0777"
+
+ # AccessZone groupnet service IP. Update AzServiceIP if different than endpoint.
+ # Optional: true
+ # Default value: endpoint of the cluster ClusterName
+ #AzServiceIP : 192.168.2.1
+
+ # When a PVC is being created, this parameter determines, when a node mounts the PVC,
+ # whether to add the k8s node to the "Root clients" field or "Clients" field of the NFS export
+ # Allowed values:
+ # "true": adds k8s node to the "Root clients" field of the NFS export
+ # "false": adds k8s node to the "Clients" field of the NFS export
+ # Optional: true
+ # Default value: "false"
+ RootClientEnabled: "false"
+
+ # Name of PowerScale cluster, where pv will be provisioned.
+ # This name should match with name of one of the cluster configs in isilon-creds secret.
+ # If this parameter is not specified, then default cluster config in isilon-creds secret
+ # will be considered if available.
+ # Optional: true
+ #ClusterName:
+
+ # Sets the filesystem type which will be used to format the new volume
+ # Optional: true
+ # Default value: None
+ #csi.storage.k8s.io/fstype: "nfs"
+
+ # volumeBindingMode controls when volume binding and dynamic provisioning should occur.
+ # Allowed values:
+ # Immediate: indicates that volume binding and dynamic provisioning occurs once the
+ # PersistentVolumeClaim is created
+ # WaitForFirstConsumer: will delay the binding and provisioning of a PersistentVolume
+ # until a Pod using the PersistentVolumeClaim is created
+ # Default value: Immediate
+ volumeBindingMode: Immediate
+
+ # allowedTopologies helps scheduling pods on worker nodes which match all of below expressions.
+ # If enableCustomTopology is set to true in helm values.yaml, then do not specify allowedTopologies
+ # Change all instances of to the IP of the PowerScale OneFS API server
+ #allowedTopologies:
+ # - matchLabelExpressions:
+ # - key: csi-isilon.dellemc.com/
+ # values:
+ # - csi-isilon.dellemc.com
+
+ # specify additional mount options for when a Persistent Volume is being mounted on a node.
+ # To mount volume with NFSv4, specify mount option vers=4. Make sure NFSv4 is enabled on the Isilon Cluster
+ #mountOptions: ["", "", ..., ""]
+ ```
+
+ Save the file and create it by using `kubectl create -f `.
+
+### Install CSM Authorization Proxy Server
+
+1. Follow all the [prerequisites](#prerequisite).
+
+2. Create a CR (Custom Resource) for Authorization using the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/authorization/csm_authorization_proxy_server.yaml). This file can be modified to use custom parameters if needed.
+
+3. Users should configure the parameters in the CR. This table lists the primary configurable parameters of the Authorization Proxy Server and their default values:
+
+ | Parameter | Description | Required | Default |
+ | --------- | ----------- | -------- |-------- |
+ | **authorization** | This section configures the CSM-Authorization components. | - | - |
+ | PROXY_HOST | The hostname to configure the self-signed certificate (if applicable), and the proxy, tenant, role, and storage service Ingresses. | Yes | csm-authorization.com |
+ | AUTHORIZATION_LOG_LEVEL | CSM Authorization log level. Allowed values: “error”, “warn”/“warning”, “info”, “debug”. | Yes | debug |
+ | AUTHORIZATION_ZIPKIN_COLLECTORURI | The URI of the Zipkin instance to export traces. | No | - |
+ | AUTHORIZATION_ZIPKIN_PROBABILITY | The ratio of traces to export. | No | - |
+ | PROXY_INGRESS_CLASSNAME | The ingressClassName of the proxy-service Ingress. | Yes | nginx |
+ | PROXY_INGRESS_HOSTS | Additional host rules to be applied to the proxy-service Ingress. | No | authorization-ingress-nginx-controller.authorization.svc.cluster.local |
+ | TENANT_INGRESS_CLASSNAME | The ingressClassName of the tenant-service Ingress. | Yes | nginx |
+ | ROLE_INGRESS_CLASSNAME | The ingressClassName of the role-service Ingress. | Yes | nginx |
+ | STORAGE_INGRESS_CLASSNAME | The ingressClassName of the storage-service Ingress. | Yes | nginx |
+ | REDIS_STORAGE_CLASS | The storage class for Redis to use for persistence. If not supplied, the default storage class is used. | Yes | - |
+ | **ingress-nginx** | This section configures the enablement of the NGINX Ingress Controller. | - | - |
+ | enabled | Enable/Disable deployment of the NGINX Ingress Controller. Set to false if you already have an Ingress Controller installed. | No | true |
+ | **cert-manager** | This section configures the enablement of cert-manager. | - | - |
+ | enabled | Enable/Disable deployment of cert-manager. Set to false if you already have cert-manager installed. | No | true |
+
+4. Execute this command to create the Authorization CR:
+
+ ```kubectl create -f samples/authorization/csm_authorization_proxy_server.yaml```
+
+ >__Note__:
+ > - This command will deploy the Authorization Proxy Server in the namespace specified in the input YAML file.
+
+5. Create the `karavi-auth-tls` secret using your own certificate or by using a self-signed certificate generated via cert-manager.
+
+ If using your own certificate that is valid for each Ingress hostname, use this command to create the `karavi-auth-tls` secret:
+
+ `kubectl create secret tls karavi-auth-tls -n authorization --key --cert `
+
+ If using a self-signed certificate, prepare `samples/authorization/certificate.yaml` provided [here](https://github.com/dell/csm-operator/blob/main/samples/authorization/certificate.yaml). An entry for each hostname specified in the CR must be added under `dnsNames` for the certificate to be valid for each Ingress.
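+
+   For illustration, the `dnsNames` list of the cert-manager Certificate might look like the following sketch; the hostnames are placeholders derived from the default `PROXY_HOST`, and one entry is needed per Ingress hostname in the CR:
+
+   ```yaml
+   spec:
+     dnsNames:
+       - csm-authorization.com
+       - proxy.csm-authorization.com
+   ```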
+
+ Use this command to create the `karavi-auth-tls` secret:
+
+ `kubectl create -f samples/authorization/certificate.yaml`
+
+### Install Karavictl
+
+Follow the instructions available in CSM Authorization for [Installing karavictl](../../../../authorization/deployment/helm/#install-karavictl).
+
+### Configuring the CSM Authorization Proxy Server
+
+Follow the instructions available in CSM Authorization for [Configuring the CSM Authorization Proxy Server](../../../../authorization/deployment/helm/#configuring-the-csm-authorization-proxy-server).
+
+### Configuring a Dell CSI Driver with CSM Authorization
+
+Follow the instructions available in CSM Authorization for [Configuring a Dell CSI Driver with CSM for Authorization](../../../../authorization/deployment/helm/#configuring-a-dell-csi-driver-with-csm-for-authorization).
\ No newline at end of file
diff --git a/content/v1/deployment/csmoperator/modules/observability.md b/content/v1/deployment/csmoperator/modules/observability.md
new file mode 100644
index 0000000000..9b598142b3
--- /dev/null
+++ b/content/v1/deployment/csmoperator/modules/observability.md
@@ -0,0 +1,62 @@
+---
+title: Observability
+linktitle: Observability
+description: >
+ Pre-requisite for Installing Observability via Dell CSM Operator
+---
+
+The CSM Observability module for supported Dell CSI Drivers can be installed via the Dell CSM Operator. The Dell CSM Operator will deploy CSM Observability, including the topology service, the OTEL collector, and the metrics services.
+
+## Prerequisites
+
+- Create a namespace `karavi`
+ ```
+ kubectl create namespace karavi
+ ```
+- [Install cert-manager with Helm](https://cert-manager.io/docs/installation/helm/)
+ 1. Add the Helm repository
+ ```
+ helm repo add jetstack https://charts.jetstack.io
+ ```
+ 2. Update your local Helm chart repository cache
+ ```
+ helm repo update
+ ```
+ 3. Install cert-manager in the namespace `karavi`
+ ```
+ helm install \
+ cert-manager jetstack/cert-manager \
+ --namespace karavi \
+ --version v1.10.0 \
+ --set installCRDs=true
+ ```
+ 4. Verify installation
+ ```
+ $ kubectl get pod -n karavi
+ NAME READY STATUS RESTARTS AGE
+ cert-manager-7b45d477c8-z28sq 1/1 Running 0 2m2s
+ cert-manager-cainjector-86f7f4749-mdz7c 1/1 Running 0 2m2s
+ cert-manager-webhook-66c85f8577-c7hxx 1/1 Running 0 2m2s
+ ```
+- Create certificates
+ - Option 1: Self-signed certificates
+    1. A sample certificate manifest can be found at `samples/observability/selfsigned-cert.yaml`.
+ 2. Create certificates
+ ```
+ kubectl create -f selfsigned-cert.yaml
+ ```
+
+ - Option 2: Custom certificates
+    1. Replace `tls.crt` and `tls.key` with the actual base64-encoded certificate and private key in `samples/observability/custom-cert.yaml` (see the encoding sketch below).
+ 2. Create certificates
+ ```
+ kubectl create -f custom-cert.yaml
+ ```
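+       The base64-encoded values for step 1 can be produced from existing PEM files, for example (file names are placeholders; `-w0` assumes GNU coreutils base64):
+       ```
+       base64 -w0 tls.crt
+       base64 -w0 tls.key
+       ```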
+- Enable the Observability module and its components in the [sample manifests](https://github.com/dell/csm-operator/tree/main/samples)
+  - Scenario 1: Deploy one supported CSI Driver and enable the Observability module
+    - If you enable `metrics-powerscale` or `metrics-powerflex`, you must enable `otel-collector` as well.
+
+  - Scenario 2: Deploy multiple supported CSI Drivers and enable the Observability module
+    - When deploying the first driver, enable all components of the Observability module in the CR.
+    - For subsequent drivers, enable only the metrics service, and remove the `topology` and `otel-collector` sections from the CR.
+    - The CR that was created first must be deleted last.
\ No newline at end of file
diff --git a/content/v1/deployment/csmoperator/modules/replication.md b/content/v1/deployment/csmoperator/modules/replication.md
index cba958854a..8b6e3b63c9 100644
--- a/content/v1/deployment/csmoperator/modules/replication.md
+++ b/content/v1/deployment/csmoperator/modules/replication.md
@@ -16,7 +16,7 @@ To use Replication, you need at least two clusters:
To configure all the clusters, follow the steps below:
-1. On your main cluster, follow the instructions available in CSM Replication for [Installation using repctl](../../../replication/deployment/install-repctl.md). NOTE: On step 4 of the link above, you MUST use the command below to automatically package all clusters' `.kube` config as a secret:
+1. On your main cluster, follow the instructions available in CSM Replication for [Installation using repctl](../../../../replication/deployment/install-repctl), with the exception of step 4. When you reach step 4, you MUST use the command below to automatically package all clusters' `.kube` configs as a secret:
```shell
./repctl cluster inject
@@ -24,4 +24,4 @@ To configure all the clusters, follow the steps below:
CSM Operator needs these admin configs instead of the service accounts' configs to be able to properly manage the target clusters. The default service account that will be used is the CSM Operator service account.
-2. On each of the target clusters, configure the prerequisites for deploying the driver via Dell CSM Operator. For example, PowerScale has the following [prerequisites for deploying PowerScale via Dell CSM Operator](../drivers/powerscale.md/#prerequisite)
\ No newline at end of file
+2. On each of the target clusters, configure the prerequisites for deploying the driver via Dell CSM Operator. For example, PowerScale has the following [prerequisites for deploying PowerScale via Dell CSM Operator](../../drivers/powerscale/#prerequisite)
diff --git a/content/v1/deployment/csmoperator/release/_index.md b/content/v1/deployment/csmoperator/release/_index.md
new file mode 100644
index 0000000000..8feaf1b8ca
--- /dev/null
+++ b/content/v1/deployment/csmoperator/release/_index.md
@@ -0,0 +1,22 @@
+---
+title: "Release notes"
+linkTitle: "Release notes"
+weight: 5
+Description: >
+ Release notes for Dell Container Storage Modules Operator
+---
+
+## Release Notes - Container Storage Modules Operator v1.0.0
+
+### New Features/Changes
+- [Added support for CSI PowerFlex Driver](https://github.com/dell/csm/issues/477)
+- [Added support for CSM Observability Module](https://github.com/dell/csm/issues/488)
+- [Added support to Kubernetes 1.25](https://github.com/dell/csm/issues/478)
+- [Added support for OpenShift 4.11](https://github.com/dell/csm/issues/480)
+
+
+### Fixed Issues
+There are no fixed issues in this release.
+
+### Known Issues
+There are no known issues in this release.
\ No newline at end of file
diff --git a/content/v1/deployment/csmoperator/uninstall_olm.JPG b/content/v1/deployment/csmoperator/uninstall_olm.JPG
index dcf78dba4e..516a0591e6 100644
Binary files a/content/v1/deployment/csmoperator/uninstall_olm.JPG and b/content/v1/deployment/csmoperator/uninstall_olm.JPG differ
diff --git a/content/v1/observability/_index.md b/content/v1/observability/_index.md
index cc8165d4a3..00fc4b7976 100644
--- a/content/v1/observability/_index.md
+++ b/content/v1/observability/_index.md
@@ -47,8 +47,8 @@ CSM for Observability provides the following capabilities:
{{
}}
## Supported CSI Drivers
diff --git a/content/v1/observability/deployment/helm.md b/content/v1/observability/deployment/helm.md
index 6433b60836..cc8860f6e3 100644
--- a/content/v1/observability/deployment/helm.md
+++ b/content/v1/observability/deployment/helm.md
@@ -17,7 +17,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
**Steps**
1. Create a namespace where you want to install the module `kubectl create namespace karavi`
-2. Install cert-manager CRDs `kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml`
+2. Install cert-manager CRDs `kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.10.0/cert-manager.crds.yaml`
3. Add the Dell Helm Charts repo `helm repo add dell https://dell.github.io/helm-charts`
diff --git a/content/v1/observability/deployment/offline.md b/content/v1/observability/deployment/offline.md
index 9017bff0b9..d8e7a6539f 100644
--- a/content/v1/observability/deployment/offline.md
+++ b/content/v1/observability/deployment/offline.md
@@ -72,10 +72,10 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Downloading and saving Docker images
- dellemc/csm-topology:v1.3.0
- dellemc/csm-metrics-powerflex:v1.3.0
- dellemc/csm-metrics-powerstore:v1.3.0
- dellemc/csm-metrics-powerscale:v1.0.0
+ dellemc/csm-topology:v1.4.0
+ dellemc/csm-metrics-powerflex:v1.4.0
+ dellemc/csm-metrics-powerstore:v1.4.0
+ dellemc/csm-metrics-powerscale:v1.1.0
otel/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20
@@ -105,10 +105,10 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Loading, tagging, and pushing Docker images to registry :5000/
- dellemc/csm-topology:v1.3.0 -> :5000/csm-topology:v1.3.0
- dellemc/csm-metrics-powerflex:v1.3.0 -> :5000/csm-metrics-powerflex:v1.3.0
- dellemc/csm-metrics-powerstore:v1.3.0 -> :5000/csm-metrics-powerstore:v1.3.0
- dellemc/csm-metrics-powerscale:v1.0.0 -> :5000/csm-metrics-powerscale:v1.0.0
+ dellemc/csm-topology:v1.4.0 -> :5000/csm-topology:v1.4.0
+ dellemc/csm-metrics-powerflex:v1.4.0 -> :5000/csm-metrics-powerflex:v1.4.0
+ dellemc/csm-metrics-powerstore:v1.4.0 -> :5000/csm-metrics-powerstore:v1.4.0
+ dellemc/csm-metrics-powerscale:v1.1.0 -> :5000/csm-metrics-powerscale:v1.1.0
otel/opentelemetry-collector:0.42.0 -> :5000/opentelemetry-collector:0.42.0
nginxinc/nginx-unprivileged:1.20 -> :5000/nginx-unprivileged:1.20
```
diff --git a/content/v1/observability/release/_index.md b/content/v1/observability/release/_index.md
index 07f248dc73..b3c3ee9fce 100644
--- a/content/v1/observability/release/_index.md
+++ b/content/v1/observability/release/_index.md
@@ -6,15 +6,15 @@ Description: >
Dell Container Storage Modules (CSM) release notes for observability
---
-## Release Notes - CSM Observability 1.3.0
+## Release Notes - CSM Observability 1.4.0
### New Features/Changes
-- [Support PowerScale in CSM Observability](https://github.com/dell/csm/issues/452)
-- [Set PV/PVC's namespace when using Observability Module](https://github.com/dell/csm/issues/453)
-- [CSM Observability modules stick with otel controller 0.42.0](https://github.com/dell/csm/issues/454)
+- [CSM support for Kubernetes 1.25](https://github.com/dell/csm/issues/478)
+- [CSM support for Openshift 4.11](https://github.com/dell/csm/issues/480)
+- [CSM support for PowerFlex 4.0](https://github.com/dell/csm/issues/476)
+- [Observability - Improve Grafana dashboard](https://github.com/dell/csm/issues/519)
### Fixed Issues
-
-- [Observability Topology: nil pointer error](https://github.com/dell/csm/issues/430)
+- [step_error: command not found in karavi-observability-install.sh](https://github.com/dell/csm/issues/479)
### Known Issues
\ No newline at end of file
diff --git a/content/v1/observability/uninstall/_index.md b/content/v1/observability/uninstall/_index.md
index 296ebfa64c..277013ce1c 100644
--- a/content/v1/observability/uninstall/_index.md
+++ b/content/v1/observability/uninstall/_index.md
@@ -18,5 +18,5 @@ $ helm delete karavi-observability --namespace [CSM_NAMESPACE]
You may also want to uninstall the CRDs created for cert-manager.
```console
-$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml
+$ kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.10.0/cert-manager.crds.yaml
```
diff --git a/content/v1/observability/upgrade/_index.md b/content/v1/observability/upgrade/_index.md
index 932c107e02..8812473a64 100644
--- a/content/v1/observability/upgrade/_index.md
+++ b/content/v1/observability/upgrade/_index.md
@@ -26,7 +26,7 @@ Check if the latest Helm chart version is available:
```
helm search repo dell
NAME CHART VERSION APP VERSION DESCRIPTION
-dell/karavi-observability 1.0.1 1.0.0 CSM for Observability is part of the [Container...
+dell/karavi-observability 1.4.0 1.4.0 CSM for Observability is part of the [Container...
```
>Note: If using cert-manager CustomResourceDefinitions older than v1.5.3, delete the old CRDs and install v1.5.3 of the CRDs prior to upgrade. See [Prerequisites](../deployment/helm#prerequisites) for location of CRDs.
diff --git a/content/v1/references/cli/_index.md b/content/v1/references/cli/_index.md
index e99a6775da..d631a60a35 100644
--- a/content/v1/references/cli/_index.md
+++ b/content/v1/references/cli/_index.md
@@ -15,18 +15,27 @@ This document outlines all dellctl commands, their intended use, options that ca
| [dellctl cluster add](#dellctl-cluster-add) | Add a k8s cluster to be managed by dellctl |
| [dellctl cluster remove](#dellctl-cluster-remove) | Removes a k8s cluster managed by dellctl |
| [dellctl cluster get](#dellctl-cluster-get) | List all clusters currently being managed by dellctl |
-| [dellctl backup](#dellctl-backup) | Allows to manipulate application backups/clones |
+| [dellctl backup](#dellctl-backup) | Allows you to manipulate application backups/clones |
| [dellctl backup create](#dellctl-backup-create) | Create an application backup/clones |
| [dellctl backup delete](#dellctl-backup-delete) | Delete application backups |
| [dellctl backup get](#dellctl-backup-get) | Get application backups |
-| [dellctl restore](#dellctl-restore) | Allows to manipulate application restores |
+| [dellctl restore](#dellctl-restore) | Allows you to manipulate application restores |
| [dellctl restore create](#dellctl-restore-create) | Restore an application backup |
| [dellctl restore delete](#dellctl-restore-delete) | Delete application restores |
| [dellctl restore get](#dellctl-restore-get) | Get application restores |
+| [dellctl schedule](#dellctl-schedule) | Allows you to manipulate schedules |
+| [dellctl schedule create](#dellctl-schedule-create) | Create a schedule |
+| [dellctl schedule create for-backup](#dellctl-schedule-create-for-backup) | Create a schedule for application backups |
+| [dellctl schedule delete](#dellctl-schedule-delete) | Delete schedules |
+| [dellctl schedule get](#dellctl-schedule-get) | Get schedules |
+| [dellctl encryption rekey](#dellctl-encryption-rekey) | Rekey an encrypted volume |
+| [dellctl encryption rekey-status](#dellctl-encryption-rekey-status) | Get status of an encryption rekey operation |
| [dellctl images](#dellctl-images) | List the container images needed by the CSI driver |
| [dellctl volume get](#dellctl-volume-get) | Gets PowerFlex volume information for a given tenant on a local cluster |
## Installation instructions
-1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.4.0).
+1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.5.1).
2. Run `chmod +x dellctl` to make the binary executable.
3. Move `dellctl` to `/usr/local/bin`, or add the directory containing `dellctl` to your PATH environment variable.
4. Run `dellctl --help` to list the available commands, or `dellctl <command> --help` to learn more about a specific command (a consolidated example follows below).
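
The steps above can be combined as follows on a Linux host. This is a sketch only: the release asset name `dellctl` and the download URL pattern are assumptions, so adjust them for your platform.

```shell
# Download the dellctl binary from the v1.5.1 release (asset name is an assumption)
curl -LO https://github.com/dell/csm/releases/download/v1.5.1/dellctl

# Make it executable and place it on the PATH
chmod +x dellctl
sudo mv dellctl /usr/local/bin/

# Verify the installation and list the available commands
dellctl --help
```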
@@ -60,7 +69,7 @@ Outputs help text
### dellctl cluster
-Allows to manipulate one or more k8s cluster configurations
+Allows you to manipulate one or more k8s cluster configurations
##### Available Commands
@@ -191,7 +200,7 @@ cluster2 v1.22 https://1.2.3.5:6443 035133aa-5b65-4080-a813-
### dellctl backup
-Allows to manipulate application backups/clones
+Allows you to manipulate application backups/clones
##### Available Commands
@@ -373,7 +382,7 @@ demo-app-clones Completed 2022-07-27 11:53:37 -0400 EDT 2022-08-26 11:53:3
### dellctl restore
-Allows to manipulate application restores
+Allows you to manipulate application restores
##### Available Commands
@@ -532,3 +541,333 @@ Get restores with their names
NAME BACKUP STATUS CREATED COMPLETED
restore1 backup1 Completed 2022-07-27 12:35:29 -0400 EDT
```
+
+
+
+---
+
+
+
+### dellctl schedule
+
+Allows you to manipulate schedules
+
+##### Available Commands
+
+```
+ create Create a schedule
+ delete Delete schedules
+ get Get schedules
+```
+
+##### Flags
+
+```
+ -h, --help Help for schedule
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl schedule create
+
+Create a schedule
+
+##### Available Commands
+
+```
+ for-backup Create a schedule for application backups
+```
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help Help for create
+ --name string Name for the schedule
+ --schedule string A cron expression representing when to create the application backup
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl schedule create for-backup
+
+Create a schedule for application backups
+
+##### Flags
+
+```
+ --exclude-namespaces stringArray List of namespace names to exclude from the backup.
+ --include-namespaces stringArray List of namespace names to include in the backup (use '*' for all namespaces). (default *)
+ --ttl duration Backup retention period. (default 720h0m0s)
+ --exclude-resources stringArray Resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io.
+ --include-resources stringArray Resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).
+ --backup-location string Storage location where k8s resources and application data will be backed up to. (default "default")
+ --data-mover string Data mover to be used to backup application data. (default "Restic")
+ --include-cluster-resources optionalBool[=true] Include cluster-scoped resources in the backup
+ -l, --label-selector labelSelector Only backup resources matching this label selector. (default )
+ --set-owner-references-in-backup optionalBool[=false] Specifies whether to set OwnerReferences on backups created by this schedule.
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+ -h, --help Help for for-backup
+```
+
+##### Global Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ --name string Name for the schedule
+ --schedule string A cron expression representing when to create the application backup
+```
+
+##### Output
+
+Create a schedule to back up namespace demo every hour
+
+```
+# dellctl schedule create for-backup --name schedule1 --schedule "@every 1h" --include-namespaces demo
+ INFO schedule request "schedule1" submitted successfully.
+ INFO Run 'dellctl schedule get schedule1' for more details.
+```
+
+Create a schedule to back up namespace demo once a day at midnight, and set OwnerReferences on backups created by this schedule
+
+```
+# dellctl schedule create for-backup --name schedule2 --schedule "@daily" --include-namespaces demo --set-owner-references-in-backup
+ INFO schedule request "schedule2" submitted successfully.
+ INFO Run 'dellctl schedule get schedule2' for more details.
+```
+
+Create a schedule to back up namespace demo at 23:00 (11:00 pm) every Saturday
+
+```
+# dellctl schedule create for-backup --name schedule3 --schedule "00 23 * * 6" --include-namespaces demo
+ INFO schedule request "schedule3" submitted successfully.
+ INFO Run 'dellctl schedule get schedule3' for more details.
+```
+
+
+
+---
+
+
+
+### dellctl schedule delete
+
+Delete one or more schedules
+
+##### Flags
+
+```
+ --all Delete all schedules
+ --cluster-id string Id of the cluster managed by dellctl
+ --confirm Confirm deletion
+ -h, --help Help for delete
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Delete a schedule by name
+
+```
+# dellctl schedule delete schedule1
+Are you sure you want to continue (Y/N)? y
+ INFO Request to delete schedule "schedule1" submitted successfully.
+```
+
+Delete multiple schedules
+
+```
+# dellctl schedule delete schedule1 schedule2
+Are you sure you want to continue (Y/N)? y
+ INFO Request to delete schedule "schedule1" submitted successfully.
+ INFO Request to delete schedule "schedule2" submitted successfully.
+```
+
+Delete all schedules without asking for user confirmation
+
+```
+# dellctl schedule delete --confirm --all
+ INFO Request to delete schedule "schedule1" submitted successfully.
+ INFO Request to delete schedule "schedule2" submitted successfully.
+```
+
+
+---
+
+
+
+### dellctl schedule get
+
+Get schedules
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help Help for get
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Get all the application backup schedules created on the local cluster
+
+```
+# dellctl schedule get
+NAME STATUS CREATED PAUSED SCHEDULE LAST BACKUP TIME
+schedule1 Enabled 2022-11-04 08:33:35 +0000 UTC false @every 1h NA
+schedule2 Enabled 2022-11-04 08:35:57 +0000 UTC false @daily NA
+```
+
+Get specific schedules by name
+
+```
+# dellctl schedule get schedule1
+NAME STATUS CREATED PAUSED SCHEDULE LAST BACKUP TIME
+schedule1 Enabled 2022-11-04 08:33:35 +0000 UTC false @every 1h NA
+```
+
+### dellctl encryption rekey
+
+Rekey an encrypted volume, given a name for the rekey object and the persistent volume name of the encrypted volume
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help help for get
+```
+
+
+##### Output
+
+
+```
+# dellctl encryption rekey myrekey k8s-5d2cc565d4
+ INFO rekey request "myrekey" submitted successfully for persistent volume "k8s-5d2cc565d4".
+ INFO Run 'dellctl encryption rekey-status myrekey' for more details.
+```
+
+
+### dellctl encryption rekey-status
+
+Get the status of an encryption rekey operation, given the name of the rekey object
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help help for get
+```
+
+
+##### Output
+
+
+```
+# dellctl encryption rekey-status myrekey
+ INFO Status of rekey request myrekey = completed
+```
+
+### dellctl images
+
+List the container images needed by the CSI driver
+
+**NOTE**: `dellctl images` currently supports the csi-vxflexos driver only.
+
+##### Aliases
+
+```
+images,imgs
+```
+
+##### Flags
+
+```
+ Flags:
+ -d, --driver string csi driver name
+ -h, --help help for images
+
+```
+##### Output
+
+
+```
+# dellctl images --driver csi-vxflexos
+Driver Image Supported Orchestrator Versions Sidecar Images
+dellemc/csi-vxflexos:v2.5.0 k8s1.25,k8s1.24,k8s1.23,ocp4.11,ocp4.10 k8s.gcr.io/sig-storage/csi-attacher:v4.0.0
+ k8s.gcr.io/sig-storage/csi-provisioner:v3.3.0
+ dellemc/csi-volumegroup-snapshotter:v1.2.0
+ k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
+ k8s.gcr.io/sig-storage/csi-snapshotter:v6.1.0
+ k8s.gcr.io/sig-storage/csi-resizer:v1.6.0
+ k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.6.0
+ dellemc/sdc:3.6.0.6
+
+dellemc/csi-vxflexos:v2.4.0 k8s1.24,k8s1.23,k8s1.22,ocp4.10,ocp4.9 k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
+ k8s.gcr.io/sig-storage/csi-provisioner:v3.2.1
+ dellemc/csi-volumegroup-snapshotter:v1.2.0
+ k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.6.0
+ k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
+ k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
+ k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
+ dellemc/sdc:3.6.0.6
+
+dellemc/csi-vxflexos:v2.3.0 k8s1.24,k8s1.23,k8s1.22,ocp4.10,ocp4.9 k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
+ k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ dellemc/csi-volumegroup-snapshotter:v1.0.1
+ gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0
+ k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
+ k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
+ k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
+ dellemc/sdc:3.6.0.6
+```
+
+
+### dellctl volume get
+
+Gets PowerFlex volume information for a given tenant on the local cluster
+
+##### Aliases
+ get, ls, list
+
+##### Flags
+
+```
+ -h, --help help for get
+ --insecure optionalBool[=true] provide flag to skip certificate validation
+ --namespace string namespace of the secret for the given tenant
+ --proxy string auth proxy endpoint to use
+```
+
+##### Output
+
+Gets PowerFlex volume information for a given tenant on the local cluster. The namespace is the namespace where the tenant secret is created.
+
+>Note: This output was generated using Authorization Proxy version 1.5.1. Please ensure you are using version 1.5.1 or greater.
+
+```
+# dellctl volume get --proxy --namespace vxflexos
+NAME VOLUME ID SIZE POOL SYSTEM ID PV NAME PV STATUS STORAGE CLASS PVC NAME NAMESPACE
+k8s-e7c8b39112 a69bf18e00000008 8.000000 mypool 636468e3638c840f k8s-e7c8b39112 Released vxflexos demo-claim10 default
+k8s-e6e2b46103 a69bf18f00000009 8.000000 mypool 636468e3638c840f k8s-e6e2b46103 Bound vxflexos demo-claim11 default
+k8s-b1abb817d3 a69bf19000000001 8.000000 mypool 636468e3638c840f k8s-b1abb817d3 Bound vxflexos demo-claim13 default
+k8s-28e4184f41 c6b2280d0000009a 8.000000 mypool 636468e3638c840f k8s-28e4184f41 Available local-storage
+k8s-7296621062 a69b554f00000004 8.000000 mypool 636468e3638c840f
+```
diff --git a/content/v1/replication/_index.md b/content/v1/replication/_index.md
index 2d0b15c594..2c45ef2807 100644
--- a/content/v1/replication/_index.md
+++ b/content/v1/replication/_index.md
@@ -16,34 +16,41 @@ applications in case of both planned and unplanned migration.
CSM for Replication provides the following capabilities:
{{
}}
-| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
-| ----------------------------------------------------------------------------------- | :------: | :--------: | :--------: | :-------: | :---: |
-| Replicate data using native storage array based replication | yes | yes | yes | no | no |
-| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
-| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
-| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no |
-| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
-| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
+| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
+| ----------------------------------------------------------------------------------------------------------------------------------- | :------: | :--------: | :--------: | :-------: | :---: |
+| Replicate data using native storage array based replication | yes | yes | yes | no | no |
+| Asynchronous file volume replication | no | no | yes | no | no |
+| Asynchronous block volume replication | yes | yes | n/a | no | no |
+| Synchronous file volume replication | no | no | no | no | no |
+| Synchronous block volume replication | yes | no | n/a | no | no |
+| Active-Active (Metro) block volume replication | yes | no | n/a | no | no |
+| Active-Active (Metro) file volume replication | no | no | no | no | no |
+| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
+| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
+| Failover & Reprotect applications using the replicated volumes | yes | yes | no | no | no |
+| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
+| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
{{
}}
@@ -51,11 +58,11 @@ CSM for Replication provides the following capabilities:
CSM for Replication supports the following CSI drivers and versions.
{{
}}
## Details
@@ -78,28 +85,15 @@ the objects still exist in pairs.
* Different namespaces cannot share the same RDF group for creating volumes with ASYNC mode for PowerMax.
* Same RDF group cannot be shared across different replication modes for PowerMax.
-### CSM for Replication Module Capabilities
-
-CSM for Replication provides the following capabilities:
-
-{{
}}
-| Capability | PowerMax | PowerStore | PowerScale | PowerFlex | Unity |
-| ----------------------------------------------------------------| -------- | ---------- | ---------- | --------- | ----- |
-| Asynchronous replication of PVs accross or single K8s clusters | yes | yes (block)| yes | no | no |
-| Synchronous replication of PVs accross or single K8s clusters | yes | no | no | no | no |
-| Metro replication single (stretched) cluster | yes | no | no | no | no |
-| Replication actions (failover, reprotect) | yes | yes | yes | no | no |
-{{
}}
-
### Supported Platforms
The following matrix provides a list of all supported versions for each Dell Storage product.
-| Platforms | PowerMax | PowerStore | PowerScale |
-| ---------- | ----------------- | ---------------- | ---------------- |
-| Kubernetes | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
-| RedHat Openshift |4.9, 4.10 | 4.9, 4.10 | 4.9, 4.10 |
-| CSI Driver | 2.x(k8s), 2.2+(OpenShift)| 2.x | 2.2+ |
+| Platforms | PowerMax | PowerStore | PowerScale |
+| ---------------- | ------------------------------ | ---------------- | ---------------- |
+| Kubernetes | 1.23, 1.24, 1.25 | 1.22, 1.23, 1.24 | 1.22, 1.23, 1.24 |
+| RedHat Openshift | 4.10, 4.11 | 4.9, 4.10 | 4.9, 4.10 |
+| CSI Driver | 2.x(k8s), 2.2+(OpenShift) | 2.x | 2.2+ |
For compatibility with storage arrays please refer to corresponding [CSI drivers](../csidriver/#features-and-capabilities)
diff --git a/content/v1/replication/deployment/configmap-secrets.md b/content/v1/replication/deployment/configmap-secrets.md
index 677a309e7a..b93d82e71b 100644
--- a/content/v1/replication/deployment/configmap-secrets.md
+++ b/content/v1/replication/deployment/configmap-secrets.md
@@ -31,7 +31,7 @@ Run the following command -
```shell
repctl cluster inject --use-sa
```
-This will create secrets using the token for the `default` ServiceAccount and update the ConfigMap in all the clusters
+This will create secrets using the token for the `dell-replication-controller-sa` ServiceAccount and update the ConfigMap in all the clusters
which have been configured for `repctl`
#### Inject KubeConfigs from repctl configuration
@@ -103,13 +103,13 @@ kubectl create secret generic --from-file=data= NOTE: Available RPO values "Five_Minutes", "Fifteen_Minutes", "Thirty_Minutes", "One_Hour", "Six_Hours", "Twelve_Hours", "One_Day"
* `replication.storage.dell.com/ignoreNamespaces`: if set to `true`, the PowerScale driver ignores which namespace volumes are created in and puts every volume created using this storage class into a single volume group.
* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them.
-* `Accesszone` is the name of the access zone a volume can be created in
-* `IsiPath` is the base path for the volumes to be created on the PowerScale cluster
-* `RootClientEnabled` determines whether the driver should enable root squashing or not
+
+> NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\\' cannot be more than 63 characters.
+
+* `Accesszone` is the name of the access zone a volume can be created in.
+* `AzServiceIP` AccessZone groupnet service IP. It is optional and can be provided if different than the PowerScale cluster endpoint.
+* `IsiPath` is the base path for the volumes to be created on the PowerScale cluster.
+* `RootClientEnabled` determines whether the driver should enable root squashing or not.
* `ClusterName` name of PowerScale cluster, where PV will be provisioned, specified as it was listed in `isilon-creds` secret.
After figuring out how storage classes would look, you just need to go and apply them to your Kubernetes clusters with `kubectl`.
@@ -149,20 +160,28 @@ name: "isilon-replication"
driver: "isilon"
reclaimPolicy: "Delete"
replicationPrefix: "replication.storage.dell.com"
+remoteRetentionPolicy:
+ RG: "Retain"
+ PV: "Retain"
parameters:
rpo: "Five_Minutes"
ignoreNamespaces: "false"
volumeGroupPrefix: "csi"
- accessZone: "System"
isiPath: "/ifs/data/csi"
- rootClientEnabled: "false"
clusterName:
source: "cluster-1"
target: "cluster-2"
+ rootClientEnabled:
+ source: "false"
+ target: "false"
+ accessZone:
+ source: "System"
+ target: "System"
+ azServiceIP:
+ source: "192.168.1.1"
+ target: "192.168.1.2"
```
-> NOTE: both storage classes expected to use access zone with same name
-
After preparing the config, you can apply it to both clusters with `repctl`. Before you do this, ensure you've added your clusters to `repctl` via the `add` command.
To create storage classes just run `./repctl create sc --from-config ` and storage classes would be applied to both clusters.
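
For example, if the prepared config above were saved as `isilon-replication.yaml` (the filename is illustrative), the storage classes could be created with:

```shell
./repctl create sc --from-config isilon-replication.yaml
```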
@@ -178,11 +197,53 @@ On your source cluster, create a PersistentVolumeClaim using one of the replicat
The CSI PowerScale driver will create a volume on the array, add it to a VolumeGroup and configure replication
using the parameters provided in the replication enabled Storage Class.
+### SyncIQ Policy Architecture
+When creating `DellCSIReplicationGroup` (RG) objects on the Kubernetes cluster(s) used for replication, matching SyncIQ policies are created on *both* the source and target PowerScale storage arrays.
+
+This is done so that the RG objects can communicate with a relative 'local' and 'remote' set of policies to query for current synchronization status and perform replication actions; on the *source* Kubernetes cluster's RG, the *source* PowerScale array is seen as 'local' and the *target* PowerScale array is seen as remote. The inverse relationship exists on the *target* Kubernetes cluster's RG, which sees the *target* PowerScale array as 'local' and the *source* PowerScale array as 'remote'.
+
+Upon creation, both SyncIQ policies (source and target) are set to a schedule of `When source is modified`. The source PowerScale array's SyncIQ policy is `Enabled` when the RG is created, and the target array's policy is `Disabled`. Similarly, the directory that is being replicated is *read-write accessible* on the source storage array, and is restricted to *read-only* on the target.
+
+### Performing Failover on PowerScale
+
+Steps for performing Failover can be found in the Tools page under [Executing Actions](https://dell.github.io/csm-docs/docs/replication/tools/#executing-actions). There are some PowerScale-specific considerations to keep in mind:
+- Failover on PowerScale does NOT halt writes on the source side. It is recommended that the storage administrator or end user manually stop writes to ensure no data is lost on the source side in the event of future failback.
+- In the case of unplanned failover, the source-side SyncIQ policy will be left enabled and set to its previously defined `When source is modified` sync schedule. It is recommended for storage admins to manually disable the source-side SyncIQ policy when bringing the failed-over source array back online.
+
+### Performing Failback on PowerScale
+
+Failback operations are not presently supported for PowerScale. In the event of a failover, failback can be performed manually using the below methodologies.
+#### Failback - Discard Target
+
+Performing a failback that discards changes made to the target simply resumes synchronization from the source. The steps to perform this operation are as follows:
+1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
+2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
+3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Local targets` tab.
+4. Perform `Actions > Disallow writes` on the target-side Local Target policy that matches the SyncIQ policy undergoing failback.
+5. Return to the source array. Enable the source-side SyncIQ policy. Edit its schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
+#### Failback - Discard Source
+
+Information on the methodology for performing a failback while retaining changes made to the original target can be found in the relevant PowerScale SyncIQ documentation. The detailed steps are as follows:
+
+1. Log in to the source PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
+2. Edit the source-side SyncIQ policy's schedule from `When source is modified` to `Manual`.
+3. Log in to the target PowerScale array. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
+4. Delete the target-side SyncIQ policy that has a name matching the SyncIQ policy undergoing failback. This is necessary to prevent conflicts when running resync-prep in the next step.
+5. On the source PowerScale array, enable the SyncIQ policy that is undergoing failback. On this policy, perform `Actions > Resync-prep`. This will create a new SyncIQ policy on the target PowerScale array, matching the original SyncIQ policy with an appended *_mirror* to its name. Wait until the policy being acted on is disabled by the resync-prep operation before continuing.
+6. On the target PowerScale array's `Policies` tab, perform `Actions > Start job` on the *_mirror* policy. Wait for this synchronization to complete.
+7. On the source PowerScale array, switch from the `Policies` tab to the `Local targets` tab. Find the local target policy that matches the SyncIQ policy undergoing failback and perform `Actions > Allow writes`.
+8. On the target PowerScale array, perform `Actions > Resync-prep` on the *_mirror* policy. Wait until the policy on the source side is re-enabled by the resync-prep operation before continuing.
+9. On the target PowerScale array, delete the *_mirror* SyncIQ policy.
+10. On the target PowerScale array, manually recreate the original SyncIQ policy that was deleted in step 4. This will require filepaths, RPO, and other details that can be obtained from the source-side SyncIQ policy. Its name **must** match the source-side SyncIQ policy. Its source directory will be the source-side policy's *target* directory, and vice-versa. Its target host will be the source PowerScale array endpoint.
+11. Ensure that the target-side SyncIQ policy that was just created is **Enabled.** This will create a Local Target policy on the source side. If it was not created as Enabled, enable it now.
+12. On the source PowerScale array, select the `Local targets` tab. Perform `Actions > Allow writes` on the source-side Local Target policy that matches the SyncIQ policy undergoing failback.
+13. Disable the target-side SyncIQ policy.
+14. On the source PowerScale array, edit the SyncIQ policy's schedule from `Manual` to `When source is modified`. Set the time delay for synchronization as appropriate.
+
### Supported Replication Actions
The CSI PowerScale driver supports the following list of replication actions:
- FAILOVER_REMOTE
- UNPLANNED_FAILOVER_LOCAL
-- REPROTECT_LOCAL
- SUSPEND
- RESUME
- SYNC
diff --git a/content/v1/replication/deployment/powerstore.md b/content/v1/replication/deployment/powerstore.md
index c7bf44721d..dfde098928 100644
--- a/content/v1/replication/deployment/powerstore.md
+++ b/content/v1/replication/deployment/powerstore.md
@@ -115,6 +115,9 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/ignoreNamespaces`: if set to `true`, the PowerStore driver ignores which namespace volumes are created in and puts every volume created using this storage class into a single volume group.
* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name
to differentiate them.
+
+>NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.
+
* `arrayID` is a unique identifier of the storage array you specified in array connection secret.
Let's follow up that with an example. Let's assume you have two Kubernetes clusters and two PowerStore
diff --git a/content/v1/replication/deployment/unity.md b/content/v1/replication/deployment/unity.md
index cab4a068fe..84bc358ff4 100644
--- a/content/v1/replication/deployment/unity.md
+++ b/content/v1/replication/deployment/unity.md
@@ -110,6 +110,7 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure.
* `replication.storage.dell.com/ignoreNamespaces`: if set to `true`, the Unity driver ignores which namespace volumes are created in and puts every volume created using this storage class into a single volume group.
* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them.
+>NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.
* `arrayId` is a unique identifier of the storage array you specified in array connection secret.
* `nasServer` id of the Nas server of local array to which the allocated volume will belong.
* `storagePool` is the storage pool of the local array.
diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md
index d110de5734..33d56c7cf5 100644
--- a/content/v1/replication/release/_index.md
+++ b/content/v1/replication/release/_index.md
@@ -6,25 +6,16 @@ Description: >
Dell Container Storage Modules (CSM) release notes for replication
---
-## Release Notes - CSM Replication 1.3.0
+## Release Notes - CSM Replication 1.3.1
### New Features/Changes
-- Added support for Kubernetes 1.24
-- Added support for OpenShift 4.10
-- Added volume upgrade/downgrade functionality for replication volumes
-
+There are no new features in this release.
### Fixed Issues
-- Fixed panic occuring when encountering PVC with empty StorageClass
-- PV and RG retention policy checks are no longer case sensitive
-- RG will now display EMPTY link state when no PV found
-- [`PowerScale`] Running `reprotect` action on source cluster after failover no longer puts RG into UNKNOWN state
-- [`PowerScale`] Deleting RG will break replication link before trying to delete group on array
+- [PowerScale Replication - Replicated PV has the wrong AzServiceIP](https://github.com/dell/csm/issues/514)
+- ["repctl cluster inject --use-sa" doesn't work for Kubernetes 1.24 and above](https://github.com/dell/csm/issues/463)
### Known Issues
-
-| Github ID | Description |
-|-----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [514](https://github.com/dell/csm/issues/514) | **PowerScale:** When creating a replicated PV in PowerScale, the replicated PV's AzServiceIP property has the target PowerScale endpoint instead of the one defined in the target Storage class. |
-| [515](https://github.com/dell/csm/issues/515) | **PowerScale:** If you failover with an application still running and the volume mounted on the target site, then we cannot mount the PVC due to : "mount.nfs: Stale file handle". |
-| [518](https://github.com/dell/csm/issues/518) | **PowerScale:** On CSM for Replication with PowerScale, after a repctl failover to a target cluster, the source directory has been removed from the PowerScale. The PersistentVolume Object is still present in Kubernetes. |
+| Github ID | Description |
+| --------------------------------------------- | --------------------------------------------------------------------------------------- |
+| [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. |
diff --git a/content/v1/replication/replication-actions.md b/content/v1/replication/replication-actions.md
index 96eece95f8..00a31ab560 100644
--- a/content/v1/replication/replication-actions.md
+++ b/content/v1/replication/replication-actions.md
@@ -34,11 +34,11 @@ For e.g. -
The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:
{{
}}
### Maintenance Actions
@@ -46,11 +46,11 @@ These actions can be run at any site and are used to change the replication link
The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays
{{
}}
### How to perform actions
diff --git a/content/v1/resiliency/deployment.md b/content/v1/resiliency/deployment.md
index edadca721a..ec7c5fb61b 100644
--- a/content/v1/resiliency/deployment.md
+++ b/content/v1/resiliency/deployment.md
@@ -24,7 +24,7 @@ The drivers that support Helm chart installation allow CSM for Resiliency to be
# Enable this feature only after contacting support for additional information
podmon:
enabled: true
- image: dellemc/podmon:v
+ image: dellemc/podmon:v1.3.0
controller:
args:
- "--csisock=unix:/var/run/csi/csi.sock"
@@ -33,6 +33,7 @@ podmon:
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
node:
args:
- "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock"
@@ -41,6 +42,7 @@ podmon:
- "--leaderelection=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
```
@@ -65,6 +67,7 @@ To install CSM for Resiliency with the driver, the following changes are require
| arrayConnectivityPollRate | Optional | The minimum polling rate in seconds to determine if the array has connectivity to a node. Should not be set to less than 5 seconds. See the specific section for each array type for additional guidance. | controller & node |
| arrayConnectivityConnectionLossThreshold | Optional | Gives the number of failed connection polls that will be deemed to indicate array connectivity loss. Should not be set to less than 3. See the specific section for each array type for additional guidance. | controller |
| driver-config-params | Required | String that sets the path to a file containing configuration parameters (for instance, log levels) for a driver. | controller & node |
+| ignoreVolumelessPods | Optional | Boolean value that, if set to true, enables CSM for Resiliency to ignore pods that do not have a persistent volume attached. | controller & node |
## PowerFlex Specific Recommendations
@@ -86,6 +89,7 @@ podmon:
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
node:
args:
- "--csisock=unix:/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock"
@@ -94,6 +98,7 @@ podmon:
- "--leaderelection=false"
- "--driver-config-params=/vxflexos-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
```
@@ -114,6 +119,7 @@ podmon:
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/unity-config/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
node:
args:
- "--csisock=unix:/var/lib/kubelet/plugins/unity.emc.dell.com/csi_sock"
@@ -123,6 +129,7 @@ podmon:
- "--leaderelection=false"
- "--driver-config-params=/unity-config/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
```
@@ -144,6 +151,7 @@ podmon:
- "--skipArrayConnectionValidation=false"
- "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
node:
args:
- "--csisock=unix:/var/lib/kubelet/plugins/csi-isilon/csi_sock"
@@ -154,6 +162,7 @@ podmon:
- "--leaderelection=false"
- "--driver-config-params=/csi-isilon-config-params/driver-config-params.yaml"
- "--driverPodLabelValue=dell-storage"
+ - "--ignoreVolumelessPods=false"
```
## Dynamic parameters
diff --git a/content/v1/secure/encryption/_index.md b/content/v1/secure/encryption/_index.md
index 3f2568dfb6..832036ad62 100644
--- a/content/v1/secure/encryption/_index.md
+++ b/content/v1/secure/encryption/_index.md
@@ -26,11 +26,11 @@ For detailed information on the cryptography behind gocryptfs, see [gocryptfs Cr
When a CSI Driver is installed with the Encryption feature enabled, two provisioners are registered in the cluster:
-#### Provisioner for unencrypted volumes
+**Provisioner for unencrypted volumes**
This provisioner belongs to the storage driver and does not depend on the Encryption feature. Use a storage class with this provisioner to create regular unencrypted volumes.
-#### Provisioner for encrypted volumes
+**Provisioner for encrypted volumes**
This provisioner belongs to Encryption and registers with the name [`encryption.pluginName`](deployment/#helm-chart-values) when Encryption is enabled. Use a storage class with this provisioner to create encrypted volumes.
@@ -68,7 +68,8 @@ the CSI driver must be restarted to pick up the change.
{{
}}
| COP/OS | Supported Versions |
|-|-|
-| Kubernetes | 1.22, 1.23, 1.24 |
+| Kubernetes | 1.22, 1.23, 1.24, 1.25 |
+| Red Hat OpenShift | 4.10, 4.11 |
| RHEL | 7.9, 8.4 |
| Ubuntu | 18.04, 20.04 |
| SLES | 15SP2 |
@@ -117,6 +118,10 @@ Access to the data will be lost for ever.
Refer to [Vault Configuration section](vault) for minimal configuration steps required to support Encryption and other configuration considerations.
+## Key Rotation (rekey)
+This preview of Encryption includes the ability to change the KEK (Key Encryption Key) of an encrypted volume, an operation commonly known as Shallow Rekey, or
+Shallow Key Rotation. The KEK is the 256-bit key that encrypts the Data Encryption Key which encrypts the data on the volume.
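+
+For example, once the optional Rekey Controller and the `dellctl` CLI are installed, a volume can be rekeyed with a single command (a sketch; the rekey name and persistent volume name shown are illustrative):
+
+```shell
+dellctl encryption rekey myrekey k8s-5d2cc565d4
+```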
+
## Kubernetes Worker Hosts Requirements
- Each Kubernetes worker host should have SSH server running.
diff --git a/content/v1/secure/encryption/deployment.md b/content/v1/secure/encryption/deployment.md
index 33fbf2174f..1cd069b24d 100644
--- a/content/v1/secure/encryption/deployment.md
+++ b/content/v1/secure/encryption/deployment.md
@@ -16,6 +16,11 @@ the rest of the deployment process is described in the correspondent [CSI driver
Hashicorp Vault must be [pre-configured](../vault) to support Encryption. The Vault server's IP address and port must be accessible
from the Kubernetes cluster where the CSI driver is to be deployed.
+## Rekey Controller
+
+The Encryption Rekey CRD Controller is an optional component that, if installed, allows rekeying of encrypted volumes in a
+Kubernetes cluster. Please refer to [Rekey Configuration](../rekey) for the Rekey Controller installation details.
+
## Helm Chart Values
The drivers that support Encryption via Helm chart have an `encryption` block in their *values.yaml* file that looks like this:
@@ -29,20 +34,31 @@ encryption:
pluginName: "sec-isilon.dellemc.com"
# image: Encryption driver image name.
- image: "dellemc/csm-encryption:v0.1.0"
-
- # imagePullPolicy: If specified, overrides the chart global imagePullPolicy.
- imagePullPolicy:
+ image: "dellemc/csm-encryption:v0.2.0"
# logLevel: Log level of the encryption driver.
# Allowed values: "error", "warning", "info", "debug", "trace".
logLevel: "error"
-
+
+ # apiPort: TCP port number used by the REST API server.
+ apiPort: 3838
+
# livenessPort: HTTP liveness probe port number.
# Leave empty to disable the liveness probe.
# Example: 8080
livenessPort:
+ # ocp: Enable when running on OpenShift Container Platform with CoreOS worker nodes.
+ ocp: false
+
+ # ocpCoreID: User ID and group ID of user core on CoreOS worker nodes.
+ # Ignored when ocp is set to false.
+ ocpCoreID: "1000:1000"
+
# extraArgs: Extra command line parameters to pass to the encryption driver.
# Allowed values:
# --sharedStorage - may be required by some applications to work properly.
@@ -51,14 +67,16 @@ encryption:
extraArgs: []
```
-| Parameter | Description | Required | Default |
-| --------- | ----------- | -------- | ------- |
+| Parameter | Description| Required | Default |
+| --------- |------------|----------| ------- |
| enabled | Enable/disable volume encryption feature. | No | false |
| pluginName | The name of the provisioner to use for encrypted volumes. | No | "sec-isilon.dellemc.com" |
-| image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
-| imagePullPolicy | If specified, overrides the chart global imagePullPolicy. | No | CSI driver global imagePullPolicy |
-| logLevel | Log level of the encryption driver. Allowed values: "error", "warning", "info", "debug, `"trace". | No | "error" |
+| image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.2.0" |
+| logLevel | Log level of the encryption driver. Allowed values: "error", "warning", "info", "debug", "trace". | No | "error" |
+| apiPort | TCP Port number used by the REST API Server. | No | 3838 |
| livenessPort | HTTP liveness probe port number. Leave empty to disable the liveness probe. | No | |
+| ocp | Enable when running an OCP Platform with CoreOS worker nodes. | No | false |
+| ocpCoreID | User ID and group ID of user core on CoreOS worker nodes. Ignored when ocp is set to false. | No | "1000:1000" |
| extraArgs | Extra command line parameters to pass to the encryption driver. Allowed values: "\-\-sharedStorage" - may be required by some applications to work properly. When set, performance is reduced and hard links cannot be created. See the [gocryptfs documentation](https://github.com/rfjakob/gocryptfs/blob/v2.2.1/Documentation/MANPAGE.md#-sharedstorage) for more details. | No | [] |
## Secrets and Config Maps
@@ -168,3 +186,4 @@ These fields are available for use in *client.json*:
| tls_config.client_crt | Set to "/etc/dea/vault/client.crt" | Yes | |
| tls_config.client_key | Set to "/etc/dea/vault/client.key" | Yes | |
| tls_config.client_ca | Set to "/etc/dea/vault/server-ca.crt" | Yes | |
+
diff --git a/content/v1/secure/encryption/rekey.md b/content/v1/secure/encryption/rekey.md
new file mode 100644
index 0000000000..3412eac0ec
--- /dev/null
+++ b/content/v1/secure/encryption/rekey.md
@@ -0,0 +1,135 @@
+---
+title: "Rekey Configuration"
+linkTitle: "Rekey Configuration"
+weight: 4
+Description: >
+ Rekey Configuration and Usage
+---
+
+## Rekey Controller Installation
+
+The CSM Encryption Rekey CRD Controller is an optional component that, if installed, allows rekeying of encrypted volumes in a
+Kubernetes cluster. The Rekey Controller can be installed via the Dell Helm charts [repository](https://github.com/dell/helm-charts).
+
+Dell Helm charts can be added with the command `helm repo add dell https://dell.github.io/helm-charts`.
+
+### Kubeconfig Secret
+
+A secret with kubeconfig must be created with the name `cluster-kube-config`. Here is an example:
+
+```shell
+ kubectl create secret generic cluster-kube-config --from-file=config=/root/.kube/config
+```
+
+### Helm Chart Values
+
+The Rekey Controller Helm chart defines these values:
+
+```yaml
+# Rekey controller image name.
+image: "dellemc/csm-encryption-rekey-controller:v0.1.0"
+
+# Rekey controller image pull policy.
+# Allowed values:
+# Always: Always pull the image.
+# IfNotPresent: Only pull the image if it does not already exist on the node.
+# Never: Never pull the image.
+imagePullPolicy: IfNotPresent
+
+# logLevel: Log level of the rekey controller.
+# Allowed values: "error", "warning", "info", "debug", "trace".
+logLevel: "info"
+
+# This value is required and must match encryption.pluginName value
+# of the corresponding Dell CSI driver.
+provisioner:
+
+# This value is required and must match encryption.apiPort value
+# of the corresponding Dell CSI driver.
+port:
+```
+
+| Parameter | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| image | Rekey controller image name. | No | "dellemc/csm-encryption-rekey-controller:v0.1.0" |
+| imagePullPolicy | Rekey controller image pull policy. | No | "IfNotPresent" |
+| logLevel | Log level of the rekey controller. | No | "info" |
+| provisioner | This value is required and must match `encryption.pluginName` value of the corresponding Dell CSI driver. | Yes | |
+| port | This value is required and must match `encryption.apiPort` value of the corresponding Dell CSI driver. | Yes | |
+
+### Deployment
+
+Copy the chart's values.yaml to a local file and adjust the values in the local file for the current cluster.
+Deploy the controller using a command similar to this:
+
+```shell
+helm install --values local-values.yaml rekey-controller dell/csm-encryption-rekey-controller
+```
+
+A rekey-controller pod should now be up and running.
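+
+As an end-to-end sketch, the two required values can be written to a local file and passed to Helm as shown below. The `provisioner` and `port` values are illustrative (they match the encryption chart defaults shown earlier) and must match the `encryption.pluginName` and `encryption.apiPort` of your CSI driver installation.
+
+```shell
+# Write a minimal local-values.yaml (values shown match the encryption chart defaults)
+cat > local-values.yaml <<EOF
+provisioner: "sec-isilon.dellemc.com"
+port: 3838
+EOF
+
+# Install the Rekey Controller from the Dell Helm charts repository
+helm install --values local-values.yaml rekey-controller dell/csm-encryption-rekey-controller
+```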
+
+## Rekey Usage
+
+Rekeying is initiated and monitored via Kubernetes custom resources of type `rekeys.encryption.storage.dell.com`.
+This can be done directly [using kubectl](#rekey-with-kubectl) or in a more user-friendly way [using dellctl](#rekey-with-dellctl).
+Creation of a rekey resource for a PV will kick off a rekey process on this PV. The rekey resource will contain the result
+of the operation. Refer to [Rekey Status](#rekey-status) for possible status values.
+
+### Rekey with dellctl
+
+If the `dellctl` CLI is installed, rekeying an encrypted volume is simple.
+For example, to rekey a PV with the name `k8s-112a5d41bc` use a command like this:
+
+```shell
+$ dellctl encryption rekey myrekey k8s-112a5d41bc
+INFO rekey request "myrekey" submitted successfully for persistent volume "k8s-112a5d41bc".
+INFO Run 'dellctl encryption rekey-status myrekey' for more details.
+```
+
+Then to check the status of the newly created rekey with the name `myrekey` use this command:
+
+```shell
+$ dellctl encryption rekey-status myrekey
+INFO Status of rekey request myrekey = completed
+```
+
+### Rekey with kubectl
+
+Create a cluster-scoped rekey resource to rekey an encrypted volume.
+For example, to rekey a PV with the name `k8s-09a76734f` use a command like this:
+
+```shell
+kubectl create -f - <
Release Notes
---
-### New Features/Changes
+## New Features/Changes
- [Technical preview release](https://github.com/dell/csm/issues/437)
-- PowerScale CSI volumes encryption (for new volumes)
-- Encryption keys stored in Hashicorp Vault
+- Shallow Rekey with Rekey CRDs.
+- OpenShift Container Platform support (4.10 and 4.11).
+- Kubernetes 1.25 support.
-### Fixed Issues
+## Fixed Issues
There are no fixed issues in this release.
-### Known Issues
+## Known Issues
-There are no known issues in this release.
\ No newline at end of file
+There are no known issues in this release.
diff --git a/content/v1/secure/encryption/troubleshooting.md b/content/v1/secure/encryption/troubleshooting.md
index b966adf50a..4323302bce 100644
--- a/content/v1/secure/encryption/troubleshooting.md
+++ b/content/v1/secure/encryption/troubleshooting.md
@@ -1,7 +1,7 @@
---
title: "Troubleshooting"
linkTitle: "Troubleshooting"
-weight: 4
+weight: 5
Description: >
Troubleshooting
---
@@ -43,27 +43,27 @@ If you run a [test instance of the server in a Docker container](../vault#vault-
## Typical Failure Reasons
-#### Incorrect Vault related configuration
+### Incorrect Vault related configuration
- check [logs](#logs-and-events)
- check [vault-auth secret](../deployment#secret-vault-auth)
- check [vault-cert secret](../deployment#secret-vault-cert)
- check [vault-client-conf config map](../deployment#configmap-vault-client-conf)
-#### Incorrect Vault server-side configuration
+### Incorrect Vault server-side configuration
- check [logs](#logs-and-events)
- check [Vault server configuration](../vault#minimum-server-configuration)
-#### Expired AppRole secret ID
+### Expired AppRole secret ID
- [reset the role secret ID](../vault#set-role-id-and-secret-id-to-the-role)
-#### Incorrect CSI driver configuration
+### Incorrect CSI driver configuration
- check the related CSI driver [troubleshooting steps](../../../csidriver/troubleshooting)
-#### SSH server is stopped/restarted on the worker host {#ssh-stopped}
+### SSH server is stopped/restarted on the worker host {#ssh-stopped}
This may manifest in:
- failure to start the CSI driver
@@ -74,7 +74,7 @@ Resolution:
- check SSH server is running on all worker hosts
- stop all workloads that use encrypted volumes on the node, then restart them
-#### No license provided, or license expired
+### No license provided, or license expired
This may manifest in:
- failure to start the CSI driver
@@ -85,3 +85,18 @@ Resolution:
- check the license is for the cluster on which the encrypted volumes are created
- check [encryption-license secret](../deployment#secret-encryption-license)
+## Typical Rekey Failure reasons
+If all rekeys in the cluster are failing:
+- check the Rekey controller helm chart values.yaml `provisioner` name against the Dell CSI driver chart `encryption.pluginName`, and ensure they match.
+- check the Rekey controller helm chart values.yaml `port` number against the Dell CSI driver chart `encryption.apiPort`, and ensure they match.
+
+If rekeys fail for a particular PV:
+ - check that the volume is provisioned by the Encryption provisioner
+ - check that volume attachments exist for the said PV
+ - check that at least one node on which the PV is mounted is available and reachable
+ - check the Encryption provisioner logs for details that may indicate the failure reason
+ - check the Rekey controller log for the reason for failure
+
+If a rekey results in a `Status.Phase` of `unknown`:
+ - this implies the connection failed during the rekey process, which may mean the volume was rekeyed
+ - an additional rekey attempt should work, assuming a reliable connection to the Encryption provisioner; this may result in the volume being rekeyed twice.
\ No newline at end of file
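+
+As a quick first check, the rekey resources themselves can be inspected with `kubectl`. This is a sketch; the rekey name `myrekey` is illustrative.
+
+```shell
+# List rekey resources
+kubectl get rekeys.encryption.storage.dell.com
+
+# Show details and events for a specific rekey
+kubectl describe rekeys.encryption.storage.dell.com myrekey
+```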
diff --git a/content/v1/secure/encryption/uninstallation.md b/content/v1/secure/encryption/uninstallation.md
index 60144e866f..9d0997b42e 100644
--- a/content/v1/secure/encryption/uninstallation.md
+++ b/content/v1/secure/encryption/uninstallation.md
@@ -10,11 +10,11 @@ Description: >
Login to each worker host and perform these steps:
-#### Remove directory */root/.driver-sec*
+__Remove directory */root/.driver-sec*__
This directory was created when a CSI driver with Encryption first ran on the host.
-#### Remove entry from */root/.ssh/authorized_keys*
+__Remove entry from */root/.ssh/authorized_keys*__
This is an entry added when a CSI driver with Encryption first ran on the host.
It ends with `driver-sec`, similarly to:
@@ -32,8 +32,12 @@ It can be removed with `sed -i '/^ssh-rsa .* driver-sec$/d' /root/.ssh/authorize
## Remove Kubernetes Resources
-Remove [the resources that were created in Kubernetes cluster for Encryption](../deployment#secrets-and-config-maps).
+Remove [the resources](../deployment#secrets-and-config-maps) created in Kubernetes cluster for Encryption.
## Remove Vault Server Configuration
-Remove [the configuration created in the Vault server for Encryption](../vault#minimum-server-configuration).
+Remove [the configuration](../vault#minimum-server-configuration) created in the Vault server for Encryption.
+
+## Remove Rekey Controller
+
+Remove [the resources](../rekey#rekey-controller-installation) created during the installation of the Rekey Controller.
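+
+If the Rekey Controller was installed with the Helm release name used in the installation example, it can be removed with the command below (adjust the release name and namespace to match your environment):
+
+```shell
+helm uninstall rekey-controller
+```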
diff --git a/content/v1/secure/encryption/vault.md b/content/v1/secure/encryption/vault.md
index 6332ea2c13..362b959dcf 100644
--- a/content/v1/secure/encryption/vault.md
+++ b/content/v1/secure/encryption/vault.md
@@ -17,7 +17,7 @@ It creates a standalone server with in-memory (non-persistent) storage, running
> **NOTE**: With in-memory storage, the encryption keys are permanently destroyed upon the server termination.
-#### Generate TLS certificates for server and client
+### Generate TLS certificates for server and client
Create server CA private key and certificate:
@@ -106,7 +106,7 @@ openssl x509 -req \
cat client-ca.crt >> client.crt
```
-#### Create server hcl file
+### Create server hcl file
```shell
cat >server.hcl < Variable `CONF_DIR` below refers to the directory containing files *server.crt*, *server.key*, *client-ca.crt* and *server.hcl*.
```shell
@@ -163,7 +163,7 @@ Refer to the [Hashicorp Vault documentation](https://www.vaultproject.io/docs) f
the Docker host where the server is running.
> - `VAULT_TOKEN` - Authentication token, e.g. the root token `DemoRootToken` used in the [test instance of Vault](#vault-server-installation).
-#### Enable Key/Value secret engine
+### Enable Key/Value secret engine
```shell
vault secrets enable -version=2 -path=dea-keys/ kv
@@ -172,13 +172,13 @@ vault write /dea-keys/config cas_required=true max_versions=1
Key/Value secret engine is used to store encryption keys. Each encryption key is represented by a key-value entry.
-#### Enable AppRole authentication
+### Enable AppRole authentication
```shell
vault auth enable approle
```
-#### Create a role
+### Create a role
```shell
vault write auth/approle/role/dea-role \
@@ -192,7 +192,7 @@ vault write auth/approle/role/dea-role \
TTL values here are chosen arbitrarily and can be changed to desired values.
-#### Create and assign a token policy to the role
+### Create and assign a token policy to the role
```shell
vault policy write dea-policy - <
-CSM is made up of multiple components including modules (enterprise capabilities), CSI drivers (storage enablement) and, other related applications (deployment, feature controllers, etc).
+CSM is made up of multiple components including modules (enterprise capabilities), CSI drivers (storage enablement), and other related applications (deployment, feature controllers, etc).
+
+{{< cardpane >}}
+ {{< card header="[**Authorization**](authorization/)"
+ footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerScale](csidriver/features/powerscale/) [PowerMax](csidriver/features/powermax/)">}}
+ CSM for Authorization provides storage and Kubernetes administrators the ability to apply RBAC for Dell CSI Drivers. It does this by deploying a proxy between the CSI driver and the storage system to enforce role-based access and usage rules.
+[...Learn more](authorization/)
+
+ {{< /card >}}
+ {{< card header="[**Replication**](replication/)"
+ footer="Supports [PowerStore](csidriver/features/powerstore/) [PowerScale](csidriver/features/powerscale/) [PowerMax](csidriver/features/powermax/)">}}
+ CSM for Replication project aims to bring Replication & Disaster Recovery capabilities of Dell Storage Arrays to Kubernetes clusters. It helps you replicate groups of volumes and can provide you a way to restart applications in case of both planned and unplanned migration.
+[...Learn more](replication/)
+{{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+{{< card header="[**Resiliency**](resiliency/)"
+ footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerScale](csidriver/features/powerscale/) [Unity](csidriver/features/unity/)">}}
+ CSM for Resiliency is designed to make Kubernetes Applications, including those that utilize persistent storage, more resilient to various failures.
+[...Learn more](resiliency/)
+ {{< /card >}}
+{{< card header="[**Observability**](observability/)"
+ footer="Supports [PowerFlex](csidriver/features/powerflex/) [PowerStore](csidriver/features/powerstore/)">}}
+   CSM for Observability provides visibility on the capacity of the volumes/file shares that are being managed with Dell CSM CSI (Container Storage Interface) drivers, along with their performance in terms of bandwidth, IOPS, and response time.
+[...Learn more](observability/)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+{{< card header="[**Application Mobility**](applicationmobility/)"
+ footer="Supports all platforms">}}
+ Container Storage Modules for Application Mobility provide Kubernetes administrators the ability to clone their stateful application workloads and application data to other clusters, either on-premise or in the cloud.
+ [...Learn more](applicationmobility/)
+ {{< /card >}}
+ {{< card header="[**Encryption**](secure/encryption)"
+ footer="Supports PowerScale">}}
+ Encryption provides the capability to encrypt user data residing on volumes created by Dell CSI Drivers.
+ [...Learn more](secure/encryption/)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[License](license/)"
+ footer="Required for [Application Mobility](applicationmobility/) & [Encryption](secure/encryption/)">}}
+ The tech-preview releases of Application Mobility and Encryption require a license.
+ Request a license using the [Container Storage Modules License Request](https://app.smartsheet.com/b/form/5e46fad643874d56b1f9cf4c9f3071fb) by providing the requested details.
+ [...Learn more](license/)
+ {{< /card >}}
+{{< /cardpane >}}
## CSM Supported Modules and Dell CSI Drivers
-| Modules/Drivers | CSM 1.3.1 | [CSM 1.2.1](../v1/) | [CSM 1.2](../v2/) | [CSM 1.1](../v3/) |
+| Modules/Drivers | CSM 1.4 | [CSM 1.3.1](../v1/) | [CSM 1.2.1](../v2/) | [CSM 1.2](../v3/) |
| - | :-: | :-: | :-: | :-: |
-| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 |
-| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.2.0 | v1.1.1 | v1.1.0 | v1.0.1 |
-| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 |
-| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.2.0 | v1.1.0 | v1.1.0 | v1.0.1 |
-| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
-| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
-| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.3.0 | v2.2.0 | v2.2.0| v2.1.0 |
-| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
-| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.3.1 | v2.2.0 | v2.2.0 | v2.1.0 |
+| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.4.0 | v1.3.0 | v1.2.0 | v1.2.0 |
+| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.3.0 | v1.2.0 | v1.1.1 | v1.1.0 |
+| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.3.0 | v1.2.0 | v1.2.0 |
+| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.3.0 | v1.2.0 | v1.1.0 | v1.1.0 |
+| [Encryption](https://hub.docker.com/r/dellemc/csm-encryption) | v0.1.0 | NA | NA | NA |
+| [Application Mobility](https://hub.docker.com/r/dellemc/csm-application-mobility-controller) | v0.1.0 | NA | NA | NA |
+| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 |
+| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 |
+| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.4.0 | v2.3.0 | v2.2.0| v2.2.0 |
+| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.4.0 | v2.3.0 | v2.2.0 | v2.2.0 |
+| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.4.0 | v2.3.1 | v2.2.0 | v2.2.0 |
## CSM Modules Support Matrix for Dell CSI Drivers
-| CSM Module | CSI PowerFlex v2.3.0 | CSI PowerScale v2.3.0 | CSI PowerStore v2.3.0 | CSI PowerMax v2.3.1 | CSI Unity XT v2.3.0 |
+| CSM Module | CSI PowerFlex v2.4.0 | CSI PowerScale v2.4.0 | CSI PowerStore v2.4.0 | CSI PowerMax v2.4.0 | CSI Unity XT v2.4.0 |
| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- |
-| Authorization v1.3| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
-| Observability v1.2| ✔️ | ❌ | ✔️ | ❌ | ❌ |
+| Authorization v1.4| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
+| Observability v1.3| ✔️ | ✔️ | ✔️ | ❌ | ❌ |
| Replication v1.3| ❌ | ✔️ | ✔️ | ✔️ | ❌ |
-| Resiliency v1.2| ✔️ | ✔️ | ❌ | ❌ | ✔️ |
+| Resiliency v1.3| ✔️ | ✔️ | ❌ | ❌ | ✔️ |
+| Encryption v0.1.0| ❌ | ✔️ | ❌ | ❌ | ❌ |
+| Application Mobility v0.1.0| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
diff --git a/content/v2/applicationmobility/_index.md b/content/v2/applicationmobility/_index.md
new file mode 100644
index 0000000000..af367ffed0
--- /dev/null
+++ b/content/v2/applicationmobility/_index.md
@@ -0,0 +1,40 @@
+---
+title: "Application Mobility"
+linkTitle: "Application Mobility"
+weight: 9
+Description: >
+ Application Mobility
+---
+
+>> NOTE: This tech-preview release is not intended for use in a production environment.
+
+>> NOTE: Application Mobility requires a time-based license. See [Deployment](./deployment) for instructions.
+
+Container Storage Modules for Application Mobility provide Kubernetes administrators the ability to clone their stateful application workloads and application data to other clusters, either on-premise or in the cloud.
+
+Application Mobility uses [Velero](https://velero.io) and its integration of [Restic](https://restic.net) to copy both application metadata and data to object storage. When a backup is requested, Application Mobility uses these options to determine how the application data is backed up:
+- If [Volume Group Snapshots](../snapshots/volume-group-snapshots/) are enabled on the CSI driver backing the application's Persistent Volumes, crash consistent snapshots of all volumes are used for the backup.
+- If [Volume Snapshots](../snapshots/) are enabled on the Kubernetes cluster and supported by the CSI driver, individual snapshots are used for each Persistent Volume used by the application.
+- If no snapshot options are enabled, full copies of each Persistent Volume used by the application are used by default.
+
+After a backup has been created, it can be restored on the same Kubernetes cluster or any other cluster(s) if these criteria are met:
+- Application Mobility is installed on the target cluster(s).
+- The target cluster(s) has access to the object store bucket. For example, if backing up and restoring an application from an on-premise Kubernetes cluster to AWS EKS, an S3 bucket can be used if both the on-premise and EKS cluster have access to it.
+- Storage Class is defined on the target cluster(s) to support creating the required Persistent Volumes used by the application.
+
+## Supported Data Movers
+{{
}}
+| Data Mover | Description |
+|-|-|
+| Restic | Persistent Volume data will be stored in the provided object store bucket |
+{{
}}
\ No newline at end of file
diff --git a/content/v2/applicationmobility/deployment.md b/content/v2/applicationmobility/deployment.md
new file mode 100644
index 0000000000..d5ffb3e8fd
--- /dev/null
+++ b/content/v2/applicationmobility/deployment.md
@@ -0,0 +1,62 @@
+---
+title: "Deployment"
+linkTitle: "Deployment"
+weight: 1
+Description: >
+ Deployment
+---
+
+## Pre-requisites
+- [Request a License for Application Mobility](../../license/)
+- Object store bucket accessible by both the source and target clusters
+
+## Installation
+1. Create a namespace where Application Mobility will be installed.
+ ```
+ kubectl create ns application-mobility
+ ```
+2. Edit the license Secret file (see Pre-requisites above) and set the correct namespace (for example, `namespace: application-mobility`)
+3. Create the Secret containing a license file
+ ```
+ kubectl apply -f license.yml
+ ```
+4. Add the Dell Helm Charts repository
+ ```
+ helm repo add dell https://dell.github.io/helm-charts
+ ```
+5. Either create a values.yml file or provide `--set` options to `helm install` to override the default values listed in the [Configuration](#configuration) section (a minimal sketch is shown under step 6).
+6. Install the helm chart
+ ```
+ helm install application-mobility -n application-mobility dell/csm-application-mobility
+ ```
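+
+   As a sketch only, a values.yml override could look like the following; the keys are taken from the [Configuration](#configuration) table below and the bucket name is a hypothetical placeholder.
+   ```yaml
+   licenseName: license
+   velero:
+     enabled: true
+     configuration:
+       provider: aws
+       backupStorageLocation:
+         name: default
+         bucket: my-backup-bucket
+   ```
+   It would then be supplied to the install with `helm install application-mobility -n application-mobility -f values.yml dell/csm-application-mobility`.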
+
+
+### Configuration
+
+This table lists the configurable parameters of the Application Mobility Helm chart and their default values.
+
+| Parameter | Description | Required | Default |
+| - | - | - | - |
+| `replicaCount` | Number of replicas for the Application Mobility controllers | Yes | `1` |
+| `image.pullPolicy` | Image pull policy for the Application Mobility controller images | Yes | `IfNotPresent` |
+| `controller.image` | Location of the Application Mobility Docker image | Yes | `dellemc/csm-application-mobility-controller:v0.1.0` |
+| `cert-manager.enabled` | If set to true, cert-manager will be installed during Application Mobility installation | Yes | `false` |
+| `veleroNamespace` | If Velero is already installed, set to the namespace where Velero is installed | No | `velero` |
+| `licenseName` | Name of the Secret that contains the License for Application Mobility | Yes | `license` |
+| `objectstore.secretName` | If velero is already installed on the cluster, specify the name of the secret in velero namespace that has credentials to access object store | No | ` ` |
+| `velero.enabled` | If set to true, Velero will be installed during Application Mobility installation | Yes | `true` |
+| `velero.use-volume-snapshots` | If set to true, Velero will use volume snapshots | Yes | `false` |
+| `velero.deployRestic` | If set to true, Velero will also deploy Restic | Yes | `true` |
+| `velero.cleanUpCRDs` | If set to true, Velero CRDs will be cleaned up | Yes | `true` |
+| `velero.credentials.existingSecret` | Optionally, specify the name of the pre-created secret in the release namespace that holds the object store credentials. Either this or secretContents should be specified | No | ` ` |
+| `velero.credentials.name` | Optionally, specify the name to be used for secret that will be created to hold object store credentials. Used in conjunction with secretContents. | No | ` ` |
+| `velero.credentials.secretContents` | Optionally, specify the object store access credentials to be stored in a secret with key "cloud". Either this or existingSecret should be provided. | No | ` ` |
+| `velero.configuration.provider` | Provider to use for Velero. | Yes | `aws` |
+| `velero.configuration.backupStorageLocation.name` | Name of the backup storage location for Velero. | Yes | `default` |
+| `velero.configuration.backupStorageLocation.bucket` | Name of the object store bucket to use for backups. | Yes | `velero-bucket` |
+| `velero.configuration.backupStorageLocation.config` | Additional provider-specific configuration. See https://velero.io/docs/v1.9/api-types/backupstoragelocation/ for specific details. | Yes | ` ` |
+| `velero.initContainers` | List of plugins used by Velero. Dell Velero plugin is required and plugins for other providers can be added. | Yes | ` ` |
+| `velero.initContainers[0].name` | Name of the Dell Velero plugin. | Yes | `dell-custom-velero-plugin` |
+| `velero.initContainers[0].image` | Location of the Dell Velero plugin image. | Yes | `dellemc/csm-application-mobility-velero-plugin:v0.1.0` |
+| `velero.initContainers[0].volumeMounts[0].mountPath` | Mount path of the volume mount. | Yes | `/target` |
+| `velero.initContainers[0].volumeMounts[0].name` | Name of the volume mount. | Yes | `plugins` |
\ No newline at end of file
diff --git a/content/v2/applicationmobility/release.md b/content/v2/applicationmobility/release.md
new file mode 100644
index 0000000000..f9076b4b80
--- /dev/null
+++ b/content/v2/applicationmobility/release.md
@@ -0,0 +1,23 @@
+---
+title: "Release Notes"
+linkTitle: "Release Notes"
+weight: 5
+Description: >
+ Release Notes
+---
+
+
+## Release Notes - CSM Application Mobility 0.1.0
+### New Features/Changes
+
+- [Technical preview release](https://github.com/dell/csm/issues/449)
+- Clone stateful application workloads and application data to other clusters, either on-premise or in the cloud
+- Supports Restic as a data mover for application data
+
+### Fixed Issues
+
+There are no fixed issues in this release.
+
+### Known Issues
+
+There are no known issues in this release.
diff --git a/content/v2/applicationmobility/troubleshooting.md b/content/v2/applicationmobility/troubleshooting.md
new file mode 100644
index 0000000000..b015781524
--- /dev/null
+++ b/content/v2/applicationmobility/troubleshooting.md
@@ -0,0 +1,48 @@
+---
+title: "Troubleshooting"
+linkTitle: "Troubleshooting"
+weight: 4
+Description: >
+ Troubleshooting
+---
+
+## Frequently Asked Questions
+1. [How can I diagnose an issue with Application Mobility?](#how-can-i-diagnose-an-issue-with-application-mobility)
+2. [How can I view logs?](#how-can-i-view-logs)
+3. [How can I debug and troubleshoot issues with Kubernetes?](#how-can-i-debug-and-troubleshoot-issues-with-kubernetes)
+4. [Why are there error logs about a license?](#why-are-there-error-logs-about-a-license)
+
+### How can I diagnose an issue with Application Mobility?
+
+Once you have attempted to install Application Mobility to your Kubernetes or OpenShift cluster, the first step in troubleshooting is locating the problem.
+
+Get information on the state of your Pods.
+```console
+kubectl get pods -n $namespace
+```
+Get verbose output of the current state of a Pod.
+```console
+kubectl describe pod -n $namespace $pod
+```
+### How can I view logs?
+
+View pod container logs. Output logs to a file for further debugging.
+```console
+kubectl logs -n $namespace $pod $container
+kubectl logs -n $namespace $pod $container > $logFileName
+```
+
+### How can I debug and troubleshoot issues with Kubernetes?
+
+* To debug your application that may not be behaving correctly, please reference Kubernetes [troubleshooting applications guide](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/).
+
+* For tips on debugging your cluster, please see this [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/).
+
+### Why are there error logs about a license?
+
+Application Mobility requires a license in order to function. See the [Deployment](../deployment) instructions for steps to request a license.
+
+There will be errors in the logs about the license for these cases:
+- License does not exist
+- License is not valid for the current Kubernetes cluster
+- License has expired
\ No newline at end of file
diff --git a/content/v2/applicationmobility/uninstallation.md b/content/v2/applicationmobility/uninstallation.md
new file mode 100644
index 0000000000..3e98fb7040
--- /dev/null
+++ b/content/v2/applicationmobility/uninstallation.md
@@ -0,0 +1,17 @@
+---
+title: Uninstallation
+linktitle: Uninstallation
+weight: 2
+description: >
+ Uninstallation
+---
+
+This section outlines the uninstallation steps for Application Mobility.
+
+## Uninstall the Application Mobility Helm Chart
+
+This command removes all the Kubernetes components associated with the chart.
+
+```
+$ helm delete [APPLICATION_MOBILITY_NAME] --namespace [APPLICATION_MOBILITY_NAMESPACE]
+```
diff --git a/content/v2/applicationmobility/use_cases.md b/content/v2/applicationmobility/use_cases.md
new file mode 100644
index 0000000000..544a3dcd26
--- /dev/null
+++ b/content/v2/applicationmobility/use_cases.md
@@ -0,0 +1,145 @@
+---
+title: "Use Cases"
+linkTitle: "Use Cases"
+weight: 3
+Description: >
+ Use Cases
+---
+
+After Application Mobility is installed, the [dellctl CLI](../../references/cli/) can be used to register clusters and manage backups and restores of applications. These examples also provide references for using the Application Mobility Custom Resource Definitions (CRDs) to define Custom Resources (CRs) as an alternative to using the `dellctl` CLI.
+
+## Backup and Restore an Application
+This example details the steps when an application in namespace `demo1` is being backed up and then later restored to either the same cluster or another cluster. In this sample, both Application Mobility and Velero are installed in the `application-mobility` namespace.
+
+1. If Velero is not installed in the default `velero` namespace and `dellctl` is being used, set this environment variable to the namespace where it is installed:
+ ```
+ export VELERO_NAMESPACE=application-mobility
+ ```
+1. On the source cluster, create a Backup by providing a name and the included namespace where the application is installed. The application and its data will be available in the object store bucket and can be restored at a later time.
+
+ Using dellctl:
+ ```
+ dellctl backup create backup1 --include-namespaces demo1 --namespace application-mobility
+ ```
+ Using Backup Custom Resource:
+ ```
+ apiVersion: mobility.storage.dell.com/v1alpha1
+ kind: Backup
+ metadata:
+ name: backup1
+ namespace: application-mobility
+ spec:
+ includedNamespaces: [demo1]
+ datamover: Restic
+ clones: []
+ ```
+1. Monitor the backup status until it is marked as Completed.
+
+ Using dellctl:
+ ```
+ dellctl backup get --namespace application-mobility
+ ```
+
+ Using kubectl:
+ ```
+ kubectl describe backups.mobility.storage.dell.com/backup1 -n application-mobility
+ ```
+
+1. If the Storage Class name on the target cluster is different than the Storage Class name on the source cluster where the backup was created, a mapping between source and target Storage Class names must be defined. See [Changing PV/PVC Storage Classes](#changing-pvpvc-storage-classes).
+1. The application and its data can be restored on either the same cluster or another cluster by referring to the backup name and providing an optional mapping of the original namespace to the target namespace.
+
+ Using dellctl:
+ ```
+ dellctl restore create restore1 --from-backup backup1 \
+ --namespace-mappings "demo1:restorens1" --namespace application-mobility
+ ```
+
+ Using Restore Custom Resource:
+ ```
+ apiVersion: mobility.storage.dell.com/v1alpha1
+ kind: Restore
+ metadata:
+ name: restore1
+ namespace: application-mobility
+ spec:
+ backupName: backup1
+ namespaceMapping:
+ "demo1" : "restorens1"
+ ```
+1. Monitor the restore status until it is marked as Completed.
+
+ Using dellctl:
+ ```
+ dellctl restore get --namespace application-mobility
+ ```
+
+ Using kubectl:
+ ```
+ kubectl describe restores.mobility.storage.dell.com/restore1 -n application-mobility
+ ```
+
+
+## Clone an Application
+This example details the steps when an application in namespace `demo1` is cloned from a source cluster to a target cluster in a single operation. In this sample, both Application Mobility and Velero are installed in the `application-mobility` namespace.
+
+1. If Velero is not installed in the default `velero` namespace and `dellctl` is being used, set this environment variable to the namespace where it is installed:
+ ```
+ export VELERO_NAMESPACE=application-mobility
+ ```
+1. Register the target cluster if using `dellctl`
+ ```
+ dellctl cluster add -n targetcluster -u -f ~/kubeconfigs/target-cluster-kubeconfig
+ ```
+1. If the Storage Class name on the target cluster is different than the Storage Class name on the source cluster where the backup was created, a mapping between source and target Storage Class names must be defined. See [Changing PV/PVC Storage Classes](#changing-pvpvc-storage-classes).
+1. Create a Backup by providing a name, the included namespace where the application is installed, and the target cluster and namespace mapping where the application will be restored.
+
+ Using dellctl:
+ ```
+ dellctl backup create backup1 --include-namespaces demo1 --clones "targetcluster/demo1:restore-ns2" \
+ --namespace application-mobility
+ ```
+
+ Using Backup Custom Resource:
+ ```
+ apiVersion: mobility.storage.dell.com/v1alpha1
+ kind: Backup
+ metadata:
+ name: backup1
+ namespace: application-mobility
+ spec:
+ includedNamespaces: [demo1]
+ datamover: Restic
+ clones:
+ - namespaceMapping:
+ "demo1": "restore-ns2"
+ restoreOnceAvailable: true
+ targetCluster: targetcluster
+ ```
+
+1. Monitor the restore status on the target cluster until it is marked as Completed.
+
+ Using dellctl:
+ ```
+ dellctl restore get --namespace application-mobility
+ ```
+
+ Using kubectl:
+ ```
+ kubectl get restores.mobility.storage.dell.com -n application-mobility
+ kubectl describe restores.mobility.storage.dell.com/ -n application-mobility
+ ```
+
+## Changing PV/PVC Storage Classes
+Create a ConfigMap on the target cluster in the same namespace where Application Mobility is installed. The data field must contain a mapping of source Storage Class name to target Storage Class name. See Velero's documentation for [Changing PV/PVC Storage Classes](https://velero.io/docs/v1.9/restore-reference/#changing-pvpvc-storage-classes) for additional details.
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: change-storage-class-config
+ namespace:
+ labels:
+ velero.io/plugin-config: ""
+ velero.io/change-storage-class: RestoreItemAction
+data:
+ :
+```
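+
+For illustration, a filled-in ConfigMap might look like this; the Storage Class names are hypothetical, and each key/value pair maps a source Storage Class name to a target Storage Class name.
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: change-storage-class-config
+  namespace: application-mobility
+  labels:
+    velero.io/plugin-config: ""
+    velero.io/change-storage-class: RestoreItemAction
+data:
+  source-sc-name: target-sc-name
+```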
diff --git a/content/v2/authorization/Backup and Restore/_index.md b/content/v2/authorization/Backup and Restore/_index.md
new file mode 100644
index 0000000000..816195bbd7
--- /dev/null
+++ b/content/v2/authorization/Backup and Restore/_index.md
@@ -0,0 +1,12 @@
+---
+title: Backup and Restore
+linktitle: Backup and Restore
+weight: 2
+description: Methods to backup and restore CSM Authorization
+tags:
+ - backup
+ - restore
+ - csm-authorization
+---
+
+Backup and Restore information for CSM Authorization can be found in this section.
\ No newline at end of file
diff --git a/content/v2/authorization/Backup and Restore/helm/_index.md b/content/v2/authorization/Backup and Restore/helm/_index.md
new file mode 100644
index 0000000000..7ba38bff0b
--- /dev/null
+++ b/content/v2/authorization/Backup and Restore/helm/_index.md
@@ -0,0 +1,115 @@
+---
+title: Helm
+linktitle: Helm
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Helm backup and restore
+---
+
+## Roles
+
+
+Role data is stored in the `common` Config Map.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the role data by saving the `common` configMap to a file.
+
+```
+kubectl -n get configMap common -o yaml > roles.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `common` configMap.
+
+```
+kubectl -n delete configMap common
+```
+
+2. Apply the file containing the backed-up role data.
+
+```
+kubectl apply -f roles.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+kubectl -n rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Storage
+
+Storage data is stored in the `karavi-storage-secret` Secret.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the storage data by saving the `karavi-storage-secret` Secret to a file.
+
+```
+kubectl -n get secret karavi-storage-secret -o yaml > storage.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `karavi-storage-secret` secret.
+
+```
+kubectl -n delete secret karavi-storage-secret
+```
+
+2. Apply the file containing the storage data created in step 1.
+
+```
+kubectl apply -f storage.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+kubectl -n rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Tenants, Quota, and Volume ownership
+
+Redis is used to store application data regarding [tenants, quota, and volume ownership](../../design#quota--volume-ownership) with the Storage Class specified in the `redis.storageClass` parameter in the values file, or with the default Storage Class if that parameter was not specified.
+
+The Persistent Volume for Redis is dynamically provisioned by this Storage Class with the `redis-primary-pv-claim` Persistent Volume Claim. See the example.
+
+```
+kubectl get persistentvolume
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+k8s-ab74921ab9 8Gi RWO Delete Bound authorization/redis-primary-pv-claim 112m
+```
+
+### Steps to execute in the existing Authorization deployment
+
+1. Create a backup of this volume, typically via snapshot and/or replication, and create a Persistent Volume Claim using this backup by following the Storage Class's provisioner documentation (a generic snapshot-based sketch follows).
+
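+As a generic sketch of the step above, assuming the backing CSI driver supports volume snapshots and a suitable VolumeSnapshotClass exists (the snapshot class and storage class names below are illustrative):
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+  name: redis-backup-snap
+  namespace: authorization
+spec:
+  volumeSnapshotClassName: example-snapclass   # hypothetical snapshot class
+  source:
+    persistentVolumeClaimName: redis-primary-pv-claim
+---
+# PVC restored from the snapshot; used later as the backup claim
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: redis-backup
+  namespace: authorization
+spec:
+  storageClassName: example-sc   # hypothetical storage class
+  dataSource:
+    name: redis-backup-snap
+    kind: VolumeSnapshot
+    apiGroup: snapshot.storage.k8s.io
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+```
+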
+### Steps to execute in the Authorization deployment to restore
+
+1. Edit the `redis-primary` Deployment to use the Persistent Volume Claim associated with the backup by running:
+
+`kubectl -n edit deploy/redis-primary`
+
+The Deployment has a volumes field that should look like this:
+
+```
+volumes:
+- name: redis-primary-volume
+ persistentVolumeClaim:
+ claimName: redis-primary-pv-claim
+```
+
+Replace the value of `claimName` with the name of the Persistent Volume Claim associated with the backup. If the new Persistent Volume Claim name is `redis-backup`, you would edit the deployment to look like this:
+
+```
+volumes:
+- name: redis-primary-volume
+ persistentVolumeClaim:
+ claimName: redis-backup
+```
+
+Once saved, Redis will now use the backup volume.
\ No newline at end of file
diff --git a/content/v2/authorization/Backup and Restore/rpm/_index.md b/content/v2/authorization/Backup and Restore/rpm/_index.md
new file mode 100644
index 0000000000..4821c6b89c
--- /dev/null
+++ b/content/v2/authorization/Backup and Restore/rpm/_index.md
@@ -0,0 +1,121 @@
+---
+title: RPM
+linktitle: RPM
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization RPM backup and restore
+---
+
+## Roles
+
+Role data is stored in the `common` Config Map in the underlying `k3s` deployment.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the role data by saving the `common` configMap to a file.
+
+```
+k3s kubectl -n karavi get configMap common -o yaml > roles.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `common` configMap.
+
+```
+k3s kubectl -n karavi delete configMap common
+```
+
+2. Apply the file containing the role data created in step 1.
+
+```
+k3s kubectl apply -f roles.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Storage
+
+Storage data is stored in the `karavi-storage-secret` Secret in the underlying `k3s` deployment.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Save the storage data by saving the `karavi-storage-secret` secret to a file.
+
+```
+k3s kubectl -n karavi get secret karavi-storage-secret -o yaml > storage.yaml
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Delete the existing `karavi-storage-secret` secret.
+
+```
+k3s kubectl -n karavi delete secret karavi-storage-secret
+```
+
+2. Apply the file containing the storage data created in step 1.
+
+```
+k3s kubectl apply -f storage.yaml
+```
+
+3. Restart the `proxy-server` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/proxy-server
+deployment.apps/proxy-server restarted
+```
+
+## Tenants, Quota, and Volume ownership
+
+Redis is used to store application data regarding [tenants, quota, and volume ownership](../../design#quota--volume-ownership). This data is stored on the system under `/var/lib/rancher/k3s/storage//appendonly.aof`.
+
+`appendonly.aof` can be copied and used to restore this application data in Authorization deployments. See the example.
+
+### Steps to execute in the existing Authorization deployment
+
+1. Determine the Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim.
+
+```
+k3s kubectl -n karavi get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+redis-primary-pv-claim Bound pvc-12d8cc05-910d-45bd-9f30-f6807b287a69 8Gi RWO local-path 65m
+```
+
+The Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim is `pvc-12d8cc05-910d-45bd-9f30-f6807b287a69`.
+
+2. Copy `appendonly.aof` from the appropriate path to another location.
+
+```
+cp /var/lib/rancher/k3s/storage/pvc-12d8cc05-910d-45bd-9f30-f6807b287a69/appendonly.aof /path/to/copy/appendonly.aof
+```
+
+### Steps to execute in the Authorization deployment to restore
+
+1. Determine the Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim.
+
+```
+k3s kubectl -n karavi get pvc
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+redis-primary-pv-claim Bound pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b 8Gi RWO local-path 65m
+```
+
+The Persistent Volume related to the `redis-primary-pv-claim` Persistent Volume Claim is `pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b`.
+
+2. Copy/Overwrite the `appendonly.aof` in the appropriate path using the file copied from the existing deployment in step 2 above.
+
+```
+cp /path/to/copy/appendonly.aof /var/lib/rancher/k3s/storage/pvc-e7ea31bf-3d79-41fc-88d8-50ba356a298b/appendonly.aof
+```
+
+3. Restart the `redis-primary` deployment.
+
+```
+k3s kubectl -n karavi rollout restart deploy/redis-primary
+deployment.apps/redis-primary restarted
+```
diff --git a/content/v2/authorization/_index.md b/content/v2/authorization/_index.md
index 62f7c46c36..33a7425eda 100644
--- a/content/v2/authorization/_index.md
+++ b/content/v2/authorization/_index.md
@@ -43,7 +43,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
{{
}}
## Supported CSI Drivers
@@ -69,6 +69,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t
| dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 |
| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 |
| dellemc/csm-authorization-sidecar:v1.3.0 | v1.1.0, v1.2.0, v1.3.0 |
+| dellemc/csm-authorization-sidecar:v1.4.0 | v1.1.0, v1.2.0, v1.3.0, v1.4.0 |
{{
}}
## Roles and Responsibilities
diff --git a/content/v2/authorization/cli.md b/content/v2/authorization/cli.md
index b282d7c3fd..cb0b5242fc 100644
--- a/content/v2/authorization/cli.md
+++ b/content/v2/authorization/cli.md
@@ -256,6 +256,8 @@ karavictl role get [flags]
```
-h, --help help for get
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -303,6 +305,8 @@ karavictl role list [flags]
```
-h, --help help for list
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -365,6 +369,8 @@ karavictl role create [flags]
```
-f, --from-file string role data from a file
--role strings role in the form ====
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
-h, --help help for create
```
@@ -411,6 +417,8 @@ karavictl role update [flags]
```
-f, --from-file string role data from a file
--role strings role in the form ====
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
-h, --help help for update
```
@@ -452,6 +460,8 @@ karavictl role delete [flags]
```
-h, --help help for delete
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -523,8 +533,9 @@ karavictl rolebinding create [flags]
```
-h, --help help for create
- -r, --role string Role name
- -t, --tenant string Tenant name
+ -r, --role string Role name
+ -t, --tenant string Tenant name
+ --insecure boolean insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -562,8 +573,9 @@ karavictl rolebinding delete [flags]
```
-h, --help help for create
- -r, --role string Role name
- -t, --tenant string Tenant name
+ -r, --role string Role name
+ -t, --tenant string Tenant name
+ --insecure boolean insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -638,6 +650,8 @@ karavictl storage get [flags]
-h, --help help for get
-s, --system-id string System identifier (default "systemid")
-t, --type string Type of storage system ("powerflex", "powermax")
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -680,6 +694,8 @@ karavictl storage list [flags]
```
-h, --help help for list
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -730,11 +746,13 @@ karavictl storage create [flags]
```
-e, --endpoint string Endpoint of REST API gateway
-h, --help help for create
- -i, --insecure Insecure skip verify
- -p, --password string Password (default "****")
+ -a, --array-insecure Array insecure skip verify
+ -p, --password string Password (default "****")
-s, --system-id string System identifier (default "systemid")
-t, --type string Type of storage system ("powerflex", "powermax")
-u, --user string Username (default "admin")
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -746,7 +764,7 @@ karavictl storage create [flags]
##### Output
```
-$ karavictl storage create --endpoint https://1.1.1.1 --insecure --system-id 3000000000011111 --type powerflex --user admin --password ********
+$ karavictl storage create --endpoint https://1.1.1.1 --insecure --array-insecure --system-id 3000000000011111 --type powerflex --user admin --password ********
```
On success, there will be no output. You may run `karavictl storage get --type --system-id ` to confirm the creation occurred.
@@ -772,11 +790,13 @@ karavictl storage update [flags]
```
-e, --endpoint string Endpoint of REST API gateway
-h, --help help for update
- -i, --insecure Insecure skip verify
+ -a, --array-insecure Array insecure skip verify
-p, --pass string Password (default "****")
-s, --system-id string System identifier (default "systemid")
-t, --type string Type of storage system ("powerflex", "powermax")
-u, --user string Username (default "admin")
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -788,7 +808,7 @@ karavictl storage update [flags]
##### Output
```
-$ karavictl storage update --endpoint https://1.1.1.1 --insecure --system-id 3000000000011111 --type powerflex --user admin --password ********
+$ karavictl storage update --endpoint https://1.1.1.1 --insecure --array-insecure --system-id 3000000000011111 --type powerflex --user admin --password ********
```
On success, there will be no output. You may run `karavictl storage get --type --system-id ` to confirm the update occurred.
@@ -816,6 +836,8 @@ karavictl storage delete [flags]
-h, --help help for delete
-s, --system-id string System identifier (default "systemid")
-t, --type string Type of storage system ("powerflex", "powermax")
+ --insecure insecure skip verify flag for Helm deployment
+ --addr address of the container for Helm deployment (pod:port)
```
##### Options inherited from parent commands
@@ -887,6 +909,7 @@ karavictl tenant create [flags]
```
-h, --help help for create
-n, --name string Tenant name
+ --insecure insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -926,6 +949,7 @@ karavictl tenant get [flags]
```
-h, --help help for create
-n, --name string Tenant name
+ --insecure insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -969,6 +993,7 @@ karavictl tenant list [flags]
```
-h, --help help for create
+ --insecure insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -1016,6 +1041,7 @@ karavictl tenant revoke [flags]
```
-h, --help help for create
-n, --name string Tenant name
+ --insecure insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
@@ -1054,6 +1080,7 @@ karavictl tenant delete [flags]
```
-h, --help help for create
-n, --name string Tenant name
+ --insecure insecure skip verify flag for Helm deployment
```
##### Options inherited from parent commands
diff --git a/content/v2/authorization/deployment/helm/_index.md b/content/v2/authorization/deployment/helm/_index.md
index 76d0f47c1a..d1dd59e0de 100644
--- a/content/v2/authorization/deployment/helm/_index.md
+++ b/content/v2/authorization/deployment/helm/_index.md
@@ -13,7 +13,7 @@ The following CSM Authorization components are installed in the specified namesp
- role-service, which configures roles for tenants to be bound to
- storage-service, which configures backend storage arrays for the proxy-server to forward requests to
-The folloiwng third-party components are installed in the specified namespace:
+The following third-party components are installed in the specified namespace:
- redis, which stores data regarding tenants and their volume ownership, quota, and revocation status
- redis-commander, a web management tool for Redis
@@ -47,7 +47,7 @@ The following third-party components are optionally installed in the specified n
Use the following command to replace or update the secret:
- `kubectl create secret generic karavi-config-secret -n authorization --from-file=config=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
+ `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
4. Copy the default values.yaml file `cp charts/csm-authorization/values.yaml myvalues.yaml`
@@ -108,9 +108,26 @@ helm -n authorization install authorization -f myvalues.yaml charts/csm-authoriz
## Install Karavictl
-The Karavictl CLI can be obtained directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
+1. Download the latest release of karavictl
-In order to run `karavictl` commands, the binary needs to exist in your PATH, for example /usr/local/bin.
+```
+curl -LO https://github.com/dell/karavi-authorization/releases/latest/download/karavictl
+```
+
+2. Install karavictl
+
+```
+sudo install -o root -g root -m 0755 karavictl /usr/local/bin/karavictl
+```
+
+If you do not have root access on the target system, you can still install karavictl to the ~/.local/bin directory:
+
+```
+chmod +x karavictl
+mkdir -p ~/.local/bin
+mv ./karavictl ~/.local/bin/karavictl
+# and then append (or prepend) ~/.local/bin to $PATH
+```
Karavictl commands and intended use can be found [here](../../cli/).
@@ -129,23 +146,23 @@ Run `kubectl -n authorization get ingress` and `kubectl -n authorization get ser
```
# kubectl -n authorization get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
-proxy-server nginx csm-authorization.com 80, 443 86s
-role-service nginx role.csm-authorization.com 80, 443 86s
-storage-service nginx storage.csm-authorization.com 80, 443 86s
-tenant-service nginx tenant.csm-authorization.com 80, 443 86s
+proxy-server nginx csm-authorization.com 00, 000 86s
+role-service nginx role.csm-authorization.com 00, 000 86s
+storage-service nginx storage.csm-authorization.com 00, 000 86s
+tenant-service nginx tenant.csm-authorization.com 00, 000 86s
# kubectl -n auth get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-authorization-cert-manager ClusterIP 10.104.35.150 9402/TCP 28s
-authorization-cert-manager-webhook ClusterIP 10.97.179.94 443/TCP 27s
-authorization-ingress-nginx-controller LoadBalancer 10.108.115.217 80:30080/TCP,443:30016/TCP 27s
-authorization-ingress-nginx-controller-admission ClusterIP 10.103.143.215 443/TCP 27s
-proxy-server ClusterIP 10.111.86.51 8080/TCP 28s
-redis ClusterIP 10.111.158.17 6379/TCP 28s
-redis-commander ClusterIP 10.107.22.41 8081/TCP 27s
-role-service ClusterIP 10.96.113.230 50051/TCP 27s
-storage-service ClusterIP 10.101.144.37 50051/TCP 27s
-tenant-service ClusterIP 10.109.60.141 50051/TCP 28s
+authorization-cert-manager ClusterIP 00.000.000.000 000/TCP 28s
+authorization-cert-manager-webhook ClusterIP 00.000.000.000 000/TCP 27s
+authorization-ingress-nginx-controller LoadBalancer 00.000.000.000 00:00000/TCP,000:00000/TCP 27s
+authorization-ingress-nginx-controller-admission ClusterIP 00.000.000.000 000/TCP 27s
+proxy-server ClusterIP 00.000.000.000 000/TCP 28s
+redis ClusterIP 00.000.000.000 000/TCP 28s
+redis-commander ClusterIP 00.000.000.000 000/TCP 27s
+role-service ClusterIP 00.000.000.000 000/TCP 27s
+storage-service ClusterIP 00.000.000.000 000/TCP 27s
+tenant-service ClusterIP 00.000.000.000 000/TCP 28s
```
On the machine running `karavictl`, the `/etc/hosts` file needs to be updated with the Ingress hosts for the storage, tenant, and role services. For example:
@@ -208,17 +225,17 @@ karavictl rolebinding create --tenant Finance --role FinanceRole --insecure --ad
Now that the tenant is bound to a role, a JSON Web Token can be generated for the tenant. For example, to generate a token for the `Finance` tenant:
```
-karavictl generate token --tenant Finance --insecure --addr --addr tenant.csm-authorization.com:30016
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016
{
"Token": "\napiVersion: v1\nkind: Secret\nmetadata:\n name: proxy-authz-tokens\ntype: Opaque\ndata:\n access: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRNek1qUXhPRFlzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLmJIODN1TldmaHoxc1FVaDcweVlfMlF3N1NTVnEyRzRKeGlyVHFMWVlEMkU=\n refresh: ZXlKaGJHY2lPaUpJVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SmhkV1FpT2lKcllYSmhkbWtpTENKbGVIQWlPakUyTlRVNU1UWXhNallzSW1keWIzVndJam9pWm05dklpd2lhWE56SWpvaVkyOXRMbVJsYkd3dWEyRnlZWFpwSWl3aWNtOXNaWE1pT2lKaVlYSWlMQ0p6ZFdJaU9pSnJZWEpoZG1rdGRHVnVZVzUwSW4wLkxNbWVUSkZlX2dveXR0V0lUUDc5QWVaTy1kdmN5SHAwNUwyNXAtUm9ZZnM=\n"
}
```
-With [jq](https://stedolan.github.io/jq/), you process the above response to filter the secret manifest. For example:
+Process the above response to filter the secret manifest. For example, using `sed`, you can run the following:
```
-karavictl generate token --tenant Finance --insecure --addr --addr tenant.csm-authorization.com:30016 | jq -r '.Token'
+karavictl generate token --tenant Finance --insecure --addr tenant.csm-authorization.com:30016 | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g'
apiVersion: v1
kind: Secret
metadata:
@@ -257,7 +274,7 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
| intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
| endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
| systemID | System ID of the backend storage array. | Yes | " " |
- | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+ | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
| isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
diff --git a/content/v2/authorization/deployment/rpm/_index.md b/content/v2/authorization/deployment/rpm/_index.md
index 3c037dad45..9e9d413db2 100644
--- a/content/v2/authorization/deployment/rpm/_index.md
+++ b/content/v2/authorization/deployment/rpm/_index.md
@@ -19,7 +19,29 @@ The CSM for Authorization proxy server requires a Linux host with the following
These packages need to be installed on the Linux host:
- container-selinux
-- https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm
+- k3s-selinux-0.4-1
+
+Use the appropriate package manager on the machine to install the packages.
+
+### Using yum on CentOS/RedHat 7:
+
+yum install -y container-selinux
+
+yum install -y https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm
+
+### Using yum on CentOS/RedHat 8:
+
+yum install -y container-selinux
+
+yum install -y https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm
+
+### Dark Sites
+
+For environments where `yum` will not work, obtain the supported version of container-selinux for your OS version and install it.
+
+The container-selinux RPMs for CentOS/RedHat 7 and 8 can be downloaded from [https://centos.pkgs.org/7/centos-extras-x86_64/](https://centos.pkgs.org/7/centos-extras-x86_64/) and [https://centos.pkgs.org/8/centos-appstream-x86_64/](https://centos.pkgs.org/8/centos-appstream-x86_64/), respectively.
+
+The k3s-selinux-0.4-1 RPM can be obtained from [https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/7/noarch/k3s-selinux-0.4-1.el7.noarch.rpm) or [https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm](https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-0.4-1.el8.noarch.rpm) for CentOS/RedHat 7 and 8, respectively. Download the supported version of k3s-selinux-0.4-1 for your OS version and install it.
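+
+For example, once the RPM files have been copied to the dark-site host, they can be installed directly from the local files (file names below are illustrative):
+
+```
+yum install -y ./container-selinux-*.rpm ./k3s-selinux-0.4-1.el7.noarch.rpm
+```
+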
## Deploying the CSM Authorization Proxy Server
@@ -188,7 +210,7 @@ After creating the role bindings, the next logical step is to generate the acces
```
echo === Generating token ===
- karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml
+ karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' > token.yaml
echo === Copy token to Driver Host ===
sshpass -p ${DriverHostPassword} scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml
@@ -230,7 +252,7 @@ Given a setup where Kubernetes, a storage system, and the CSM for Authorization
| intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
| endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
| systemID | System ID of the backend storage array. | Yes | " " |
- | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
+ | skipCertificateValidation | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
| isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
@@ -330,7 +352,7 @@ Replace the data in `config.yaml` under the `data` field with your new, encoded
>__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`
-`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > kubectl -n $namespace apply -f -`
+`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | sed -e 's/"Token": //' -e 's/[{}"]//g' -e 's/\\n/\n/g' | kubectl -n $namespace apply -f -`
## CSM for Authorization Proxy Server Dynamic Configuration Settings
diff --git a/content/v2/authorization/release/_index.md b/content/v2/authorization/release/_index.md
index 9e877ab1b9..4352059fbe 100644
--- a/content/v2/authorization/release/_index.md
+++ b/content/v2/authorization/release/_index.md
@@ -6,19 +6,22 @@ Description: >
Dell Container Storage Modules (CSM) release notes for authorization
---
-## Release Notes - CSM Authorization 1.3.0
+## Release Notes - CSM Authorization 1.4.0
### New Features/Changes
-- [CSM-Authorization can deployed with helm](https://github.com/dell/csm/issues/261)
-
-### Fixed Issues
-
-- [Authorization proxy server install fails due to missing container-selinux](https://github.com/dell/csm/issues/313)
-- [Permissions on karavictl and k3s binaries are incorrect](https://github.com/dell/csm/issues/277)
-
-
-
-### Known Issues
-
-- [Authorization NGINX Ingress Controller fails to install on OpenShift](https://github.com/dell/csm/issues/317)
\ No newline at end of file
+- CSM 1.4 Release specific changes. ([#350](https://github.com/dell/csm/issues/350))
+- CSM Authorization insecure related entities are renamed to skipCertificateValidation. ([#368](https://github.com/dell/csm/issues/368))
+
+### Bugs
+
+- PowerScale volumes unable to be created with Helm deployment of CSM Authorization. ([#419](https://github.com/dell/csm/issues/419))
+- Authorization CLI documentation does not mention --array-insecure flag when creating or updating storage systems. ([#416](https://github.com/dell/csm/issues/416))
+- Authorization: Add documentation for backing up and restoring redis data. ([#410](https://github.com/dell/csm/issues/410))
+- CSM Authorization doesn't recognize storage with capital letters. ([#398](https://github.com/dell/csm/issues/398))
+- Update Authorization documentation with supported versions of k3s-selinux and container-selinux packages. ([#393](https://github.com/dell/csm/issues/393))
+- Using Authorization without dependency on jq. ([#390](https://github.com/dell/csm/issues/390))
+- Authorization Documentation Improvement. ([#384](https://github.com/dell/csm/issues/384))
+- Unit test failing for csm-authorization. ([#382](https://github.com/dell/csm/issues/382))
+- Karavictl has incorrect permissions after download. ([#360](https://github.com/dell/csm/issues/360))
+- Helm deployment of Authorization denies a valid request path from csi-powerflex. ([#353](https://github.com/dell/csm/issues/353))
\ No newline at end of file
diff --git a/content/v2/csidriver/_index.md b/content/v2/csidriver/_index.md
index edf939671e..774e3e0762 100644
--- a/content/v2/csidriver/_index.md
+++ b/content/v2/csidriver/_index.md
@@ -23,18 +23,17 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
| SLES | 15SP3 | 15SP3 | 15SP3 | 15SP3 | 15SP3 |
| Red Hat OpenShift | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS | 4.9, 4.10, 4.10 EUS |
| Mirantis Kubernetes Engine | 3.5.x | 3.5.x | 3.5.x | 3.5.x | 3.5.x |
-| Google Anthos | 1.9 | 1.8 | no | 1.9 | 1.9 |
-| VMware Tanzu | no | no | NFS | NFS | NFS |
+| Google Anthos | 1.12 | 1.12 | no | 1.12 | 1.12 |
+| VMware Tanzu | no | no | NFS | NFS | NFS,iSCSI |
| Rancher Kubernetes Engine | yes | yes | yes | yes | yes |
| Amazon Elastic Kubernetes Service Anywhere | no | yes | no | no | yes |
-
{{
}}
diff --git a/content/v2/csidriver/features/powerflex.md b/content/v2/csidriver/features/powerflex.md
index cfc331a718..f39abd8d26 100644
--- a/content/v2/csidriver/features/powerflex.md
+++ b/content/v2/csidriver/features/powerflex.md
@@ -522,6 +522,69 @@ Then run:
this test deploys the pod with two ephemeral volumes, and write some data to them before deleting the pod.
When creating ephemeral volumes, it is important to specify the following within the volumeAttributes section: volumeName, size, storagepool, and if you want to use a non-default array, systemID.
+## Consuming Existing Volumes with Static Provisioning
+
+To use existing volumes from the PowerFlex array as Persistent Volumes in your Kubernetes environment, perform these steps:
+1. Log into one of the MDMs of the PowerFlex cluster.
+2. Execute these commands to retrieve the `systemID` and `volumeID`.
+ 1. `scli --mdm_ip --login --username --password `
+ - **Output:** `Logged in. User role is SuperUser. System ID is `
+ 2. `scli --query_volume --volume_name `
+ - **Output:** `Volume ID: Name: `
+3. Create a PersistentVolume and use this volume ID as the `volumeHandle`, in the format `systemID`-`volumeID`, in the manifest. Modify other parameters according to your needs.
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: existing-vol
+spec:
+ capacity:
+ storage: 8Gi
+ csi:
+ driver: csi-vxflexos.dellemc.com
+ volumeHandle: -
+ volumeMode: Filesystem
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: vxflexos
+```
+4. Create a PersistentVolumeClaim that uses this PersistentVolume.
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+ name: pvol
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Filesystem
+ resources:
+ requests:
+ storage: 8Gi
+ storageClassName: vxflexos
+```
+5. Then use this PVC as a volume in a pod.
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: static-prov-pod
+spec:
+ containers:
+ - name: test
+ image: busybox
+ command: [ "sleep", "3600" ]
+ volumeMounts:
+ - mountPath: "/data0"
+ name: pvol
+ volumes:
+ - name: pvol
+ persistentVolumeClaim:
+ claimName: pvol
+```
+6. After the pod is `Ready` and `Running`, you can start to use this pod and volume.
+
+**Note:** The volume ID can also be retrieved through the UI. Select the volume, navigate to the `Details` section, and click the volume in the graph. This selection sets the filter to the desired volume, and the volume ID can then be found in the URL.
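+
+For illustration, a minimal sketch of wiring the retrieved IDs into the manifest and verifying the result. The ID values and manifest file names below are placeholders for this example, not output from a real system:
+```bash
+# Placeholders only; substitute the values returned by the scli commands above.
+SYSTEM_ID="<systemID>"
+VOLUME_ID="<volumeID>"
+# The PersistentVolume's volumeHandle is the two IDs joined with a dash.
+echo "volumeHandle: ${SYSTEM_ID}-${VOLUME_ID}"
+
+# Apply the manifests from the steps above (example file names) and confirm the PV/PVC bind and the pod starts.
+kubectl apply -f existing-pv.yaml -f pvc.yaml -f static-prov-pod.yaml
+kubectl get pv existingvol
+kubectl get pvc pvol
+kubectl get pod static-prov-pod
+```
+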
## Dynamic Logging Configuration
diff --git a/content/v2/csidriver/features/powermax.md b/content/v2/csidriver/features/powermax.md
index 2b6bee97c3..53ddcc5f1a 100644
--- a/content/v2/csidriver/features/powermax.md
+++ b/content/v2/csidriver/features/powermax.md
@@ -103,7 +103,7 @@ spec:
## iSCSI CHAP
-Starting from version 1.3.0, unidirectional Challenge Handshake Authentication Protocol (CHAP) for iSCSI has been supported.
+Starting with v1.3.0, the unidirectional Challenge Handshake Authentication Protocol (CHAP) for iSCSI is supported.
To enable CHAP authentication:
1. Create secret `powermax-creds` with the key `chapsecret` set to the iSCSI CHAP secret. If the secret exists, delete and re-create the secret with this newly added key.
2. Set the parameter `enableCHAP` in `my-powermax-settings.yaml` to true.
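
A minimal sketch of step 1, assuming the driver is installed in the `powermax` namespace and that the secret also carries the usual username and password keys (both assumptions for this example):
```bash
# Re-create the powermax-creds secret with the additional chapsecret key.
# The namespace and the username/password keys are assumptions for this sketch.
kubectl delete secret powermax-creds -n powermax --ignore-not-found
kubectl create secret generic powermax-creds -n powermax \
  --from-literal=username=<array-username> \
  --from-literal=password=<array-password> \
  --from-literal=chapsecret=<iscsi-chap-secret>
```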
@@ -126,7 +126,7 @@ When challenged, the host initiator transmits a CHAP credential and CHAP secret
## Custom Driver Name
-Starting from version 1.3.0 of the driver, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster.
+Starting with driver version 1.3.0, a custom name can be assigned to the driver at the time of installation. This enables installation of the CSI driver in a different namespace and installation of multiple CSI drivers for Dell PowerMax in the same Kubernetes/OpenShift cluster.
To use this feature, set the following values under `customDriverName` in `my-powermax-settings.yaml`.
- Value: Set this to the custom name of the driver.
@@ -162,8 +162,6 @@ To install multiple CSI drivers, follow these steps:
Starting in v1.4, the CSI PowerMax driver supports the expansion of Persistent Volumes (PVs). This expansion is done online, which is when the PVC is attached to any node.
->Note: This feature is not supported for replicated volumes.
-
To use this feature, enable in `values.yaml`
```yaml
diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md
index acaee8b878..085ee57ffd 100644
--- a/content/v2/csidriver/features/powerscale.md
+++ b/content/v2/csidriver/features/powerscale.md
@@ -22,6 +22,9 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in
1. Open your volume in One FS, and take a note of volume-id.
2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs.
3. In the following example, the PowerScale cluster accessZone is assumed as 'System', the storage class as 'isilon', the cluster name as 'pscale-cluster', and the volume's internal name as 'isilonvol'. The volume-handle should be in the format `<volume name>=_=_=<export ID>=_=_=<access zone>=_=_=<cluster name>`, for example `isilonvol=_=_=<export ID>=_=_=System=_=_=pscale-cluster`.
+4. If Quotas are enabled in the driver, the Quota ID must be added to the description of the NFS export in this format:
+`CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA`
+5. The Quota ID can be identified by querying the PowerScale system.
```yaml
apiVersion: v1
diff --git a/content/v2/csidriver/features/powerstore.md b/content/v2/csidriver/features/powerstore.md
index e4a3103b11..df8ab6544e 100644
--- a/content/v2/csidriver/features/powerstore.md
+++ b/content/v2/csidriver/features/powerstore.md
@@ -188,7 +188,7 @@ provisioner: csi-powerstore.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true # Set this attribute to true if you plan to expand any PVCs created using this storage class
parameters:
- FsType: xfs
+ csi.storage.k8s.io/fstype: xfs
```
To resize a PVC, edit the existing PVC spec and set spec.resources.requests.storage to the intended size. For example, if you have a PVC pstore-pvc-demo of size 3Gi, then you can resize it to 30Gi by updating the PVC.
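
As a sketch of that edit, using the `pstore-pvc-demo` name from the example above (patching is one of several equivalent ways to update the PVC spec):
```bash
# Request the larger size on the existing PVC; the driver expands the volume online.
kubectl patch pvc pstore-pvc-demo \
  -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
# Watch until the new capacity is reflected in the PVC status.
kubectl get pvc pstore-pvc-demo -w
```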
@@ -494,7 +494,7 @@ allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
arrayID: "GlobalUniqueID"
- FsType: "ext4"
+ csi.storage.k8s.io/fstype: "ext4"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -506,7 +506,7 @@ allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
arrayID: "GlobalUniqueID"
- FsType: "xfs"
+ csi.storage.k8s.io/fstype: "xfs"
```
Here we specify two storage classes: one of them uses the first array and `ext4` filesystem, and the other uses the second array and `xfs` filesystem.
diff --git a/content/v2/csidriver/installation/helm/isilon.md b/content/v2/csidriver/installation/helm/isilon.md
index d1ba801503..3488f66182 100644
--- a/content/v2/csidriver/installation/helm/isilon.md
+++ b/content/v2/csidriver/installation/helm/isilon.md
@@ -26,6 +26,7 @@ The following are requirements to be met before installing the CSI Driver for De
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
- If enabling CSM for Replication, please refer to the [Replication deployment steps](../../../../replication/deployment/) first
- If enabling CSM for Resiliency, please refer to the [Resiliency deployment steps](../../../../resiliency/deployment/) first
+- If enabling Encryption, please refer to the [Encryption deployment steps](../../../../secure/encryption/deployment/) first
### Install Helm 3.0
@@ -46,14 +47,14 @@ controller:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -102,7 +103,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
### (Optional) Replication feature Requirements
@@ -121,7 +122,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace isilon` to create a new one. The use of "isilon" as the namespace is just an example. You can choose any name for the namespace.
3. Collect information from the PowerScale Systems like IP address, IsiPath, username, and password. Make a note of the value for these parameters as they must be entered in the *secret.yaml*.
4. Copy the *helm/csi-isilon/values.yaml* into a new location with a name of your choice, for example *my-isilon-settings.yaml*, to customize settings for installation.
@@ -174,10 +175,13 @@ CRDs should be configured during replication prepare stage with repctl as descri
| sidecarProxyImage | Image for csm-authorization-sidecar. | No | " " |
| proxyHost | Hostname of the csm-authorization server. | No | Empty |
| skipCertificateValidation | A boolean that enables/disables certificate validation of the csm-authorization server. | No | true |
- | **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contact support for additional information. | - | - |
- | enabled | A boolean that enable/disable podmon feature. | No | false |
+ | **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+ | enabled | A boolean that enables/disables podmon feature. | No | false |
| image | image for podmon. | No | " " |
-
+ | **encryption** | [Encryption](../../../../secure/encryption/deployment) is an optional feature to apply encryption to CSI volumes. | - | - |
+ | enabled | A boolean that enables/disables Encryption feature. | No | false |
+ | image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
+
*NOTE:*
- ControllerCount parameter value must not exceed the number of nodes in the Kubernetes cluster. Otherwise, some of the controller pods remain in a "Pending" state until new nodes are available for scheduling, and the installer exits with a WARNING.
@@ -267,7 +271,7 @@ The CSI driver for Dell PowerScale version 1.5 and later, `dell-csi-helm-install
### What happens to my existing storage classes?
-*Upgrading from CSI PowerScale v2.2 driver*:
+*Upgrading from CSI PowerScale v2.3 driver*:
The storage classes created as part of the installation have the annotation "helm.sh/resource-policy": keep set. This ensures that even after an uninstall or upgrade, the storage classes are not deleted. You can continue using these storage classes if you wish.
*NOTE*:
@@ -287,11 +291,3 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c
Starting CSI PowerScale v1.6, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. Sample volume snapshot class manifests are available at `samples/volumesnapshotclass/`. Use these sample manifests to create a volumesnapshotclass for creating volume snapshots; uncomment/ update the manifests as per the requirements.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerScale v2.2 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerScale to 1.6 or higher before upgrading to 2.2.
-
diff --git a/content/v2/csidriver/installation/helm/powerflex.md b/content/v2/csidriver/installation/helm/powerflex.md
index c021fb43e9..af80f767db 100644
--- a/content/v2/csidriver/installation/helm/powerflex.md
+++ b/content/v2/csidriver/installation/helm/powerflex.md
@@ -78,14 +78,14 @@ controller:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -104,7 +104,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- When using Kubernetes 1.21/1.22/1.23 it is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- When using Kubernetes, it is recommended to use the 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
## Install the Driver
@@ -158,7 +158,7 @@ Use the below command to replace or update the secret:
- "insecure" parameter has been changed to "skipCertificateValidation" as insecure is deprecated and will be removed from use in config.yaml or secret.yaml in a future release. Users can continue to use any one of "insecure" or "skipCertificateValidation" for now. The driver would return an error if both parameters are used.
- Please note that log configuration parameters from v1.5 will no longer work in v2.0 and higher. Please refer to the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features for more information.
- If the user is using a complex Kubernetes version like "v1.21.3-mirantis-1", use this kubeVersion check in the helm/csi-unity/Chart.yaml file.
- kubeVersion: ">= 1.21.0-0 < 1.24.0-0"
+ kubeVersion: ">= 1.21.0-0 < 1.25.0-0"
5. Default logging options are set during Helm install. To see possible configuration options, see the [Dynamic Logging Configuration](../../../features/powerflex#dynamic-logging-configuration) section in Features.
@@ -208,8 +208,8 @@ Use the below command to replace or update the secret:
| **vgsnapshotter** | This section allows the configuration of the volume group snapshotter(vgsnapshotter) pod. | - | - |
| enabled | A boolean that enable/disable vg snapshotter feature. | No | false |
| image | Image for vg snapshotter. | No | " " |
-| **podmon** | Podmon is an optional feature under development and tech preview. Enable this feature only after contact support for additional information. | - | - |
-| enabled | A boolean that enable/disable podmon feature. | No | false |
+| **podmon** | [Podmon](../../../../resiliency/deployment) is an optional feature to enable application pods to be resilient to node failure. | - | - |
+| enabled | A boolean that enables/disables podmon feature. | No | false |
| image | image for podmon. | No | " " |
| **authorization** | [Authorization](../../../../authorization/deployment) is an optional feature to apply credential shielding of the backend PowerFlex. | - | - |
| enabled | A boolean that enables/disables authorization feature. | No | false |
@@ -312,10 +312,3 @@ Deleting a storage class has no impact on a running Pod with mounted PVCs. You c
Starting CSI PowerFlex v1.5, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerFlex v2.2 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerFlex to 1.5 or higher, before upgrading to 2.3.
diff --git a/content/v2/csidriver/installation/helm/powermax.md b/content/v2/csidriver/installation/helm/powermax.md
index 60bed5ec8f..383eac559b 100644
--- a/content/v2/csidriver/installation/helm/powermax.md
+++ b/content/v2/csidriver/installation/helm/powermax.md
@@ -33,6 +33,7 @@ The following requirements must be met before installing CSI Driver for Dell Pow
- Linux multipathing requirements
- If using Snapshot feature, satisfy all Volume Snapshot requirements
- If enabling CSM for Authorization, please refer to the [Authorization deployment steps](../../../../authorization/deployment/) first
+- If using PowerPath, satisfy all PowerPath for Linux requirements
### Install Helm 3
@@ -104,6 +105,16 @@ path_selector "round-robin 0"
no_path_retry 10
```
+### PowerPath for Linux requirements
+
+CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
+
+Set up PowerPath for Linux as follows:
+
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm`.
+- Start the PowerPath service using `systemctl start PowerPath`.
+
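+The steps above can be consolidated as in the following sketch; the archive and RPM file names are placeholders for the package downloaded from Dell Online Support:
+```bash
+# Extract the downloaded PowerPath archive into a temporary folder.
+mkdir -p /tmp/powerpath && tar -xzf <PowerPath-archive>.tar.gz -C /tmp/powerpath
+# Install the RPM package and start the PowerPath service.
+rpm -ivh /tmp/powerpath/DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm
+systemctl start PowerPath
+systemctl status PowerPath
+```
+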
### (Optional) Volume Snapshot Requirements
Applicable only if you decided to enable snapshot feature in `values.yaml`
@@ -114,7 +125,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. For installation, use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers to support Volume snapshots.
@@ -122,7 +133,7 @@ The CSI external-snapshotter sidecar is split into two controllers to support Vo
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster, irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -141,7 +152,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
### (Optional) Replication feature Requirements
@@ -162,7 +173,7 @@ CRDs should be configured during replication prepare stage with repctl as descri
**Steps**
-1. Run `git clone -b v2.3.1 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository. This will include the Helm charts and dell-csi-helm-installer scripts.
2. Ensure that you have created a namespace where you want to install the driver. You can run `kubectl create namespace powermax` to create a new one
3. Edit the `samples/secret/secret.yaml` file, point to the correct namespace, and replace the values for the username and password parameters.
These values can be obtained using base64 encoding as described in the following example:
@@ -174,7 +185,8 @@ CRDs should be configured during replication prepare stage with repctl as descri
4. Create the secret by running `kubectl create -f samples/secret/secret.yaml`.
5. If you are going to install the new CSI PowerMax ReverseProxy service, create a TLS secret with the name - _csireverseproxy-tls-secret_ which holds an SSL certificate and the corresponding private key in the namespace where you are installing the driver.
6. Copy the default values.yaml file `cd helm && cp csi-powermax/values.yaml my-powermax-settings.yaml`
-7. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml`
+7. Ensure that Unisphere has 10.0 REST endpoint support by clicking Unisphere -> Help (?) -> About in the Unisphere for PowerMax GUI.
+8. Edit the newly created file and provide values for the following parameters `vi my-powermax-settings.yaml`
| Parameter | Description | Required | Default |
|-----------|--------------|------------|----------|
@@ -277,14 +289,6 @@ Upgrading from an older version of the driver: The storage classes will be delet
Starting with CSI PowerMax v1.7.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerMax v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerMax to 1.7.0 or higher, before upgrading to 2.3.1.
-
## Sample values file
The following sections have useful snippets from `values.yaml` file which provides more information on how to configure the CSI PowerMax driver along with CSI PowerMax ReverseProxy in various modes
@@ -332,7 +336,7 @@ global:
csireverseproxy:
# Set enabled to true if you want to use proxy
enabled: true
- image: dellemc/csipowermax-reverseproxy:v1.4.0
+ image: dellemc/csipowermax-reverseproxy:v2.3.0
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
@@ -380,7 +384,7 @@ global:
csireverseproxy:
# Set enabled to true if you want to use proxy
enabled: true
- image: dellemc/csipowermax-reverseproxy:v1.4.0
+ image: dellemc/csipowermax-reverseproxy:v2.3.0
tlsSecret: csirevproxy-tls-secret
deployAsSidecar: true
port: 2222
diff --git a/content/v2/csidriver/installation/helm/powerstore.md b/content/v2/csidriver/installation/helm/powerstore.md
index 858b0385db..974db4a545 100644
--- a/content/v2/csidriver/installation/helm/powerstore.md
+++ b/content/v2/csidriver/installation/helm/powerstore.md
@@ -22,8 +22,8 @@ The node section of the Helm chart installs the following component in a _Daemon
The following are requirements to be met before installing the CSI Driver for Dell PowerStore:
- Install Kubernetes or OpenShift (see [supported versions](../../../../csidriver/#features-and-capabilities))
- Install Helm 3
-- If you plan to use either the Fibre Channel or iSCSI or NVMe/TCP protocol, refer to either _Fibre Channel requirements_ or _Set up the iSCSI Initiator_ or _Set up the NVMe/TCP Initiator_ sections below. You can use NFS volumes without FC or iSCSI or NVMe/TCP configuration.
-> You can use either the Fibre Channel or iSCSI or NVMe/TCP protocol, but you do not need all the three.
+- If you plan to use the Fibre Channel, iSCSI, NVMe/TCP, or NVMe/FC protocol, refer to the _Fibre Channel requirements_, _Set up the iSCSI Initiator_, or _Set up the NVMe Initiator_ section below. You can use NFS volumes without any FC, iSCSI, NVMe/TCP, or NVMe/FC configuration.
+> You can use any of the Fibre Channel, iSCSI, NVMe/TCP, or NVMe/FC protocols; you do not need all four.
> If you want to use preconfigured iSCSI/FC hosts be sure to check that they are not part of any host group
- Linux native multipathing requirements
@@ -102,7 +102,7 @@ snapshot:
```
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
@@ -110,15 +110,14 @@ The CSI external-snapshotter sidecar is split into two controllers:
- A CSI external-snapshotter sidecar
The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available:
-Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
- [quay.io/k8scsi/csi-snapshotter:v4.0.x](https://quay.io/repository/k8scsi/csi-snapshotter?tag=v4.0.0&tab=tags)
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
-#### Installation example
-
+#### Installation example
You can install CRDs and default snapshot controller by running these commands:
```bash
git clone https://github.com/kubernetes-csi/external-snapshotter/
@@ -129,7 +128,7 @@ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
### Volume Health Monitoring
@@ -147,7 +146,7 @@ controller:
# Default value: None
enabled: false
- # healthMonitorInterval: Interval of monitoring volume health condition
+ # volumeHealthMonitorInterval: Interval of monitoring volume health condition
# Allowed values: Number followed by unit (s,m,h)
# Examples: 60s, 5m, 1h
# Default value: 60s
@@ -162,7 +161,6 @@ node:
# Default value: None
enabled: false
```
-
### (Optional) Replication feature Requirements
Applicable only if you decided to enable the Replication feature in `values.yaml`
@@ -180,11 +178,10 @@ CRDs should be configured during replication prepare stage with repctl as descri
## Install the Driver
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace csi-powerstore` to create a new one. "csi-powerstore" is just an example. You can choose any name for the namespace.
Make sure to use the same namespace throughout the installation.
-3. Check `helm/csi-powerstore/driver-image.yaml` and confirm the driver image points to new image.
-4. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
+3. Edit `samples/secret/secret.yaml` file and configure connection information for your PowerStore arrays changing following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
- *username*, *password*: defines credentials for connecting to array.
@@ -196,12 +193,12 @@ CRDs should be configured during replication prepare stage with repctl as descri
NFSv4 ACls are supported for NFSv4 shares on NFSv4 enabled NAS servers only. POSIX ACLs are not supported and only POSIX mode bits are supported for NFSv3 shares.
Add more blocks similar to above for each PowerStore array if necessary.
-5. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
-6. Create storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
+4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
+5. Create storage classes using the ones from the `samples/storageclass` folder as examples and apply them to the Kubernetes cluster by running `kubectl create -f <path_to_storageclass_file>`
> If you do not specify `arrayID` parameter in the storage class then the array that was specified as the default would be used for provisioning volumes.
-7. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
-8. Edit the newly created values file and provide values for the following parameters `vi my-powerstore-settings.yaml`:
+6. Copy the default values.yaml file `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml`
+7. Edit the newly created values file and provide values for the following parameters `vi my-powerstore-settings.yaml`:
| Parameter | Description | Required | Default |
|-----------|-------------|----------|---------|
@@ -228,6 +225,9 @@ CRDs should be configured during replication prepare stage with repctl as descri
| node.tolerations | Defines tolerations that would be applied to node daemonset | Yes | " " |
| fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| controller.vgsnapshot.enabled | To enable or disable the volume group snapshot feature | No | "true" |
+| images.driverRepository | To use an image from a custom repository | No | dockerhub |
+| version | To use a specific driver version | No | Latest driver version |
+| allowAutoRoundOffFilesystemSize | Allows the controller to round off the filesystem size to 3Gi, which is the minimum supported value | No | false |
8. Install the driver using `csi-install.sh` bash script by running `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml`
- After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n csi-powerstore`
@@ -257,7 +257,7 @@ There are samples storage class yaml files available under `samples/storageclass
1. Edit the sample storage class yaml file and update following parameters:
- *arrayID*: specifies what storage cluster the driver should use, if not specified driver will use storage cluster specified as `default` in `samples/secret/secret.yaml`
-- *FsType*: specifies what filesystem type driver should use, possible variants `ext3`, `ext4`, `xfs`, `nfs`, if not specified driver will use `ext4` by default.
+- *csi.storage.k8s.io/fstype*: specifies what filesystem type driver should use, possible variants `ext3`, `ext4`, `xfs`, `nfs`, if not specified driver will use `ext4` by default.
- *nfsAcls* (Optional): defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory.
- *allowedTopologies* (Optional): If you want you can also add topology constraints.
```yaml
@@ -281,14 +281,6 @@ kubectl create -f
Starting CSI PowerStore v1.4.0, `dell-csi-helm-installer` will not create any Volume Snapshot Class during the driver installation. There is a sample Volume Snapshot Class manifest present in the _samples/volumesnapshotclass_ folder. Please use this sample to create a new Volume Snapshot Class to create Volume Snapshots.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI PowerStore v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI PowerStore to 1.4.0 or higher, before upgrading to 2.3.0.
-
## Dynamically update the powerstore secrets
Users can dynamically add or delete array information from the secret. Whenever an update happens, the driver updates the "Host" information in the array. Users can update the secret using the following command:
diff --git a/content/v2/csidriver/installation/helm/unity.md b/content/v2/csidriver/installation/helm/unity.md
index 38000db82b..9f666f7ca5 100644
--- a/content/v2/csidriver/installation/helm/unity.md
+++ b/content/v2/csidriver/installation/helm/unity.md
@@ -88,7 +88,7 @@ Install CSI Driver for Unity XT using this procedure.
*Before you begin*
- * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.3.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
+ * You must have the downloaded files, including the Helm chart from the source [git repository](https://github.com/dell/csi-unity) with the command ```git clone -b v2.4.0 https://github.com/dell/csi-unity.git```, as a pre-requisite for running this procedure.
* In the top-level dell-csi-helm-installer directory, there should be two scripts, `csi-install.sh` and `csi-uninstall.sh`.
* Ensure _unity_ namespace exists in Kubernetes cluster. Use the `kubectl create namespace unity` command to create the namespace if the namespace is not present.
@@ -123,6 +123,7 @@ Procedure
| podmon.enabled | service to monitor failing jobs and notify | false | - |
| podmon.image| pod man image name | false | - |
| tenantName | Tenant name added while adding host entry to the array | No | |
+ | fsGroupPolicy | Defines which FS Group policy mode is to be used. Supported modes: `None, File and ReadWriteOnceWithFSType` | No | "ReadWriteOnceWithFSType" |
| **controller** | Allows configuration of the controller-specific parameters.| - | - |
| controllerCount | Defines the number of csi-unity controller pods to deploy to the Kubernetes release| Yes | 2 |
| volumeNamePrefix | Defines a string prefix for the names of PersistentVolumes created | Yes | "k8s" |
@@ -169,6 +170,7 @@ Procedure
allowRWOMultiPodAccess: false
syncNodeInfoInterval: 5
maxUnityVolumesPerNode: 0
+ fsGroupPolicy: ReadWriteOnceWithFSType
```
4. For certificate validation of Unisphere REST API calls refer [here](#certificate-validation-for-unisphere-rest-api-calls). Otherwise, create an empty secret with file `csi-unity/samples/secret/emptysecret.yaml` file by running the `kubectl create -f csi-unity/samples/secret/emptysecret.yaml` command.
@@ -250,14 +252,14 @@ Procedure
In order to use the Kubernetes Volume Snapshot feature, you must ensure the following components have been deployed on your Kubernetes cluster
#### Volume Snapshot CRD's
- The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd) for the installation.
+ The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd) for the installation.
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
- Use [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller) for the installation.
+ Use [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller) for the installation.
#### Installation example
@@ -271,7 +273,7 @@ Procedure
```
**Note**:
- - It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+ - It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
- The CSI external-snapshotter sidecar is still installed along with the driver and does not involve any extra configuration.
@@ -399,14 +401,6 @@ If the Unisphere certificate is self-signed or if you are using an embedded Unis
A wide set of annotated storage class manifests have been provided in the [csi-unity/samples/volumesnapshotclass/](https://github.com/dell/csi-unity/tree/main/samples/volumesnapshotclass) folder. Use these samples to create new Volume Snapshot to provision storage.
-### What happens to my existing Volume Snapshot Classes?
-
-*Upgrading from CSI Unity XT v2.1.0 driver*:
-The existing volume snapshot class will be retained.
-
-*Upgrading from an older version of the driver*:
-It is strongly recommended to upgrade the earlier versions of CSI Unity XT to v1.6.0 or higher, before upgrading to v2.3.0.
-
## Storage Classes
Storage Classes are an essential Kubernetes construct for Storage provisioning. To know more about Storage Classes, refer to https://kubernetes.io/docs/concepts/storage/storage-classes/
@@ -469,4 +463,4 @@ cd dell-csi-helm-installer
./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade
```
-Note: myvalues.yaml is a values.yaml file which user has used for driver installation.
\ No newline at end of file
+Note: myvalues.yaml is a values.yaml file which user has used for driver installation.
diff --git a/content/v2/csidriver/installation/offline/_index.md b/content/v2/csidriver/installation/offline/_index.md
index 127d35c937..4d15df3b06 100644
--- a/content/v2/csidriver/installation/offline/_index.md
+++ b/content/v2/csidriver/installation/offline/_index.md
@@ -65,7 +65,7 @@ The resulting offline bundle file can be copied to another machine, if necessary
For example, here is the output of a request to build an offline bundle for the Dell CSI Operator:
```
-git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git
+git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git
```
```
cd dell-csi-operator/scripts
@@ -78,9 +78,9 @@ cd dell-csi-operator/scripts
dellemc/csi-isilon:v2.0.0
dellemc/csi-isilon:v2.1.0
- dellemc/csipowermax-reverseproxy:v1.4.0
- dellemc/csi-powermax:v2.0.0
- dellemc/csi-powermax:v2.1.0
+ dellemc/csipowermax-reverseproxy:v2.3.0
+ dellemc/csi-powermax:v2.3.1
+ dellemc/csi-powermax:v2.4.0
dellemc/csi-powerstore:v2.0.0
dellemc/csi-powerstore:v2.1.0
dellemc/csi-unity:v2.0.0
diff --git a/content/v2/csidriver/installation/operator/_index.md b/content/v2/csidriver/installation/operator/_index.md
index 68113a0e90..65bd661ba1 100644
--- a/content/v2/csidriver/installation/operator/_index.md
+++ b/content/v2/csidriver/installation/operator/_index.md
@@ -11,14 +11,14 @@ The Dell CSI Operator is a Kubernetes Operator, which can be used to install and
## Prerequisites
#### Volume Snapshot CRD's
-The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on Github. Manifests are available here:[v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd)
+The Kubernetes Volume Snapshot CRDs can be obtained and installed from the external-snapshotter project on GitHub. Manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/client/config/crd)
#### Volume Snapshot Controller
The CSI external-snapshotter sidecar is split into two controllers:
- A common snapshot controller
- A CSI external-snapshotter sidecar
-The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v5.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/deploy/kubernetes/snapshot-controller)
+The common snapshot controller must be installed only once in the cluster irrespective of the number of CSI drivers installed in the cluster. On OpenShift clusters 4.4 and later, the common snapshot-controller is pre-installed. In the clusters where it is not present, it can be installed using `kubectl` and the manifests are available here: [v6.0.x](https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1/deploy/kubernetes/snapshot-controller)
*NOTE:*
- The manifests available on GitHub install the snapshotter image:
@@ -37,7 +37,7 @@ kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
## Installation
@@ -50,21 +50,21 @@ If you have installed an old version of the `dell-csi-operator` which was availa
#### Full list of CSI Drivers and versions supported by the Dell CSI Operator
| CSI Driver | Version | ConfigVersion | Kubernetes Version | OpenShift Version |
| ------------------ | --------- | -------------- | -------------------- | --------------------- |
-| CSI PowerMax | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerMax | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerMax | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerFlex | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerMax | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerFlex | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerFlex | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerScale | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerFlex | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerScale | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerScale | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI Unity XT | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI PowerScale | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI Unity XT | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI Unity XT | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
-| CSI PowerStore | 2.1.0 | v2.1.0 | 1.20, 1.21, 1.22 | 4.8, 4.8 EUS, 4.9 |
+| CSI Unity XT | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
| CSI PowerStore | 2.2.0 | v2.2.0 | 1.21, 1.22, 1.23 | 4.8, 4.8 EUS, 4.9 |
| CSI PowerStore | 2.3.0 | v2.3.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
+| CSI PowerStore | 2.4.0 | v2.4.0 | 1.22, 1.23, 1.24 | 4.9, 4.10, 4.10 EUS |
@@ -76,7 +76,7 @@ The installation process involves the creation of a `Subscription` object either
* _Automatic_ - If you want the Operator to be automatically installed or upgraded (once an upgrade becomes available)
* _Manual_ - If you want a Cluster Administrator to manually review and approve the `InstallPlan` for installation/upgrades
-**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**.
+**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
#### Pre-Requisite for installation with OLM
Please run the following commands for creating the required `ConfigMap` before installing the `dell-csi-operator` using OLM.
$ kubectl create configmap dell-csi-operator-config --from-file config.tar.gz -n <operator-namespace>
#### Steps
>**Skip step 1 for "offline bundle installation" and continue using the workspace created by untar of dell-csi-operator-bundle.tar.gz.**
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Run `bash scripts/install.sh` to install the operator.
>NOTE: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
@@ -274,12 +274,12 @@ The below notes explain some of the general items to take care of.
1. If you are trying to upgrade the CSI driver from an older version, make sure to modify the _configVersion_ field if required.
```yaml
driver:
- configVersion: v2.3.0
+ configVersion: v2.4.0
```
2. Volume Health Monitoring feature is optional and by default this feature is disabled for drivers when installed via operator.
To enable this feature, we will have to modify the below block while upgrading the driver.To get the volume health state add
external-health-monitor sidecar in the sidecar section and `value`under controller set to true and the `value` under node set
- to true as shown below:
+ to true as shown below:
i. Add controller and node section as below:
```yaml
controller:
@@ -298,26 +298,26 @@ The below notes explain some of the general items to take care of.
- args:
- --volume-name-prefix=csiunity
- --default-fstype=ext4
- image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ image: k8s.gcr.io/sig-storage/csi-provisioner:v3.2.0
imagePullPolicy: IfNotPresent
name: provisioner
- args:
- --snapshot-name-prefix=csiunitysnap
- image: k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
+ image: k8s.gcr.io/sig-storage/csi-snapshotter:v6.0.1
imagePullPolicy: IfNotPresent
name: snapshotter
- args:
- --monitor-interval=60s
- image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.5.0
+ image: gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:v0.6.0
imagePullPolicy: IfNotPresent
name: external-health-monitor
- - image: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
+ - image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0
imagePullPolicy: IfNotPresent
name: attacher
- image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.1
imagePullPolicy: IfNotPresent
name: registrar
- - image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
+ - image: k8s.gcr.io/sig-storage/csi-resizer:v1.5.0
imagePullPolicy: IfNotPresent
name: resizer
```
diff --git a/content/v2/csidriver/installation/operator/powermax.md b/content/v2/csidriver/installation/operator/powermax.md
index 7c1e13c246..1290b00418 100644
--- a/content/v2/csidriver/installation/operator/powermax.md
+++ b/content/v2/csidriver/installation/operator/powermax.md
@@ -36,6 +36,35 @@ Set up the iSCSI initiators as follows:
For more information about configuring iSCSI, see [Dell Host Connectivity guide](https://www.delltechnologies.com/asset/zh-tw/products/storage/technical-support/docu5128.pdf).
+#### Linux multipathing requirements
+
+CSI Driver for Dell PowerMax supports Linux multipathing. Configure Linux multipathing before installing the CSI Driver.
+
+Set up Linux multipathing as follows:
+
+- All the nodes must have the _Device Mapper Multipathing_ package installed.
+ *NOTE:* When this package is installed, it creates a multipath configuration file located at `/etc/multipath.conf`. Please ensure that this file always exists.
+- Enable multipathing using `mpathconf --enable --with_multipathd y`
+- Enable `user_friendly_names` and `find_multipaths` in the `multipath.conf` file.
+
+As a best practice, use these options to help the operating system and the multipathing software detect path changes efficiently:
+```text
+path_grouping_policy multibus
+path_checker tur
+features "1 queue_if_no_path"
+path_selector "round-robin 0"
+no_path_retry 10
+```
+
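+A minimal `/etc/multipath.conf` sketch that combines the settings above, with everything placed in the `defaults` section; adjust it for your environment and the Dell Host Connectivity guide:
+```text
+defaults {
+  user_friendly_names yes
+  find_multipaths yes
+  path_grouping_policy multibus
+  path_checker tur
+  features "1 queue_if_no_path"
+  path_selector "round-robin 0"
+  no_path_retry 10
+}
+```
+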
+#### PowerPath for Linux requirements
+
+CSI Driver for Dell PowerMax supports PowerPath for Linux. Configure Linux PowerPath before installing the CSI Driver.
+
+Follow this procedure to set up PowerPath for Linux:
+
+- All the nodes must have the PowerPath package installed. Download the PowerPath archive for the environment from [Dell Online Support](https://www.dell.com/support/home/en-in/product-support/product/powerpath-for-linux/drivers).
+- Untar the PowerPath archive, copy the RPM package into a temporary folder, and install PowerPath using `rpm -ivh DellEMCPower.LINUX-<version>-<build>.<platform>.x86_64.rpm`.
+- Start the PowerPath service using `systemctl start PowerPath`.
#### Create secret for client-side TLS verification (Optional)
Create a secret named powermax-certs in the namespace where the CSI PowerMax driver will be installed. This is an optional step and is only required if you are setting the env variable X_CSI_POWERMAX_SKIP_CERTIFICATE_VALIDATION to false. See the detailed documentation on how to create this secret [here](../../helm/powermax#certificate-validation-for-unisphere-rest-api-calls).
@@ -179,7 +208,7 @@ metadata:
namespace: test-powermax # <- Set the namespace to where you will install the CSI PowerMax driver
spec:
# Image for CSI PowerMax ReverseProxy
- image: dellemc/csipowermax-reverseproxy:v2.1.0 # <- CSI PowerMax Reverse Proxy image
+ image: dellemc/csipowermax-reverseproxy:v2.3.0 # <- CSI PowerMax Reverse Proxy image
imagePullPolicy: Always
# TLS secret which contains SSL certificate and private key for the Reverse Proxy server
tlsSecret: csirevproxy-tls-secret
@@ -265,8 +294,8 @@ metadata:
namespace: test-powermax
spec:
driver:
- # Config version for CSI PowerMax v2.3.0 driver
- configVersion: v2.3.0
+ # Config version for CSI PowerMax v2.4.0 driver
+ configVersion: v2.4.0
# replica: Define the number of PowerMax controller nodes
# to deploy to the Kubernetes release
# Allowed values: n, where n > 0
@@ -275,8 +304,8 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- # Image for CSI PowerMax driver v2.3.0
- image: dellemc/csi-powermax:v2.3.0
+ # Image for CSI PowerMax driver v2.4.0
+ image: dellemc/csi-powermax:v2.4.0
# imagePullPolicy: Policy to determine if the image should be pulled prior to starting the container.
# Allowed values:
# Always: Always pull the image.
diff --git a/content/v2/csidriver/installation/operator/powerstore.md b/content/v2/csidriver/installation/operator/powerstore.md
index d2b74a2896..78c374f19c 100644
--- a/content/v2/csidriver/installation/operator/powerstore.md
+++ b/content/v2/csidriver/installation/operator/powerstore.md
@@ -138,7 +138,7 @@ data:
| X_CSI_POWERSTORE_EXTERNAL_ACCESS | allows specifying additional entries for hostAccess of NFS volumes. Both single IP address and subnet are valid entries | No | " "|
| X_CSI_NFS_ACLS | Defines permissions - POSIX mode bits or NFSv4 ACLs, to be set on NFS target mount directory. | No | "0777" |
| ***Node parameters*** |
-| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
+| X_CSI_POWERSTORE_ENABLE_CHAP | Set to true if you want to enable iSCSI CHAP feature | No | false |
6. Execute the following command to create the PowerStore custom resource: `kubectl create -f <custom-resource-manifest.yaml>`. The above command will deploy the CSI-PowerStore driver.
- After that the driver should be installed, you can check the condition of driver pods by running `kubectl get all -n `
diff --git a/content/v2/csidriver/installation/operator/unity.md b/content/v2/csidriver/installation/operator/unity.md
index 89e8b9a699..d728919dde 100644
--- a/content/v2/csidriver/installation/operator/unity.md
+++ b/content/v2/csidriver/installation/operator/unity.md
@@ -97,12 +97,12 @@ metadata:
namespace: test-unity
spec:
driver:
- configVersion: v2.3.0
+ configVersion: v2.4.0
replicas: 2
dnsPolicy: ClusterFirstWithHostNet
forceUpdate: false
common:
- image: "dellemc/csi-unity:v2.3.0"
+ image: "dellemc/csi-unity:v2.4.0"
imagePullPolicy: IfNotPresent
sideCars:
- name: provisioner
@@ -210,7 +210,6 @@ kubectl edit configmap -n unity unity-config-params
3. Also, snapshotter and resizer sidecars are not optional to choose, it comes default with Driver installation.
## Volume Health Monitoring
-This feature is introduced in CSI Driver for Unity XT version v2.1.0.
### Operator based installation
diff --git a/content/v2/csidriver/installation/test/unity.md b/content/v2/csidriver/installation/test/unity.md
index db32d53c98..d969ead6aa 100644
--- a/content/v2/csidriver/installation/test/unity.md
+++ b/content/v2/csidriver/installation/test/unity.md
@@ -28,9 +28,9 @@ You can find all the created resources in `test-unity` namespace.
kubectl delete -f ./test/sample.yaml
```
-## Support for SLES 15 SP2
+## Support for SLES 15
-The CSI Driver for Dell Unity XT requires the following set of packages installed on all worker nodes that run on SLES 15 SP2.
+The CSI Driver for Dell Unity XT requires the following packages to be installed on all worker nodes that run on SLES 15.
- open-iscsi **open-iscsi is required in order to make use of iSCSI protocol for provisioning**
- nfs-utils **nfs-utils is required in order to make use of NFS protocol for provisioning**
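
A sketch of installing these packages on a SLES 15 worker node, assuming only the two packages listed above are needed (your environment may require more):
```bash
# Install the iSCSI initiator and NFS client utilities, then enable the iSCSI daemon.
zypper --non-interactive install open-iscsi nfs-utils
systemctl enable --now iscsid
```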
diff --git a/content/v2/csidriver/release/operator.md b/content/v2/csidriver/release/operator.md
index 9696d83067..924c939f57 100644
--- a/content/v2/csidriver/release/operator.md
+++ b/content/v2/csidriver/release/operator.md
@@ -3,14 +3,9 @@ title: Operator
description: Release notes for Dell CSI Operator
---
-## Release Notes - Dell CSI Operator 1.8.0
+## Release Notes - Dell CSI Operator 1.9.0
->**Note:** There will be a delay in certification of Dell CSI Operator 1.8.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.8.0 release.
-
-### New Features/Changes
-
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
+>**Note:** There will be a delay in certification of Dell CSI Operator 1.9.0 and it will not be available for download from the Red Hat OpenShift certified catalog right away. The operator will still be available for download from the Red Hat OpenShift Community Catalog soon after the 1.9.0 release.
### Fixed Issues
There are no fixed issues in this release.
diff --git a/content/v2/csidriver/release/powerflex.md b/content/v2/csidriver/release/powerflex.md
index b77837c82e..9a3b0cd0fa 100644
--- a/content/v2/csidriver/release/powerflex.md
+++ b/content/v2/csidriver/release/powerflex.md
@@ -6,13 +6,12 @@ description: Release notes for PowerFlex CSI driver
## Release Notes - CSI PowerFlex v2.4.0
### New Features/Changes
-- Added InstallationID annotation for volume attributes.
-- Added optional parameter protectionDomain to storageclass.
+- [Added optional parameter protectionDomain to storageclass](https://github.com/dell/csm/issues/415)
+- [Added InstallationID annotation for volume attributes.](https://github.com/dell/csm/issues/434)
- RHEL 8.6 support added
-### Fixed Issues
-
-- Enhancements to volume group snapshotter.
+### Fixed Issues
+- [Enhancements and fixes to volume group snapshotter](https://github.com/dell/csm/issues/371)
### Known Issues
diff --git a/content/v2/csidriver/release/powermax.md b/content/v2/csidriver/release/powermax.md
index 4da3a97c41..273de37a5a 100644
--- a/content/v2/csidriver/release/powermax.md
+++ b/content/v2/csidriver/release/powermax.md
@@ -3,35 +3,27 @@ title: PowerMax
description: Release notes for PowerMax CSI driver
---
-## Release Notes - CSI PowerMax v2.3.1
+## Release Notes - CSI PowerMax v2.4.0
+
+> Note: Starting with CSI v2.4.0, only Unisphere 10.0 REST endpoints are supported. Unisphere must be updated to 10.0. Instructions are available [here](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true).
### New Features/Changes
-- Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype.
-- Added support for Standalone Helm Charts.
-- Removed beta volumesnapshotclass sample files.
-- Added mapping of PV/PVC to namespace.
-- Added support to configure fsGroupPolicy.
-- Added support to filter topology keys based on user inputs.
-- Added support for SRDF Metro group sharing multiple namespaces.
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
-- Added support to convert replicated volume to non-replicated volume and vice versa for Sync and Async modes.
-- Added expansion support for replicated volumes.
-- Added concurrency enhancements for replicated volumes
-
->Note: v2.3.1 has been qualified with helm installation only. For using it via operator installation please change the image tag to v2.3.1 [here](https://github.com/dell/dell-csi-operator/blob/main/config/samples/storage_v1_csipowermax.yaml) for installing via UI and [here](https://github.com/dell/dell-csi-operator/tree/main/samples) for installating via CLI.
+- [Online volume expansion for replicated volumes.](https://github.com/dell/csm/issues/336)
+- [Added support for PowerMaxOS 10.](https://github.com/dell/csm/issues/389)
+- [Removed 9.x Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
+- [Added 10.0 Unisphere REST endpoints support.](https://github.com/dell/csm/issues/389)
+- [Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above).](https://github.com/dell/csm/issues/411)
+- [Added PowerPath support.](https://github.com/dell/csm/issues/436)
### Fixed Issues
- - [Volume Attachment failure due to WWN mismatch](https://github.com/dell/csm/issues/548)
+There are no fixed issues in this release.
### Known Issues
| Issue | Workaround |
|-------|------------|
-| Delete Volume fails with the error message: volume is part of masking view | This issue is due to limitations in Unisphere and occurs when Unisphere is overloaded. Currently, there is no workaround for this but it can be avoided by ensuring that Unisphere is not overloaded during such operations. The Unisphere team is assessing a fix for this in a future Unisphere release|
-| Getting initiators list fails with context deadline error | The following error can occur during the driver installation if a large number of initiators are present on the array. There is no workaround for this but it can be avoided by deleting stale initiators on the array|
+|[Volume Attachment failure due to WWN mismatch](https://github.com/dell/csm/issues/548)| Please upgrade the driver to 2.5.0+|
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
-| GetSnapVolumeList fails with context deadline error | The following error can occur if a large number of snapshots are present on the array. There is no workaround for this but it can be avoided by deleting unused snapshots on the array|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node |
| After expanding file system volume , new size is not getting reflected inside the container | This is a known issue and has been reported at https://github.com/dell/csm/issues/378 . Workaround : Remount the volumes 1. Edit the replica count as 0 in application StatefulSet 2. Change the replica count as 1 for same StatefulSet. |
diff --git a/content/v2/csidriver/release/powerscale.md b/content/v2/csidriver/release/powerscale.md
index 1a14c62bb6..01909ced74 100644
--- a/content/v2/csidriver/release/powerscale.md
+++ b/content/v2/csidriver/release/powerscale.md
@@ -3,15 +3,11 @@ title: PowerScale
description: Release notes for PowerScale CSI driver
---
-## Release Notes - CSI Driver for PowerScale v2.3.0
+## Release Notes - CSI Driver for PowerScale v2.4.0
### New Features/Changes
-- Removed beta volumesnapshotclass sample files.
-- Added support for Kubernetes 1.24.
-- Added support to increase volume path limit.
-- Added support for OpenShift 4.10.
-- Added support for CSM Resiliency sidecar via Helm.
+- [Added support to add client only to root clients when RO volume is created from snapshot and RootClientEnabled is set to true.](https://github.com/dell/csm/issues/362)
### Fixed Issues
@@ -23,7 +19,8 @@ There are no fixed issues in this release.
| If the length of the nodeID exceeds 128 characters, the driver fails to update the CSINode object and installation fails. This is due to a limitation set by CSI spec which doesn't allow nodeID to be greater than 128 characters. | The CSI PowerScale driver uses the hostname for building the nodeID which is set in the CSINode resource object, hence we recommend not having very long hostnames in order to avoid this issue. This current limitation of 128 characters is likely to be relaxed in future Kubernetes versions as per this issue in the community: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/issues/581
**Note:** In kubernetes 1.22 this limit has been relaxed to 192 characters. |
| If some older NFS exports /terminated worker nodes still in NFS export client list, CSI driver tries to add a new worker node it fails (For RWX volume). | User need to manually clean the export client list from old entries to make successful addition of new worker nodes. |
| Delete namespace that has PVCs and pods created with the driver. The External health monitor sidecar crashes as a result of this operation. | Deleting the namespace deletes the PVCs first and then removes the pods in the namespace. This brings a condition where pods exist without their PVCs and causes the external-health-monitor sidecar to crash. This is a known issue and has been reported at https://github.com/kubernetes-csi/external-health-monitor/issues/100 |
-| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter |
+| fsGroupPolicy may not work as expected without root privileges for NFS only https://github.com/kubernetes/examples/issues/260 | To get the desired behavior set "RootClientEnabled" = "true" in the storage class parameter |
+| Driver logs show "VendorVersion=2.3.0+dirty" | Update the driver to csi-powerscale 2.4.0 |
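+
+To confirm whether a running driver is affected, the reported version can be checked in the controller logs, for example (a sketch, assuming the driver is installed with the default `isilon` namespace and pod name):
+
+```
+kubectl logs isilon-controller-0 -n isilon -c driver | grep VendorVersion
+```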
### Note:
diff --git a/content/v2/csidriver/release/powerstore.md b/content/v2/csidriver/release/powerstore.md
index f0bbb59e8a..b11c3b8d86 100644
--- a/content/v2/csidriver/release/powerstore.md
+++ b/content/v2/csidriver/release/powerstore.md
@@ -3,16 +3,13 @@ title: PowerStore
description: Release notes for PowerStore CSI driver
---
-## Release Notes - CSI PowerStore v2.3.0
+## Release Notes - CSI PowerStore v2.4.0
### New Features/Changes
-- Support Volume Group Snapshots.
-- Removed beta volumesnapshotclass sample files.
-- Support Configurable Volume Attributes.
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
-- Added support for NVMe/FC protocol.
+- [Updated deprecated StorageClass parameter fsType with csi.storage.k8s.io/fstype](https://github.com/dell/csm/issues/188)
+- [Added support for iSCSI in TKG Qualification](https://github.com/dell/csm/issues/363)
+- [Added support for Standalone Helm Chart](https://github.com/dell/csm/issues/355)
### Fixed Issues
diff --git a/content/v2/csidriver/release/unity.md b/content/v2/csidriver/release/unity.md
index 701d0778d4..9a0668e3c3 100644
--- a/content/v2/csidriver/release/unity.md
+++ b/content/v2/csidriver/release/unity.md
@@ -3,16 +3,11 @@ title: Unity XT
description: Release notes for Unity XT CSI driver
---
-## Release Notes - CSI Unity XT v2.3.0
+## Release Notes - CSI Unity XT v2.4.0
### New Features/Changes
-- Removed beta volumesnapshotclass sample files.
-- Added support for Kubernetes 1.24.
-- Added support for OpenShift 4.10.
-
-### Fixed Issues
-CSM Resiliency: Occasional failure unmounting Unity volume for raw block devices via iSCSI.
+- [Added support to configure fsGroupPolicy](https://github.com/dell/csm/issues/361)
### Known Issues
diff --git a/content/v2/csidriver/troubleshooting/powerflex.md b/content/v2/csidriver/troubleshooting/powerflex.md
index 373605cc8e..f53deb66cd 100644
--- a/content/v2/csidriver/troubleshooting/powerflex.md
+++ b/content/v2/csidriver/troubleshooting/powerflex.md
@@ -22,6 +22,7 @@ description: Troubleshooting PowerFlex Driver
| Volume metrics are missing | Enable [Volume Health Monitoring](../../features/powerflex#volume-health-monitoring) |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
| CSI-PowerFlex volumes cannot mount; are being recognized as multipath devices | CSI-PowerFlex does not support multipath; to fix: 1. Remove any multipath mapping involving a powerflex volume with `multipath -f ` 2. Blacklist CSI-PowerFlex volumes in multipath config file |
+| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powerflex) for more details |
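+
+The currently deployed policy can be checked before upgrading, for example (a sketch, assuming kubectl access to the cluster where the driver is installed):
+
+```
+kubectl get csidriver csi-vxflexos.dellemc.com -o jsonpath='{.spec.fsGroupPolicy}'
+```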
>*Note*: `vxflexos-controller-*` is the controller pod that acquires leader lease
diff --git a/content/v2/csidriver/troubleshooting/powermax.md b/content/v2/csidriver/troubleshooting/powermax.md
index 76cc3d4b23..ba6db41fbf 100644
--- a/content/v2/csidriver/troubleshooting/powermax.md
+++ b/content/v2/csidriver/troubleshooting/powermax.md
@@ -11,3 +11,4 @@ description: Troubleshooting PowerMax Driver
| `kubectl logs powermax-controller- –n driver` logs show that the driver failed to connect to the U4P because it could not verify the certificates | Check the powermax-certs secret and ensure it is not empty or it has the valid certificates|
|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powermax/blob/main/helm/csi-powermax/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
+| When attempting a driver upgrade, you see: ```spec.fsGroupPolicy: Invalid value: "xxx": field is immutable``` | You cannot upgrade between drivers with different fsGroupPolicies. See [upgrade documentation](../../upgradation/drivers/powermax) for more details |
diff --git a/content/v2/csidriver/troubleshooting/powerscale.md b/content/v2/csidriver/troubleshooting/powerscale.md
index e3f233a76c..8c35ed482a 100644
--- a/content/v2/csidriver/troubleshooting/powerscale.md
+++ b/content/v2/csidriver/troubleshooting/powerscale.md
@@ -18,3 +18,4 @@ Here are some installation failures that might be encountered and how to mitigat
| The `kubectl logs isilon-controller-0 -n isilon -c driver` logs shows the driver **Authentication failed. Trying to re-authenticate** when using Session-based authentication | The issue has been resolved from OneFS 9.3 onwards, for OneFS versions prior to 9.3 for session-based authentication either smart connect can be created against a single node of Isilon or CSI Driver can be installed/pointed to a particular node of the Isilon else basic authentication can be used by setting isiAuthType in `values.yaml` to 0 |
| When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time. This is to allow faster creation(within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. |
| While attaching a ReadOnly PVC from a volume snapshot to a pod, the mount operation will fail with error `mounting ... failed, reason given by server: No such file or directory`, if RO volume's access zone(non System access zone) on Isilon is configured with a dedicated service IP(which is same as `AzServiceIP` storage class parameter). This operation results in accessing the snapshot base directory(`/ifs`) and results in overstepping the RO volume's access zone's base directory, which the OneFS doesn't allow. | Provide a service ip that belongs to RO volume's access zone which set the highest level `/ifs` as its zone base directory. |
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
diff --git a/content/v2/csidriver/troubleshooting/powerstore.md b/content/v2/csidriver/troubleshooting/powerstore.md
index 62c1622262..7ba746fb2a 100644
--- a/content/v2/csidriver/troubleshooting/powerstore.md
+++ b/content/v2/csidriver/troubleshooting/powerstore.md
@@ -11,4 +11,5 @@ description: Troubleshooting PowerStore Driver
| If PVC is not getting created and getting the following error in PVC description: ```failed to provision volume with StorageClass "powerstore-iscsi": rpc error: code = Internal desc = : Unknown error:```| Check if you've created a secret with correct credentials |
| If the NVMeFC pod is not getting created and the host looses the ssh connection, causing the driver pods to go to error state | remove the nvme_tcp module from the host incase of NVMeFC connection |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the volumeattachment to the node that went down. Now the volume can be attached to the new node. |
-| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. |
\ No newline at end of file
+| If the pod creation for NVMe takes time when the connections between the host and the array are more than 2 and considerable volumes are mounted on the host | Reduce the number of connections between the host and the array to 2. |
+|Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerstore/blob/main/helm/csi-powerstore/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
\ No newline at end of file
diff --git a/content/v2/csidriver/troubleshooting/unity.md b/content/v2/csidriver/troubleshooting/unity.md
index 9905215390..cd398664b5 100644
--- a/content/v2/csidriver/troubleshooting/unity.md
+++ b/content/v2/csidriver/troubleshooting/unity.md
@@ -14,3 +14,4 @@ description: Troubleshooting Unity XT Driver
| PVC creation fails on a fresh cluster with **iSCSI** and **NFS** protocols alone enabled with error **failed to provision volume with StorageClass "unity-iscsi": error generating accessibility requirements: no available topology found**. | This is because iSCSI initiator login takes longer than the node pod startup time. This can be overcome by bouncing the node pods in the cluster using the below command the driver pods with **kubectl get pods -n unity --no-headers=true \| awk '/unity-/{print $1}'\| xargs kubectl delete -n unity pod** |
| Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: `Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.21.0 < 1.25.0 which is incompatible with Kubernetes V1.21.11-mirantis-1` | If you are using an extended Kubernetes version, please see the helm Chart at `helm/csi-unity/Chart.yaml` and use the alternate `kubeVersion` check that is provided in the comments. *Please note* that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported. |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | 1. Force delete the pod running on the node that went down 2. Delete the VolumeAttachment to the node that went down. Now the volume can be attached to the new node. |
+| Volume attachments are not removed after deleting the pods | If you are using a Kubernetes version earlier than 1.24, assign a volume name prefix such that the total length of the volume name created on the array is more than 68 bytes. From Kubernetes 1.24 onwards, this issue is handled. See the Kubernetes issue https://github.com/kubernetes/kubernetes/issues/97230 for a detailed explanation. |
diff --git a/content/v2/csidriver/upgradation/drivers/isilon.md b/content/v2/csidriver/upgradation/drivers/isilon.md
index 75fca2acda..5fcdd65f99 100644
--- a/content/v2/csidriver/upgradation/drivers/isilon.md
+++ b/content/v2/csidriver/upgradation/drivers/isilon.md
@@ -8,12 +8,12 @@ Description: Upgrade PowerScale CSI driver
---
You can upgrade the CSI Driver for Dell PowerScale using Helm or Dell CSI Operator.
-## Upgrade Driver from version 2.2.0 to 2.3.0 using Helm
+## Upgrade Driver from version 2.3.0 to 2.4.0 using Helm
**Note:** While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Clone the repository using `git clone -b v2.3.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
+1. Clone the repository using `git clone -b v2.4.0 https://github.com/dell/csi-powerscale.git`, copy the helm/csi-isilon/values.yaml into a new location with a custom name say _my-isilon-settings.yaml_, to customize settings for installation. Edit _my-isilon-settings.yaml_ as per the requirements.
2. Change to directory dell-csi-helm-installer to install the Dell PowerScale `cd dell-csi-helm-installer`
3. Upgrade the CSI Driver for Dell PowerScale using following command:
diff --git a/content/v2/csidriver/upgradation/drivers/offline.md b/content/v2/csidriver/upgradation/drivers/offline.md
new file mode 100644
index 0000000000..752de08e0f
--- /dev/null
+++ b/content/v2/csidriver/upgradation/drivers/offline.md
@@ -0,0 +1,9 @@
+---
+title: Offline Upgrade of Dell CSI Storage Providers
+linktitle: Offline Upgrade
+description: Offline Upgrade of Dell CSI Storage Providers
+---
+
+1. To perform an offline upgrade of the driver, create an offline bundle as described [here](./../../../installation/offline#building-an-offline-bundle).
+2. Once the bundle is created, unpack it by following the steps described [here](./../../../installation/offline#unpacking-the-offline-bundle-and-preparing-for-installation) (an illustrative command sketch for these two steps follows this list).
+3. Use the driver-specific upgrade steps to upgrade.
\ No newline at end of file
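+
+A minimal sketch of steps 1 and 2, run from the dell-csi-helm-installer directory; the flag names and registry path below are assumptions and should be confirmed against the linked offline installation page:
+
+```
+# Build the offline bundle (flag assumed; see the offline installation page)
+./csi-offline-bundle.sh -c
+# Unpack the bundle and prepare it for installation against an internal registry (flags and registry path are placeholders)
+./csi-offline-bundle.sh -p -r <registry-host>:<port>/<path>
+```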
diff --git a/content/v2/csidriver/upgradation/drivers/operator.md b/content/v2/csidriver/upgradation/drivers/operator.md
index eab8bedd28..51298cee83 100644
--- a/content/v2/csidriver/upgradation/drivers/operator.md
+++ b/content/v2/csidriver/upgradation/drivers/operator.md
@@ -13,7 +13,7 @@ Dell CSI Operator can be upgraded based on the supported platforms in one of the
### Using Installation Script
-1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.8.0 https://github.com/dell/dell-csi-operator.git`.
+1. Clone and checkout the required dell-csi-operator version using `git clone -b v1.9.0 https://github.com/dell/dell-csi-operator.git`.
2. cd dell-csi-operator
3. Execute `bash scripts/install.sh --upgrade` . This command will install the latest version of the operator.
>Note: Dell CSI Operator version 1.4.0 and higher would install to the 'dell-csi-operator' namespace by default.
@@ -25,5 +25,5 @@ The `Update approval` (**`InstallPlan`** in OLM terms) strategy plays a role whi
- If the **`Update approval`** is set to `Automatic`, OpenShift automatically detects whenever the latest version of dell-csi-operator is available in the **`Operator hub`**, and upgrades it to the latest available version.
- If the upgrade policy is set to `Manual`, OpenShift notifies of an available upgrade. This notification can be viewed by the user in the **`Installed Operators`** section of the OpenShift console. Clicking on the hyperlink to `Approve` the installation would trigger the dell-csi-operator upgrade process.
-**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.5.0`.
+**NOTE**: The recommended version of OLM for Upstream Kubernetes is **`v0.18.3`** when upgrading operator to `v1.9.0`.
diff --git a/content/v2/csidriver/upgradation/drivers/powerflex.md b/content/v2/csidriver/upgradation/drivers/powerflex.md
index 5c181f183e..75fbe21a34 100644
--- a/content/v2/csidriver/upgradation/drivers/powerflex.md
+++ b/content/v2/csidriver/upgradation/drivers/powerflex.md
@@ -23,6 +23,20 @@ You can upgrade the CSI Driver for Dell PowerFlex using Helm or Dell CSI Operato
- To update any installation parameter after the driver has been installed, change the `myvalues.yaml` file and run the install script with the option _\-\-upgrade_, for example: `./csi-install.sh --namespace vxflexos --values ./myvalues.yaml --upgrade`.
- The logging configuration from v1.5 will not work in v2.1, since the log configuration parameters are now set in the values.yaml file located at helm/csi-vxflexos/values.yaml. Please set the logging configuration parameters in the values.yaml file.
+- You cannot upgrade between drivers with different fsGroupPolicies. To check the current driver's fsGroupPolicy, use this command:
+```
+kubectl describe csidriver csi-vxflexos.dellemc.com
+```
+and check the "Spec" section:
+```
+...
+Spec:
+ Attach Required: true
+ Fs Group Policy: ReadWriteOnceWithFSType
+ Pod Info On Mount: true
+ Requires Republish: false
+ Storage Capacity: false
+...
+```
+
## Upgrade using Dell CSI Operator:
**Note:** Upgrading the Operator does not upgrade the CSI Driver.
diff --git a/content/v2/csidriver/upgradation/drivers/powermax.md b/content/v2/csidriver/upgradation/drivers/powermax.md
index abf069a5b3..de810ef264 100644
--- a/content/v2/csidriver/upgradation/drivers/powermax.md
+++ b/content/v2/csidriver/upgradation/drivers/powermax.md
@@ -10,16 +10,37 @@ Description: Upgrade PowerMax CSI driver
You can upgrade CSI Driver for Dell PowerMax using Helm or Dell CSI Operator.
-## Update Driver from v2.2 to v2.3.1 using Helm
+**Note:** The CSI Driver for Dell PowerMax v2.4.0 requires Unisphere with 10.0 REST endpoint support.
+### Updating the CSI Driver to use 10.0 Unisphere
+
+1. Upgrade Unisphere to a version with 10.0 REST endpoint support. Please find the instructions [here.](https://dl.dell.com/content/manual34878027-dell-unisphere-for-powermax-10-0-0-installation-guide.pdf?language=en-us&ps=true)
+2. Update `my-powermax-settings.yaml` so that the endpoint points to the upgraded 10.0 Unisphere instance (a verification sketch follows these steps).
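+
+After the Unisphere upgrade, the REST API version it reports can be verified with a quick check such as the one below (a sketch; the endpoint path and port assume a default Unisphere for PowerMax deployment and valid credentials):
+
+```
+# Should report a 10.x REST API version after the upgrade
+curl -k -u <username>:<password> https://<unisphere-host>:8443/univmax/restapi/version
+```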
+
+## Update Driver from v2.3 to v2.4 using Helm
**Steps**
-1. Run `git clone -b v2.3.1 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.3.1 driver.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powermax.git` to clone the git repository and get the v2.4 driver.
2. Update the values file as needed.
2. Run the `csi-install` script with the option _\-\-upgrade_ by running: `cd ../dell-csi-helm-installer && ./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml --upgrade`.
*NOTE:*
- If you are upgrading from a driver version that was installed using Helm v2, ensure that you install Helm3 before installing the driver.
- To update any installation parameter after the driver has been installed, change the `my-powermax-settings.yaml` file and run the install script with the option _\-\-upgrade_, for example: `./csi-install.sh --namespace powermax --values ./my-powermax-settings.yaml –upgrade`.
+- You cannot upgrade between drivers with different fsGroupPolicies. To check the current driver's fsGroupPolicy, use this command:
+```
+kubectl describe csidriver csi-powermax
+```
+and check the "Spec" section:
+
+```
+...
+Spec:
+ Attach Required: true
+ Fs Group Policy: ReadWriteOnceWithFSType
+ Pod Info On Mount: false
+ Requires Republish: false
+ Storage Capacity: false
+...
+
+```
## Upgrade using Dell CSI Operator:
**Note:** Upgrading the Operator does not upgrade the CSI Driver.
diff --git a/content/v2/csidriver/upgradation/drivers/powerstore.md b/content/v2/csidriver/upgradation/drivers/powerstore.md
index 089fa38c68..aa24207cef 100644
--- a/content/v2/csidriver/upgradation/drivers/powerstore.md
+++ b/content/v2/csidriver/upgradation/drivers/powerstore.md
@@ -9,12 +9,12 @@ Description: Upgrade PowerStore CSI driver
You can upgrade the CSI Driver for Dell PowerStore using Helm or Dell CSI Operator.
-## Update Driver from v2.2 to v2.3 using Helm
+## Update Driver from v2.3 to v2.4 using Helm
Note: While upgrading the driver via helm, controllerCount variable in myvalues.yaml can be at most one less than the number of worker nodes.
**Steps**
-1. Run `git clone -b v2.3.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
+1. Run `git clone -b v2.4.0 https://github.com/dell/csi-powerstore.git` to clone the git repository and get the driver.
2. Edit `helm/config.yaml` file and configure connection information for your PowerStore arrays changing the following parameters:
- *endpoint*: defines the full URL path to the PowerStore API.
- *globalID*: specifies what storage cluster the driver should use
@@ -28,10 +28,10 @@ Note: While upgrading the driver via helm, controllerCount variable in myvalues.
Add more blocks similar to above for each PowerStore array if necessary.
3. (optional) create new storage classes using ones from `samples/storageclass` folder as an example and apply them to the Kubernetes cluster by running `kubectl create -f `
- >Storage classes created by v1.4/v2.0/v2.1 driver will not be deleted, v2.2 driver will use default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1 in your cluster then be sure to include the same array you have used for the v1.4/v2.0/v2.1 driver and make it default in the `config.yaml` file.
+   >Storage classes created by the v1.4/v2.0/v2.1/v2.2/v2.3 driver will not be deleted; the v2.4 driver will use the default array to manage volumes provisioned with old storage classes. Thus, if you still have volumes provisioned by v1.4/v2.0/v2.1/v2.2/v2.3 in your cluster, be sure to include the same array you used for the v1.4/v2.0/v2.1/v2.2/v2.3 driver and make it the default in the `config.yaml` file.
4. Create the secret by running ```kubectl create secret generic powerstore-config -n csi-powerstore --from-file=config=secret.yaml```
-5. Copy the default values.yaml file `cp ./helm/csi-powerstore/values.yaml ./dell-csi-helm-installer/my-powerstore-settings.yaml` and update parameters as per the requirement.
-6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./dell-csi-helm-installer/csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`.
+5. Copy the default values.yaml file with `cd dell-csi-helm-installer && cp ../helm/csi-powerstore/values.yaml ./my-powerstore-settings.yaml` and update the parameters as required.
+6. Run the `csi-install` script with the option _\-\-upgrade_ by running: `./csi-install.sh --namespace csi-powerstore --values ./my-powerstore-settings.yaml --upgrade`.
## Upgrade using Dell CSI Operator:
diff --git a/content/v2/csidriver/upgradation/drivers/unity.md b/content/v2/csidriver/upgradation/drivers/unity.md
index 26b4e4d47d..a1bfe7a3cc 100644
--- a/content/v2/csidriver/upgradation/drivers/unity.md
+++ b/content/v2/csidriver/upgradation/drivers/unity.md
@@ -20,9 +20,9 @@ You can upgrade the CSI Driver for Dell Unity XT using Helm or Dell CSI Operator
Preparing myvalues.yaml is the same as explained in the install section.
-To upgrade the driver from csi-unity v2.2.0 to csi-unity v2.3.0
+To upgrade the driver from csi-unity v2.3.0 to csi-unity v2.4.0
-1. Get the latest csi-unity v2.3.0 code from Github using using `git clone -b v2.3.0 https://github.com/dell/csi-unity.git`.
+1. Get the latest csi-unity v2.4.0 code from GitHub using `git clone -b v2.4.0 https://github.com/dell/csi-unity.git`.
2. Copy the helm/csi-unity/values.yaml to the new location csi-unity/dell-csi-helm-installer and rename it to myvalues.yaml. Customize settings for installation by editing myvalues.yaml as needed.
3. Navigate to csi-unity/dell-csi-hem-installer folder and execute this command:
`./csi-install.sh --namespace unity --values ./myvalues.yaml --upgrade`
diff --git a/content/v2/deployment/_index.md b/content/v2/deployment/_index.md
index 23b93beb33..8e6ecf2e58 100644
--- a/content/v2/deployment/_index.md
+++ b/content/v2/deployment/_index.md
@@ -1,13 +1,76 @@
---
title: "Deployment"
linkTitle: "Deployment"
+no_list: true
description: Deployment of CSM for Replication
weight: 1
---
+The Container Storage Modules, along with the required CSI Drivers, can be deployed using the CSM Operator.
+>Note: The CSM Operator is currently in tech preview and is not supported in production environments.
+
+{{< cardpane >}}
+ {{< card header="[**CSM Operator**](csmoperator/)"
+ footer="Supports driver [PowerScale](csmoperator/drivers/powerscale/), modules [Authorization](csmoperator/modules/authorization/) [Replication](csmoperator/modules/replication/)">}}
+ Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
+[...More on installation instructions](csmoperator/)
+ {{< /card >}}
+{{< /cardpane >}}
The Container Storage Modules and the required CSI Drivers can each be deployed following the links below:
-- [Dell CSI Drivers Installation](../csidriver/installation)
-- [Dell Container Storage Module for Observability](../observability/deployment)
-- [Dell Container Storage Module for Authorization](../authorization/deployment)
-- [Dell Container Storage Module for Resiliency](../resiliency/deployment)
-- [Dell Container Storage Module for Replication](../replication/deployment)
\ No newline at end of file
+
+
+{{< cardpane >}}
+ {{< card header="[Dell CSI Drivers Installation via Helm](../csidriver/installation/helm)"
+ footer="Installs [PowerStore](../csidriver/installation/helm/powerstore/) [PowerMax](../csidriver/installation/helm/powermax/) [PowerScale](../csidriver/installation/helm/isilon/) [PowerFlex](../csidriver/installation/helm/powerflex/) [Unity](../csidriver/installation/helm/unity/)">}}
+ Dell CSI Helm installer installs the CSI Driver components using the provided Helm charts.
+ [...More on installation instructions](../csidriver/installation/helm)
+ {{< /card >}}
+ {{< card header="[Dell CSI Drivers Installation via offline installer](../csidriver/installation/offline)"
+ footer="[Offline installation for all drivers](../csidriver/installation/offline)">}}
+ Both Helm and the Dell CSI Operator support offline installation of the Dell CSI Storage Providers via the `csi-offline-bundle.sh` script, which creates a usable package.
+ [...More on installation instructions](../csidriver/installation/offline)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[Dell CSI Drivers Installation via operator](../csidriver/installation/operator)"
+ footer="Installs [PowerStore](../csidriver/installation/operator/powerstore/) [PowerMax](../csidriver/installation/operator/powermax/) [PowerScale](../csidriver/installation/operator/isilon/) [PowerFlex](../csidriver/installation/operator/powerflex/) [Unity](../csidriver/installation/operator/unity/)">}}
+ Dell CSI Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. It is also available as a certified operator for OpenShift clusters and can be deployed using the OpenShift Container Platform. Both these methods of installation use OLM (Operator Lifecycle Manager). The operator can also be deployed manually.
+ [...More on installation instructions](../csidriver/installation/operator)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[Dell Container Storage Module for Observability](../observability/deployment)"
+ footer="Installs Observability Module">}}
+ CSM for Observability can be deployed via Helm, the CSM for Observability Installer, or the CSM for Observability Offline Installer.
+ [...More on installation instructions](../observability/deployment)
+ {{< /card >}}
+ {{< card header="[Dell Container Storage Module for Authorization](../authorization/deployment)"
+ footer="Installs Authorization Module">}}
+ CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
+ [...More on installation instructions](../authorization/deployment)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[Dell Container Storage Module for Resiliency](../resiliency/deployment)"
+ footer="Installs Resiliency Module">}}
+ CSI drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed via variables in the chart. It can be updated via the _podmon_ block specified in _values.yaml_.
+ [...More on installation instructions](../resiliency/deployment)
+ {{< /card >}}
+ {{< card header="[Dell Container Storage Module for Replication](../replication/deployment)"
+ footer="Installs Replication Module">}}
+ The Replication module can be installed by installing repctl, the Container Storage Modules (CSM) for Replication Controller, and the CSI driver after enabling replication.
+ [...More on installation instructions](../replication/deployment)
+ {{< /card >}}
+{{< /cardpane >}}
+{{< cardpane >}}
+ {{< card header="[Dell Container Storage Module for Application Mobility](../applicationmobility/deployment)"
+ footer="Installs Application Mobility Module">}}
+ The Application Mobility module can be installed via Helm charts. This is a tech preview release and requires a license for installation.
+ [...More on installation instructions](../applicationmobility/deployment)
+ {{< /card >}}
+ {{< card header="[Dell Container Storage Module for Encryption](../secure/encryption/deployment)"
+ footer="Installs Encryption Module">}}
+ Encryption can be optionally installed via the PowerScale CSI driver Helm chart.
+ [...More on installation instructions](../secure/encryption/deployment)
+ {{< /card >}}
+{{< /cardpane >}}
diff --git a/content/v2/deployment/csminstaller/_index.md b/content/v2/deployment/csminstaller/_index.md
deleted file mode 100644
index 4527ddfd9f..0000000000
--- a/content/v2/deployment/csminstaller/_index.md
+++ /dev/null
@@ -1,193 +0,0 @@
----
-title: "CSM Installer"
-linkTitle: "CSM Installer"
-description: Container Storage Modules Installer
-weight: 1
----
-
-{{% pageinfo color="primary" %}}
-The CSM Installer is currently deprecated and will no longer be supported as of CSM v1.4.0
-{{% /pageinfo %}}
-
->>**Note: The CSM Installer only supports installation of CSM 1.0 Modules and CSI Drivers in environments that do not have any existing deployments of CSM or CSI Drivers. The CSM Installer does not support the upgrade of existing CSM or CSI Driver deployments.**
-
-The CSM (Container Storage Modules) Installer simplifies the deployment and management of Dell Container Storage Modules and CSI Drivers to provide persistent storage for your containerized workloads.
-
-## CSM Installer Supported Modules and Dell CSI Drivers
-
-| Modules/Drivers | CSM 1.0 |
-| - | :-: |
-| Authorization | 1.0 |
-| Observability | 1.0 |
-| Replication | 1.0 |
-| Resiliency | 1.0 |
-| CSI Driver for PowerScale | v2.0 |
-| CSI Driver for Unity XT | v2.0 |
-| CSI Driver for PowerStore | v2.0 |
-| CSI Driver for PowerFlex | v2.0 |
-| CSI Driver for PowerMax | v2.0 |
-
-The CSM Installer must first be deployed in a Kubernetes environment using Helm. After which, the CSM Installer can be used through the following interfaces:
-- [CSM CLI](./csmcli)
-- [REST API](./csmapi)
-
-## How to Deploy the Container Storage Modules Installer
-
-1. Add the `dell` helm repository:
-
-```
-helm repo add dell https://dell.github.io/helm-charts
-```
-
-**If securing the API service and database, following steps 2 to 4 to generate the certificates, or skip to step 5 to deploy without certificates**
-
-2. Generate self-signed certificates using the following commands:
-
-```
-mkdir api-certs
-
-openssl req \
- -newkey rsa:4096 -nodes -sha256 -keyout api-certs/ca.key \
- -x509 -days 365 -out api-certs/ca.crt -subj '/'
-
-openssl req \
- -newkey rsa:4096 -nodes -sha256 -keyout api-certs/cert.key \
- -out api-certs/cert.csr -subj '/'
-
-openssl x509 -req -days 365 -in api-certs/cert.csr -CA api-certs/ca.crt \
- -CAkey api-certs/ca.key -CAcreateserial -out api-certs/cert.crt
-```
-
-3. If required, download the `cockroach` binary used to generate certificates for the cockroach-db:
-```
-curl https://binaries.cockroachdb.com/cockroach-v21.1.8.linux-amd64.tgz | tar -xz && sudo cp -i cockroach-v21.1.8.linux-amd64/cockroach /usr/local/bin/
-```
-
-4. Generate the certificates required for the cockroach-db service:
-```
-mkdir db-certs
-
-cockroach cert create-ca --certs-dir=db-certs --ca-key=db-certs/ca.key
-
-cockroach cert create-node cockroachdb-0.cockroachdb.csm-installer.svc.cluster.local cockroachdb-public cockroachdb-0.cockroachdb --certs-dir=db-certs/ --ca-key=db-certs/ca.key
-
-```
- In case multiple instances of cockroachdb are required add all nodes names while creating nodes on the certificates
-```
-cockroach cert create-node cockroachdb-0.cockroachdb.csm-installer.svc.cluster.local cockroachdb-1.cockroachdb.csm-installer.svc.cluster.local cockroachdb-2.cockroachdb.csm-installer.svc.cluster.local cockroachdb-public cockroachdb-0.cockroachdb cockroachdb-1.cockroachdb cockroachdb-2.cockroachdb --certs-dir=db-certs/ --ca-key=db-certs/ca.key
-```
-
-```
-cockroach cert create-client root --certs-dir=db-certs/ --ca-key=db-certs/ca.key
-
-cockroach cert list --certs-dir=db-certs/
-```
-
-5. Create a values.yaml file that contains JWT, Cipher key, and Admin username and password of CSM Installer that are required by the installer during helm installation. See the [Configuration](#configuration) section for other values that can be set during helm installation.
-
-> __Note__: `jwtKey` will be used as a shared secret in HMAC algorithm for generating jwt token, `cipherKey` will be used as a symmetric key in AES cipher for encryption of storage system credentials. Those parameters are arbitrary, and you can set them to whatever you like. Just ensure that `cipherKey` is exactly 32 characters long.
-
-```
-# string of any length
-jwtKey:
-
-# string of exactly 32 characters
-cipherKey: ""
-
-# Admin username of CSM Installer
-adminUserName:
-
-# Admin password of CSM Installer
-adminPassword:
-```
-
-6. Follow step `a` if certificates are being used or step `b` if certificates are not being used:
-
-a) Install the helm chart, specifying the certificates generated in the previous steps:
-```
-helm install -n csm-installer --create-namespace \
- --set-file serviceCertificate=api-certs/cert.crt \
- --set-file servicePrivateKey=api-certs/cert.key \
- --set-file databaseCertificate=db-certs/node.crt \
- --set-file databasePrivateKey=db-certs/node.key \
- --set-file dbClientCertificate=db-certs/client.root.crt \
- --set-file dbClientPrivateKey=db-certs/client.root.key \
- --set-file caCrt=db-certs/ca.crt \
- -f values.yaml \
- csm-installer dell/csm-installer
-```
-b) If not deploying with certificates, execute the following command:
-```
-helm install -n csm-installer --create-namespace \
- --set-string scheme=http \
- --set-string dbSSLEnabled="false" \
- -f values.yaml \
- csm-installer dell/csm-installer
-```
-
-> __Note__: In an OpenShift environment, the cockroachdb StatefulSet will run privileged pods so that it can mount the Persistent Volume used for storage. Follow the documentation for your OpenShift version to enable privileged pods.
-
-### Configuration
-
-| Parameter | Description | Default |
-|----------------------------------|-----------------------------------------------|---------------------------------------------------------|
-| `csmInstallerCount` | Number of replicas for the CSM Installer Deployment | `1`|
-| `dbInstanceCount` | Number of replicas for the CSM Database StatefulSet | `2` |
-| `imagePullPolicy` | Image pull policy for the CSM Installer images | `Always` |
-| `host` | Host or IP that will be used to bind to the CSM Installer API service | `0.0.0.0` |
-| `port` | Port that will be used to bind to the CSM Installer API service | `8080` |
-| `scheme` | Scheme used for the CSM Installer API service. Valid values are `https` and `http` | `https` |
-| `jwtKey` | Key used to sign the JWT token | |
-| `cipherKey` | Key used to encrypt/decrypt user and storage system credentials. Must be 32 characters in length. | |
-| `logLevel` | Log level used for the CSM Installer. Valid values are `DEBUG`, `INFO`, `WARN`, `ERROR`, and `FATAL` | `INFO` |
-| `dbHost` | Host name of the Cockroach DB instance | `cockroachdb-public` |
-| `dbPort` | Port number to access the Cockroach DB instance | `26257` |
-| `dbSSLEnabled` | Enable SSL for the Cockroach DB connectiong | `true` |
-| `installerImage` | Location of the CSM Installer Docker Image | `dellemc/dell-csm-installer:v1.0.0` |
-| `dataCollectorImage`| Location of the CSM Data Collector Docker Image | `dellemc/csm-data-collector:v1.0.0` |
-| `adminUserName` | Username to authenticate with the CSM Installer | |
-| `adminPassword` | Password to authenticate with the CSM Installer | |
-| `dbVolumeDirectory` | Directory on the worker node to use for the Persistent Volume | `/var/lib/cockroachdb` |
-| `api_server_ip` | If using Swagger, set to public IP or host of the CSM Installer API service | `localhost` |
-
-## How to Upgrade the Container Storage Modules Installer
-
-When a new version of the CSM Installer helm chart is available, the following steps can be used to upgrade to the latest version.
-
->Note: Upgrading the CSM Installer does not upgrade the Dell CSI Drivers or modules that were previously deployed with the installer. The CSM Installer does not support upgrading of the Dell CSI Drivers or modules. The Dell CSI Drivers and modules must be deleted and re-deployed using the latest CSM Installer in order to get the most recent version of the Dell CSI Driver and modules.
-
-1. Update the helm repository.
-```
-helm repo update
-```
-
-2. Follow step `a` if certificates were used during the initial installation of the helm chart or step `b` if certificates were not used:
-
-a) Upgrade the helm chart, specifying the certificates used during initial installation:
-```
-helm upgrade -n csm-installer \
- --set-file serviceCertificate=api-certs/cert.crt \
- --set-file servicePrivateKey=api-certs/cert.key \
- --set-file databaseCertificate=db-certs/node.crt \
- --set-file databasePrivateKey=db-certs/node.key \
- --set-file dbClientCertificate=db-certs/client.root.crt \
- --set-file dbClientPrivateKey=db-certs/client.root.key \
- --set-file caCrt=db-certs/ca.crt \
- -f values.yaml \
- csm-installer dell/csm-installer
-```
-
-b) If not deploying with certificates, execute the following command:
-```
-helm upgrade -n csm-installer \
- --set-string scheme=http \
- --set-string dbSSLEnabled="false" \
- -f values.yaml \
- csm-installer dell/csm-installer
-```
-## How to Uninstall the Container Storage Modules Installer
-
-1. Delete the Helm chart
-```
-helm delete -n csm-installer csm-installer
-```
diff --git a/content/v2/deployment/csminstaller/csmapi.md b/content/v2/deployment/csminstaller/csmapi.md
deleted file mode 100644
index 812f36b835..0000000000
--- a/content/v2/deployment/csminstaller/csmapi.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "CSM REST API"
-type: swagger
-weight: 1
-description: Reference for the CSM REST API
----
-
-{{< swaggerui src="../swagger.yaml" >}}
\ No newline at end of file
diff --git a/content/v2/deployment/csminstaller/csmcli.md b/content/v2/deployment/csminstaller/csmcli.md
deleted file mode 100644
index 3711351969..0000000000
--- a/content/v2/deployment/csminstaller/csmcli.md
+++ /dev/null
@@ -1,269 +0,0 @@
----
-title : CSM CLI
-linktitle: CSM CLI
-weight: 2
-description: >
- Dell Container Storage Modules (CSM) Command Line Interface(CLI) Deployment and Management
----
-`csm` is a command-line client for installation of Dell Container Storage Modules and CSI Drivers for Kubernetes clusters.
-
-## Pre-requisites
-
-1. [Deploy the Container Storage Modules Installer](../../deployment)
-2. Download/Install the `csm` binary from Github: https://github.com/dell/csm. Alternatively, you can build the binary by:
- - cloning the `csm` repository
- - changing into `csm/cmd/csm` directory
- - running `make build`
-3. create a `cli_env.sh` file that contains the correct values for the below variables. And export the variables by running `source ./cli_env.sh`
-
-```console
-# Change this to CSM API Server IP
-export API_SERVER_IP="127.0.0.1"
-
-# Change this to CSM API Server Port
-export API_SERVER_PORT="31313"
-
-# CSM API Server protocol - allowed values are https & http
-export SCHEME="https"
-
-# Path to store JWT
-export AUTH_CONFIG_PATH="/home/user/installer-token/"
-```
-
-## Usage
-
-```console
-~$ ./csm -h
-csm is command line tool for csm application
-
-Usage:
- csm [flags]
- csm [command]
-
-Available Commands:
- add add cluster, configuration or storage
- approve-task approve task for application
- authenticate authenticate user
- change change - subcommand is password
- create create application
- delete delete storage, cluster, configuration or application
- get get storage, cluster, application, configuration, supported driver, module, storage type
- help Help about any command
- reject-task reject task for an application
- update update storage, configuration or cluster
-
-Flags:
- -h, --help help for csm-cli
-
-Use "csm [command] --help" for more information about a command.
-```
-
-### Authenticate the User
-
-To begin with, you need to authenticate the user who will be managing the CSM Installer and its components.
-
-```console
-./csm authenticate --username= --password=
-```
-Or more securely, run the above command without `--password` to be prompted for one
-
-```console
-./csm authenticate --username=
-Enter user's password:
-
-```
-
-### Change Password
-
-To change password follow below command
-
-```console
-./csm change password --username=
-```
-
-### View Supported Platforms
-
-You can now view the supported DellCSI Drivers
-
-```console
-./csm get supported-drivers
-```
-
-You can also view the supported Modules
-
-```console
-./csm get supported-modules
-```
-
-And also view the supported Storage Array Types
-
-```console
-./csm get supported-storage-arrays
-```
-
-### Add a Cluster
-
-You can now add a cluster by providing cluster detail name and Kubeconfig path
-
-```console
-./csm add cluster --clustername --configfilepath
-```
-
-### Upload Configuration Files
-
-You can now add a configuration file that can be used for creating application by providing filename and path
-
-```console
-./csm add configuration --filename --filepath
-```
-
-### Add a Storage System
-
-You can now add storage endpoints, array type and its unique id
-
-```console
-./csm add storage --endpoint --storage-type --unique-id --username
-```
-
-The optional `--meta-data` flag can be used to provide additional meta-data for the storage system that is used when creating Secrets for the CSI Driver. These fields include:
- - isDefault: Set to true if this storage system is used as default for multi-array configuration
- - skipCertificateValidation: Set to true to skip certificate validation
- - mdmId: Comma separated list of MDM IPs for PowerFlex
- - nasName: NAS Name for PowerStore
- - blockProtocol: Block Protocol for PowerStore
- - port: Port for PowerScale
- - portGroups: Comma separated list of port group names for PowerMax
-
-### Create an Application
-
-You may now create an application depending on the specific use case. Below are the common use cases:
-
-
- CSI Driver
-
-```console
-./csm create application --clustername \
- --driver-type powerflex: --name \
- --storage-arrays
-```
-
-
-
- CSI Driver with CSM Authorization
-
-CSM Authorization requires a `token.yaml` issued by storage Admin from the CSM Authorization Server, a certificate file, and the of the authorization server. The `token.yaml` and `cert` should be added by following the steps in [adding configuration file](#upload-configuration-files). CSM Authorization does not yet support all CSI Drivers/platforms(See [supported platforms documentation](../../authorization/#supported-platforms) or [supported platforms via CLI](#view-supported-platforms))).
-Finally, run the command below:
-
-```console
-./csm create application --clustername \
- --driver-type powerflex: --name \
- --storage-arrays \
- --module-type authorization: \
- --module-configuration "karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost="
-
-```
-
-
-
- CSM Observability(Standalone)
-
-CSM Observability depends on driver config secret(s) corresponding to the metric(s) you want to enable. Please see [CSM Observability](../../observability/metrics) for all Supported Metrics. For the sake of demonstration, assuming we want to enable [CSM Metrics for PowerFlex](../../observability/metrics/powerflex), the PowerFlex secret yaml should be added by following the steps in [adding configuration file](#upload-configuration-files).
-Once this is done, run the command below:
-
-```console
-./csm create application --clustername \
- --name \
- --module-type observability: \
- --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true"
-```
-
-
-
- CSM Observability(Standalone) with CSM Authorization
-
-See the individual steps for configuaration file pre-requisites for CSM Observability (Standalone) with CSM Authorization
-
-```console
-./csm create application --clustername \
- --name \
- --module-type "observability:,authorization:" \
- --module-configuration "karaviMetricsPowerflex.driverConfig.filename=,karaviMetricsPowerflex.enabled=true,karaviAuthorizationProxy.proxyAuthzToken.filename=,karaviAuthorizationProxy.rootCertificate.filename=,karaviAuthorizationProxy.proxyHost="
-```
-
-
-
- CSI Driver for Dell PowerMax with reverse proxy module
-
- To deploy CSI Driver for Dell PowerMax with reverse proxy module, first upload reverse proxy tls crt and tls key via [adding configuration file](#upload-configuration-files). Then, use the below command to create application:
-
-```console
-./csm create application --clustername \
- --driver-type powermax: --name \
- --storage-arrays \
- --module-type reverse-proxy: \
- --module-configuration reverseProxy.tlsSecretKeyFile=,reverseProxy.tlsSecretCertFile=
-```
-
-
-
- CSI Driver with replication module
-
- To deploy CSI driver with replication module, first add a target cluster through [adding cluster](#add-a-cluster). Then, use the below command(this command is an example to deploy CSI Driver for Dell PowerStore with replication module) to create application::
-
-```console
-./csm create application --clustername \
- --driver-type powerstore: --name \
- --storage-arrays \
- --module-configuration target_cluster= \
- --module-type replication:
-```
-
-
-
-
- CSI Driver with other module(s) not covered above
-
- Assuming you want to deploy a driver with `module A` and `module B`. If they have specific configurations of `A.image="docker:v1"`,`A.filename=hello`, and `B.namespace=world`.
-
-```console
-./csm create application --clustername \
- --driver-type powerflex: --name \
- --storage-arrays \
- --module-type "module A:,module B:" \
- --module-configuration "A.image=docker:v1,A.filename=hello,B.namespace=world"
-```
-
-
-
-> __Note__:
- - `--driver-type` and `--module-type` flags in create application command MUST match the values from the [supported CSM platforms](#view-supported-platforms)
- - Replication module supports only using a pair of clusters at a time (source and a target/or single cluster) from CSM installer, However `repctl` can be used if needed to add multiple pairs of target clusters. Using replication module with other modules during application creation is not yet supported.
-
-### Approve application/task
-
-You may now approve the task so that you can continue to work with the application
-
-```console
-./csm approve-task --applicationname
-```
-
-### Reject application/task
-
-You may want to reject a task or application to discontinue the ongoing process
-
-```console
-./csm reject-task --applicationname
-```
-
-### Delete application/task
-
-If you want to delete an application
-
-```console
-./csm delete application --name
-```
-
-> __Note__: When deleting an application, the namespace and Secrets are not deleted. These resources need to be deleted manually. See more in [Troubleshooting](../troubleshooting#after-deleting-an-application-why-cant-i-re-create-the-same-application).
-
-> __Note__: All commands and associated syntax can be displayed with -h or --help
-
diff --git a/content/v2/deployment/csminstaller/swagger.yaml b/content/v2/deployment/csminstaller/swagger.yaml
deleted file mode 100644
index 15a9b8b227..0000000000
--- a/content/v2/deployment/csminstaller/swagger.yaml
+++ /dev/null
@@ -1,1395 +0,0 @@
-basePath: /api/v1
-definitions:
- ApplicationCreateRequest:
- properties:
- cluster_id:
- type: string
- driver_configuration:
- items:
- type: string
- type: array
- driver_type_id:
- type: string
- module_configuration:
- items:
- type: string
- type: array
- module_types:
- items:
- type: string
- type: array
- name:
- type: string
- storage_arrays:
- items:
- type: string
- type: array
- required:
- - cluster_id
- - driver_type_id
- - name
- type: object
- ApplicationResponse:
- properties:
- application_output:
- type: string
- cluster_id:
- type: string
- driver_configuration:
- items:
- type: string
- type: array
- driver_type_id:
- type: string
- id:
- type: string
- module_configuration:
- items:
- type: string
- type: array
- module_types:
- items:
- type: string
- type: array
- name:
- type: string
- storage_arrays:
- items:
- type: string
- type: array
- type: object
- ClusterResponse:
- properties:
- cluster_id:
- type: string
- cluster_name:
- type: string
- nodes:
- description: The nodes
- type: string
- type: object
- ConfigFileResponse:
- properties:
- id:
- type: string
- name:
- type: string
- type: object
- DriverResponse:
- properties:
- id:
- type: string
- storage_array_type_id:
- type: string
- version:
- type: string
- type: object
- ErrorMessage:
- properties:
- arguments:
- items:
- type: string
- type: array
- code:
- description: HTTPStatusEnum Possible HTTP status values of completed or failed
- jobs
- enum:
- - 200
- - 201
- - 202
- - 204
- - 400
- - 401
- - 403
- - 404
- - 422
- - 429
- - 500
- - 503
- type: integer
- message:
- description: Message string.
- type: string
- message_l10n:
- description: Localized message
- type: object
- severity:
- description: |-
- SeverityEnum - The severity of the condition
- * INFO - Information that may be of use in understanding the failure. It is not a problem to fix.
- * WARNING - A condition that isn't a failure, but may be unexpected or a contributing factor. It may be necessary to fix the condition to successfully retry the request.
- * ERROR - An actual failure condition through which the request could not continue.
- * CRITICAL - A failure with significant impact to the system. Normally failed commands roll back and are just ERROR, but this is possible
- enum:
- - INFO
- - WARNING
- - ERROR
- - CRITICAL
- type: string
- type: object
- ErrorResponse:
- properties:
- http_status_code:
- description: HTTPStatusEnum Possible HTTP status values of completed or failed
- jobs
- enum:
- - 200
- - 201
- - 202
- - 204
- - 400
- - 401
- - 403
- - 404
- - 422
- - 429
- - 500
- - 503
- type: integer
- messages:
- description: |-
- A list of messages describing the failure encountered by this request. At least one will
- be of Error severity because Info and Warning conditions do not cause the request to fail
- items:
- $ref: '#/definitions/ErrorMessage'
- type: array
- type: object
- ModuleResponse:
- properties:
- id:
- type: string
- name:
- type: string
- standalone:
- type: boolean
- version:
- type: string
- type: object
- StorageArrayCreateRequest:
- properties:
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- password:
- type: string
- storage_array_type:
- type: string
- unique_id:
- type: string
- username:
- type: string
- required:
- - management_endpoint
- - password
- - storage_array_type
- - unique_id
- - username
- type: object
- StorageArrayResponse:
- properties:
- id:
- type: string
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- storage_array_type_id:
- type: string
- unique_id:
- type: string
- username:
- type: string
- type: object
- StorageArrayTypeResponse:
- properties:
- id:
- type: string
- name:
- type: string
- type: object
- StorageArrayUpdateRequest:
- properties:
- management_endpoint:
- type: string
- meta_data:
- items:
- type: string
- type: array
- password:
- type: string
- storage_array_type:
- type: string
- unique_id:
- type: string
- username:
- type: string
- type: object
- TaskResponse:
- properties:
- _links:
- additionalProperties:
- additionalProperties:
- type: string
- type: object
- type: object
- application_name:
- type: string
- id:
- type: string
- logs:
- type: string
- status:
- type: string
- type: object
-info:
- contact: {}
- description: CSM Deployment API
- title: CSM Deployment API
- version: "1.0"
-paths:
- /applications:
- get:
- consumes:
- - application/json
- description: List all applications
- operationId: list-applications
- parameters:
- - description: Application Name
- in: query
- name: name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ApplicationResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all applications
- tags:
- - application
- post:
- consumes:
- - application/json
- description: Create a new application
- operationId: create-application
- parameters:
- - description: Application info for creation
- in: body
- name: application
- required: true
- schema:
- $ref: '#/definitions/ApplicationCreateRequest'
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new application
- tags:
- - application
- /applications/{id}:
- delete:
- consumes:
- - application/json
- description: Delete an application
- operationId: delete-application
- parameters:
- - description: Application ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete an application
- tags:
- - application
- get:
- consumes:
- - application/json
- description: Get an application
- operationId: get-application
- parameters:
- - description: Application ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ApplicationResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get an application
- tags:
- - application
- /clusters:
- get:
- consumes:
- - application/json
- description: List all clusters
- operationId: list-clusters
- parameters:
- - description: Cluster Name
- in: query
- name: cluster_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ClusterResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all clusters
- tags:
- - cluster
- post:
- consumes:
- - application/json
- description: Create a new cluster
- operationId: create-cluster
- parameters:
- - description: Name of the cluster
- in: formData
- name: name
- required: true
- type: string
- - description: kube config file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new cluster
- tags:
- - cluster
- /clusters/{id}:
- delete:
- consumes:
- - application/json
- description: Delete a cluster
- operationId: delete-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete a cluster
- tags:
- - cluster
- get:
- consumes:
- - application/json
- description: Get a cluster
- operationId: get-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a cluster
- tags:
- - cluster
- patch:
- consumes:
- - application/json
- description: Update a cluster
- operationId: update-cluster
- parameters:
- - description: Cluster ID
- in: path
- name: id
- required: true
- type: string
- - description: Name of the cluster
- in: formData
- name: name
- type: string
- - description: kube config file
- in: formData
- name: file
- type: file
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ClusterResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a cluster
- tags:
- - cluster
- /configuration-files:
- get:
- consumes:
- - application/json
- description: List all configuration files
- operationId: list-config-file
- parameters:
- - description: Name of the configuration file
- in: query
- name: config_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ConfigFileResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all configuration files
- tags:
- - configuration-file
- post:
- consumes:
- - application/json
- description: Create a new configuration file
- operationId: create-config-file
- parameters:
- - description: Name of the configuration file
- in: formData
- name: name
- required: true
- type: string
- - description: Configuration file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/ConfigFileResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new configuration file
- tags:
- - configuration-file
- /configuration-files/{id}:
- delete:
- consumes:
- - application/json
- description: Delete a configuration file
- operationId: delete-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: ""
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete a configuration file
- tags:
- - configuration-file
- get:
- consumes:
- - application/json
- description: Get a configuration file
- operationId: get-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ConfigFileResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a configuration file
- tags:
- - configuration-file
- patch:
- consumes:
- - application/json
- description: Update a configuration file
- operationId: update-config-file
- parameters:
- - description: Configuration file ID
- in: path
- name: id
- required: true
- type: string
- - description: Name of the configuration file
- in: formData
- name: name
- required: true
- type: string
- - description: Configuration file
- in: formData
- name: file
- required: true
- type: file
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a configuration file
- tags:
- - configuration-file
- /driver-types:
- get:
- consumes:
- - application/json
- description: List all driver types
- operationId: list-driver-types
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/DriverResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all driver types
- tags:
- - driver-type
- /driver-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a driver type
- operationId: get-driver-type
- parameters:
- - description: Driver Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/DriverResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a driver type
- tags:
- - driver-type
- /module-types:
- get:
- consumes:
- - application/json
- description: List all module types
- operationId: list-module-type
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/ModuleResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all module types
- tags:
- - module-type
- /module-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a module type
- operationId: get-module-type
- parameters:
- - description: Module Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/ModuleResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a module type
- tags:
- - module-type
- /storage-array-types:
- get:
- consumes:
- - application/json
- description: List all storage array types
- operationId: list-storage-array-type
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/StorageArrayTypeResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all storage array types
- tags:
- - storage-array-type
- /storage-array-types/{id}:
- get:
- consumes:
- - application/json
- description: Get a storage array type
- operationId: get-storage-array-type
- parameters:
- - description: Storage Array Type ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/StorageArrayTypeResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a storage array type
- tags:
- - storage-array-type
- /storage-arrays:
- get:
- consumes:
- - application/json
- description: List all storage arrays
- operationId: list-storage-arrays
- parameters:
- - description: Unique ID
- in: query
- name: unique_id
- type: string
- - description: Storage Type
- in: query
- name: storage_type
- type: string
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- items:
- $ref: '#/definitions/StorageArrayResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all storage arrays
- tags:
- - storage-array
- post:
- consumes:
- - application/json
- description: Create a new storage array
- operationId: create-storage-array
- parameters:
- - description: Storage Array info for creation
- in: body
- name: storageArray
- required: true
- schema:
- $ref: '#/definitions/StorageArrayCreateRequest'
- produces:
- - application/json
- responses:
- "201":
- description: Created
- schema:
- $ref: '#/definitions/StorageArrayResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Create a new storage array
- tags:
- - storage-array
- /storage-arrays/{id}:
- delete:
- consumes:
- - application/json
- description: Delete storage array
- operationId: delete-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: Success
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Delete storage array
- tags:
- - storage-array
- get:
- consumes:
- - application/json
- description: Get storage array
- operationId: get-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/StorageArrayResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get storage array
- tags:
- - storage-array
- patch:
- consumes:
- - application/json
- description: Update a storage array
- operationId: update-storage-array
- parameters:
- - description: Storage Array ID
- in: path
- name: id
- required: true
- type: string
- - description: Storage Array info for update
- in: body
- name: storageArray
- required: true
- schema:
- $ref: '#/definitions/StorageArrayUpdateRequest'
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Update a storage array
- tags:
- - storage-array
- /tasks:
- get:
- consumes:
- - application/json
- description: List all tasks
- operationId: list-tasks
- parameters:
- - description: Application Name
- in: query
- name: application_name
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- items:
- $ref: '#/definitions/TaskResponse'
- type: array
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: List all tasks
- tags:
- - task
- /tasks/{id}:
- get:
- consumes:
- - application/json
- description: Get a task
- operationId: get-task
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- produces:
- - application/json
- responses:
- "200":
- description: OK
- schema:
- $ref: '#/definitions/TaskResponse'
- "303":
- description: See Other
- schema:
- $ref: '#/definitions/TaskResponse'
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Get a task
- tags:
- - task
- /tasks/{id}/approve:
- post:
- consumes:
- - application/json
- description: Approve state change for an application
- operationId: approve-state-change-application
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- - description: Task is associated with an Application update operation
- in: query
- name: updating
- type: boolean
- produces:
- - application/json
- responses:
- "202":
- description: Accepted
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Approve state change for an application
- tags:
- - task
- /tasks/{id}/cancel:
- post:
- consumes:
- - application/json
- description: Cancel state change for an application
- operationId: cancel-state-change-application
- parameters:
- - description: Task ID
- in: path
- name: id
- required: true
- type: string
- - description: Task is associated with an Application update operation
- in: query
- name: updating
- type: boolean
- produces:
- - application/json
- responses:
- "200":
- description: Success
- schema:
- type: string
- "400":
- description: Bad Request
- schema:
- $ref: '#/definitions/ErrorResponse'
- "404":
- description: Not Found
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - ApiKeyAuth: []
- summary: Cancel state change for an application
- tags:
- - task
- /users/change-password:
- patch:
- consumes:
- - application/json
- description: Change password for existing user
- operationId: change-password
- parameters:
- - description: Enter New Password
- format: password
- in: query
- name: password
- required: true
- type: string
- produces:
- - application/json
- responses:
- "204":
- description: No Content
- "401":
- description: Unauthorized
- schema:
- $ref: '#/definitions/ErrorResponse'
- "403":
- description: Forbidden
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - BasicAuth: []
- summary: Change password for existing user
- tags:
- - user
- /users/login:
- post:
- consumes:
- - application/json
- description: Login for existing user
- operationId: login
- produces:
- - application/json
- responses:
- "200":
- description: Bearer Token for Logged in User
- schema:
- type: string
- "401":
- description: Unauthorized
- schema:
- $ref: '#/definitions/ErrorResponse'
- "403":
- description: Forbidden
- schema:
- $ref: '#/definitions/ErrorResponse'
- "500":
- description: Internal Server Error
- schema:
- $ref: '#/definitions/ErrorResponse'
- security:
- - BasicAuth: []
- summary: Login for existing user
- tags:
- - user
-securityDefinitions:
- ApiKeyAuth:
- in: header
- name: Authorization
- type: apiKey
- BasicAuth:
- type: basic
-swagger: "2.0"
diff --git a/content/v2/deployment/csminstaller/troubleshooting.md b/content/v2/deployment/csminstaller/troubleshooting.md
deleted file mode 100644
index 3fa403c8da..0000000000
--- a/content/v2/deployment/csminstaller/troubleshooting.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "Troubleshooting"
-linkTitle: "Troubleshooting"
-weight: 3
-Description: >
- Troubleshooting guide
----
-
-## Frequently Asked Questions
-
- - [Why does the installation fail due to an invalid cipherKey value?](#why-does-the-installation-fail-due-to-an-invalid-cipherkey-value)
- - [Why does the cluster-init pod show the error "cluster has already been initialized"?](#why-does-the-cluster-init-pod-show-the-error-cluster-has-already-been-initialized)
- - [Why does the precheck fail when creating an application?](#why-does-the-precheck-fail-when-creating-an-application)
- - [How can I view detailed logs for the CSM Installer?](#how-can-i-view-detailed-logs-for-the-csm-installer)
- - [After deleting an application, why can't I re-create the same application?](#after-deleting-an-application-why-cant-i-re-create-the-same-application)
- - [How can I upgrade CSM if I've used the CSM Installer to deploy CSM 1.0?](#how-can-i-upgrade-csm-if-ive-used-the-csm-installer-to-deploy-csm-10)
-
-### Why does the installation fail due to an invalid cipherKey value?
-The `cipherKey` value used during deployment of the CSM Installer must be exactly 32 characters in length and contained within quotes.
-
-### Why does the cluster-init pod show the error "cluster has already been initialized"?
-During the initial start-up of the CSM Installer, the database will be initialized by the cluster-init job. If the CSM Installer is uninstalled and then re-installed on the same cluster, this error may be shown due to the Persistent Volume for the database already containing an initialized database. The CSM Installer will function as normal and the cluster-init job can be ignored.
-
-If a clean installation of the CSM Installer is required, the `dbVolumeDirectory` (default location `/var/lib/cockroachdb`) must be deleted from the worker node which is hosting the Persistent Volume. After this directory is deleted, the CSM Installer can be re-installed.
-
-Caution: Deleting the `dbVolumeDirectory` location will remove any data persisted by the CSM Installer including clusters, storage systems, and installed applications.
-
-### Why does the precheck fail when creating an application?
-Each CSI Driver and CSM Module has required software or CRDs that must be installed before the application can be deployed in the cluster. These prechecks are verified when the `csm create application` command is executed. If the error message "create application failed" is displayed, [review the CSM Installer logs](#how-can-i-view-detailed-logs-for-the-csm-installer) to view details about the failed prechecks.
-
-If the precheck fails due to required software (e.g. iSCSI, NFS, SDC) not installed on the cluster nodes, follow these steps to address the issue:
-1. Delete the cluster from the CSM Installer using the `csm delete cluster` command.
-2. Update the nodes in the cluster by installing required software.
-3. Add the cluster to the CSM Installer using the `csm add cluster` command.
-
-### How can I view detailed logs for the CSM Installer?
-Detailed logs of the CSM Installer can be displayed using the following command:
-```
-kubectl logs -f -n deploy/dell-csm-installer
-```
-
-### After deleting an application, why can't I re-create the same application?
-After deleting an application using the `csm delete application` command, the namespace and other non-application resources including Secrets are not deleted from the cluster. This is to prevent removing any resources that may not have been created by the CSM Installer. The namespace must be manually deleted before attempting to re-create the same application using the CSM Installer.
-
-### How can I upgrade CSM if I've used the CSM Installer to deploy CSM 1.0?
-The CSM Installer currently does not support upgrade. If you used the CSM Installer to deploy CSM 1.0 you will need to perform the following steps to upgrade:
-1. Using the CSM installer, [delete](../csmcli#delete-applicationtask) any driver/module applications that were installed (ex: `csm delete application --name `).
-2. Uninstall the CSM Installer (ex: helm delete -n )
-3. Follow the deployment instructions [here](../../) to redeploy the CSI driver and modules.
\ No newline at end of file
diff --git a/content/v2/deployment/csmoperator/_index.md b/content/v2/deployment/csmoperator/_index.md
index c89d7e9d74..887c1abb50 100644
--- a/content/v2/deployment/csmoperator/_index.md
+++ b/content/v2/deployment/csmoperator/_index.md
@@ -6,10 +6,10 @@ weight: 1
---
{{% pageinfo color="primary" %}}
-The Dell CSM Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed.
+The Dell Container Storage Modules Operator is currently in tech-preview and is not supported in production environments. It can be used in environments where no other Dell CSI Drivers or CSM Modules are installed.
{{% /pageinfo %}}
-The Dell CSM Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
+The Dell Container Storage Modules Operator is a Kubernetes Operator, which can be used to install and manage the CSI Drivers and CSM Modules provided by Dell for various storage platforms. This operator is available as a community operator for upstream Kubernetes and can be deployed using OperatorHub.io. The operator can be installed using OLM (Operator Lifecycle Manager) or manually.
## Supported Platforms
Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenShift. Supported versions are listed below.
@@ -29,6 +29,7 @@ Dell CSM Operator has been tested and qualified on Upstream Kubernetes and OpenS
| CSM Modules | Version | ConfigVersion |
| ------------------ | --------- | -------------- |
| CSM Authorization | 1.2.0 + | v1.2.0 + |
+| CSM Authorization | 1.3.0 + | v1.3.0 + |
## Installation
Dell CSM Operator can be installed manually or via Operator Hub.
@@ -62,7 +63,7 @@ Dell CSM Operator can be installed manually or via Operator Hub.
{{< imgproc install_olm_pods.jpg Resize "2500x" >}}{{< /imgproc >}}
->**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.2`**.
+>**NOTE**: The recommended version of OLM for upstream Kubernetes is **`v0.18.3`**.
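+
+For reference, a minimal sketch of installing that OLM release on upstream Kubernetes with the install script bundled with OLM releases (this is the generic OLM quickstart, not a step specific to this operator; verify the release asset before running):
+
+```console
+$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/install.sh | bash -s v0.18.3
+```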
### Installation via Operator Hub
`dell-csm-operator` can be installed via Operator Hub on upstream Kubernetes clusters & Red Hat OpenShift Clusters.
@@ -119,7 +120,7 @@ The specification for the Custom Resource is the same for all the drivers.Below
#### Mandatory fields
-**configVersion** - Configuration version - refer [here](#full-list-of-csi-drivers-and-versions-supported-by-the-dell-csm-operator) for appropriate config version.
+**configVersion** - Configuration version - refer [here](#supported-csi-drivers) for appropriate config version.
**replicas** - Number of replicas for controller plugin - must be set to 1 for all drivers.
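+
+A minimal sketch of where these mandatory fields sit in a driver Custom Resource. The `apiVersion`, `kind`, and surrounding layout are assumed from the operator's sample manifests and the values are illustrative only; start from the sample shipped for your driver rather than from this fragment:
+
+```yaml
+apiVersion: storage.dell.com/v1
+kind: ContainerStorageModule
+metadata:
+  name: isilon
+  namespace: dell-csi
+spec:
+  driver:
+    csiDriverType: "isilon"   # driver selector; illustrative value
+    configVersion: v2.3.0     # must match a supported config version for the chosen driver
+    replicas: 1               # controller plugin replicas; set to 1 for all drivers
+```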
diff --git a/content/v2/deployment/csmoperator/drivers/_index.md b/content/v2/deployment/csmoperator/drivers/_index.md
index 18129d5071..91c428b596 100644
--- a/content/v2/deployment/csmoperator/drivers/_index.md
+++ b/content/v2/deployment/csmoperator/drivers/_index.md
@@ -37,7 +37,7 @@ kubectl create -f client/config/crd
kubectl create -f deploy/kubernetes/snapshot-controller
```
*NOTE:*
-- It is recommended to use 5.0.x version of snapshotter/snapshot-controller.
+- It is recommended to use 6.0.x version of snapshotter/snapshot-controller.
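+
+As a sketch, the paths in the commands above come from a checkout of the external-snapshotter repository, so fetch a 6.0.x release first (the tag below is illustrative; pick the latest 6.0.x tag):
+
+```console
+$ git clone --branch v6.0.1 https://github.com/kubernetes-csi/external-snapshotter.git
+$ cd external-snapshotter
+```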
## Installing CSI Driver via Operator
diff --git a/content/v2/license/_index.md b/content/v2/license/_index.md
new file mode 100644
index 0000000000..ec7bd9d734
--- /dev/null
+++ b/content/v2/license/_index.md
@@ -0,0 +1,20 @@
+---
+title: "License"
+linkTitle: "License"
+weight: 12
+Description: >
+ Dell Container Storage Modules (CSM) License
+---
+
+The tech-preview releases of [Container Storage Modules](https://github.com/dell/csm) for Application Mobility and Encryption require a license. This section details how to request a license.
+
+## Requesting a License
+1. Request a license using the [Container Storage Modules License Request](https://app.smartsheet.com/b/form/5e46fad643874d56b1f9cf4c9f3071fb) by providing these details:
+- **Full Name**: Full name of the person requesting the license
+- **Email Address**: The license will be emailed to this email address
+- **Company / Organization**: Company or organization where the license will be used
+- **License Type**: Select either *Application Mobility* or *Encryption*, depending on the CSM module that will be used with the license
+- **List of kube-system namespace UIDs**: The license will only function on the provided list of Kubernetes clusters. Find the UID of the kube-system namespace using `kubectl get ns kube-system -o yaml` or a similar `oc` command, and provide the UIDs as a comma-separated list (see the sketch after these steps).
+- (Optional) **Send me a copy of my responses**: A copy of the license request will be sent to the provided email address
+2. After submitting the form, a response will be provided within several business days with an attachment containing the license.
+3. Refer to the specific CSM module documentation for adding the license to the Kubernetes cluster.
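+
+A minimal sketch of collecting the kube-system UID referenced above, assuming `kubectl` access to each cluster (jsonpath output is used here instead of reading the full YAML):
+
+```console
+$ kubectl get ns kube-system -o jsonpath='{.metadata.uid}'
+```
+
+Run this against every cluster that needs the license and join the resulting UIDs with commas in the request form.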
\ No newline at end of file
diff --git a/content/v2/observability/_index.md b/content/v2/observability/_index.md
index 8f9f05fc63..cc8165d4a3 100644
--- a/content/v2/observability/_index.md
+++ b/content/v2/observability/_index.md
@@ -14,13 +14,14 @@ Description: >
Metrics data is collected and pushed to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector), so it can be processed, and exported in a format consumable by Prometheus. SSL certificates for TLS between nodes are handled by [cert-manager](https://github.com/jetstack/cert-manager).
-CSM for Observability is composed of several services, each living in its own GitHub repository, that can be installed following one of the three deployments we support [here](deployment). Contributions can be made to this repository or any of the CSM for Observability repositories listed below.
+CSM for Observability is composed of several services, each living in its own GitHub repository, that can be installed following one of the four deployments we support [here](deployment). Contributions can be made to this repository or any of the CSM for Observability repositories listed below.
{{
}}
| Name | Repository | Description |
| ---- | --------- | ----------- |
-| Performance Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Performance Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. |
-| Performance Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Performance Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics so they can be visualized in Grafana. Please visit the repository for more information. |
+| Metrics for PowerFlex | [CSM Metrics for PowerFlex](https://github.com/dell/karavi-metrics-powerflex) | Metrics for PowerFlex captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerFlex. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. |
+| Metrics for PowerStore | [CSM Metrics for PowerStore](https://github.com/dell/csm-metrics-powerstore) | Metrics for PowerStore captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerStore. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. |
+| Metrics for PowerScale | [CSM Metrics for PowerScale](https://github.com/dell/csm-metrics-powerscale) | Metrics for PowerScale captures telemetry data about Kubernetes storage usage and performance obtained through the CSI (Container Storage Interface) Driver for Dell PowerScale. The metrics service pushes it to the OpenTelemetry Collector, so it can be processed, and exported in a format consumable by Prometheus. Prometheus can then be configured to scrape the OpenTelemetry Collector exporter endpoint to provide metrics, so they can be visualized in Grafana. Please visit the repository for more information. |
| Volume Topology | [CSM Topology](https://github.com/dell/karavi-topology) | Topology provides Kubernetes administrators with the topology data related to containerized storage that is provisioned by a CSI (Container Storage Interface) Driver for Dell storage products. The Topology service is enabled by default as part of the CSM for Observability Helm Chart [values file](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml). Please visit the repository for more information. |
{{
}}
@@ -31,14 +32,14 @@ CSM for Observability provides the following capabilities:
{{
}}
| Capability | PowerMax | PowerFlex | Unity XT | PowerScale | PowerStore |
| - | :-: | :-: | :-: | :-: | :-: |
-| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | no | yes |
+| Collect and expose Volume Metrics via the OpenTelemetry Collector | no | yes | no | yes | yes |
| Collect and expose File System Metrics via the OpenTelemetry Collector | no | no | no | no | yes |
| Collect and expose export (k8s) node metrics via the OpenTelemetry Collector | no | yes | no | no | no |
-| Collect and expose filesystem capacity metrics via the OpenTelemetry Collector | no | no | no | no | yes |
-| Collect and expose block storage capacity metrics via the OpenTelemetry Collector | no | yes | no | no | yes |
-| Non-disruptive config changes | no | yes | no | no | yes |
-| Non-disruptive log level changes | no | yes | no | no | yes |
-| Grafana Dashboards for displaying metrics and topology data | no | yes | no | no | yes |
+| Collect and expose block storage metrics via the OpenTelemetry Collector | no | yes | no | no | yes |
+| Collect and expose file storage metrics via the OpenTelemetry Collector | no | no | no | yes | yes |
+| Non-disruptive config changes | no | yes | no | yes | yes |
+| Non-disruptive log level changes | no | yes | no | yes | yes |
+| Grafana Dashboards for displaying metrics and topology data | no | yes | no | yes | yes |
{{
}}
## Supported Operating Systems/Container Orchestrator Platforms
@@ -56,9 +57,9 @@ CSM for Observability provides the following capabilities:
## Supported Storage Platforms
{{
}}
## Supported CSI Drivers
@@ -69,6 +70,7 @@ CSM for Observability supports the following CSI drivers and versions.
| ------------- | ---------- | ------------------ |
| CSI Driver for Dell PowerFlex | [csi-powerflex](https://github.com/dell/csi-powerflex) | v2.0 + |
| CSI Driver for Dell PowerStore | [csi-powerstore](https://github.com/dell/csi-powerstore) | v2.0 + |
+| CSI Driver for Dell PowerScale | [csi-powerscale](https://github.com/dell/csi-powerscale) | v2.0 + |
{{
}}
## Topology Data
@@ -78,17 +80,16 @@ CSM for Observability provides Kubernetes administrators with the topology data
| Field | Description |
| -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Namespace | The namespace associated with the persistent volume claim |
+| Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume |
| Persistent Volume | The name of the persistent volume |
+| Storage Class | The storage class associated with the persistent volume |
+| Provisioned Size | The provisioned size of the persistent volume |
| Status | The status of the persistent volume. "Released" indicates the persistent volume does not have a claim. "Bound" indicates the persistent volume has a claim |
-| Persistent Volume Claim | The name of the persistent volume claim associated with the persistent volume |
-| CSI Driver | The name of the CSI driver that was responsible for provisioning the volume on the storage system |
| Created | The date the persistent volume was created |
-| Provisioned Size | The provisioned size of the persistent volume |
-| Storage Class | The storage class associated with the persistent volume |
-| Storage System Volume Name | The name of the volume on the storage system that is associated with the persistent volume |
-| Storage Pool | The storage pool name the volume/storage class is associated with |
| Storage System | The storage system ID or IP address the volume is associated with |
| Protocol | The storage system protocol type the volume/storage class is associated with |
+| Storage Pool | The storage pool name the volume/storage class is associated with |
+| Storage System Volume Name | The name of the volume on the storage system that is associated with the persistent volume |
{{
}}
## TLS Encryption
diff --git a/content/v2/observability/deployment/_index.md b/content/v2/observability/deployment/_index.md
index 50efaa2c3f..62b10741bb 100644
--- a/content/v2/observability/deployment/_index.md
+++ b/content/v2/observability/deployment/_index.md
@@ -239,8 +239,8 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste
dashboards:
enabled: true
- ## Additional grafana server CofigMap mounts
- ## Defines additional mounts with CofigMap. CofigMap must be manually created in the namespace.
+ ## Additional grafana server ConfigMap mounts
+ ## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace.
extraConfigmapMounts: [] # If you created a ConfigMap on the previous step, delete [] and uncomment the lines below
# - name: certs-configmap
# mountPath: /etc/ssl/certs/ca-certificates.crt
@@ -275,23 +275,29 @@ Below are the steps to deploy a new Grafana instance into your Kubernetes cluste
Once Grafana is properly configured, you can import the pre-built observability dashboards. Log into Grafana and click the + icon in the side menu. Then click Import. From here you can upload the JSON files or paste the JSON text directly into the text area. Below are the locations of the dashboards that can be imported:
-| Dashboard | Description |
-| ------------------- | --------------------------------- |
-| [PowerFlex: I/O Performance by Kubernetes Node](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/sdc_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by Kubernetes node |
-| [PowerFlex: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
-| [PowerFlex: Storage Pool Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/storage_consumption.json) | Provides visibility into the total, used, and available capacity for a storage class and associated underlying storage construct. |
-| [PowerStore: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/volume_io_metrics.json) | *As of Release 0.4.0:* Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
-| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
+| Dashboard | Description |
+|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [PowerFlex: I/O Performance by Kubernetes Node](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/sdc_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by Kubernetes node |
+| [PowerFlex: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
+| [PowerFlex: Storage Pool Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerflex/storage_consumption.json) | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct |
+| [PowerStore: I/O Performance by Provisioned Volume](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/volume_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by volume |
+| [PowerStore: I/O Performance by File System](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/filesystem_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth, latency) by filesystem |
+| [PowerStore: Array and Storage Class Consumption By CSI Driver](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerstore/storage_consumption.json) | Provides visibility into the total, used and available capacity for a storage class and associated underlying storage construct |
+| [PowerScale: I/O Performance by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_io_metrics.json) | Provides visibility into the I/O performance metrics (IOPS, bandwidth) by cluster |
+| [PowerScale: Capacity by Cluster](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/cluster_capacity.json) | Provides visibility into the total, used, available capacity and directory quota capacity by cluster |
+| [PowerScale: Capacity by Quota](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale/volume_capacity.json) | Provides visibility into the subscribed, remaining capacity and usage by quota |
+| [CSI Driver Provisioned Volume Topology](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/topology/topology.json) | Provides visibility into Dell CSI (Container Storage Interface) driver provisioned volume characteristics in Kubernetes correlated with volumes on the storage system. |
## Dynamic Configuration
Some parameters can be configured/updated during runtime without restarting the CSM for Observability services. These parameters will be stored in ConfigMaps that can be updated on the Kubernetes cluster. This will automatically change the settings on the services.
-| ConfigMap | Observability Service | Parameters |
-| - | - | - |
-| karavi-metrics-powerflex-configmap | karavi-metrics-powerflex |
|
To update any of these settings, run the following command on the Kubernetes cluster then save the updated ConfigMap data.
@@ -387,29 +393,57 @@ In this case, all storage system requests made by CSM for Observability will be
#### Update the Authorization Module Token
+##### CSI Driver for Dell PowerFlex
+
1. Delete the current `proxy-authz-tokens` Secret from the CSM namespace.
```console
$ kubectl delete secret proxy-authz-tokens -n [CSM_NAMESPACE]
```
-2. Copy the `proxy-authz-tokens` Secret from a CSI Driver to the CSM namespace.
+2. Copy the `proxy-authz-tokens` Secret from the CSI Driver for Dell PowerFlex to the CSM namespace.
```console
$ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+##### CSI Driver for Dell PowerScale
+
+1. Delete the current `isilon-proxy-authz-tokens` Secret from the CSM namespace.
+ ```console
+ $ kubectl delete secret isilon-proxy-authz-tokens -n [CSM_NAMESPACE]
+ ```
+
+2. Copy the `isilon-proxy-authz-tokens` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace.
+ ```console
+   $ kubectl get secret proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
+ ```
+
#### Update Storage Systems
If the list of storage systems managed by a Dell CSI Driver have changed, the following steps can be performed to update CSM for Observability to reference the updated systems:
+##### CSI Driver for Dell PowerFlex
+
1. Delete the current `karavi-authorization-config` Secret from the CSM namespace.
```console
    $ kubectl delete secret karavi-authorization-config -n [CSM_NAMESPACE]
```
-2. Copy the `karavi-authorization-config` Secret from the CSI Driver namespace to CSM for Observability namespace.
+2. Copy the `karavi-authorization-config` Secret from the CSI Driver for Dell PowerFlex namespace to CSM for Observability namespace.
```console
$ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+##### CSI Driver for Dell PowerScale
+
+1. Delete the current `isilon-karavi-authorization-config` Secret from the CSM namespace.
+ ```console
+ $ kubectl delete secret isilon-karavi-authorization-config -n [CSM_NAMESPACE]
+ ```
+
+2. Copy the `isilon-karavi-authorization-config` Secret from the CSI Driver for Dell PowerScale namespace to the CSM for Observability namespace.
+ ```console
+   $ kubectl get secret karavi-authorization-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSM_CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | kubectl create -f -
+ ```
+
### When CSM for Observability does not use the Authorization module
In this case all storage system requests made by CSM for Observability will not be routed through the Authorization module. The following must be performed:
@@ -437,3 +471,15 @@ In this case all storage system requests made by CSM for Observability will not
```console
$ kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
+
+### CSI Driver for Dell PowerScale
+
+1. Delete the current `isilon-creds` Secret from the CSM namespace.
+ ```console
+ $ kubectl delete secret isilon-creds -n [CSM_NAMESPACE]
+ ```
+
+2. Copy the `isilon-creds` Secret from the CSI Driver for Dell PowerScale namespace to the CSM namespace.
+ ```console
+ $ kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
\ No newline at end of file
diff --git a/content/v2/observability/deployment/helm.md b/content/v2/observability/deployment/helm.md
index 02feb6186f..6433b60836 100644
--- a/content/v2/observability/deployment/helm.md
+++ b/content/v2/observability/deployment/helm.md
@@ -22,7 +22,8 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
3. Add the Dell Helm Charts repo `helm repo add dell https://dell.github.io/helm-charts`
4. Copy only the deployed CSI driver entities to the Observability namespace
- #### PowerFlex
+
+ ### PowerFlex
1. Copy the config Secret from the CSI PowerFlex namespace into the CSM for Observability namespace:
@@ -38,12 +39,30 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
`kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
- #### PowerStore
+ ### PowerStore
1. Copy the config Secret from the CSI PowerStore namespace into the CSM for Observability namespace:
`kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+ ### PowerScale
+
+ 1. Copy the config Secret from the CSI PowerScale namespace into the CSM for Observability namespace:
+
+ `kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps:
+
+ 2. Copy the driver configuration parameters ConfigMap from the CSI PowerScale namespace into the CSM for Observability namespace:
+
+ `kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -`
+
+ 3. Copy the `karavi-authorization-config`, `proxy-server-root-certificate`, `proxy-authz-tokens` Secret from the CSI PowerScale namespace into the CSM for Observability namespace:
+
+ `kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -`
+
+
+
5. Configure the [parameters](#configuration) and install the CSM for Observability Helm Chart
A default values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml) that can be used for installation. This can be copied into a file named `myvalues.yaml` and either used as is or modified accordingly.
@@ -51,6 +70,7 @@ The Container Storage Modules (CSM) for Observability Helm chart bootstraps an O
__Note:__
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in your values file for CSM Observability.
+   - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in your values file for CSM Observability (a sample snippet is shown below).
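+
+   For example, the authorization block in `myvalues.yaml` could look like the following sketch (the proxy host value is illustrative; see the configuration table below for the parameter descriptions):
+
+   ```yaml
+   karaviMetricsPowerscale:
+     authorization:
+       enabled: true
+       proxyHost: csm-authorization.example.com    # illustrative hostname of the csm-authorization server
+       skipCertificateValidation: true
+   ```
+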
```console
$ helm install karavi-observability dell/karavi-observability -n [CSM_NAMESPACE] -f myvalues.yaml
@@ -106,7 +126,7 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerstore.collectorAddr` | Metrics Collector accessible from the Kubernetes cluster | `otel-collector:55680` |
| `karaviMetricsPowerstore.provisionerNames` | Provisioner Names used to filter for determining PowerStore volumes (must be a Comma-separated list) | `csi-powerstore.dellemc.com` |
| `karaviMetricsPowerstore.volumePollFrequencySeconds` | The polling frequency (in seconds) to gather volume metrics | `10` |
-| `karaviMetricsPowerstore.concurrentPowerflexQueries` | The number of simultaneous metrics queries to make to PowerStore (must be less than 10; otherwise, several request errors from PowerStore will ensue.) | `10` |
+| `karaviMetricsPowerstore.concurrentPowerstoreQueries` | The number of simultaneous metrics queries to make to PowerStore (must be less than 10; otherwise, several request errors from PowerStore will ensue.) | `10` |
| `karaviMetricsPowerstore.volumeMetricsEnabled` | Enable PowerStore Volume Metrics Collection | `true` |
| `karaviMetricsPowerstore.endpoint` | Endpoint for pod leader election | `karavi-metrics-powerstore` |
| `karaviMetricsPowerstore.service.type` | Kubernetes service type | `ClusterIP` |
@@ -115,3 +135,23 @@ The following table lists the configurable parameters of the CSM for Observabili
| `karaviMetricsPowerstore.zipkin.uri` | URI of a Zipkin instance where tracing data can be forwarded | |
| `karaviMetricsPowerstore.zipkin.serviceName` | Service name used for Zipkin tracing data | `metrics-powerstore`|
| `karaviMetricsPowerstore.zipkin.probability` | Percentage of trace information to send to Zipkin (Valid range: 0.0 to 1.0) | `0` |
+| `karaviMetricsPowerscale.image` | CSM Metrics for PowerScale Service image | `dellemc/csm-metrics-powerscale:v1.0`|
+| `karaviMetricsPowerscale.enabled` | Enable CSM Metrics for PowerScale service | `true` |
+| `karaviMetricsPowerscale.collectorAddr` | Metrics Collector accessible from the Kubernetes cluster | `otel-collector:55680` |
+| `karaviMetricsPowerscale.provisionerNames` | Provisioner Names used to filter for determining PowerScale volumes (must be a Comma-separated list) | `csi-isilon.dellemc.com` |
+| `karaviMetricsPowerscale.capacityMetricsEnabled` | Enable PowerScale capacity metric Collection | `true` |
+| `karaviMetricsPowerscale.performanceMetricsEnabled` | Enable PowerScale performance metric Collection | `true` |
+| `karaviMetricsPowerscale.clusterCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather cluster capacity metrics | `30` |
+| `karaviMetricsPowerscale.clusterPerformancePollFrequencySeconds` | The polling frequency (in seconds) to gather cluster performance metrics | `20` |
+| `karaviMetricsPowerscale.quotaCapacityPollFrequencySeconds` | The polling frequency (in seconds) to gather volume capacity metrics | `30` |
+| `karaviMetricsPowerscale.concurrentPowerscaleQueries` | The number of simultaneous metrics queries to make to PowerScale (must be less than 10; otherwise, several request errors from PowerScale will ensue.) | `10` |
+| `karaviMetricsPowerscale.endpoint` | Endpoint for pod leader election | `karavi-metrics-powerscale` |
+| `karaviMetricsPowerscale.service.type` | Kubernetes service type | `ClusterIP` |
+| `karaviMetricsPowerscale.logLevel` | Output logs that are at or above the given log level severity (Valid values: TRACE, DEBUG, INFO, WARN, ERROR, FATAL, PANIC) | `INFO`|
+| `karaviMetricsPowerscale.logFormat` | Output logs in the specified format (Valid values: text, json) | `text` |
+| `karaviMetricsPowerscale.isiClientOptions.isiSkipCertificateValidation` | Skip OneFS API server's certificates | `true` |
+| `karaviMetricsPowerscale.isiClientOptions.isiAuthType` | `0` to enable session-based authentication; `1` to enable basic authentication | `1` |
+| `karaviMetricsPowerscale.isiClientOptions.isiLogVerbose` | Verbosity (High/Medium/Low) of the OneFS REST API messages | `0` |
+| `karaviMetricsPowerscale.authorization.enabled` | [Authorization](../../../authorization) is an optional feature to apply credential shielding of the backend PowerScale. | `false` |
+| `karaviMetricsPowerscale.authorization.proxyHost` | Hostname of the csm-authorization server. | |
+| `karaviMetricsPowerscale.authorization.skipCertificateValidation` | A boolean that enables/disables certificate validation of the csm-authorization server. | |
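+
+A hypothetical `myvalues.yaml` excerpt that overrides a few of the PowerScale parameters above (the values shown simply restate the defaults from the table):
+
+```yaml
+karaviMetricsPowerscale:
+  enabled: true
+  provisionerNames: csi-isilon.dellemc.com
+  capacityMetricsEnabled: true
+  performanceMetricsEnabled: true
+  logLevel: INFO
+  isiClientOptions:
+    isiSkipCertificateValidation: true
+    isiAuthType: 1
+```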
diff --git a/content/v2/observability/deployment/offline.md b/content/v2/observability/deployment/offline.md
index b4c5ccd9d6..9017bff0b9 100644
--- a/content/v2/observability/deployment/offline.md
+++ b/content/v2/observability/deployment/offline.md
@@ -24,9 +24,9 @@ If one Linux system has both internet access and access to an internal registry,
Preparing an offline bundle requires the following utilities:
-| Dependency | Usage |
-| --------------------- | ----- |
-| `docker` | `docker` will be used to pull images from public image registries, tag them, and push them to a private registry. Required on both the system building the offline bundle as well as the system preparing for installation. Tested version is `docker` 18.09
+| Dependency | Usage |
+|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `docker` | `docker` will be used to pull images from public image registries, tag them, and push them to a private registry. Required on both the system building the offline bundle as well as the system preparing for installation. Tested version is `docker` 18.09+ |
### Executing the Installer
@@ -56,7 +56,7 @@ To perform an offline installation of a Helm chart, the following steps should b
[user@anothersystem /home/user]# chmod +x offline-installer.sh
```
-3. Build the bundle by providing the Helm chart name as the argument:
+3. Build the bundle by providing the Helm chart name as the argument. The output below is a sample and may differ on your machine.
```
[user@anothersystem /home/user]# ./offline-installer.sh -c dell/karavi-observability
@@ -72,10 +72,12 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Downloading and saving Docker images
- dellemc/csm-topology:v0.3.0
- dellemc/csm-metrics-powerflex:v0.3.0
- otel/opentelemetry-collector:0.9.0
- nginxinc/nginx-unprivileged:1.18
+ dellemc/csm-topology:v1.3.0
+ dellemc/csm-metrics-powerflex:v1.3.0
+ dellemc/csm-metrics-powerstore:v1.3.0
+ dellemc/csm-metrics-powerscale:v1.0.0
+ otel/opentelemetry-collector:0.42.0
+ nginxinc/nginx-unprivileged:1.20
*
* Compressing offline-karavi-observability-bundle.tar.gz
@@ -95,7 +97,7 @@ To perform an offline installation of a Helm chart, the following steps should b
[user@anothersystem /home/user]# cd offline-karavi-observability-bundle
```
-3. Prepare the bundle by providing the internal Docker registry URL.
+3. Prepare the bundle by providing the internal Docker registry URL. The output below is a sample and may differ on your machine.
```
[user@anothersystem /home/user/offline-karavi-observability-bundle]# ./offline-installer.sh -p :5000
@@ -103,10 +105,12 @@ To perform an offline installation of a Helm chart, the following steps should b
*
* Loading, tagging, and pushing Docker images to registry :5000/
- dellemc/csm-topology:v0.3.0 -> :5000/csm-topology:v0.3.0
- dellemc/csm-metrics-powerflex:v0.3.0 -> :5000/csm-metrics-powerflex:v0.3.0
- otel/opentelemetry-collector:0.9.0 -> :5000/opentelemetry-collector:0.9.0
- nginxinc/nginx-unprivileged:1.18 -> :5000/nginx-unprivileged:1.18
+ dellemc/csm-topology:v1.3.0 -> :5000/csm-topology:v1.3.0
+ dellemc/csm-metrics-powerflex:v1.3.0 -> :5000/csm-metrics-powerflex:v1.3.0
+ dellemc/csm-metrics-powerstore:v1.3.0 -> :5000/csm-metrics-powerstore:v1.3.0
+ dellemc/csm-metrics-powerscale:v1.0.0 -> :5000/csm-metrics-powerscale:v1.0.0
+ otel/opentelemetry-collector:0.42.0 -> :5000/opentelemetry-collector:0.42.0
+ nginxinc/nginx-unprivileged:1.20 -> :5000/nginx-unprivileged:1.20
```
### Perform Helm installation
@@ -145,12 +149,28 @@ To perform an offline installation of a Helm chart, the following steps should b
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret powerstore-config -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
```
-4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
+ CSI Driver for PowerScale:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret isilon-creds -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+ If [CSM for Authorization is enabled](../../../authorization/deployment/#configuring-a-dell-csi-driver-with-csm-for-authorization) for CSI PowerScale, perform these steps:
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get configmap isilon-config-params -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | kubectl create -f -
+ ```
+
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl get secret karavi-authorization-config proxy-server-root-certificate proxy-authz-tokens -n [CSI_DRIVER_NAMESPACE] -o yaml | sed 's/namespace: [CSI_DRIVER_NAMESPACE]/namespace: [CSM_NAMESPACE]/' | sed 's/name: karavi-authorization-config/name: isilon-karavi-authorization-config/' | sed 's/name: proxy-server-root-certificate/name: isilon-proxy-server-root-certificate/' | sed 's/name: proxy-authz-tokens/name: isilon-proxy-authz-tokens/' | kubectl create -f -
+ ```
+
+4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, installation can proceed by following the instructions that are documented within the Helm chart's repository.
**Note:**
- Optionally, you could provide your own [configurations](../helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured.
+ - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured.
```
[user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm install -n install-namespace app-name karavi-observability
diff --git a/content/v2/observability/deployment/online.md b/content/v2/observability/deployment/online.md
index 60e83ef3a9..82524a658c 100644
--- a/content/v2/observability/deployment/online.md
+++ b/content/v2/observability/deployment/online.md
@@ -69,6 +69,8 @@ Options:
--namespace[=] Namespace where Karavi Observability will be installed
Optional
--csi-powerflex-namespace[=] Namespace where CSI PowerFlex is installed, default is 'vxflexos'
+ --csi-powerstore-namespace[=] Namespace where CSI PowerStore is installed, default is 'csi-powerstore'
+ --csi-powerscale-namespace[=] Namespace where CSI PowerScale is installed, default is 'isilon'
--set-file Set values from files used during helm installation (can be specified multiple times)
--skip-verify Skip verification of the environment
--values[=] Values file, which defines configuration values
@@ -77,7 +79,7 @@ Options:
--help Help
```
-__Note:__ CSM for Authorization currently does not support the Observability module for PowerStore. Therefore setting `enable-authorization` is not supported in this case.
+__Note:__ CSM for Authorization currently does not support the Observability module for PowerStore. Therefore setting `enable-authorization` is not supported in this case.
### Executing the Installer
@@ -101,6 +103,7 @@ To perform an online installation of CSM for Observability, the following steps
__Note:__
- The default `values.yaml` is configured to deploy the CSM for Observability Topology service on install.
- If CSM for Authorization is enabled for CSI PowerFlex, the `karaviMetricsPowerflex.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
+ - If CSM for Authorization is enabled for CSI PowerScale, the `karaviMetricsPowerscale.authorization` parameters must be properly configured in `myvalues.yaml` for CSM Observability.
```
[user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh install --namespace [CSM_NAMESPACE] --values myvalues.yaml
diff --git a/content/v2/observability/design/_index.md b/content/v2/observability/design/_index.md
index e6dc2b93c1..adb56abcb8 100644
--- a/content/v2/observability/design/_index.md
+++ b/content/v2/observability/design/_index.md
@@ -19,7 +19,10 @@ The following prerequisites must be deployed into the namespace where CSM for Ob
- Prometheus for scraping the metrics from the OTEL collector.
- Grafana for visualizing the metrics from Prometheus and Topology services using custom dashboards.
-- CSM for Observability will use secrets to get details about the storage systems used by the CSI drivers. These secrets should be copied from the namespaces where the drivers are deployed. CSI Powerflex driver uses the 'vxflexos-config' secret and CSI PowerStore uses the 'powerstore-config' secret.
+- CSM for Observability will use secrets to get details about the storage systems used by the CSI drivers. These secrets should be copied from the namespaces where the drivers are deployed.
+ - CSI PowerFlex driver uses the 'vxflexos-config' secret.
+ - CSI PowerStore driver uses the 'powerstore-config' secret.
+ - CSI PowerScale driver uses the 'isilon-creds' secret.
## Deployment Architectures
diff --git a/content/v2/observability/metrics/powerscale.md b/content/v2/observability/metrics/powerscale.md
new file mode 100644
index 0000000000..d06d168902
--- /dev/null
+++ b/content/v2/observability/metrics/powerscale.md
@@ -0,0 +1,45 @@
+---
+title: PowerScale Metrics
+linktitle: PowerScale Metrics
+weight: 1
+description: >
+ Dell Container Storage Modules (CSM) for Observability PowerScale Metrics
+---
+
+This section outlines the metrics collected by the Container Storage Modules (CSM) Observability module for PowerScale. The [Grafana reference dashboards](https://github.com/dell/karavi-observability/blob/main/grafana/dashboards/powerscale) for PowerScale metrics can be uploaded to your Grafana instance.
+
+## I/O Performance Metrics
+
+Storage system I/O performance metrics (IOPS, bandwidth) are available by default and broken down by cluster and quota.
+
+To disable these metrics, set the `performanceMetricsEnabled` field to `false` in helm/values.yaml (see the sketch below).
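+
+A minimal sketch, assuming the field sits under the `karaviMetricsPowerscale` section of the Observability Helm chart values (as listed in the Helm chart configuration table):
+
+```yaml
+karaviMetricsPowerscale:
+  performanceMetricsEnabled: false
+```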
+
+The following I/O performance metrics are available from the OpenTelemetry collector endpoint. Please see the [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|--------------------------------------------------------------------|-------------------------------------------------------------------------------------|
+| powerscale_cluster_cpu_use_rate | Average CPU usage for all nodes in the monitored cluster |
+| powerscale_cluster_disk_read_operation_rate | Average rate at which the disks in the cluster are servicing data read change requests |
+| powerscale_cluster_disk_write_operation_rate | Average rate at which the disks in the cluster are servicing data write change requests |
+| powerscale_cluster_disk_throughput_read_rate_megabytes_per_second | Throughput rate of data being read from the disks in the cluster |
+| powerscale_cluster_disk_throughput_write_rate_megabytes_per_second | Throughput rate of data being written to the disks in the cluster |
+
+## Storage Capacity Metrics
+
+Provides visibility into the total, used, and available capacity for PowerScale cluster and quotas.
+
+To disable these metrics, set the `capacityMetricsEnabled` field to `false` in helm/values.yaml.
+
+The following storage capacity metrics are available from the OpenTelemetry collector endpoint. Please see the [CSM for Observability](../../) for more information on deploying and configuring the OpenTelemetry collector.
+
+| Metric | Description |
+|---------------------------------------------------|------------------------------------------------------------------|
+| powerscale_cluster_total_capacity_terabytes | Total cluster capacity (TB) |
+| powerscale_cluster_remaining_capacity_terabytes | Total unused cluster capacity (TB) |
+| powerscale_cluster_used_capacity_percentage | Percent of total cluster capacity that has been used |
+| powerscale_cluster_total_hard_quota_gigabytes | Amount of total capacity allocated in all directory hard quotas |
+| powerscale_cluster_total_hard_quota_percentage | Percent of total capacity allocated in all directory hard quotas |
+| powerscale_volume_quota_subscribed_gigabytes | Used space of the quota for a directory (GB) |
+| powerscale_volume_hard_quota_remaining_gigabytes | Unused space below the hard limit for a directory (GB) |
+| powerscale_volume_quota_subscribed_percentage | Percentage of space used in hard limit for a directory |
+| powerscale_volume_hard_quota_remaining_percentage | Percentage of the remaining space in hard limit for a directory |
diff --git a/content/v2/observability/release/_index.md b/content/v2/observability/release/_index.md
index 84a9c87ea2..07f248dc73 100644
--- a/content/v2/observability/release/_index.md
+++ b/content/v2/observability/release/_index.md
@@ -6,14 +6,15 @@ Description: >
Dell Container Storage Modules (CSM) release notes for observability
---
-## Release Notes - CSM Observability 1.2.0
+## Release Notes - CSM Observability 1.3.0
### New Features/Changes
+- [Support PowerScale in CSM Observability](https://github.com/dell/csm/issues/452)
+- [Set PV/PVC's namespace when using Observability Module](https://github.com/dell/csm/issues/453)
+- [CSM Observability modules stick with otel controller 0.42.0](https://github.com/dell/csm/issues/454)
### Fixed Issues
-- [PowerStore Grafana dashboard does not populate correctly ](https://github.com/dell/csm/issues/279)
-- [Grafana installation script - prometheus address is incorrect](https://github.com/dell/csm/issues/278)
-- [prometheus-values.yaml error in json](https://github.com/dell/csm/issues/259)
+- [Observability Topology: nil pointer error](https://github.com/dell/csm/issues/430)
### Known Issues
\ No newline at end of file
diff --git a/content/v2/observability/troubleshooting/_index.md b/content/v2/observability/troubleshooting/_index.md
index 4c094c212d..7a5fbac6d7 100644
--- a/content/v2/observability/troubleshooting/_index.md
+++ b/content/v2/observability/troubleshooting/_index.md
@@ -171,7 +171,7 @@ sidecar:
enabled: true
## Additional grafana server ConfigMap mounts
-## Defines additional mounts with ConfigMap. CofigMap must be manually created in the namespace.
+## Defines additional mounts with ConfigMap. ConfigMap must be manually created in the namespace.
extraConfigmapMounts: []
```
diff --git a/content/v2/observability/upgrade/_index.md b/content/v2/observability/upgrade/_index.md
index a44d38c615..932c107e02 100644
--- a/content/v2/observability/upgrade/_index.md
+++ b/content/v2/observability/upgrade/_index.md
@@ -55,7 +55,7 @@ CSM for Observability online installer upgrade can be used if the initial deploy
```
2. Update `values.yaml` file as needed. Configuration options are outlined in the [Helm chart deployment section](../deployment/helm#configuration).
-2. Execute the `./karavi-observability-install.sh` script:
+3. Execute the `./karavi-observability-install.sh` script:
```
[user@system /home/user/karavi-observability/installer]# ./karavi-observability-install.sh upgrade --namespace $namespace --values myvalues.yaml --version $latest_chart_version
---------------------------------------------------------------------------------
@@ -80,3 +80,42 @@ CSM for Observability online installer upgrade can be used if the initial deploy
|
|- Waiting for pods in namespace karavi to be ready Success
```
+
+## Offline Installer Upgrade
+
+These instructions assume that the Karavi Observability Helm chart was installed with the offline installer and that its installation requirements are met.
+Follow them to upgrade the Helm chart in an environment that does not have an internet connection and cannot download the Helm chart and related Docker images.
+
+1. Build the Offline Bundle
+ Follow [Offline Karavi Observability Helm Chart Installer](../deployment/offline) to build the latest bundle.
+
+2. Unpack the Offline Bundle
+ Follow [Offline Karavi Observability Helm Chart Installer](../deployment/offline), copy and unpack the Offline Bundle to another Linux system, and push Docker images to the internal Docker registry.
+
+3. Perform Helm upgrade
+ 1. Change directory to `helm` which contains the updated Helm chart directory:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle]# cd helm
+ ```
+   2. Install the provided cert-manager CustomResourceDefinitions:
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# kubectl apply --validate=false -f cert-manager.crds.yaml
+ ```
+   3. (Optional) Enable Karavi Observability for PowerFlex/PowerScale to use an existing instance of Karavi Authorization when accessing the REST API of the given storage systems.
+   **Note**: This assumes that, if Authorization was enabled during the [Offline Karavi Observability Helm Chart Installer](../deployment/offline) phase, the Authorization Secrets/ConfigMap have already been copied to the Karavi Observability namespace.
+   A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
+   In your own values.yaml, enable PowerFlex/PowerScale Authorization and provide the location of the sidecar-proxy Docker image and the URL of the Karavi Authorization proxyHost address.
+
+   4. Now that the required images have been made available and the Helm chart's configuration updated with references to the internal registry location, the upgrade can proceed by following the instructions that are documented within the Helm chart's repository.
+   **Note**: This assumes that your Secrets from the CSI drivers were already copied to the Karavi Observability namespace during the [Offline Karavi Observability Helm Chart Installer](../deployment/offline) phase.
+ Optionally, you could provide your own [configurations](../deployment/helm/#configuration). A sample values.yaml file is located [here](https://github.com/dell/helm-charts/blob/main/charts/karavi-observability/values.yaml).
+ ```
+ [user@anothersystem /home/user/offline-karavi-observability-bundle/helm]# helm upgrade -n install-namespace app-name karavi-observability
+ NAME: app-name
+ LAST DEPLOYED: Wed Aug 17 14:44:04 2022
+ NAMESPACE: install-namespace
+ STATUS: deployed
+ REVISION: 1
+ TEST SUITE: None
+ ```
+
\ No newline at end of file
diff --git a/content/v2/references/_index.md b/content/v2/references/_index.md
index 28cae60329..ce3be78438 100644
--- a/content/v2/references/_index.md
+++ b/content/v2/references/_index.md
@@ -1,7 +1,7 @@
---
title: "References"
linkTitle: "References"
-weight: 13
+weight: 14
Description: >
Dell Technologies (Dell) Container Storage Modules (CSM) References
---
diff --git a/content/v2/references/cli/_index.md b/content/v2/references/cli/_index.md
new file mode 100644
index 0000000000..e99a6775da
--- /dev/null
+++ b/content/v2/references/cli/_index.md
@@ -0,0 +1,534 @@
+---
+title: "CLI"
+linkTitle: "CLI"
+weight: 1
+Description: >
+ CLI for Dell Container Storage Modules (CSM)
+---
+dellctl is a common command line interface (CLI) used to interact with and manage your [Container Storage Modules](https://github.com/dell/csm) (CSM) resources.
+This document outlines all dellctl commands, their intended use, options that can be provided to alter their execution, and expected output from those commands.
+
+| Command | Description |
+| - | - |
+| [dellctl](#dellctl) | dellctl is used to interact with Container Storage Modules |
+| [dellctl cluster](#dellctl-cluster) | Manipulate one or more k8s cluster configurations |
+| [dellctl cluster add](#dellctl-cluster-add) | Add a k8s cluster to be managed by dellctl |
+| [dellctl cluster remove](#dellctl-cluster-remove) | Removes a k8s cluster managed by dellctl |
+| [dellctl cluster get](#dellctl-cluster-get) | List all clusters currently being managed by dellctl |
+| [dellctl backup](#dellctl-backup) | Manipulate application backups/clones |
+| [dellctl backup create](#dellctl-backup-create) | Create an application backup/clones |
+| [dellctl backup delete](#dellctl-backup-delete) | Delete application backups |
+| [dellctl backup get](#dellctl-backup-get) | Get application backups |
+| [dellctl restore](#dellctl-restore) | Manipulate application restores |
+| [dellctl restore create](#dellctl-restore-create) | Restore an application backup |
+| [dellctl restore delete](#dellctl-restore-delete) | Delete application restores |
+| [dellctl restore get](#dellctl-restore-get) | Get application restores |
+
+
+## Installation instructions
+1. Download `dellctl` from [here](https://github.com/dell/csm/releases/tag/v1.4.0).
+2. Run `chmod +x dellctl` to make the binary executable.
+3. Move `dellctl` to `/usr/local/bin`, or add the directory containing `dellctl` to the `PATH` environment variable.
+4. Run `dellctl --help` to list the available commands, or `dellctl <command> --help` to learn more about a specific command (see the example below).
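+
+A minimal example of steps 2 to 4 on a Linux host, assuming `dellctl` was downloaded to the current directory:
+```
+# chmod +x dellctl
+# mv dellctl /usr/local/bin/
+# dellctl --help
+```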
+
+By default, `dellctl` runs against the local cluster (referenced by the `KUBECONFIG` environment variable or by a kube config file present at the default location).
+You can register one or more remote clusters with `dellctl` and run any `dellctl` command against those clusters by specifying the registered cluster id to the command.
+
+
+## General Commands
+
+### dellctl
+
+dellctl is a CLI tool for managing Dell Container Storage Resources.
+
+##### Flags
+
+```
+ -h, --help help for dellctl
+ -v, --version version for dellctl
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl cluster
+
+Allows manipulation of one or more k8s cluster configurations
+
+##### Available Commands
+
+```
+ add Adds a k8s cluster to be managed by dellctl
+ remove Removes a k8s cluster managed by dellctl
+ get List all clusters currently being managed by dellctl
+```
+
+##### Flags
+
+```
+ -h, --help help for cluster
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl cluster add
+
+Add one or more k8s clusters to be managed by dellctl
+
+##### Flags
+
+```
+Flags:
+ -n, --names strings cluster names
+ -f, --files strings paths for kube config files
+ -u, --uids strings uids of the kube-system namespaces in the clusters
+ --force forcefully add cluster
+ -h, --help help for add
+```
+
+##### Output
+
+```
+# dellctl cluster add -n cluster1 -f ~/kubeconfigs/cluster1-kubeconfig
+ INFO Adding clusters ...
+ INFO Cluster: cluster1
+ INFO Successfully added cluster cluster1 in /root/.dellctl/clusters/cluster1 folder.
+```
+
+Add a cluster with its uid
+
+```
+# dellctl cluster add -n cluster2 -f ~/kubeconfigs/cluster2-kubeconfig -u "035133aa-5b65-4080-a813-34a7abe48180"
+ INFO Adding clusters ...
+ INFO Cluster: cluster2
+ INFO Successfully added cluster cluster2 in /root/.dellctl/clusters/cluster2 folder.
+```
+
+
+
+---
+
+
+
+### dellctl cluster remove
+
+Removes a k8s cluster by name from the list of clusters being managed by dellctl
+
+##### Aliases
+
+```
+ remove, rm
+```
+
+##### Flags
+
+```
+ -h, --help help for remove
+ -n, --name string cluster name
+```
+
+##### Output
+
+```
+# dellctl cluster remove -n cluster1
+ INFO Removing cluster with id cluster1
+ INFO Removed cluster with id cluster1
+```
+
+
+
+---
+
+
+
+### dellctl cluster get
+
+List all clusters currently being managed by dellctl
+
+##### Aliases
+
+```
+ get, ls
+```
+
+##### Flags
+
+```
+ -h, --help help for get
+```
+
+##### Output
+
+```
+# dellctl cluster get
+CLUSTER ID VERSION URL UID
+cluster1 v1.22 https://1.2.3.4:6443
+cluster2 v1.22 https://1.2.3.5:6443 035133aa-5b65-4080-a813-34a7abe48180
+```
+
+
+
+---
+
+
+
+## Commands related to application mobility operations
+
+### dellctl backup
+
+Allows manipulation of application backups/clones
+
+##### Available Commands
+
+```
+ create Create an application backup/clones
+ delete Delete application backups
+ get Get application backups
+```
+
+##### Flags
+
+```
+ -h, --help help for backup
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl backup create
+
+Create an application backup/clones
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ --exclude-namespaces stringArray List of namespace names to exclude from the backup.
+ --include-namespaces stringArray List of namespace names to include in the backup (use '*' for all namespaces). (default *)
+ --ttl duration Backup retention period. (default 720h0m0s)
+ --exclude-resources stringArray Resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io.
+ --include-resources stringArray Resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).
+ --backup-location string Storage location where k8s resources and application data will be backed up to. (default "default")
+ --data-mover string Data mover to be used to backup application data. (default "Restic")
+ --include-cluster-resources optionalBool[=true] Include cluster-scoped resources in the backup
+ -l, --label-selector labelSelector Only backup resources matching this label selector. (default )
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+ --clones stringArray Creates an application clone into target clusters managed by dellctl. Specify optional namespace mappings where the clone is created. Example: 'cluster1/sourceNamespace1:targetNamespace1', 'cluster1/sourceNamespace1:targetNamespace1;cluster2/sourceNamespace2:targetNamespace2'
+ -h, --help help for create
+```
+
+##### Output
+
+Create a backup of the applications running in namespace `demo1`
+
+```
+# dellctl backup create backup1 --include-namespaces demo1
+ INFO Backup request "backup1" submitted successfully.
+ INFO Run 'dellctl backup get backup1' for more details.
+```
+
+Create clones of the application running in namespace `demo1` on clusters with ids `cluster1` and `cluster2`
+
+```
+# dellctl backup create demo-app-clones --include-namespaces demo1 --clones "cluster1/demo1:restore-ns1" --clones "cluster2/demo1:restore-ns1"
+ INFO Clone request "demo-app-clones" submitted successfully.
+ INFO Run 'dellctl backup get demo-app-clones' for more details.
+```
+
+Take a backup of the application running in namespace `demo3` on the remote cluster with id `cluster2`
+
+```
+# dellctl backup create backup4 --include-namespaces demo3 --cluster-id cluster2
+ INFO Backup request "backup4" submitted successfully.
+ INFO Run 'dellctl backup get backup4' for more details.
+```
+
+
+
+---
+
+
+
+### dellctl backup delete
+
+Delete one or more application backups
+
+##### Flags
+
+```
+ --all Delete all backups
+ --cluster-id string Id of the cluster managed by dellctl
+ --confirm Confirm deletion
+ -h, --help help for delete
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+```
+# dellctl backup delete backup1
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete backup "backup1" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+Delete multiple backups
+
+```
+# dellctl backup delete backup1 backup2
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete backup "backup1" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+ INFO Request to delete backup "backup2" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+
+Delete all backups without asking for user confirmation
+
+```
+# dellctl backup delete --all --confirm
+ INFO Request to delete backup "backup4" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+ INFO Request to delete backup "demo-app-clones" submitted successfully.
+ INFO The backup will be fully deleted after all associated data (backup files, pod volume data, restores, velero backup) are removed.
+```
+
+
+---
+
+
+
+### dellctl backup get
+
+Get application backups
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help help for get
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+
+```
+
+##### Output
+
+```
+# dellctl backup get
+NAME STATUS CREATED EXPIRES STORAGE LOCATION DATA MOVER CLONED TARGET CLUSTERS
+backup1 Completed 2022-07-27 11:51:00 -0400 EDT 2022-08-26 11:51:00 -0400 EDT default Restic false
+backup2 Completed 2022-07-27 11:59:24 -0400 EDT 2022-08-26 11:59:42 -0400 EDT default Restic false
+backup4 Completed 2022-07-27 12:02:54 -0400 EDT NA default Restic false
+demo-app-clones Restored 2022-07-27 11:53:37 -0400 EDT 2022-08-26 11:53:37 -0400 EDT default Restic true cluster1, cluster2
+```
+
+Get backups from remote cluster with id `cluster2`
+
+```
+# dellctl backup get --cluster-id cluster2
+NAME STATUS CREATED EXPIRES STORAGE LOCATION DATA MOVER CLONED TARGET CLUSTERS
+backup1 Completed 2022-07-27 11:52:42 -0400 EDT NA default Restic false
+backup2 Completed 2022-07-27 12:02:29 -0400 EDT NA default Restic false
+backup4 Completed 2022-07-27 12:01:49 -0400 EDT 2022-08-26 12:01:49 -0400 EDT default Restic false
+demo-app-clones Completed 2022-07-27 11:54:55 -0400 EDT NA default Restic true cluster1, cluster2
+```
+
+Get backups with their names
+
+```
+# dellctl backup get backup1 demo-app-clones
+NAME STATUS CREATED EXPIRES STORAGE LOCATION DATA MOVER CLONED TARGET CLUSTERS
+backup1 Completed 2022-07-27 11:51:00 -0400 EDT 2022-08-26 11:51:00 -0400 EDT default Restic false
+demo-app-clones Completed 2022-07-27 11:53:37 -0400 EDT 2022-08-26 11:53:37 -0400 EDT default Restic true cluster1, cluster2
+```
+
+
+
+---
+
+
+
+### dellctl restore
+
+Allows manipulation of application restores
+
+##### Available Commands
+
+```
+ create Restore an application backup
+ delete Delete application restores
+ get Get application restores
+```
+
+##### Flags
+
+```
+ -h, --help help for restore
+```
+
+##### Output
+
+Outputs help text
+
+
+
+---
+
+
+
+### dellctl restore create
+
+Restore an application backup
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ --from-backup string Backup to restore from
+ --namespace-mappings mapStringString Map of source namespace names to target namespace names to restore into in the form src1:dst1,src2:dst2,...
+ --exclude-namespaces stringArray List of namespace names to exclude from the backup.
+ --include-namespaces stringArray List of namespace names to include in the backup (use '*' for all namespaces). (default *)
+ --exclude-resources stringArray Resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io.
+ --include-resources stringArray Resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources).
+ --restore-volumes optionalBool[=true] Whether to restore volumes from snapshots.
+ --include-cluster-resources optionalBool[=true] Include cluster-scoped resources in the backup
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+ -h, --help help for create
+```
+
+##### Output
+
+Restore application backup `backup1` on local cluster in namespace `restorens1`
+
+```
+# dellctl restore create restore1 --from-backup backup1 --namespace-mappings "demo1:restorens1"
+ INFO Restore request "restore1" submitted successfully.
+ INFO Run 'dellctl restore get restore1' for more details.
+```
+
+Restore application backup `backup1` on remote cluster `cluster2` in namespace `demo1`
+
+```
+# dellctl restore create restore1 --from-backup backup1 --cluster-id cluster2
+ INFO Restore request "restore1" submitted successfully.
+ INFO Run 'dellctl restore get restore1' for more details.
+```
+
+
+
+---
+
+
+
+### dellctl restore delete
+
+Delete one or more application restores
+
+##### Flags
+
+```
+ --all Delete all restores
+ --cluster-id string Id of the cluster managed by dellctl
+ --confirm Confirm deletion
+ -h, --help help for delete
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Delete a restore created on remote cluster with id `cluster2`
+
+```
+# dellctl restore delete restore1 --cluster-id cluster2
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+Delete multiple restores
+
+```
+# dellctl restore delete restore1 restore4
+Are you sure you want to continue (Y/N)? Y
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+ INFO Request to delete restore "restore4" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+Delete all restores without asking for user confirmation
+
+```
+# dellctl restore delete --all --confirm
+ INFO Request to delete restore "restore1" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+ INFO Request to delete restore "restore2" submitted successfully.
+ INFO The restore will be fully deleted after all associated data (restore files, velero restore) are removed.
+```
+
+
+---
+
+
+
+### dellctl restore get
+
+Get application restores
+
+##### Flags
+
+```
+ --cluster-id string Id of the cluster managed by dellctl
+ -h, --help help for get
+ -n, --namespace string The namespace in which application mobility service should operate. (default "app-mobility-system")
+```
+
+##### Output
+
+Get all the application restores created on local cluster
+
+```
+# dellctl restore get
+NAME BACKUP STATUS CREATED COMPLETED
+restore1 backup1 Completed 2022-07-27 12:35:29 -0400 EDT
+restore4 backup1 Completed 2022-07-27 12:39:42 -0400 EDT
+```
+
+Get all the application restores created on remote cluster with id `cluster2`
+
+```
+# dellctl restore get --cluster-id cluster2
+NAME BACKUP STATUS CREATED COMPLETED
+restore1 backup1 Completed 2022-07-27 12:38:43 -0400 EDT
+```
+
+Get restores with their names
+
+```
+# dellctl restore get restore1
+NAME BACKUP STATUS CREATED COMPLETED
+restore1 backup1 Completed 2022-07-27 12:35:29 -0400 EDT
+```
diff --git a/content/v2/release/_index.md b/content/v2/release/_index.md
index 97a5c32dc9..ffb7d086c3 100644
--- a/content/v2/release/_index.md
+++ b/content/v2/release/_index.md
@@ -1,7 +1,7 @@
---
title: "Release notes"
linkTitle: "Release notes"
-weight: 10
+weight: 12
Description: >
Dell Container Storage Modules (CSM) release notes
---
@@ -16,4 +16,8 @@ Release notes for Container Storage Modules:
[CSM for Replication](../replication/release)
-[CSM for Resiliency](../resiliency/release)
\ No newline at end of file
+[CSM for Resiliency](../resiliency/release)
+
+[CSM for Encryption](../secure/encryption/release)
+
+[CSM for Application Mobility](../applicationmobility/release)
diff --git a/content/v2/replication/_index.md b/content/v2/replication/_index.md
index fb5051e88c..2d0b15c594 100644
--- a/content/v2/replication/_index.md
+++ b/content/v2/replication/_index.md
@@ -22,6 +22,7 @@ CSM for Replication provides the following capabilities:
| Create `PersistentVolume` objects in the cluster representing the replicated volume | yes | yes | yes | no | no |
| Create `DellCSIReplicationGroup` objects in the cluster | yes | yes | yes | no | no |
| Failover & Reprotect applications using the replicated volumes | yes | yes | yes | no | no |
+| Online Volume Expansion for replicated volumes | yes | no | no | no | no |
| Provides a command line utility - [repctl](tools) for configuring & managing replication related resources across multiple clusters | yes | yes | yes | no | no |
{{}}
@@ -43,7 +44,7 @@ CSM for Replication provides the following capabilities:
{{
}}
## Supported CSI Drivers
diff --git a/content/v2/replication/deployment/installation.md b/content/v2/replication/deployment/installation.md
index 005637fac7..6bbabeee29 100644
--- a/content/v2/replication/deployment/installation.md
+++ b/content/v2/replication/deployment/installation.md
@@ -75,8 +75,9 @@ The following CSI drivers support replication:
1. CSI driver for PowerMax
2. CSI driver for PowerStore
3. CSI driver for PowerScale
+4. CSI driver for Unity XT
-Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore) or [PowerScale](../powerscale) pages during the driver installation.
+Please follow the steps outlined in [PowerMax](../powermax), [PowerStore](../powerstore), [PowerScale](../powerscale) or [Unity](../unity) pages during the driver installation.
>Note: Please ensure that replication CRDs are installed in the clusters where you are installing the CSI drivers. These CRDs are generally installed as part of the CSM Replication controller installation process.
diff --git a/content/v2/replication/deployment/powermax.md b/content/v2/replication/deployment/powermax.md
index 2d9fca7e0a..06dc2ec149 100644
--- a/content/v2/replication/deployment/powermax.md
+++ b/content/v2/replication/deployment/powermax.md
@@ -22,11 +22,22 @@ While using any SRDF groups, ensure that they are for exclusive use by the CSI P
* If an SRDF group is already in use by a CSI driver, don't use it for provisioning replicated volumes outside CSI provisioning workflows.
There are some important limitations that apply to how CSI PowerMax driver uses SRDF groups -
-* One replicated storage group __always__ contains volumes provisioned from a single namespace
-* While using SRDF mode Async/Metro, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels.
+* One replicated storage group using Async/Sync __always__ contains volumes provisioned from a single namespace.
+* While using SRDF mode Async, a single SRDF group can be used to provision volumes within a single namespace. You can still create multiple storage classes using the same SRDF group for different Service Levels.
But all these storage classes will be restricted to provisioning volumes within a single namespace.
-* When using SRDF mode Sync, a single SRDF group can be used to provision volumes from multiple namespaces.
-
+* When using SRDF mode Sync/Metro, a single SRDF group can be used to provision volumes from multiple namespaces.
+
+#### Automatic creation of SRDF Groups
+The CSI Driver for PowerMax supports automatic creation of SRDF Groups starting with **v2.4.0**, with the help of the **10.0** REST endpoints.
+To use this feature:
+* Remove the _replication.storage.dell.com/RemoteRDFGroup_ and _replication.storage.dell.com/RDFGroup_ parameters from the storage classes before creating the first replicated volume.
+* The driver will pick the next available RDF group pair and use it to create volumes.
+* This enables customers to use the same storage class across namespaces to create volumes.
+
+Limitations of automatic SRDF group creation:
+* For Async mode, this feature is supported for namespaces with at most 7 characters.
+* The RDF label used to map a namespace to an RDF group is limited to 10 characters; 3 of these are used for a cluster prefix to keep the RDF group name unique across clusters.
+* For namespaces with more than 7 characters, enter the RDF groups manually in the storage class.
#### In Kubernetes
Ensure you installed CRDs and replication controller in your clusters.
@@ -105,8 +116,8 @@ parameters:
replication.storage.dell.com/RemoteServiceLevel:
replication.storage.dell.com/RdfMode:
replication.storage.dell.com/Bias: "false"
- replication.storage.dell.com/RdfGroup:
- replication.storage.dell.com/RemoteRDFGroup:
+ replication.storage.dell.com/RdfGroup: # optional
+ replication.storage.dell.com/RemoteRDFGroup: # optional
replication.storage.dell.com/remoteStorageClassName:
replication.storage.dell.com/remoteClusterID:
```
@@ -123,8 +134,8 @@ Let's go through each parameter and what it means:
METRO, driver does not need `RemoteStorageClassName` and `RemoteClusterID` as it supports METRO with single cluster configuration.
* `replication.storage.dell.com/Bias` when the RdfMode is set to METRO, this parameter is required to indicate driver to use Bias or Witness.
If set to true, the driver will configure METRO with Bias, if set to false, the driver will configure METRO with Witness.
-* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured.
-* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured.
+* `replication.storage.dell.com/RdfGroup` is the local SRDF group number, as configured. It is optional when the driver's automatic SRDF group creation is used.
+* `replication.storage.dell.com/RemoteRDFGroup` is the remote SRDF group number, as configured. It is optional when the driver's automatic SRDF group creation is used.
Let's follow up that with an example, let's assume we have two Kubernetes clusters and two PowerMax
storage arrays:
diff --git a/content/v2/replication/deployment/storageclasses.md b/content/v2/replication/deployment/storageclasses.md
index df85a44833..042d351d72 100644
--- a/content/v2/replication/deployment/storageclasses.md
+++ b/content/v2/replication/deployment/storageclasses.md
@@ -29,7 +29,7 @@ This should contain the name of the storage class on the remote cluster which is
>Note: You still need to create a pair of storage classes even while using a single stretched cluster
### Driver specific parameters
-Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes) or [PowerScale](../powerscale/#creating-storage-classes) for a detailed list of parameters.
+Please refer to the driver specific sections for [PowerMax](../powermax/#creating-storage-classes), [PowerStore](../powerstore/#creating-storage-classes), [PowerScale](../powerscale/#creating-storage-classes) or [Unity](../unity/#creating-storage-classes) for a detailed list of parameters.
### PV sync Deletion
diff --git a/content/docs/replication/deployment/unity.md b/content/v2/replication/deployment/unity.md
similarity index 98%
rename from content/docs/replication/deployment/unity.md
rename to content/v2/replication/deployment/unity.md
index 84bc358ff4..cab4a068fe 100644
--- a/content/docs/replication/deployment/unity.md
+++ b/content/v2/replication/deployment/unity.md
@@ -110,7 +110,6 @@ Let's go through each parameter and what it means:
* `replication.storage.dell.com/rpo` is an acceptable amount of data, which is measured in units of time, that may be lost due to a failure.
* `replication.storage.dell.com/ignoreNamespaces`, if set to `true` Unity driver, it will ignore in what namespace volumes are created and put every volume created using this storage class into a single volume group.
* `replication.storage.dell.com/volumeGroupPrefix` represents what string would be appended to the volume group name to differentiate them.
->NOTE: To configure the VolumeGroupPrefix, the name format of \'\-\-\-\' cannot be more than 63 characters.
* `arrayId` is a unique identifier of the storage array you specified in array connection secret.
* `nasServer` id of the Nas server of local array to which the allocated volume will belong.
* `storagePool` is the storage pool of the local array.
diff --git a/content/v2/replication/high-availability.md b/content/v2/replication/high-availability.md
index 01d74feab9..9a4f8f3b37 100644
--- a/content/v2/replication/high-availability.md
+++ b/content/v2/replication/high-availability.md
@@ -37,9 +37,9 @@ parameters:
SYMID: '000000000001'
ServiceLevel: 'Bronze'
replication.storage.dell.com/IsReplicationEnabled: 'true'
- replication.storage.dell.com/RdfGroup: '7'
+ replication.storage.dell.com/RdfGroup: '7' # Optional for Auto SRDF group
replication.storage.dell.com/RdfMode: 'METRO'
- replication.storage.dell.com/RemoteRDFGroup: '7'
+ replication.storage.dell.com/RemoteRDFGroup: '7' # Optional for Auto SRDF group
replication.storage.dell.com/RemoteSYMID: '000000000002'
replication.storage.dell.com/RemoteServiceLevel: 'Bronze'
reclaimPolicy: Delete
diff --git a/content/v2/replication/replication-actions.md b/content/v2/replication/replication-actions.md
index fa9502265c..96eece95f8 100644
--- a/content/v2/replication/replication-actions.md
+++ b/content/v2/replication/replication-actions.md
@@ -34,11 +34,11 @@ For e.g. -
The following table lists details of what actions should be used in different Disaster Recovery workflows & the equivalent operation done on the storage array:
{{
}}
### Maintenance Actions
@@ -46,11 +46,11 @@ These actions can be run at any site and are used to change the replication link
The following table lists the supported maintenance actions and the equivalent operation done on the storage arrays
{{
}}
### How to perform actions
diff --git a/content/v2/replication/volume_expansion.md b/content/v2/replication/volume_expansion.md
new file mode 100644
index 0000000000..464811d519
--- /dev/null
+++ b/content/v2/replication/volume_expansion.md
@@ -0,0 +1,44 @@
+---
+title: Volume Expansion
+linktitle: Volume Expansion
+weight: 6
+description: >
+ Online expansion of replicated volumes
+---
+
+Starting in v2.4.0, the CSI PowerMax driver supports the expansion of Replicated Persistent Volumes (PVs). This expansion is done online, that is, while the PVC is attached to a node.
+
+## Prerequisites
+- To use this feature, enable the resizer in `values.yaml`:
+```yaml
+resizer:
+ enabled: true
+```
+- To use this feature, the storage class used to create the PVC must have the attribute `allowVolumeExpansion` set to `true`; a sketch of such a storage class is shown below.
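+
+A hypothetical storage class with volume expansion enabled (the provisioner and parameters are illustrative; use the values from your own replication setup):
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: powermax-expand-sc
+provisioner: csi-powermax.dellemc.com   # illustrative provisioner name
+reclaimPolicy: Delete
+allowVolumeExpansion: true              # required for online volume expansion
+parameters:
+  SYMID: '000000000001'                 # illustrative array ID
+  ServiceLevel: 'Bronze'
+```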
+
+## Basic Usage
+
+To resize a PVC, edit the existing PVC spec and set `spec.resources.requests.storage` to the intended size. For example, if you have a PVC `pmax-pvc-demo` of size 5Gi, you can resize it to 10Gi by updating the PVC.
+
+```yaml
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+ name: pmax-pvc-demo
+ namespace: test
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Filesystem
+ resources:
+ requests:
+ storage: 10Gi #Updated size from 5Gi to 10Gi
+ storageClassName: powermax-expand-sc
+```
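+
+Alternatively, the same change can be applied with `kubectl patch`, using the example PVC above:
+```console
+kubectl patch pvc pmax-pvc-demo -n test -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
+```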
+To update the remote PVC with the expanded size:
+
+1. Update the remote PVC to the same size as the local PVC.
+
+2. After the remote CSI driver syncs, the volume size is updated to show the new size.
+
+*NOTE*: The Kubernetes Volume Expansion feature can only be used to increase the size of a volume; it cannot be used to shrink a volume.
diff --git a/content/v2/resiliency/_index.md b/content/v2/resiliency/_index.md
index ab043bc23d..e945bea855 100644
--- a/content/v2/resiliency/_index.md
+++ b/content/v2/resiliency/_index.md
@@ -144,7 +144,13 @@ pmtu3 podmontest-0 1/1 Running 0 3m6s
...
```
- CSM for Resiliency may also generate events if it is unable to cleanup a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.
+ CSM for Resiliency may also generate events if it is unable to clean up a pod for some reason. For example, it may not clean up a pod because the pod is still doing I/O to the array.
+
+   Similarly, the label selectors for csi-powerscale and csi-unity are as shown below, respectively.
+ ```
+ labelSelector: {map[podmon.dellemc.com/driver:csi-isilon]
+ labelSelector: {map[podmon.dellemc.com/driver:csi-unity]
+ ```
#### Important
Before putting an application into production that relies on CSM for Resiliency monitoring, it is important to do a few test failovers first. To do this take the node that is running the pod offline for at least 2-3 minutes. Verify that there is an event message similar to the one above is logged, and that the pod recovers and restarts normally with no loss of data. (Note that if the node is running many CSM for Resiliency protected pods, the node may need to be down longer for CSM for Resiliency to have time to evacuate all the protected pods.)
diff --git a/content/v2/resiliency/deployment.md b/content/v2/resiliency/deployment.md
index 8a4a20519f..edadca721a 100644
--- a/content/v2/resiliency/deployment.md
+++ b/content/v2/resiliency/deployment.md
@@ -21,11 +21,10 @@ Configure all the helm chart parameters described below before installing the dr
The drivers that support Helm chart installation allow CSM for Resiliency to be _optionally_ installed by variables in the chart. There is a _podmon_ block specified in the _values.yaml_ file of the chart that will look similar to the text below by default:
```
-# Podmon is an optional feature under development and tech preview.
# Enable this feature only after contacting support for additional information
podmon:
enabled: true
- image: dellemc/podmon:v1.2.0
+ image: dellemc/podmon:v
controller:
args:
- "--csisock=unix:/var/run/csi/csi.sock"
diff --git a/content/v2/resiliency/release/_index.md b/content/v2/resiliency/release/_index.md
index 3beec86748..96d9a62f47 100644
--- a/content/v2/resiliency/release/_index.md
+++ b/content/v2/resiliency/release/_index.md
@@ -6,16 +6,13 @@ Description: >
Dell Container Storage Modules (CSM) release notes for resiliency
---
-## Release Notes - CSM Resiliency 1.2.0
+## Release Notes - CSM Resiliency 1.3.0
### New Features/Changes
-- Support for node taint when driver pod is unhealthy.
-- Resiliency protection on driver node pods, see [CSI node failure protection](https://github.com/dell/csm/issues/145).
-- Resiliency support for CSI Driver for PowerScale, see [CSI Driver for PowerScale](https://github.com/dell/csm/issues/262).
### Fixed Issues
-- Occasional failure unmounting Unity volume for raw block devices via iSCSI, see [unmounting Unity volume](https://github.com/dell/csm/issues/237).
+- Documentation improvement to identify all requirements when building the service and running unit tests for CSM Authorization and CSM Resiliency repository (https://github.com/dell/karavi-resiliency/pull/131).
### Known Issues
\ No newline at end of file
diff --git a/content/v2/secure/_index.md b/content/v2/secure/_index.md
new file mode 100644
index 0000000000..88e3b42ed3
--- /dev/null
+++ b/content/v2/secure/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Secure"
+linkTitle: "Secure"
+weight: 9
+Description: >
+ Security features for Dell CSI drivers
+---
+Secure is a suite of Dell Container Storage Modules (CSM) that brings security-related features to Kubernetes users of Dell storage products.
diff --git a/content/v2/secure/encryption/_index.md b/content/v2/secure/encryption/_index.md
new file mode 100644
index 0000000000..3f2568dfb6
--- /dev/null
+++ b/content/v2/secure/encryption/_index.md
@@ -0,0 +1,130 @@
+---
+title: "Encryption"
+linkTitle: "Encryption"
+weight: 1
+Description: >
+ CSI Volumes Encryption
+---
+Encryption provides the capability to encrypt user data residing on volumes created by Dell CSI Drivers.
+
+> **NOTE:** This tech-preview release is not intended for use in production environments.
+
+> **NOTE:** Encryption requires a time-based license to create new encrypted volumes. Request a [trial license](../../license) prior to deployment.
+>
+> After the license expiration, existing encrypted volumes can still be unlocked and used, but no new encrypted volumes can be created.
+
+The volume data is encrypted on the Kubernetes worker host running the application workload, transparently for the application.
+
+Under the hood, *gocryptfs*, an open-source FUSE-based encryptor, is used to encrypt both file contents and the names of files and directories.
+
+File contents are encrypted using AES-256-GCM, and names are encrypted using AES-256-EME.
+
+*gocryptfs* needs a password to initialize and to unlock the encrypted file system.
+Encryption generates 32 random bytes for the password and stores them in Hashicorp Vault.
+
+For detailed information on the cryptography behind gocryptfs, see [gocryptfs Cryptography](https://nuetzlich.net/gocryptfs/forward_mode_crypto).
+
+When a CSI Driver is installed with the Encryption feature enabled, two provisioners are registered in the cluster:
+
+#### Provisioner for unencrypted volumes
+
+This provisioner belongs to the storage driver and does not depend on the Encryption feature. Use a storage class with this provisioner to create regular unencrypted volumes.
+
+#### Provisioner for encrypted volumes
+
+This provisioner belongs to Encryption and registers with the name [`encryption.pluginName`](deployment/#helm-chart-values) when Encryption is enabled. Use a storage class with this provisioner to create encrypted volumes.
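+
+As an illustration only, a storage class using the encrypted-volume provisioner could look like the sketch below. The class name is hypothetical, the provisioner assumes the default `encryption.pluginName`, and driver-specific parameters are omitted; use the same parameters as in your regular PowerScale storage class.
+
+```shell
+cat <<EOF | kubectl apply -f -
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: isilon-encrypted              # hypothetical name
+provisioner: sec-isilon.dellemc.com   # default encryption.pluginName
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+EOF
+```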
+
+## Capabilities
+
+{{
}}
+| Feature | PowerScale |
+| ------- | ---------- |
+| Dynamic provisioning of new volumes | Yes |
+| Static provisioning of new volumes | Yes |
+| Volume snapshot creation | Yes |
+| Volume creation from snapshot | Yes |
+| Volume cloning | Yes |
+| Volume expansion | Yes |
+| Encrypted volume unlocking in a different cluster | Yes |
+| User file and directory names encryption | Yes |
+{{
}}
+
+## Limitations
+
+- Only file system volumes are supported.
+- Existing volumes with data cannot be encrypted.
+ **Workaround:** create a new encrypted volume of the same size and copy/move the data from the original *unencrypted* volume to the new *encrypted* volume.
+- Encryption cannot be disabled in-place.
+ **Workaround:** create a new unencrypted volume of the same size and copy/move the data from the original *encrypted* volume to the new *unencrypted* volume.
+- Encrypted volume content can be seen in clear text through root access to the worker node or by obtaining shell access into the Encryption driver container.
+- When deployed with PowerScale CSI driver, `controllerCount` has to be set to 1.
+- No other CSM component can be enabled simultaneously with Encryption.
+- The only supported authentication method for Vault is AppRole.
+- Encryption secrets, config maps and encryption-related values cannot be updated while the CSI driver is running:
+the CSI driver must be restarted to pick up the change.
+
+## Supported Operating Systems/Container Orchestrator Platforms
+
+{{
}}
+
+### PowerScale
+
+When enabling Encryption for PowerScale CSI Driver, make sure these requirements are met:
+- PowerScale CSI Driver uses root credentials for the storage array where encrypted volumes will be placed
+- OneFS NFS export configuration does not have root user mapping enabled
+- All other CSM features like Authorization, Replication, Resiliency are disabled
+- Health Monitor feature is disabled
+- CSI driver `controllerCount` is set to 1
+
+## Hashicorp Vault Support
+
+**Supported Vault versions are 1.9.3 and newer.**
+
+Vault server (or cluster) is typically deployed in a dedicated Kubernetes cluster, but for the purpose of Encryption, it can be located anywhere.
+Even the simplest standalone single instance server with in-memory storage will suffice for testing.
+
+> **NOTE:** A properly deployed and configured Vault is crucial for the security of the volumes encrypted with Encryption.
+Refer to the Hashicorp Vault documentation for recommended deployment options.
+
+> **CAUTION:** A compromised Vault server or Vault storage back-end may lead to unauthorized access to the volumes encrypted with Encryption.
+
+> **CAUTION:** If the Vault storage back-end, or the encryption keys stored in it, is destroyed, it becomes impossible to unlock the volumes encrypted with Encryption.
+Access to the data will be lost forever.
+
+Refer to [Vault Configuration section](vault) for minimal configuration steps required to support Encryption and other configuration considerations.
+
+## Kubernetes Worker Hosts Requirements
+
+- Each Kubernetes worker host should have an SSH server running.
+- The SSH server should have public key authentication enabled for user *root*.
+- The SSH server should remain running whenever an application with an encrypted volume is running on the host.
+> **NOTE:** Stopping the SSH server on the worker host makes any encrypted volume attached to this host [inaccessible](troubleshooting#ssh-stopped).
+- Each Kubernetes worker host should have the `fusermount` and `mount.fuse` commands. They are pre-installed in most Linux distributions.
+To install the *fuse* package on Ubuntu/Debian, run a command similar to `apt install fuse`.
+To install the *fuse* package on SUSE, run a command similar to `zypper install fuse`.
+
+
diff --git a/content/v2/secure/encryption/deployment.md b/content/v2/secure/encryption/deployment.md
new file mode 100644
index 0000000000..33fbf2174f
--- /dev/null
+++ b/content/v2/secure/encryption/deployment.md
@@ -0,0 +1,170 @@
+---
+title: "Deployment"
+linkTitle: "Deployment"
+weight: 1
+Description: >
+ Deployment
+---
+Encryption for Dell Container Storage Modules is enabled via the Dell CSI driver installation. The drivers can be installed either by a Helm chart or by the Dell CSI Operator.
+In the tech preview release, Encryption can only be enabled via Helm chart installation.
+
+Except for additional Encryption related configuration outlined on this page,
+the rest of the deployment process is described in the corresponding [CSI driver documentation](../../../csidriver/installation/helm).
+
+## Vault Server
+
+Hashicorp Vault must be [pre-configured](../vault) to support Encryption. The Vault server's IP address and port must be accessible
+from the Kubernetes cluster where the CSI driver is to be deployed.
+
+## Helm Chart Values
+
+The drivers that support Encryption via Helm chart have an `encryption` block in their *values.yaml* file that looks like this:
+
+```yaml
+encryption:
+ # enabled: Enable/disable volume encryption feature.
+ enabled: false
+
+ # pluginName: The name of the provisioner to use for encrypted volumes.
+ pluginName: "sec-isilon.dellemc.com"
+
+ # image: Encryption driver image name.
+ image: "dellemc/csm-encryption:v0.1.0"
+
+ # imagePullPolicy: If specified, overrides the chart global imagePullPolicy.
+ imagePullPolicy:
+
+ # logLevel: Log level of the encryption driver.
+ # Allowed values: "error", "warning", "info", "debug", "trace".
+ logLevel: "error"
+
+ # livenessPort: HTTP liveness probe port number.
+ # Leave empty to disable the liveness probe.
+ # Example: 8080
+ livenessPort:
+
+ # extraArgs: Extra command line parameters to pass to the encryption driver.
+ # Allowed values:
+ # --sharedStorage - may be required by some applications to work properly.
+ # When set, performance is reduced and hard links cannot be created.
+ # See the gocryptfs documentation for more details.
+ extraArgs: []
+```
+
+| Parameter | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| enabled | Enable/disable volume encryption feature. | No | false |
+| pluginName | The name of the provisioner to use for encrypted volumes. | No | "sec-isilon.dellemc.com" |
+| image | Encryption driver image name. | No | "dellemc/csm-encryption:v0.1.0" |
+| imagePullPolicy | If specified, overrides the chart global imagePullPolicy. | No | CSI driver global imagePullPolicy |
+| logLevel | Log level of the encryption driver. Allowed values: "error", "warning", "info", "debug", "trace". | No | "error" |
+| livenessPort | HTTP liveness probe port number. Leave empty to disable the liveness probe. | No | |
+| extraArgs | Extra command line parameters to pass to the encryption driver. Allowed values: "\-\-sharedStorage" - may be required by some applications to work properly. When set, performance is reduced and hard links cannot be created. See the [gocryptfs documentation](https://github.com/rfjakob/gocryptfs/blob/v2.2.1/Documentation/MANPAGE.md#-sharedstorage) for more details. | No | [] |
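+
+For example, to pass the shared-storage flag you would set `extraArgs: ["--sharedStorage"]` in the `encryption` block of *values.yaml*.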
+
+## Secrets and Config Maps
+
+Apart from any secrets and config maps described in the CSI driver documentation, these resources should be created for Encryption:
+
+### Secret *encryption-license*
+
+Request a trial license following instructions on the [License page](../../../license). You will be provided with a YAML file similar to:
+
+```yaml
+apiVersion: v1
+data:
+ license: k1FXzMDZodGNnK4I12Alo4UvuhLd+ithRhuLz2eoIxlcMSfW0xJYWnBiNMvTUl8VdGmR5fsvs2L6KqPfpIJk4wOzCxQ9wfDIJuYqrwV0wi2F2lzb1Hkk7O7/4r8cblPdCRJWfbg8QFc2BVtl4PZ/pFkHZoZVCbhGDD1MsbI1CiKqva9r9TBfswSFnqv7p3QXgbqQov8/q/j2+sHcvFF3j4kx+q1PzXoRNxwuTQaP4VAvipsQNAU5yV2dos2hs4Y/Ltbtreu/vrRGUaxvPbass1vUtIOJnvKkfbp53j8PFJGGISMYvYylUiD7TpoamxT/1I6mkjgRds+tEciMvutqDpmKEtdyp3vBjt4Sgd07ptvsdBJlyRAYb8ZPX9vXr4Ws
+kind: Secret
+metadata:
+ name: edit_name
+ namespace: edit_namespace
+```
+
+Set `name` to `"encryption-license"` and `namespace` to your driver namespace and apply the file:
+
+```shell
+kubectl apply -f <license-file>.yaml
+```
+
+### Secret *vault-auth*
+
+A secret with the AppRole credentials used by Encryption to authenticate to the Vault server.
+
+> Set `role_id` and `secret_id` to the values provided by the Vault server administrator.
+
+> If a self-managed test Vault instance is used, generate role ID and secret ID following [these steps](../vault/#set-role-id-and-secret-id-to-the-role).
+
+```shell
+cat >auth.json <<EOF
+{
+  "role_id": "<role-id>",
+  "secret_id": "<secret-id>"
+}
+EOF
+
+kubectl create secret generic vault-auth -n <driver-namespace> --from-file=auth.json -o yaml --dry-run=client | kubectl apply -f -
+
+rm -f auth.json
+```
+In this release, Encryption does not pick up modifications to this secret while the CSI driver is running, unless it needs to re-login which happens at:
+- CSI Driver startup
+- an authentication error from the Vault server
+- client token expiration
+
+In all other cases, to apply new values in the secret (e.g., to use another role), the CSI driver must be restarted.
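+
+A restart can be done by restarting the driver pods. For example, for a Helm-based PowerScale driver the commands might look like this (the deployment and daemonset names are assumptions; verify them with `kubectl -n <driver-namespace> get deploy,ds`):
+
+```shell
+kubectl -n <driver-namespace> rollout restart deployment/isilon-controller
+kubectl -n <driver-namespace> rollout restart daemonset/isilon-node
+```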
+
+### Secret *vault-cert*
+
+A secret with TLS certificates used by Encryption to communicate with the Vault server.
+
+> Files *server-ca.crt*, *client.crt* and *client.key* should be in PEM format.
+
+```shell
+kubectl create secret generic vault-cert -n <driver-namespace> \
+ --from-file=server-ca.crt --from-file=client.crt --from-file=client.key \
+ -o yaml --dry-run=client | kubectl apply -f -
+```
+In this release, Encryption does not pick up modifications to this secret while the CSI driver is running.
+To apply new values in the secret (e.g., to update the client certificate), the CSI driver must be restarted.
+
+### ConfigMap *vault-client-conf*
+
+A config map with settings used by Encryption to communicate with the Vault server.
+
+> Populate *client.json* with your settings.
+
+```shell
+cat >client.json <<EOF
+{
+  "auth_type": "approle",
+  "auth_conf_file": "/etc/dea/vault/auth.json",
+  "vault_addr": "https://<server-ip>:8400",
+ "kv_engine_path": "/dea-keys",
+ "tls_config":
+ {
+ "client_crt": "/etc/dea/vault/client.crt",
+ "client_key": "/etc/dea/vault/client.key",
+ "server_ca": "/etc/dea/vault/server-ca.crt"
+ }
+}
+EOF
+
+kubectl create configmap vault-client-conf -n <driver-namespace> \
+ --from-file=client.json -o yaml --dry-run=client | kubectl apply -f -
+
+rm -f client.json
+```
+
+These fields are available for use in *client.json*:
+
+| client.json field | Description | Required | Default |
+| ----------------- | ----------- | -------- | ------- |
+| auth_type | Authentication type used to authenticate to the Vault server. Currently, the only supported type is "approle". | Yes | |
+| auth_conf_file | Set to "/etc/dea/vault/auth.json" | Yes | |
+| auth_timeout | Defines in how many seconds key requests to the Vault server fail if there is no valid authentication token. | No | 5 |
+| lease_duration_margin | Defines how many seconds in advance the authentication token lease will be renewed. This value should accommodate network and processing delays. | No | 15 |
+| lease_increase | Defines the number of seconds used in the authentication token renew call. This value is advisory and may be disregarded by the server. | No | 3600 |
+| vault_addr | URL to use for REST calls to the Vault server. It must start with "https". | Yes | |
+| kv_engine_path | The path to which the Key/Value secret engine is mounted on the Vault server. | Yes | |
+| tls_config.client_crt | Set to "/etc/dea/vault/client.crt" | Yes | |
+| tls_config.client_key | Set to "/etc/dea/vault/client.key" | Yes | |
+| tls_config.server_ca | Set to "/etc/dea/vault/server-ca.crt" | Yes | |
diff --git a/content/v2/secure/encryption/release.md b/content/v2/secure/encryption/release.md
new file mode 100644
index 0000000000..cbaeea0f2f
--- /dev/null
+++ b/content/v2/secure/encryption/release.md
@@ -0,0 +1,21 @@
+---
+title: "Release Notes"
+linkTitle: "Release Notes"
+weight: 5
+Description: >
+ Release Notes
+---
+
+### New Features/Changes
+
+- [Technical preview release](https://github.com/dell/csm/issues/437)
+- PowerScale CSI volumes encryption (for new volumes)
+- Encryption keys stored in Hashicorp Vault
+
+### Fixed Issues
+
+There are no fixed issues in this release.
+
+### Known Issues
+
+There are no known issues in this release.
\ No newline at end of file
diff --git a/content/v2/secure/encryption/troubleshooting.md b/content/v2/secure/encryption/troubleshooting.md
new file mode 100644
index 0000000000..b966adf50a
--- /dev/null
+++ b/content/v2/secure/encryption/troubleshooting.md
@@ -0,0 +1,87 @@
+---
+title: "Troubleshooting"
+linkTitle: "Troubleshooting"
+weight: 4
+Description: >
+ Troubleshooting
+---
+
+## Logs and Events
+
+The first and, in most cases, sufficient step in troubleshooting issues with a CSI driver that has Encryption enabled
+is to explore the logs of the Encryption driver and related Kubernetes components. These are some useful log sources:
+
+### CSI Driver Containers Logs
+
+The driver creates several *controller* and *node* pods. They can be listed with `kubectl -n <driver-namespace> get pods`.
+The output will look similar to:
+
+```
+NAME READY STATUS RESTARTS AGE
+isi-controller-84f697c874-2j6d4 10/10 Running 0 16h
+isi-node-4gtwf 4/4 Running 0 16h
+isi-node-lnzws 4/4 Running 0 16h
+```
+
+List the containers in pod `isi-node-4gtwf` with `kubectl -n <driver-namespace> logs isi-node-4gtwf`.
+Each pod has a container called `driver`, which is the storage driver container, and `driver-sec`, which is the Encryption driver container.
+These containers' logs tend to provide the most important information, but other containers may give a hint too.
+View the logs of `driver-sec` in `isi-node-4gtwf` with `kubectl -n <driver-namespace> logs isi-node-4gtwf driver-sec`.
+The log level of this container can be changed by setting value [encryption.logLevel](../deployment#helm-chart-values) and restarting the driver.
+
+Often it is necessary to see the logs produced on a specific Kubernetes worker host.
+To find which *node* pod is running on which worker host, use `kubectl -n <driver-namespace> get pods -o wide`.
+
+### PersistentVolume, PersistentVolumeClaim and Application Pod Events
+
+Some errors may be logged to the related resource events, which can be viewed with the `kubectl describe` command for that resource.
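+
+For example, the events for a PVC and its bound PV can be inspected with commands like these (placeholder names):
+
+```shell
+kubectl -n <namespace> describe pvc <pvc-name>
+kubectl describe pv <pv-name>
+```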
+
+### Vault Server Logs
+
+Some errors related to communication with the Vault server and key requests may be logged on the Vault server side.
+If you run a [test instance of the server in a Docker container](../vault#vault-server-installation) you can view the logs with `docker logs vault-server`.
+
+## Typical Failure Reasons
+
+#### Incorrect Vault related configuration
+
+- check [logs](#logs-and-events)
+- check [vault-auth secret](../deployment#secret-vault-auth)
+- check [vault-cert secret](../deployment#secret-vault-cert)
+- check [vault-client-conf config map](../deployment#configmap-vault-client-conf)
+
+#### Incorrect Vault server-side configuration
+
+- check [logs](#logs-and-events)
+- check [Vault server configuration](../vault#minimum-server-configuration)
+
+#### Expired AppRole secret ID
+
+- [reset the role secret ID](../vault#set-role-id-and-secret-id-to-the-role)
+
+#### Incorrect CSI driver configuration
+
+- check the related CSI driver [troubleshooting steps](../../../csidriver/troubleshooting)
+
+#### SSH server is stopped/restarted on the worker host {#ssh-stopped}
+
+This may manifest in:
+- failure to start the CSI driver
+- failure to create a new encrypted volume
+- failure to access an encrypted volume (IO errors)
+
+Resolution:
+- check that the SSH server is running on all worker hosts
+- stop all workloads that use encrypted volumes on the node, then restart them
+
+#### No license provided, or license expired
+
+This may manifest in:
+- failure to start the CSI driver
+- failure to create a new encrypted volume
+
+Resolution:
+- obtain a [new valid license](../../../license)
+- check that the license is for the cluster on which the encrypted volumes are created
+- check [encryption-license secret](../deployment#secret-encryption-license)
+
diff --git a/content/v2/secure/encryption/uninstallation.md b/content/v2/secure/encryption/uninstallation.md
new file mode 100644
index 0000000000..60144e866f
--- /dev/null
+++ b/content/v2/secure/encryption/uninstallation.md
@@ -0,0 +1,39 @@
+---
+title: "Uninstallation"
+linkTitle: "Uninstallation"
+weight: 2
+Description: >
+ Uninstallation
+---
+
+## Cleanup Kubernetes Worker Hosts
+
+Log in to each worker host and perform these steps:
+
+#### Remove directory */root/.driver-sec*
+
+This directory was created when a CSI driver with Encryption first ran on the host.
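+It can be removed with `rm -rf /root/.driver-sec`.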
+
+#### Remove entry from */root/.ssh/authorized_keys*
+
+This is an entry added when a CSI driver with Encryption first ran on the host.
+It ends with `driver-sec`, similar to:
+
+```
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDGvSWmTL7NORRDPAvtbMbvoHUBLnen9bRtJePbGk1boJ4XK39Qdvo2zFHZ/6t2+dSL7xKo2kcxX3ovj3RyOPuqNCob
+5CLYyuIqduooy+eSP8S1i0FbiDHvH/52yHglnGkBb8g8fmoMolYGW7k35mKOEItKlXruP5/hpP0rBDfBfrxe/K4aHicxv6GylP+uTSBjdj7bZrdgRAIlmDyIdvU4oU6L
+K9PDW5rufArlrZHaToHXLMbXbqswD08rgFt3tLiXjj2GgvU8ifWYYAeuijMp+hwwE0dYv45EgUNTlXUa7x2STFZrVn8MFkLKjtZ60Qjbb4JoijRpBQ5XEUkW9UoeGbV2
+s+lCpZ2bMkmdda/0UC1ckvyrLkD0yQotb8gafizdX+WrQRE+iqUv/NQ2mrSEHtLgvuvgZ3myFU5chRv498YxglYZsAZUdCQI2hQt+7smjYMaM0V200UT741U9lIlYxza
+ocI5t+n01dWeVOCSOH/Q3uXxHKnFvWVZh7m6583R9LfdGfwshsnx4CNz22kp69hzwBPxehR+U/VXkDUWnoQgI8NSPc0fFyU58yLHnl91XT9alz8qrkFK7oggKy5RRX7c
+VQrpjsCPCu3fpVjvvwfspVOftbn/sNgY1J3lz0pdgvJ3yQs6pa+DODQyin5Rt//19rIGifPxi/Hk/k49Vw== driver-sec
+```
+
+It can be removed with `sed -i '/^ssh-rsa .* driver-sec$/d' /root/.ssh/authorized_keys`.
+
+## Remove Kubernetes Resources
+
+Remove [the resources that were created in Kubernetes cluster for Encryption](../deployment#secrets-and-config-maps).
+
+## Remove Vault Server Configuration
+
+Remove [the configuration created in the Vault server for Encryption](../vault#minimum-server-configuration).
diff --git a/content/v2/secure/encryption/vault.md b/content/v2/secure/encryption/vault.md
new file mode 100644
index 0000000000..6332ea2c13
--- /dev/null
+++ b/content/v2/secure/encryption/vault.md
@@ -0,0 +1,244 @@
+---
+title: "Vault Configuration"
+linkTitle: "Vault Configuration"
+weight: 3
+Description: >
+ Configuration requirements for Vault server
+---
+
+## Vault Server Installation
+
+If there is already a Vault server available, skip to [Minimum Server Configuration](#minimum-server-configuration).
+
+If there is no Vault server available to use with Encryption, it can be installed in many ways following [Hashicorp Vault documentation](https://www.vaultproject.io/docs).
+
+For a testing environment, however, the simple deployment suggested in this section may suffice.
+It creates a standalone server with in-memory (non-persistent) storage, running in a Docker container.
+
+> **NOTE**: With in-memory storage, the encryption keys are permanently destroyed upon the server termination.
+
+#### Generate TLS certificates for server and client
+
+Create server CA private key and certificate:
+
+```shell
+openssl req -x509 -sha256 -days 365 -newkey rsa:2048 -nodes \
+ -subj "/CN=Vault Root CA" \
+ -keyout server-ca.key \
+ -out server-ca.crt
+```
+
+Create server private key and CSR:
+
+```shell
+openssl req -newkey rsa:2048 -nodes \
+ -subj "/CN=vault-demo-server" \
+ -keyout server.key \
+ -out server.csr
+```
+
+Create server certificate signed by the CA:
+
+> Replace `<server-ip>` with an IP address by which Encryption can reach the Vault server.
+This may be the address of the Docker host where the Vault server will be running.
+The same address should be used for `vault_addr` in [vault-client-conf](../deployment#configmap-vault-client-conf).
+
+```shell
+cat > cert.ext <<EOF
+subjectAltName = IP:<server-ip>
+EOF
+
+openssl x509 -req \
+ -CA server-ca.crt -CAkey server-ca.key \
+ -in server.csr \
+ -out server.crt \
+ -days 365 \
+ -extfile cert.ext \
+ -CAcreateserial
+
+cat server-ca.crt >> server.crt
+```
+
+Create client CA private key and certificate:
+
+```shell
+openssl req -x509 -sha256 -days 365 -newkey rsa:2048 -nodes \
+ -subj "/CN=Client Root CA" \
+ -keyout client-ca.key \
+ -out client-ca.crt
+```
+
+Create client private key and CSR:
+
+```shell
+openssl req -newkey rsa:2048 -nodes \
+ -subj "/CN=vault-client" \
+ -keyout client.key \
+ -out client.csr
+```
+
+Create client certificate signed by the CA:
+
+```shell
+cat > cert.ext <<EOF
+extendedKeyUsage = clientAuth
+EOF
+
+openssl x509 -req \
+    -CA client-ca.crt -CAkey client-ca.key \
+    -in client.csr \
+    -out client.crt \
+    -days 365 \
+    -extfile cert.ext \
+    -CAcreateserial
+
+cat client-ca.crt >> client.crt
+```
+
+#### Create server hcl file
+
+```shell
+cat >server.hcl <<EOF
+listener "tcp" {
+  address = "0.0.0.0:8400"
+  tls_cert_file = "/var/vault/server.crt"
+  tls_key_file = "/var/vault/server.key"
+  tls_client_ca_file = "/var/vault/client-ca.crt"
+  tls_require_and_verify_client_cert = "true"
+}
+EOF
+```
+
+#### Run the Vault server in a Docker container
+
+> Variable `CONF_DIR` below refers to the directory containing files *server.crt*, *server.key*, *client-ca.crt* and *server.hcl*.
+```shell
+VOL_DIR="$CONF_DIR"
+VOL_DIR_D="/var/vault"
+ROOT_TOKEN="DemoRootToken"
+VAULT_IMG="vault:1.9.3"
+
+docker run --rm -d \
+ --name="vault-server" \
+ -p 8200:8200 -p 8400:8400 \
+ -v $VOL_DIR:$VOL_DIR_D -w $VOL_DIR_D \
+ -e VAULT_DEV_ROOT_TOKEN_ID=$ROOT_TOKEN \
+ -e VAULT_ADDR="http://127.0.0.1:8200" \
+ -e VAULT_TOKEN=$ROOT_TOKEN \
+ $VAULT_IMG \
+ sh -c 'vault server -dev -dev-listen-address 0.0.0.0:8200 -config=server.hcl'
+```
+
+## Minimum Server Configuration
+
+> **NOTE:** This configuration is a bare minimum to support Encryption and is not intended for use in production environments.
+Refer to the [Hashicorp Vault documentation](https://www.vaultproject.io/docs) for recommended configuration options.
+
+> If a [test instance of Vault](#vault-server-installation) is used, the `vault` commands below can be executed in the Vault server container shell.
+> To enter the shell, run `docker exec -it vault-server sh`. After completing the configuration process, exit the shell by typing `exit`.
+>
+> Alternatively, you can [download the vault binary](https://www.vaultproject.io/downloads) and run it anywhere.
+> It will require two environment variables to communicate with the Vault server:
+> - `VAULT_ADDR` - URL similar to `http://127.0.0.1:8200`. You may need to change the address in the URL to the address of
+the Docker host where the server is running.
+> - `VAULT_TOKEN` - Authentication token, e.g. the root token `DemoRootToken` used in the [test instance of Vault](#vault-server-installation).
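+>
+> For example, when the test instance above is used, the environment might be set like this (replace `<docker-host-ip>` with the address of the Docker host):
+> ```shell
+> export VAULT_ADDR="http://<docker-host-ip>:8200"
+> export VAULT_TOKEN="DemoRootToken"
+> ```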
+
+#### Enable Key/Value secret engine
+
+```shell
+vault secrets enable -version=2 -path=dea-keys/ kv
+vault write /dea-keys/config cas_required=true max_versions=1
+```
+
+Key/Value secret engine is used to store encryption keys. Each encryption key is represented by a key-value entry.
+
+#### Enable AppRole authentication
+
+```shell
+vault auth enable approle
+```
+
+#### Create a role
+
+```shell
+vault write auth/approle/role/dea-role \
+ secret_id_ttl=28d \
+ token_num_uses=0 \
+ token_ttl=1h \
+ token_max_ttl=1h \
+ token_explicit_max_ttl=10d \
+ secret_id_num_uses=0
+```
+
+TTL values here are chosen arbitrarily and can be changed to desired values.
+
+#### Create and assign a token policy to the role
+
+```shell
+vault policy write dea-policy - <<EOF
+path "dea-keys/*" {
+  capabilities = ["create", "read", "update"]
+}
+EOF
+
+vault write auth/approle/role/dea-role/policies token_policies=dea-policy
+```
+
+#### Set role ID and secret ID to the role
+
+> Secret ID has an expiration time after which it becomes invalid resulting in [authorization failure](../troubleshooting#expired-approle-secret-id).
+> The expiration time for new secret IDs can be set in `secret_id_ttl` parameter when [the role is created](#create-a-role) or later on using
+> `vault write auth/approle/role/dea-role/secret-id-ttl secret_id_ttl=24h`.
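+
+With a self-managed test instance, the role ID can be read and a new secret ID generated with commands along these lines (a sketch assuming the `dea-role` created above; the two values go into the [vault-auth secret](../deployment#secret-vault-auth)):
+
+```shell
+# Read the role ID (used as role_id in auth.json)
+vault read auth/approle/role/dea-role/role-id
+
+# Generate a new secret ID (used as secret_id in auth.json); rerun to replace an expired one
+vault write -f auth/approle/role/dea-role/secret-id
+```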
+
+## Token TTL Considerations
+
+Effective client token TTL is determined by the Vault server based on multiple factors which are described in the [Vault documentation](https://www.vaultproject.io/docs/concepts/tokens#token-time-to-live-periodic-tokens-and-explicit-max-ttls).
+
+With the default server settings, role level values control TTL in this way:
+
+`token_explicit_max_ttl=2h` - limits the client token TTL to 2 hours since it was originally issued as a result of login. This is a hard limit.
+
+`token_ttl=30m` - sets the default client token TTL to 30 minutes. The 30 minutes are counted from the login time and from any subsequent token renewal.
+The client token can only be renewed 3 times before reaching its total allowed TTL of 2 hours.
+
+Existing role values can be changed using `vault write auth/approle/role/dea-role token_ttl=30m token_explicit_max_ttl=2h`.
+
+> Selecting TTL values that are too short will result in excessive overhead for Encryption to remain authenticated to the Vault server.
diff --git a/content/v2/snapshots/volume-group-snapshots/_index.md b/content/v2/snapshots/volume-group-snapshots/_index.md
index c266498bef..7d0b35b53c 100644
--- a/content/v2/snapshots/volume-group-snapshots/_index.md
+++ b/content/v2/snapshots/volume-group-snapshots/_index.md
@@ -6,13 +6,58 @@ Description: >
Volume Group Snapshot module of Dell CSI drivers
---
## Volume Group Snapshot Feature
+The Dell CSM Volume Group Snapshotter is an operator which extends the Kubernetes API to support crash-consistent snapshots of groups of volumes.
+Volume Group Snapshot supports the PowerFlex and PowerStore drivers.
-In order to use Volume Group Snapshots, ensure the volume snapshot module is enabled.
-- Kubernetes Volume Snapshot CRDs
-- Volume Snapshot Controller
-- Volume Snapshot Class
+## Installation
+To install and use the Volume Group Snapshotter, install the prerequisites in your cluster, then install the CRD and deploy the snapshotter with the driver.
-### Creating Volume Group Snapshots
+### 1. Install Pre-Requisites
+The only prerequisite is the external-snapshotter, which is available [here](https://github.com/kubernetes-csi/external-snapshotter/tree/v4.1.1). Version 4.1+ is required. It is also required by the driver, so if the driver has already been installed, this prerequisite should already be fulfilled.
+
+The external-snapshotter is split into two controllers, the common snapshot controller and a CSI external-snapshotter sidecar. The common snapshot controller must be installed only once per cluster.
+
+Here are sample instructions on installing the external-snapshotter CRDs:
+```
+git clone https://github.com/kubernetes-csi/external-snapshotter/
+cd ./external-snapshotter
+git checkout release-<your-version>
+kubectl create -f client/config/crd
+kubectl create -f deploy/kubernetes/snapshot-controller
+```
+
+### 2. Install VGS CRD
+
+```
+IMPORTANT: delete the previous v1alpha2 version of the CRD and any VGS resources created using the alpha version.
+           Snapshots on the array will remain if memberReclaimPolicy=retain was used.
+```
+If you want to install the VGS CRD from a pre-generated yaml, you can do so with this command (run in top-level folder):
+```
+kubectl apply -f config/crd/vgs-install.yaml
+```
+
+If you want to create your own CRD for installation with Kustomize, then the command `make install` can be used to create and install the Custom Resource Definitions in your Kubernetes cluster.
+
+### 3. Deploy VGS in CSI Driver with Helm Chart Parameters
+The drivers that support Helm chart deployment allow the CSM Volume Group Snapshotter to be _optionally_ deployed
+by variables in the chart. There is a _vgsnapshotter_ block specified in the _values.yaml_ file of the chart that will look similar to this default text:
+
+```
+# volume group snapshotter(vgsnapshotter) details
+# These options control the running of the vgsnapshotter container
+vgsnapshotter:
+ enabled: false
+ image:
+
+```
+To deploy CSM Volume Group Snapshotter with the driver, these changes are required:
+1. Enable CSM Volume Group Snapshotter by changing the vgsnapshotter.enabled boolean to true.
+2. In the vgsnapshotter.image field, put the location of the image you created, or link to the one already built (such as the one on DockerHub, `dellemc/csi-volumegroup-snapshotter:v1.2.0`).
+3. Install/upgrade the driver normally. You should now have VGS successfully deployed with the driver!
+
+
+## Creating Volume Group Snapshots
This is a sample manifest for creating a Volume Group Snapshot:
```yaml
apiVersion: volumegroup.storage.dell.com/v1
@@ -28,11 +73,13 @@ spec:
# "Delete" - delete VolumeSnapshot instances
memberReclaimPolicy: "Retain"
volumesnapshotclass: ""
+ timeout: 90sec
pvcLabel: "vgs-snap-label"
# pvcList:
# - "pvcName1"
# - "pvcName2"
```
+Run the command `kubectl create -f vg.yaml` to take the specified snapshot.
The PVC labels field specifies a label that must be present in PVCs that are to be snapshotted. Here is a sample of that portion of a .yaml for a PVC:
@@ -44,6 +91,16 @@ metadata:
volume-group: vgs-snap-label
```
+## How to create policy based Volume Group Snapshots
+Currently, array-based policies are not supported. This will be addressed in an upcoming release. As a temporary solution, a CronJob can be used to mimic policy-based Volume Group Snapshots. The only supported policy is how often the group should be created. To create a CronJob that creates a volume group snapshot periodically, use the template found in the samples/ directory. Once the template is filled out, use the command `kubectl create -f samples/cron-template.yaml` to create the configmap and cronjob.
+>Note: CronJobs are only supported on Kubernetes versions 1.21 or higher
+
+## VolumeSnapshotContent watcher
+A VolumeSnapshotContent watcher is implemented to watch for VGs that manage VolumeSnapshotContents. When any of the VolumeSnapshotContents gets deleted, its managing VG, if there is one, updates `Status.Snapshots` to remove that snapshot. If all the snapshots are deleted, the VG will also be deleted automatically.
+
+## Deleting policy based Volume Group Snapshots
+Currently, automatic deletion of Volume Group Snapshots is not supported. All deletion must be done manually.
+
More details about the installation and use of the VolumeGroup Snapshotter can be found here: [dell-csi-volumegroup-snapshotter](https://github.com/dell/csi-volumegroup-snapshotter).
>Note: Volume group cannot be seen from the Kubernetes level as of now only volume group snapshots can be viewed as a CRD
diff --git a/content/v2/support/_index.md b/content/v2/support/_index.md
index 458bd392a5..54535f32f8 100644
--- a/content/v2/support/_index.md
+++ b/content/v2/support/_index.md
@@ -1,7 +1,7 @@
---
title: "Support"
linkTitle: "Support"
-weight: 11
+weight: 13
Description: >
Dell Container Storage Modules (CSM) support
---
diff --git a/content/v2/troubleshooting/_index.md b/content/v2/troubleshooting/_index.md
index c07a2998c8..f1679aa6b7 100644
--- a/content/v2/troubleshooting/_index.md
+++ b/content/v2/troubleshooting/_index.md
@@ -1,7 +1,7 @@
---
title: "Troubleshooting"
linkTitle: "Troubleshooting"
-weight: 10
+weight: 11
Description: >
Dell Container Storage Modules (CSM) troubleshooting information
---
@@ -16,4 +16,8 @@ Troubleshooting links for Container Storage Modules:
[CSM for Replication](../replication/troubleshooting)
-[CSM for Resiliency](../resiliency/troubleshooting)
\ No newline at end of file
+[CSM for Resiliency](../resiliency/troubleshooting)
+
+[CSM for Encryption](../secure/encryption/troubleshooting)
+
+[CSM for Application Mobility](../applicationmobility/troubleshooting)
\ No newline at end of file
diff --git a/content/v3/_index.md b/content/v3/_index.md
index d5b1916def..7b18fa6fb0 100644
--- a/content/v3/_index.md
+++ b/content/v3/_index.md
@@ -1,4 +1,3 @@
-
---
title: "Documentation"
linkTitle: "Documentation"
@@ -7,6 +6,7 @@ linkTitle: "Documentation"
This document version is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the [latest version](/csm-docs/)
{{% /pageinfo %}}
+
The Dell Technologies (Dell) Container Storage Modules (CSM) enables simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications. It reduces management complexity so developers can independently consume enterprise storage with ease and automate daily operations such as provisioning, snapshotting, replication, observability, authorization and, resiliency.
@@ -17,23 +17,23 @@ CSM is made up of multiple components including modules (enterprise capabilities
## CSM Supported Modules and Dell CSI Drivers
-| Modules/Drivers | CSM 1.2.1 | [CSM 1.2](../v1/) | [CSM 1.1](../v1/) | [CSM 1.0.1](../v2/) |
+| Modules/Drivers | CSM 1.3.1 | [CSM 1.2.1](../v1/) | [CSM 1.2](../v2/) | [CSM 1.1](../v3/) |
| - | :-: | :-: | :-: | :-: |
-| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | 1.2 | 1.2 | 1.1 | 1.0 |
-| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | 1.1.1 | 1.1 | 1.0.1 | 1.0.1 |
-| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | 1.2 | 1.2 | 1.1 | 1.0 |
-| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | 1.1 | 1.1 | 1.0.1 | 1.0.1 |
-| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
-| [CSI Driver for Unity](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
-| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
-| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
-| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.2 | v2.2 | v2.1 | v2.0 |
+| [Authorization](https://hub.docker.com/r/dellemc/csm-authorization-sidecar) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 |
+| [Observability](https://hub.docker.com/r/dellemc/csm-topology) | v1.2.0 | v1.1.1 | v1.1.0 | v1.0.1 |
+| [Replication](https://hub.docker.com/r/dellemc/dell-csi-replicator) | v1.3.0 | v1.2.0 | v1.2.0 | v1.1.0 |
+| [Resiliency](https://hub.docker.com/r/dellemc/podmon) | v1.2.0 | v1.1.0 | v1.1.0 | v1.0.1 |
+| [CSI Driver for PowerScale](https://hub.docker.com/r/dellemc/csi-isilon/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
+| [CSI Driver for Unity XT](https://hub.docker.com/r/dellemc/csi-unity/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
+| [CSI Driver for PowerStore](https://hub.docker.com/r/dellemc/csi-powerstore/tags) | v2.3.0 | v2.2.0 | v2.2.0| v2.1.0 |
+| [CSI Driver for PowerFlex](https://hub.docker.com/r/dellemc/csi-vxflexos/tags) | v2.3.0 | v2.2.0 | v2.2.0 | v2.1.0 |
+| [CSI Driver for PowerMax](https://hub.docker.com/r/dellemc/csi-powermax/tags) | v2.3.1 | v2.2.0 | v2.2.0 | v2.1.0 |
## CSM Modules Support Matrix for Dell CSI Drivers
-| CSM Module | CSI PowerFlex v2.2 | CSI PowerScale v2.2 | CSI PowerStore v2.2 | CSI PowerMax v2.2 | CSI Unity XT v2.2 |
+| CSM Module | CSI PowerFlex v2.3.0 | CSI PowerScale v2.3.0 | CSI PowerStore v2.3.0 | CSI PowerMax v2.3.1 | CSI Unity XT v2.3.0 |
| ----------------- | -------------- | --------------- | --------------- | ------------- | --------------- |
-| Authorization v1.2| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
-| Observability v1.1.1 | ✔️ | ❌ | ✔️ | ❌ | ❌ |
-| Replication v1.2| ❌ | ✔️ | ✔️ | ✔️ | ❌ |
-| Resilency v1.1| ✔️ | ❌ | ❌ | ❌ | ✔️
\ No newline at end of file
+| Authorization v1.3| ✔️ | ✔️ | ❌ | ✔️ | ❌ |
+| Observability v1.2| ✔️ | ❌ | ✔️ | ❌ | ❌ |
+| Replication v1.3| ❌ | ✔️ | ✔️ | ✔️ | ❌ |
+| Resiliency v1.2| ✔️ | ✔️ | ❌ | ❌ | ✔️ |
diff --git a/content/v3/authorization/_index.md b/content/v3/authorization/_index.md
index 5a3d8a3fac..62f7c46c36 100644
--- a/content/v3/authorization/_index.md
+++ b/content/v3/authorization/_index.md
@@ -20,7 +20,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
## CSM for Authorization Capabilities
{{
}}
-| Feature | PowerFlex | PowerMax | PowerScale | Unity | PowerStore |
+| Feature | PowerFlex | PowerMax | PowerScale | Unity XT | PowerStore |
| - | - | - | - | - | - |
| Ability to set storage quota limits to ensure k8s tenants are not overconsuming storage | Yes | Yes | No (natively supported) | No | No |
| Ability to create access control policies to ensure k8s tenant clusters are not accessing storage that does not belong to them | Yes | Yes | No (natively supported) | No | No |
@@ -33,8 +33,7 @@ The following diagram shows a high-level overview of CSM for Authorization with
{{
}}
**NOTE:** If the deployed CSI driver has a number of controller pods equal to the number of schedulable nodes in your cluster, CSM for Authorization may not be able to inject properly into the driver's controller pod.
@@ -69,6 +68,7 @@ CSM for Authorization consists of 2 components - the Authorization sidecar and t
| ------------------------------- | ---------------------------------- |
| dellemc/csm-authorization-sidecar:v1.0.0 | v1.0.0, v1.1.0 |
| dellemc/csm-authorization-sidecar:v1.2.0 | v1.1.0, v1.2.0 |
+| dellemc/csm-authorization-sidecar:v1.3.0 | v1.1.0, v1.2.0, v1.3.0 |
{{
}}
## Roles and Responsibilities
diff --git a/content/v3/authorization/cli.md b/content/v3/authorization/cli.md
index f1ef1bb5aa..b282d7c3fd 100644
--- a/content/v3/authorization/cli.md
+++ b/content/v3/authorization/cli.md
@@ -25,6 +25,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl role delete](#karavictl-role-delete ) | Delete role |
| [karavictl rolebinding](#karavictl-rolebinding) | Manage role bindings |
| [karavictl rolebinding create](#karavictl-rolebinding-create) | Create a rolebinding between role and tenant |
+| [karavictl rolebinding delete](#karavictl-rolebinding-delete) | Delete a rolebinding between role and tenant |
| [karavictl storage](#karavictl-storage) | Manage storage systems |
| [karavictl storage get](#karavictl-storage-get) | Get details on a registered storage system |
| [karavictl storage list](#karavictl-storage-list) | List registered storage systems |
@@ -35,7 +36,7 @@ If you feel that something is unclear or missing in this document, please open u
| [karavictl tenant create](#karavictl-tenant-create) | Create a tenant resource within CSM |
| [karavictl tenant get](#karavictl-tenant-get) | Get a tenant resource within CSM |
| [karavictl tenant list](#karavictl-tenant-list) | Lists tenant resources within CSM |
-| [karavictl tenant get](#karavictl-tenant-get) | Get a tenant resource within CSM |
+| [karavictl tenant revoke](#karavictl-tenant-revoke) | Revoke access for a tenant within CSM |
| [karavictl tenant delete](#karavictl-tenant-delete) | Deletes a tenant resource within CSM |
@@ -538,7 +539,46 @@ karavictl rolebinding create [flags]
```
$ karavictl rolebinding create --role CSISilver --tenant Alice
```
-On success, there will be no output. You may run `karavictl tenant get ` to confirm the rolebinding creation occurred.
+On success, there will be no output. You may run `karavictl tenant get --name <tenant-name>` to confirm the rolebinding creation occurred.
+
+
+---
+
+
+
+### karavictl rolebinding delete
+
+Delete a rolebinding between role and tenant
+
+##### Synopsis
+
+Deletes a rolebinding between role and tenant
+
+```
+karavictl rolebinding delete [flags]
+```
+
+##### Options
+
+```
+  -h, --help     help for delete
+ -r, --role string Role name
+ -t, --tenant string Tenant name
+```
+
+##### Options inherited from parent commands
+
+```
+ --addr string Address of the server (default "localhost:443")
+ --config string config file (default is $HOME/.karavictl.yaml)
+```
+
+##### Output
+
+```
+$ karavictl rolebinding delete --role CSISilver --tenant Alice
+```
+On success, there will be no output. You may run `karavictl tenant get --name <tenant-name>` to confirm the rolebinding deletion occurred.
@@ -802,7 +842,7 @@ Manage tenants
##### Synopsis
-Management fortenants
+Management for tenants
```
karavictl tenant [flags]
@@ -875,7 +915,7 @@ Get a tenant resource within CSM
##### Synopsis
-Gets a tenant resource within CSM
+Gets a tenant resource and its assigned roles within CSM
```
karavictl tenant get [flags]
@@ -902,6 +942,7 @@ $ karavictl tenant get --name Alice
{
"name": "Alice"
+ "roles": "role-1,role-2"
}
```
@@ -958,6 +999,44 @@ $ karavictl tenant list
+### karavictl tenant revoke
+
+Revokes access for a tenant
+
+##### Synopsis
+
+Revokes access to storage resources for a tenant
+
+```
+karavictl tenant revoke [flags]
+```
+
+##### Options
+
+```
+  -h, --help     help for revoke
+ -n, --name string Tenant name
+```
+
+##### Options inherited from parent commands
+
+```
+ --addr string Address of the server (default "localhost:443")
+ --config string config file (default is $HOME/.karavictl.yaml)
+```
+
+##### Output
+```
+$ karavictl tenant revoke --name Alice
+```
+On success, there will be no output.
+
+
+
+---
+
+
+
### karavictl tenant delete
Deletes a tenant resource within CSM
@@ -988,4 +1067,4 @@ karavictl tenant delete [flags]
```
$ karavictl tenant delete --name Alice
```
-On success, there will be no output. You may run `karavictl tenant get --name ` to confirm the deletion occurred.
\ No newline at end of file
+On success, there will be no output. You may run `karavictl tenant get --name <tenant-name>` to confirm the deletion occurred.
diff --git a/content/v3/authorization/deployment.md b/content/v3/authorization/deployment.md
deleted file mode 100644
index b2c11a53a0..0000000000
--- a/content/v3/authorization/deployment.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-title: Deployment
-linktitle: Deployment
-weight: 2
-description: >
- Dell EMC Container Storage Modules (CSM) for Authorization deployment
----
-
-This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
-- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
-- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell EMC CSI drivers with CSM for Authorization
-
-## Prerequisites
-
-The CSM for Authorization proxy server requires a Linux host with the following minimum resource allocations:
-- 32 GB of memory
-- 4 CPU
-- 200 GB local storage
-
-## Deploying the CSM Authorization Proxy Server
-
-The first part deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator.
-
-The CSM for Authorization proxy server is installed using a single binary installer.
-
-### Single Binary Installer
-
-The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
-
-The single binary installer can also be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
-
-```
-make dist build-installer rpm
-```
-
-The `build-installer` step creates a binary at `bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `deploy/rpm/x86_64/`.
-This allows CSM for Authorization to be installed in network-restricted environments.
-
-A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
-
-### Installing the RPM
-
-1. Before installing the rpm, some network and security configuration inputs need to be provided in json format. The json file should be created in the location `$HOME/.karavi/config.json` having the following contents:
-
- ```json
- {
- "web": {
- "sidecarproxyaddr": "docker_registry/sidecar-proxy:latest",
- "jwtsigningsecret": "secret"
- },
- "proxy": {
- "host": ":8080"
- },
- "zipkin": {
- "collectoruri": "http://DNS_host_name:9411/api/v2/spans",
- "probability": 1
- },
- "certificate": {
- "keyFile": "path_to_private_key_file",
- "crtFile": "path_to_host_cert_file",
- "rootCertificate": "path_to_root_CA_file"
- },
- "hostName": "DNS_host_name"
- }
- ```
-
- In the above template, `DNS_host_name` refers to the hostname of the system in which the CSM for Authorization server will be installed. This hostname can be found by running the below command on the system:
-
- ```
- nslookup
- ```
-
-2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS_host_name` is also required. All traffic from `grpc.DNS_host_name` needs to be routed to `DNS_host_name` address, this can be configured by adding a new DNS entry for `grpc.DNS_host_name` or providing a temporary path in the `/etc/hosts` file.
-
- **NOTE:** The certificate provided in `crtFile` should be valid for both the `DNS_host_name` and the `grpc.DNS_host_name` address.
-
- For example, create the certificate config file with alternate names (to include example.com and grpc.example.com) and then create the .crt file:
-
- ```
- CN = example.com
- subjectAltName = @alt_names
- [alt_names]
- DNS.1 = grpc.example.com
-
- openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out example.com.crt -days 365 -sha256
- ```
-
-3. To install the rpm package on the system, run the below command:
-
- ```shell
- rpm -ivh
- ```
-
-4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
-
-## Configuring the CSM for Authorization Proxy Server
-
-The storage administrator must first configure the proxy server with the following:
-- Storage systems
-- Tenants
-- Roles
-- Bind roles to tenants
-
-Run the following commands on the Authorization proxy server:
-
- ```console
- # Specify any desired name
- export RoleName=""
- export RoleQuota=""
- export TenantName=""
-
- # Specify info about Array1
- export Array1Type=""
- export Array1SystemID=""
- export Array1User=""
- export Array1Password=""
- export Array1Pool=""
- export Array1Endpoint=""
-
- # Specify info about Array2
- export Array2Type=""
- export Array2SystemID=""
- export Array2User=""
- export Array2Password=""
- export Array2Pool=""
- export Array2Endpoint=""
-
- # Specify IPs
- export DriverHostVMIP=""
- export DriverHostVMPassword=""
- export DriverHostVMUser=""
-
- # Specify Authorization host address. NOTE: this is not the same as IP
- export AuthorizationHost=""
-
- echo === Creating Storage(s) ===
- # Add array1 to authorization
- karavictl storage create \
- --type ${Array1Type} \
- --endpoint ${Array1Endpoint} \
- --system-id ${Array1SystemID} \
- --user ${Array1User} \
- --password ${Array1Password} \
- --insecure
-
- # Add array2 to authorization
- karavictl storage create \
- --type ${Array2Type} \
- --endpoint ${Array2Endpoint} \
- --system-id ${Array2SystemID} \
- --user ${Array2User} \
- --password ${Array2Password} \
- --insecure
-
- echo === Creating Tenant ===
- karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationHost}"
-
- echo === Creating Role ===
- karavictl role create \
- --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \
- --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota}
-
- echo === === Binding Role ===
- karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationHost}"
- ```
-
-### Generate a Token
-
-After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
-
- ```
- echo === Generating token ===
- karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationHost}" | jq -r '.Token' > token.yaml
-
- echo === Copy token to Driver Host ===
- sshpass -p $DriverHostPassword scp token.yaml ${DriverHostVMUser}@{DriverHostVMIP}:/tmp/token.yaml
- ```
-
-**Note:** The sample above copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
-
-### Copy the karavictl Binary to the Kubernetes Master Node
-
-The karavictl binary is available from the CSM for Authorization proxy server. This needs to be copied to the Kubernetes master node where Kubernetes tenant admins so they configure the Dell EMC CSI driver with CSM for Authorization.
-
-```
-sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl
-```
-
-**Note:** The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
-
-## Configuring a Dell EMC CSI Driver with CSM for Authorization
-
-The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers). This is controlled by the Kubernetes tenant admin.
-
-There are currently 2 ways of doing this:
-- Using the [CSM Installer](../../deployment) (*Recommended installation method*)
-- Manually by following the steps [below](#configuring-a-dell-emc-csi-driver)
-
-### Configuring a Dell EMC CSI Driver
-
-Given a setup where Kubernetes, a storage system, CSI driver(s), and CSM for Authorization are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
-
-Run the following commands on the CSI Driver host
-
- ```console
- # Specify Authorization host address. NOTE: this is not the same as IP
- export AuthorizationHost=""
-
- echo === Applying token token ===
- # It is assumed that array type powermax has the namespace "powermax" and powerflex has the namepace "vxflexos"
- kubectl apply -f /tmp/token.yaml -n powermax
- kubectl apply -f /tmp/token.yaml -n vxflexos
-
- echo === injecting sidecar in all CSI driver hosts that token has been applied to ===
- sudo curl -k https://${AuthorizationHost}/install | sh
-
- # NOTE: you can also query parameters("namespace" and "proxy-port") with the curl url if you desire a specific behavior.
- # 1) For instance, if you want to inject into just powermax, you can run
- # sudo curl -k https://${AuthorizationHost}/install?namespace=powermax | sh
- # 2) If you want to specify the proxy-port for powermax to be 900001, you can run
- # sudo curl -k https://${AuthorizationHost}/install?proxy-port=powermax:900001 | sh
- # 3) You can mix behaviors
- # sudo curl -k https://${AuthorizationHost}/install?namespace=powermax&proxy-port=powermax:900001&namespace=vxflexos | sh
- ```
-
-## Updating CSM for Authorization Proxy Server Configuration
-
-CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
-
-| Parameter | Type | Default | Description |
-| --------- | ---- | ------- | ----------- |
-| certificate.crtFile | String | "" |Path to the host certificate file |
-| certificate.keyFile | String | "" |Path to the host private key file |
-| certificate.rootCertificate | String | "" |Path to the root CA file |
-| web.sidecarproxyaddr | String |"127.0.0.1:5000/sidecar-proxy:latest" |Docker registry address of the CSM for Authorization sidecar-proxy |
-| web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens |
-
-Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for the Authorization Server. The secret can be queried using k3s and kubectl like so:
-
-`k3s kubectl -n karavi get secret/karavi-config-secret`
-
-To update or add parameters, you must edit the base64 encoded data in the secret. The` karavi-config-secret` data can be decoded like so:
-
-`k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d`
-
-Save the output to a file or copy it to an editor to make changes. Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so:
-
-`cat | base64`
-
-Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret:
-
-`k3s kubectl -n karavi edit secret/karavi-config-secret`
-
-Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
-
-__Note:__ If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command like so:
-
-`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationHost}" | jq -r '.Token' > kubectl -n $namespace apply -f -`
-
-## CSM for Authorization Proxy Server Dynamic Configuration Settings
-
-Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM for Authorization logging settings during runtime, run the below command on the K3s cluster, make your changes, and save the updated configmap data.
-
-```
-k3s kubectl -n karavi edit configmap/csm-config-params
-```
-
-This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
-
-```
-kubectl -n [CSM_CSI_DRVIER_NAMESPACE] edit configmap/-config-params
-```
-
-Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
\ No newline at end of file
diff --git a/content/v3/authorization/deployment/_index.md b/content/v3/authorization/deployment/_index.md
index ca15cb03da..5ff8a907d1 100644
--- a/content/v3/authorization/deployment/_index.md
+++ b/content/v3/authorization/deployment/_index.md
@@ -1,344 +1,11 @@
---
title: Deployment
-linktitle: Deployment
+linktitle: Deployment
weight: 2
-description: >
- Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization deployment
+description: Methods to install CSM Authorization
+tags:
+ - install
+ - csm-authorization
---
-This section outlines the deployment steps for Container Storage Modules (CSM) for Authorization. The deployment of CSM for Authorization is handled in 2 parts:
-- Deploying the CSM for Authorization proxy server, to be controlled by storage administrators
-- Configuring one to many [supported](../../authorization#supported-csi-drivers) Dell CSI drivers with CSM for Authorization
-
-## Prerequisites
-
-The CSM for Authorization proxy server requires a Linux host with the following minimum resource allocations:
-- 32 GB of memory
-- 4 CPU
-- 200 GB local storage
-
-## Deploying the CSM Authorization Proxy Server
-
-The first part of deploying CSM for Authorization is installing the proxy server. This activity and the administration of the proxy server will be owned by the storage administrator.
-
-The CSM for Authorization proxy server is installed using a single binary installer.
-
-### Single Binary Installer
-
-The easiest way to obtain the single binary installer RPM is directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
-
-Alternatively, the single binary installer can be built from source by cloning the [GitHub repository](https://github.com/dell/karavi-authorization) and using the following Makefile targets to build the installer:
-
-```
-make dist build-installer rpm
-```
-
-The `build-installer` step creates a binary at `karavi-authorization/bin/deploy` and embeds all components required for installation. The `rpm` step generates an RPM package and stores it at `karavi-authorization/deploy/rpm/x86_64/`.
-This allows CSM for Authorization to be installed in network-restricted environments.
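-
-For example, building from source end to end might look like this (a sketch that uses only the repository URL, Makefile targets, and output paths named above):
-
-```
-git clone https://github.com/dell/karavi-authorization.git
-cd karavi-authorization
-make dist build-installer rpm
-ls bin/deploy deploy/rpm/x86_64/
-```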
-
-A Storage Administrator can execute the installer or rpm package as a root user or via `sudo`.
-
-### Installing the RPM
-
-1. Before installing the rpm, some network and security configuration inputs need to be provided in JSON format. The JSON file should be created at `$HOME/.karavi/config.json` with the following contents:
-
- ```json
- {
- "web": {
- "jwtsigningsecret": "secret"
- },
- "proxy": {
- "host": ":8080"
- },
- "zipkin": {
- "collectoruri": "http://DNS-hostname:9411/api/v2/spans",
- "probability": 1
- },
- "certificate": {
- "keyFile": "path_to_private_key_file",
- "crtFile": "path_to_host_cert_file",
- "rootCertificate": "path_to_root_CA_file"
- },
- "hostname": "DNS-hostname"
- }
- ```
-
-   If a secure deployment is not required, an insecure deployment is possible. In this case, self-signed certificates are created using cert-manager to allow TLS encryption for communication with the CSM for Authorization proxy server; this is not recommended for production environments. For an insecure deployment, the JSON file at `$HOME/.karavi/config.json` only requires the following contents:
-
- ```json
- {
- "hostname": "DNS-hostname"
- }
- ```
-
->__Note__:
-> - `DNS-hostname` refers to the hostname of the system on which the CSM for Authorization server will be installed. This hostname can be found by running `nslookup <IP-address>` against the system's IP address.
-> - There are a number of ways to create certificates. In a production environment, certificates are usually created and managed by an IT administrator. Otherwise, certificates can be created using OpenSSL.
-
-2. In order to configure secure grpc connectivity, an additional subdomain in the format `grpc.DNS-hostname` is also required. All traffic from `grpc.DNS-hostname` needs to be routed to the `DNS-hostname` address. This can be configured by adding a new DNS entry for `grpc.DNS-hostname` or by adding a temporary entry to the system's `/etc/hosts` file.
-
->__Note__: The certificate provided in `crtFile` should be valid for both the `DNS-hostname` and the `grpc.DNS-hostname` address.
-
- For example, create the certificate config file with alternate names (to include DNS-hostname and grpc.DNS-hostname) and then create the .crt file:
-
- ```
- CN = DNS-hostname
- subjectAltName = @alt_names
- [alt_names]
- DNS.1 = grpc.DNS-hostname.com
-
- $ openssl x509 -req -in cert_request_file.csr -CA root_CA.pem -CAkey private_key_File.key -CAcreateserial -out DNS-hostname.com.crt -days 365 -sha256
- ```
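-
-   The example above assumes an existing certificate signing request and CA. If you still need to generate the private key and CSR, a minimal OpenSSL sketch is shown below; note that the subjectAltName entries are typically supplied to the signing command via an extensions file (for example with `-extfile`):
-
-   ```
-   openssl genrsa -out private_key_File.key 2048
-   openssl req -new -key private_key_File.key -subj "/CN=DNS-hostname" -out cert_request_file.csr
-   ```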
-
-3. To install the rpm package on the system, run the below command:
-
- ```shell
-   rpm -ivh <rpm-file-name>
- ```
-
-4. After installation, application data will be stored on the system under `/var/lib/rancher/k3s/storage/`.
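-
-   As a quick post-installation check, you can confirm that the CSM for Authorization pods are running in the embedded K3s cluster (a sketch; pod names vary by release):
-
-   ```shell
-   k3s kubectl -n karavi get pods
-   ```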
-
-## Configuring the CSM for Authorization Proxy Server
-
-The storage administrator must first configure the proxy server with the following:
-- Storage systems
-- Tenants
-- Roles
-- Role bindings between roles and tenants
-
-Run the following commands on the Authorization proxy server:
->__Note__: The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-
- ```console
- # Specify any desired name
- export RoleName=""
- export RoleQuota=""
- export TenantName=""
-
- # Specify info about Array1
- export Array1Type=""
- export Array1SystemID=""
- export Array1User=""
- export Array1Password=""
- export Array1Pool=""
- export Array1Endpoint=""
-
- # Specify info about Array2
- export Array2Type=""
- export Array2SystemID=""
- export Array2User=""
- export Array2Password=""
- export Array2Pool=""
- export Array2Endpoint=""
-
- # Specify IPs
- export DriverHostVMIP=""
- export DriverHostVMPassword=""
- export DriverHostVMUser=""
-
- # Specify Authorization proxy host address. NOTE: this is not the same as IP
- export AuthorizationProxyHost=""
-
- echo === Creating Storage(s) ===
- # Add array1 to authorization
- karavictl storage create \
- --type ${Array1Type} \
- --endpoint ${Array1Endpoint} \
- --system-id ${Array1SystemID} \
- --user ${Array1User} \
- --password ${Array1Password} \
- --insecure
-
- # Add array2 to authorization
- karavictl storage create \
- --type ${Array2Type} \
- --endpoint ${Array2Endpoint} \
- --system-id ${Array2SystemID} \
- --user ${Array2User} \
- --password ${Array2Password} \
- --insecure
-
- echo === Creating Tenant ===
- karavictl tenant create -n $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}"
-
- echo === Creating Role ===
- karavictl role create \
- --role=${RoleName}=${Array1Type}=${Array1SystemID}=${Array1Pool}=${RoleQuota} \
- --role=${RoleName}=${Array2Type}=${Array2SystemID}=${Array2Pool}=${RoleQuota}
-
-   echo === Binding Role ===
- karavictl rolebinding create --tenant $TenantName --role $RoleName --insecure --addr "grpc.${AuthorizationProxyHost}"
- ```
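-
-To verify the configuration, the created objects can be listed with `karavictl` (a sketch; it assumes the `list` subcommands are available in your karavictl version, and the `--insecure` and `--addr` flags follow the same rules as above):
-
-```console
-karavictl storage list --insecure
-karavictl role list
-karavictl tenant list --insecure --addr "grpc.${AuthorizationProxyHost}"
-```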
-
-### Generate a Token
-
-After creating the role bindings, the next logical step is to generate the access token. The storage admin is responsible for generating and sending the token to the Kubernetes tenant admin.
-
->__Note__:
-> - The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-> - This sample copies the token directly to the Kubernetes cluster master node. The requirement here is that the token must be copied and/or stored in any location accessible to the Kubernetes tenant admin.
-
- ```
- echo === Generating token ===
- karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' > token.yaml
-
- echo === Copy token to Driver Host ===
-   sshpass -p $DriverHostVMPassword scp token.yaml ${DriverHostVMUser}@${DriverHostVMIP}:/tmp/token.yaml
- ```
-
-### Copy the karavictl Binary to the Kubernetes Master Node
-
-The karavictl binary is available on the CSM for Authorization proxy server. It needs to be copied to the Kubernetes master node so that the Kubernetes tenant admins can configure the Dell CSI driver with CSM for Authorization.
-
-```
-sshpass -p dangerous scp bin/karavictl root@10.247.96.174:/tmp/karavictl
-```
-
->__Note__: The storage admin is responsible for copying the binary to a location accessible by the Kubernetes tenant admin.
-
-## Configuring a Dell CSI Driver with CSM for Authorization
-
-The second part of CSM for Authorization deployment is to configure one or more of the [supported](../../authorization#supported-csi-drivers) CSI drivers. This is controlled by the Kubernetes tenant admin.
-
-### Configuring a Dell CSI Driver
-
-Given a setup where Kubernetes, a storage system, and the CSM for Authorization Proxy Server are deployed, follow the steps below to configure the CSI Drivers to work with the Authorization sidecar:
-
-1. Create the secret token in the namespace of the driver.
-
- ```console
-   # It is assumed that array type powermax has the namespace "powermax", powerflex has the namespace "vxflexos", and powerscale has the namespace "isilon".
- kubectl apply -f /tmp/token.yaml -n powermax
- kubectl apply -f /tmp/token.yaml -n vxflexos
- kubectl apply -f /tmp/token.yaml -n isilon
- ```
-
-2. Edit the following parameters in the samples/secret/karavi-authorization-config.json file in the [CSI PowerFlex](https://github.com/dell/csi-powerflex/tree/main/samples), [CSI PowerMax](https://github.com/dell/csi-powermax/tree/main/samples/secret), or [CSI PowerScale](https://github.com/dell/csi-powerscale/tree/main/samples/secret) driver and update/add connection information for one or more backend storage arrays. If multiple CSI drivers are configured on the same Kubernetes cluster, the port range in the *endpoint* parameter must be different for each driver.
-
- | Parameter | Description | Required | Default |
- | --------- | ----------- | -------- |-------- |
- | username | Username for connecting to the backend storage array. This parameter is ignored. | No | - |
-   | password | Password for connecting to the backend storage array. This parameter is ignored. | No | - |
- | intendedEndpoint | HTTPS REST API endpoint of the backend storage array. | Yes | - |
- | endpoint | HTTPS localhost endpoint that the authorization sidecar will listen on. | Yes | https://localhost:9400 |
- | systemID | System ID of the backend storage array. | Yes | " " |
- | insecure | A boolean that enables/disables certificate validation of the backend storage array. This parameter is not used. | No | true |
- | isDefault | A boolean that indicates if the array is the default array. This parameter is not used. | No | default value from values.yaml |
-
-
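-For illustration only, a hypothetical `karavi-authorization-config.json` for a single array might look like the following sketch (all values are placeholders; the username and password are ignored, as noted above):
-
-```json
-[
-  {
-    "username": "ignored",
-    "password": "ignored",
-    "intendedEndpoint": "https://<array-management-endpoint>",
-    "endpoint": "https://localhost:9400",
-    "systemID": "<system-id>",
-    "insecure": true,
-    "isDefault": true
-  }
-]
-```
-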
-Create the karavi-authorization-config secret using the following command:
-
-`kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic karavi-authorization-config --from-file=config=samples/secret/karavi-authorization-config.json -o yaml --dry-run=client | kubectl apply -f -`
-
->__Note__:
-> - Create the driver secret as you normally would, but update/add the connection information so that the driver communicates with the sidecar instead of the backend storage array, and scrub the username and password.
-> - For PowerScale, the *systemID* will be the *clusterName* of the array.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-3. Create the proxy-server-root-certificate secret.
-
- If running in *insecure* mode, create the secret with empty data:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-literal=rootCertificate.pem= -o yaml --dry-run=client | kubectl apply -f -`
-
- Otherwise, create the proxy-server-root-certificate secret with the appropriate file:
-
- `kubectl -n [CSI_DRIVER_NAMESPACE] create secret generic proxy-server-root-certificate --from-file=rootCertificate.pem=/path/to/rootCA -o yaml --dry-run=client | kubectl apply -f -`
-
-
->__Note__: Follow the steps below for additional configurations to one or more of the supported CSI drivers.
-#### PowerFlex
-
-Please refer to step 5 in the [installation steps for PowerFlex](../../csidriver/installation/helm/powerflex) to edit the parameters in the samples/config.yaml file to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Create vxflexos-config secret using the following command:
-
- `kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-Please refer to step 9 in the [installation steps for PowerFlex](../../csidriver/installation/helm/powerflex) to edit the parameters in the *myvalues.yaml* file to communicate with the sidecar.
-
-3. Enable CSM for Authorization and provide *proxyHost* address
-
-4. Install the CSI PowerFlex driver
-#### PowerMax
-
-Please refer to step 7 in the [installation steps for PowerMax](../../csidriver/installation/helm/powermax) to edit the parameters in *my-powermax-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-3. Install the CSI PowerMax driver
-
-#### PowerScale
-
-Please refer to step 5 in the [installation steps for PowerScale](../../csidriver/installation/helm/isilon) to edit the parameters in *my-isilon-settings.yaml* to communicate with the sidecar.
-
-1. Update *endpointPort* to match the endpoint port number set in samples/secret/karavi-authorization-config.json
-
-*Notes:*
-> - In *my-isilon-settings.yaml*, endpointPort acts as a default value. If endpointPort is not specified in *my-isilon-settings.yaml*, then it should be specified in the *endpoint* parameter of samples/secret/secret.yaml.
-> - The *isilon-creds* secret has a *mountEndpoint* parameter which must be set to the hostname or IP address of the PowerScale OneFS API server, for example, 10.0.0.1.
-
-2. Enable CSM for Authorization and provide *proxyHost* address
-
-Please refer to step 6 in the [installation steps for PowerScale](../../csidriver/installation/helm/isilon) to edit the parameters in the samples/secret/secret.yaml file to communicate with the sidecar.
-
-3. Update *endpoint* to match the endpoint set in samples/secret/karavi-authorization-config.json
-
->__Note__: Only add the endpoint port if it has not been set in *my-isilon-settings.yaml*.
-
-4. Create the isilon-creds secret using the following command:
-
- `kubectl create secret generic isilon-creds -n isilon --from-file=config=secret.yaml -o yaml --dry-run=client | kubectl apply -f -`
-
-5. Install the CSI PowerScale driver
-## Updating CSM for Authorization Proxy Server Configuration
-
-CSM for Authorization has a subset of configuration parameters that can be updated dynamically:
-
-| Parameter | Type | Default | Description |
-| --------- | ---- | ------- | ----------- |
-| certificate.crtFile | String | "" |Path to the host certificate file |
-| certificate.keyFile | String | "" |Path to the host private key file |
-| certificate.rootCertificate | String | "" |Path to the root CA file |
-| web.jwtsigningsecret | String | "secret" |The secret used to sign JWT tokens |
-
-Updating configuration parameters can be done by editing the `karavi-config-secret` on the CSM for Authorization proxy server. The secret can be queried using k3s and kubectl like so:
-
-`k3s kubectl -n karavi get secret/karavi-config-secret`
-
-To update or add parameters, you must edit the base64 encoded data in the secret. The `karavi-config-secret` data can be decoded like so:
-
-`k3s kubectl -n karavi get secret/karavi-config-secret -o yaml | grep config.yaml | head -n 1 | awk '{print $2}' | base64 -d`
-
-Save the output to a file or copy it to an editor to make changes. Once you are done with the changes, you must encode the data to base64. If your changes are in a file, you can encode it like so:
-
-`cat <file> | base64`
-
-Copy the new, encoded data and edit the `karavi-config-secret` with the new data. Run this command to edit the secret:
-
-`k3s kubectl -n karavi edit secret/karavi-config-secret`
-
-Replace the data in `config.yaml` under the `data` field with your new, encoded data. Save the changes and CSM for Authorization will read the changed secret.
-
->__Note__: If you are updating the signing secret, the tenants need to be updated with new tokens via the `karavictl generate token` command, like so. The `--insecure` flag is only necessary if certificates were not provided in `$HOME/.karavi/config.json`.
-
-`karavictl generate token --tenant $TenantName --insecure --addr "grpc.${AuthorizationProxyHost}" | jq -r '.Token' | kubectl -n $namespace apply -f -`
-
-## CSM for Authorization Proxy Server Dynamic Configuration Settings
-
-Some settings are not stored in the `karavi-config-secret` but in the csm-config-params ConfigMap, such as LOG_LEVEL and LOG_FORMAT. To update the CSM for Authorization logging settings during runtime, run the below command on the K3s cluster, make your changes, and save the updated configmap data.
-
-```
-k3s kubectl -n karavi edit configmap/csm-config-params
-```
-
-This edit will not update the logging level for the sidecar-proxy containers running in the CSI Driver pods. To update the sidecar-proxy logging levels, you must update the associated CSI Driver ConfigMap in a similar fashion:
-
-```
-kubectl -n [CSM_CSI_DRIVER_NAMESPACE] edit configmap/[DRIVER]-config-params
-```
-
-Using PowerFlex as an example, `kubectl -n vxflexos edit configmap/vxflexos-config-params` can be used to update the logging level of the sidecar-proxy and the driver.
+Installation information for CSM Authorization can be found in this section.
diff --git a/content/v3/authorization/deployment/helm/_index.md b/content/v3/authorization/deployment/helm/_index.md
new file mode 100644
index 0000000000..76d0f47c1a
--- /dev/null
+++ b/content/v3/authorization/deployment/helm/_index.md
@@ -0,0 +1,374 @@
+---
+title: Helm
+linktitle: Helm
+description: >
+ Dell Technologies (Dell) Container Storage Modules (CSM) for Authorization Helm deployment
+---
+
+CSM Authorization can be installed by using the provided Helm v3 charts on Kubernetes platforms.
+
+The following CSM Authorization components are installed in the specified namespace:
+- proxy-service, which forwards requests from the CSI Driver to the backend storage array
+- tenant-service, which configures tenants, role bindings, and generates JSON Web Tokens
+- role-service, which configures roles for tenants to be bound to
+- storage-service, which configures backend storage arrays for the proxy-service to forward requests to
+
+The following third-party components are installed in the specified namespace:
+- redis, which stores data regarding tenants and their volume ownership, quota, and revocation status
+- redis-commander, a web management tool for Redis
+
+The following third-party components are optionally installed in the specified namespace:
+- cert-manager, which optionally provides a self-signed certificate to configure the CSM Authorization Ingresses
+- nginx-ingress-controller, which fulfills the CSM Authorization Ingresses
+
+## Install CSM Authorization
+
+**Steps**
+1. Run `git clone https://github.com/dell/helm-charts.git` to clone the git repository.
+
+2. Ensure that you have created a namespace where you want to install CSM Authorization. You can run `kubectl create namespace authorization` to create a new one.
+
+3. Prepare `samples/csm-authorization/config.yaml` which contains the JWT signing secret. The following table lists the configuration parameters.
+
+ | Parameter | Description | Required | Default |
+ | --------- | ------------------------------------------------------------ | -------- | ------- |
+   | web.jwtsigningsecret | String used to sign JSON Web Tokens | true | secret |
+
+ Example:
+
+ ```yaml
+ web:
+ jwtsigningsecret: randomString123
+ ```
+
+ After editing the file, run the following command to create a secret called `karavi-config-secret`:
+
+ `kubectl create secret generic karavi-config-secret -n authorization --from-file=config.yaml=samples/csm-authorization/config.yaml`
+
+ Use the following command to replace or update the secret:
+
+ `kubectl create secret generic karavi-config-secret -n authorization --from-file=config=samples/csm-authorization/config.yaml -o yaml --dry-run=client | kubectl replace -f -`
+
+4. Copy the default values.yaml file `cp charts/csm-authorization/values.yaml myvalues.yaml`
+
+5. Look over all the fields in `myvalues.yaml` and fill in/adjust any as needed.
+
+| Parameter | Description | Required | Default |
+| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- |
+| **ingress-nginx** | This section configures the enablement of the NGINX Ingress Controller. | - | - |
+| enabled | Enable/Disable deployment of the NGINX Ingress Controller. Set to false if you already have an Ingress Controller installed. | No | true |
+| **cert-manager** | This section configures the enablement of cert-manager. | - | - |
+| enabled | Enable/Disable deployment of cert-manager. Set to false if you already have cert-manager installed. | No | true |
+| **authorization** | This section configures the CSM-Authorization components. | - | - |
+| authorization.images.proxyService | The image to use for the proxy-service. | Yes | dellemc/csm-authorization-proxy:nightly |
+| authorization.images.tenantService | The image to use for the tenant-service. | Yes | dellemc/csm-authorization-tenant:nightly |
+| authorization.images.roleService | The image to use for the role-service. | Yes | dellemc/csm-authorization-proxy:nightly |
+| authorization.images.storageService | The image to use for the storage-service. | Yes | dellemc/csm-authorization-storage:nightly |
+| authorization.images.opa | The image to use for Open Policy Agent. | Yes | openpolicyagent/opa |
+| authorization.images.opaKubeMgmt | The image to use for Open Policy Agent kube-mgmt. | Yes | openpolicyagent/kube-mgmt:0.11 |
+| authorization.hostname | The hostname to configure the self-signed certificate (if applicable) and the proxy, tenant, role, and storage service Ingresses. | Yes | csm-authorization.com |
+| authorization.logLevel | CSM Authorization log level. Allowed values: “error”, “warn”/“warning”, “info”, “debug”. | Yes | debug |
+| authorization.zipkin.collectoruri | The URI of the Zipkin instance to export traces. | No | - |
+| authorization.zipkin.probability | The ratio of traces to export. | No | - |
+| authorization.proxyServerIngress.ingressClassName | The ingressClassName of the proxy-service Ingress. | Yes | - |
+| authorization.proxyServerIngress.hosts | Additional host rules to be applied to the proxy-service Ingress. | No | - |
+| authorization.proxyServerIngress.annotations | Additional annotations for the proxy-service Ingress. | No | - |
+| authorization.tenantServiceIngress.ingressClassName | The ingressClassName of the tenant-service Ingress. | Yes | - |
+| authorization.tenantServiceIngress.hosts | Additional host rules to be applied to the tenant-service Ingress. | No | - |
+| authorization.tenantServiceIngress.annotations | Additional annotations for the tenant-service Ingress. | No | - |
+| authorization.roleServiceIngress.ingressClassName | The ingressClassName of the role-service Ingress. | Yes | - |
+| authorization.roleServiceIngress.hosts | Additional host rules to be applied to the role-service Ingress. | No | - |
+| authorization.roleServiceIngress.annotations | Additional annotations for the role-service Ingress. | No | - |
+| authorization.storageServiceIngress.ingressClassName | The ingressClassName of the storage-service Ingress. | Yes | - |
+| authorization.storageServiceIngress.hosts | Additional host rules to be applied to the storage-service Ingress. | No | - |
+| authorization.storageServiceIngress.annotations | Additional annotations for the storage-service Ingress. | No | - |
+| **redis** | This section configures Redis. | - | - |
+| redis.images.redis | The image to use for Redis. | Yes | redis:6.0.8-alpine |
+| redis.images.commander | The image to use for Redis Commander. | Yes | rediscommander/redis-commander:latest |
+| redis.storageClass | The storage class for Redis to use for persistence. If not supplied, the default storage class is used. | No | - |
+
+*NOTE*:
+- The tenant, role, and storage services use gRPC. If the Ingress Controller requires annotations to support gRPC, they must be supplied; see the example `myvalues.yaml` sketch below.
+
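+For example, a trimmed `myvalues.yaml` for an environment that already runs its own NGINX Ingress Controller and cert-manager might look like the following sketch (hostname and annotation values are illustrative; `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"` is the ingress-nginx annotation for gRPC backends):
+
+```yaml
+ingress-nginx:
+  enabled: false
+cert-manager:
+  enabled: false
+authorization:
+  hostname: csm-authorization.example.com
+  logLevel: info
+  proxyServerIngress:
+    ingressClassName: nginx
+  tenantServiceIngress:
+    ingressClassName: nginx
+    annotations:
+      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
+  roleServiceIngress:
+    ingressClassName: nginx
+    annotations:
+      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
+  storageServiceIngress:
+    ingressClassName: nginx
+    annotations:
+      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
+```
+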
+6. Install CSM Authorization using `helm`:
+
+To install CSM Authorization with the service Ingresses using your own certificate, run:
+
+```
+helm -n authorization install authorization -f myvalues.yaml charts/csm-authorization \
+--set-file authorization.certificate=<location-of-certificate-file> \
+--set-file authorization.privateKey=<location-of-private-key-file>
+```
+
+To install CSM Authorization with the service Ingresses using a self-signed certificate generated via cert-manager, run:
+
+```
+helm -n authorization install authorization -f myvalues.yaml charts/csm-authorization
+```
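+
+After either command completes, the release and the deployed pods and Ingresses can be verified with standard `helm` and `kubectl` commands, for example:
+
+```
+helm -n authorization list
+kubectl -n authorization get pods,ingress
+```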
+
+## Install Karavictl
+
+The Karavictl CLI can be obtained directly from the [GitHub repository's releases](https://github.com/dell/karavi-authorization/releases) section.
+
+In order to run `karavictl` commands, the binary needs to be in a directory on your PATH, for example `/usr/local/bin`.
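+
+For example, assuming the binary has already been downloaded from the releases page into the current directory as `karavictl`:
+
+```
+chmod +x ./karavictl
+sudo mv ./karavictl /usr/local/bin/karavictl
+karavictl --help
+```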
+
+Karavictl commands and intended use can be found [here](../../cli/).
+
+## Configuring the CSM Authorization Proxy Server
+
+The storage administrator must first configure the proxy server with the following:
+- Storage systems
+- Tenants
+- Roles
+- Role bindings
+
+This is done using `karavictl` to connect to the storage, tenant, and role services. In this example, we will be referencing an installation using `csm-authorization.com` as the authorization.hostname value and the NGINX Ingress Controller accessed via the cluster's master node.
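+
+If `csm-authorization.com` and its service subdomains are not resolvable in your environment, one option is to point them at the master node (or load balancer) IP in `/etc/hosts`; a sketch, where `<master-node-IP>` is a placeholder for that address:
+
+```
+<master-node-IP> csm-authorization.com tenant.csm-authorization.com role.csm-authorization.com storage.csm-authorization.com
+```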
+
+Run `kubectl -n authorization get ingress` and `kubectl -n authorization get service` to see the Ingress rules for these services and the exposed port for accessing these services via the LoadBalancer. For example:
+
+```
+# kubectl -n authorization get ingress
+NAME CLASS HOSTS ADDRESS PORTS AGE
+proxy-server nginx csm-authorization.com 80, 443 86s
+role-service nginx role.csm-authorization.com 80, 443 86s
+storage-service nginx storage.csm-authorization.com 80, 443 86s
+tenant-service nginx tenant.csm-authorization.com 80, 443 86s
+
+# kubectl -n authorization get service
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+authorization-cert-manager ClusterIP 10.104.35.150 9402/TCP 28s
+authorization-cert-manager-webhook ClusterIP 10.97.179.94 443/TCP 27s
+authorization-ingress-nginx-controller LoadBalancer 10.108.115.217 80:30080/TCP,443:30016/TCP 27s
+authorization-ingress-nginx-controller-admission ClusterIP 10.103.143.215