Fixing Broken links #1038

Merged: 1 commit, Mar 21, 2024
4 changes: 2 additions & 2 deletions content/docs/cert-csi/_index.md
@@ -375,7 +375,7 @@ storageClasses:

> NOTE: For testing/debugging purposes, it can be useful to use the `--no-cleanup` flag so that resources are not deleted.

- > NOTE: If you are using CSI PowerScale with [SmartQuotas](../../features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to time out due to the way PowerScale provisions storage. Set `storageClasses.expansion` to `false` to skip this suite.
+ > NOTE: If you are using CSI PowerScale with [SmartQuotas](../csidriver/features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to time out due to the way PowerScale provisions storage. Set `storageClasses.expansion` to `false` to skip this suite.

```bash
cert-csi certify --cert-config <path-to-config> --vsc <volume-snapshot-class>
```

@@ -532,7 +532,7 @@ Run `cert-csi test clone-volume -h` for more options.

> Raw block volumes cannot be verified since there is no filesystem.

- > If you are using CSI PowerScale with [SmartQuotas](../../features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to time out due to the way PowerScale provisions storage.
+ > If you are using CSI PowerScale with [SmartQuotas](../csidriver/features/powerscale/#usage-of-smartquotas-to-limit-storage-consumption) disabled, the `Volume Expansion` suite is expected to time out due to the way PowerScale provisions storage.
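The link fixes in this file change the relative base of the SmartQuotas anchor. As a quick illustration (the page path below is a hypothetical assumption), resolving both links with Python's `posixpath` shows why the old link escaped the docs tree:

```python
import posixpath

# Hypothetical location of the cert-csi page inside the docs tree (assumption).
PAGE_DIR = "docs/cert-csi"

def resolve(base_dir: str, link: str) -> str:
    """Resolve a relative markdown link against the page's directory."""
    return posixpath.normpath(posixpath.join(base_dir, link))

# Old link climbs two levels and lands outside docs/ (a broken target).
print(resolve(PAGE_DIR, "../../features/powerscale/"))
# New link climbs one level into the csidriver section (a valid target).
print(resolve(PAGE_DIR, "../csidriver/features/powerscale/"))
```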

```bash
cert-csi test expansion --sc <storage class>
```
2 changes: 1 addition & 1 deletion content/docs/csidriver/release/powerstore.md
@@ -34,7 +34,7 @@ description: Release notes for PowerStore CSI driver
| If the NVMeFC pod is not getting created and the host loses the SSH connection, causing the driver pods to go into an error state | Remove the nvme_tcp module from the host in case of an NVMeFC connection |
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node. |
| When driver node pods enter CrashLoopBackOff and PVC remains in pending state with one of the following events:<br /> 1. failed to provision volume with StorageClass `<storage-class-name>`: error generating accessibility requirements: no available topology found <br /> 2. waiting for a volume to be created, either by external provisioner "csi-powerstore.dellemc.com" or manually created by system administrator. | Check whether all array details present in the secret file are valid and remove any invalid entries if present. <br/>Redeploy the driver. |
- | If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission (https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html). Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. |
+ | If an ephemeral pod is not being created in OpenShift 4.13 and is failing with the error "error when creating pod: the pod uses an inline volume provided by CSIDriver csi-powerstore.dellemc.com, and the namespace has a pod security enforcement level that is lower than privileged." | This issue occurs because OpenShift 4.13 introduced the CSI Volume Admission plugin to restrict the use of a CSI driver capable of provisioning CSI ephemeral volumes during pod admission https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/ephemeral-storage-csi-inline.html. Therefore, an additional label "security.openshift.io/csi-ephemeral-volume-profile" needs to be added to the CSIDriver object to support inline ephemeral volumes. |
| In OpenShift 4.13, the root user is not allowed to perform write operations on NFS shares, when root squashing is enabled. | The workaround for this issue is to disable root squashing by setting allowRoot: "true" in the NFS storage class. |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs, and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| If two separate networks are configured for ISCSI and NVMeTCP, the driver may encounter difficulty identifying the second network (e.g., NVMeTCP). | This is a known issue, and the workaround involves creating a single network on the array to serve both ISCSI and NVMeTCP purposes. |
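The root-squash workaround in the table above can be sketched as a StorageClass fragment. This is a sketch only: apart from `allowRoot`, which the note specifies, the metadata name and parameter set here are hypothetical placeholders.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-nfs            # hypothetical name
provisioner: csi-powerstore.dellemc.com
parameters:
  csi.storage.k8s.io/fstype: nfs
  allowRoot: "true"               # disables root squashing (workaround above)
```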
@@ -137,7 +137,7 @@ replication:

The CRDs for replication can be obtained and installed from the csm-replication project on GitHub. Use `csm-replication/deploy/replicationcrds.all.yaml`, located in the csm-replication git repo, for the installation.

- CRDs should be configured during the replication prepare stage with repctl as described in [install-repctl](../../../../replication/deployment/install-repctl)
+ CRDs should be configured during the replication prepare stage with repctl as described in [install-repctl](../../../helm/modules/installation/replication/install-repctl)

1. Create namespace.
Execute `kubectl create namespace powerstore` to create the powerstore namespace (if not already present). Note that the namespace can be any user-defined name; in this example, we assume that the namespace is 'powerstore'.
@@ -69,7 +69,7 @@ description: >
- Select *Create instance* under the provided Container Storage Module API
- Use the CR backup from step 1 to manually map desired settings to the new CSI driver
- As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
- - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.7 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v270_k8s_127.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.7 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v270.yaml#L28C7-L28C20)
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.7 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v270_k8s_127.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.10 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v2100.yaml#L28C7-L28C20)
>NOTE: Uninstallation of the driver and the Operator is non-disruptive for mounted volumes. Nonetheless, you cannot create new volumes or snapshots, or move a Pod.

## Testing
2 changes: 1 addition & 1 deletion content/v2/cosidriver/installation/helm.md
@@ -38,7 +38,7 @@ Installing any of the CSI Driver components using Helm requires a few utilities
1. Run `git clone -b main https://github.com/dell/helm-charts.git` to clone the git repository.
2. Ensure that you have created the namespace where you want to install the driver. You can run `kubectl create namespace dell-cosi` to create a new one. The use of _dell-cosi_ as the namespace is just an example. You can choose any name for the namespace.
3. Copy the _charts/cosi/values.yaml_ into a new location with name _my-cosi-values.yaml_, to customize settings for installation.
- 4. Create a new file called _my-cosi-configuration.yaml_, and copy the settings available in the [Configuration File](./configuration_file.md) page.
+ 4. Create a new file called _my-cosi-configuration.yaml_, and copy the settings available in the [Configuration File](../configuration_file) page.
5. Edit *my-cosi-values.yaml* to set the following parameters for your installation:
The following table lists the primary configurable parameters of the COSI driver Helm chart and their default values. More detailed information can be found in the [`values.yaml`](https://github.com/dell/helm-charts/blob/master/charts/cosi/values.yaml) file in this repository.

2 changes: 1 addition & 1 deletion content/v2/csidriver/_index.md
@@ -61,7 +61,7 @@ The CSI Drivers by Dell implement an interface between [CSI](https://kubernetes-
{{</table>}}

>Note: To connect to a PowerFlex 4.5 array, the SDC image will need to be changed to dellemc/sdc:4.5.
- >- If using Helm to install, you will need to make this change in your values.yaml file. See [helm install documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/helm/powerflex/) for details.
+ >- If using Helm to install, you will need to make this change in your values.yaml file. See [helm install documentation](https://dell.github.io/csm-docs/docs/deployment/helm/drivers/installation/powerflex/) for details.
>- If using CSM-Operator to install, you will need to make this change in your samples file. See [operator install documentation](https://dell.github.io/csm-docs/docs/deployment/csmoperator/drivers/powerflex/) for details.
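The values.yaml change described in the first bullet might look like the fragment below. This is a sketch only: the `images.powerflexSdc` key name is an assumption, so verify it against the chart's own values.yaml before use.

```yaml
# Sketch of a values.yaml override (key name is an assumption):
images:
  powerflexSdc: dellemc/sdc:4.5   # SDC image required for PowerFlex 4.5 arrays
```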

### Backend Storage Details
4 changes: 2 additions & 2 deletions content/v2/csidriver/features/powerflex.md
@@ -824,7 +824,7 @@ allowedTopologies:
- "true"
```

- [`helm/csi-vxflexos/values.yaml`](https://github.com/dell/csi-powerflex/blob/main/helm/csi-vxflexos/values.yaml)
+ [`csi-vxflexos/values.yaml`](https://github.com/dell/helm-charts/blob/main/charts/csi-vxflexos/values.yaml)
```yaml
...
enableQuota: false
@@ -916,4 +916,4 @@ If such a node is not available, the pods stay in Pending state. This means pods

Without storage capacity tracking, pods get scheduled on a node satisfying the topology constraints. If the required capacity is not available, volume attachment to the pods fails, and pods remain in ContainerCreating state. Storage capacity tracking eliminates unnecessary scheduling of pods when there is insufficient capacity.

The attribute `storageCapacity.enabled` in `values.yaml` can be used to enable or disable the feature during driver installation using Helm; it is set to `true` by default. To configure how often the driver checks for changed capacity, set the `storageCapacity.pollInterval` attribute. If the driver is installed via the operator, this interval can be configured in the sample file provided [here](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerflex_v280.yaml) by editing the `--capacity-poll-interval` argument in the provisioner sidecar.
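The Helm attributes described above can be sketched as a `values.yaml` fragment. This is a sketch: the `pollInterval` value format shown is an assumption, so check the chart's own `values.yaml` for the expected form.

```yaml
storageCapacity:
  enabled: true      # storage capacity tracking, on by default per the text above
  pollInterval: 5m   # how often the driver checks for changed capacity (format assumed)
```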
Original file line number Diff line number Diff line change
@@ -31,7 +31,7 @@ description: >
```
2. Map and update the settings from the CR in step 1 to the relevant CSM Operator CR
- As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
- - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.10 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v2100.yaml#L28C7-L28C20)
3. Retain (or do not delete) the secret, namespace, storage classes, and volume snapshot classes from the original deployment as they will be re-used in the CSM operator deployment
4. Uninstall the CR from the CSI Operator
```
@@ -69,9 +69,9 @@ description: >
- Select *Create instance* under the provided Container Storage Module API
- Use the CR backup from step 1 to manually map desired settings to the new CSI driver
- As the yaml content may differ, ensure the values held in the step 1 CR backup are present in the new CR before installing the new driver
- - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.6 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v260.yaml#L28C7-L28C20)
+ - Ex: spec.driver.fsGroupPolicy in [PowerMax 2.6 for CSI Operator](https://github.com/dell/dell-csi-operator/blob/main/samples/powermax_v260_k8s_126.yaml#L17C5-L17C18) maps to spec.driver.csiDriverSpec.fSGroupPolicy in [PowerMax 2.10 for CSM Operator](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powermax_v2100.yaml#L28C7-L28C20)
>NOTE: Uninstallation of the driver and the Operator is non-disruptive for mounted volumes. Nonetheless, you cannot create new volumes or snapshots, or move a Pod.

## Testing

To test that the new installation is working, please follow the steps outlined [here](../../test) for your specific driver.
2 changes: 1 addition & 1 deletion content/v2/csidriver/troubleshooting/powerscale.md
@@ -17,4 +17,4 @@ Here are some installation failures that might be encountered and how to mitigate them.
| Driver node pod is in "CrashLoopBackOff" as the generated "Node ID" does not have a proper FQDN. | This might be due to the "dnsPolicy" implemented on the driver node pod, which may differ across networks. <br><br> This parameter is configurable in both the Helm and Operator installers, and the user can try different "dnsPolicy" values according to the environment.|
| The `kubectl logs isilon-controller-0 -n isilon -c driver` logs show the driver message **Authentication failed. Trying to re-authenticate** when using session-based authentication | The issue has been resolved in OneFS 9.3 onwards. For OneFS versions prior to 9.3 with session-based authentication, either SmartConnect can be created against a single node of Isilon, or the CSI Driver can be installed/pointed to a particular node of the Isilon; alternatively, basic authentication can be used by setting isiAuthType in `values.yaml` to 0 |
| When an attempt is made to create more than one ReadOnly PVC from the same volume snapshot, the second and subsequent requests result in PVCs in state `Pending`, with a warning `another RO volume from this snapshot is already present`. This is because the driver allows only one RO volume from a specific snapshot at any point in time, to allow faster creation (within a few seconds) of a RO PVC from a volume snapshot irrespective of the size of the volume snapshot. | Wait for the deletion of the first RO PVC created from the same volume snapshot. |
- |Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/csi-powerscale/blob/main/helm/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
+ |Driver install or upgrade fails because of an incompatible Kubernetes version, even though the version seems to be within the range of compatibility. For example: Error: UPGRADE FAILED: chart requires kubeVersion: >= 1.22.0 < 1.25.0 which is incompatible with Kubernetes V1.22.11-mirantis-1 | If you are using an extended Kubernetes version, please see the [helm Chart](https://github.com/dell/helm-charts/blob/main/charts/csi-isilon/Chart.yaml) and use the alternate kubeVersion check that is provided in the comments. Please note that this is not meant to be used to enable the use of pre-release alpha and beta versions, which is not supported.|
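The alternate kubeVersion check mentioned in the workaround is typically a looser semver range in `Chart.yaml`. The fragment below is a sketch and the exact range is an assumption; use the range provided in the chart's own comments.

```yaml
# Hypothetical Chart.yaml fragment (range is an assumption):
# Appending "-0" lets vendor-suffixed versions such as V1.22.11-mirantis-1 match.
kubeVersion: ">= 1.22.0-0 < 1.25.0-0"
```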
2 changes: 1 addition & 1 deletion content/v3/csidriver/installation/helm/isilon.md
@@ -107,7 +107,7 @@ CRDs should be configured during replication prepare stage with repctl as described
4. Copy *helm/csi-isilon/values.yaml* to a new location with a name such as *my-isilon-settings.yaml*, to customize settings for installation.
5. Edit *my-isilon-settings.yaml* to set the following parameters for your installation:
The following table lists the primary configurable parameters of the PowerScale driver Helm chart and their default values. More detailed information can be
- found in the [`values.yaml`](https://github.com/dell/csi-powerscale/blob/master/helm/csi-isilon/values.yaml) file in this repository.
+ found in the [`values.yaml`](https://github.com/dell/helm-charts/tree/main/charts/csi-isilon/values.yaml) file in this repository.

| Parameter | Description | Required | Default |
| --------- | ----------- | -------- |-------- |
2 changes: 1 addition & 1 deletion content/v3/csidriver/installation/operator/powermax.md
@@ -4,7 +4,7 @@ description: >
Installing CSI Driver for PowerMax via Operator
---
{{% pageinfo color="primary" %}}
- The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/csidriver/installation/operator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.
+ The Dell CSI Operator is no longer actively maintained or supported. Dell CSI Operator has been replaced with [Dell CSM Operator](https://dell.github.io/csm-docs/docs/deployment/csmoperator/). If you are currently using Dell CSI Operator, refer to the [operator migration documentation](https://dell.github.io/csm-docs/docs/deployment/csmoperator/operator_migration/) to migrate from Dell CSI Operator to Dell CSM Operator.

{{% /pageinfo %}}
{{% pageinfo color="primary" %}} Linked Proxy mode for the CSI reverse proxy is no longer actively maintained or supported and will be deprecated in CSM 1.9. It is highly recommended that you use standalone mode going forward. {{% /pageinfo %}}
2 changes: 1 addition & 1 deletion content/v3/deployment/csmoperator/drivers/powerflex.md
@@ -34,7 +34,7 @@ kubectl get csm --all-namespaces
- Optionally, enable the SDC monitor by setting the enabled flag for the sdc-monitor sidecar to true. Please note:
- **If using sidecar**, you will need to edit the value fields under the HOST_PID and MDM fields by filling the empty quotes with host PID and the MDM IPs.
- **If not using sidecar**, leave the enabled field set to false.
- ##### Example CR: [samples/storage_csm_powerflex_v290.yaml](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerflex_v270.yaml)
+ ##### Example CR: [samples/storage_csm_powerflex_v2100.yaml](https://github.com/dell/csm-operator/blob/main/samples/storage_csm_powerflex_v2100.yaml)
```yaml
sideCars:
# sdc-monitor is disabled by default, due to high CPU usage
```