CHANGELOG-1.9.0.md

v1.9.4

Changelog since v1.9.3

Known Issues

  • The status field of a csm object deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
  • When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and will be left behind. For example, the karavi-topology-tls, otel-collector-tls, and cert-manager-webhook-ca secrets will not be deleted. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>` (see the cleanup sketch after this list).
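
The leftover-secret cleanup above is a two-step process. A minimal sketch, using the example secret names from the known issue; <observability-namespace> is a placeholder for the namespace the deployment actually used:

```shell
# List every secret in the cluster and look for leftovers from the uninstall.
kubectl get secrets -A

# Delete any leftovers that are no longer wanted, e.g. the TLS secrets
# named in the known issue. <observability-namespace> is a placeholder.
kubectl delete secret -n <observability-namespace> karavi-topology-tls
kubectl delete secret -n <observability-namespace> otel-collector-tls
```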

Changes by Kind

Bugs

  • Restrict the Apex Connectivity Client's access to the kube-proxy port to connections from within the client pod. (#1189)
  • Restrict the Apex Connectivity Client's access to secrets to only the secrets it needs to manage. (#1190)

v1.9.3

Changelog since v1.9.2

Known Issues

  • The status field of a csm object deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods (see the sketch after this list).
  • When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and will be left behind. For example, the karavi-topology-tls, otel-collector-tls, and cert-manager-webhook-ca secrets will not be deleted. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
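
Several known issues in this changelog share the same workaround: judge deployment health from the pods rather than from the csm object's status field. A minimal sketch, assuming the deployment runs in a single placeholder namespace <driver-namespace>:

```shell
# Check the pods directly instead of trusting the csm status field.
kubectl get pods -n <driver-namespace>

# Every pod should be Running with all containers ready (e.g. READY 3/3).
# Describe any pod that is not, to find the cause.
kubectl describe pod -n <driver-namespace> <pod-name>
```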

Changes by Kind

Features

  • Automatically create certificates when deploying observability with csm-operator. (#1158)

Bugs

  • CSM object stays in success state when all CSI PowerFlex pods are failing due to bad secret credentials. (#1156)
  • If Authorization Proxy Server is installed in an alternate namespace by CSM Operator, the deployment fails. (#1157)
  • CSM status is not always accurate when Observability is deployed by CSM Operator without all components enabled. (#1159)
  • CSI driver changes to facilitate SDC brownfield deployments. (#1152)
  • CSM object occasionally stays in failed state when app-mobility is successfully deployed with csm-operator. (#1171)

v1.9.2

Changelog since v1.9.1

Known Issues

  • The status field of a csm object deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
  • The status calculation for the csm object associated with the Authorization Proxy Server, when deployed with CSM Operator, assumes that the proxy server is deployed in the "authorization" namespace. If a different namespace is used, the status stays in the failed state even though the deployment is healthy. As a workaround, we recommend using the "authorization" namespace for the proxy server. If this is not possible, the health of the deployment can be verified by checking the status of all the pods rather than the status field (see the sketch after this list).
  • When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and will be left behind. For example, the karavi-topology-tls, otel-collector-tls, and cert-manager-webhook-ca secrets will not be deleted. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
  • When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the Init:CrashLoopBackOff state.
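
A minimal sketch of the pod-level check for a proxy server installed in an alternate namespace; <alternate-namespace> is a placeholder, and the csm short name is assumed to resolve to the ContainerStorageModule custom resource:

```shell
# The csm status is computed against the "authorization" namespace, so in
# an alternate namespace it can read "failed" even when the deployment is
# healthy. Verify the pods themselves instead.
kubectl get pods -n <alternate-namespace>

# The failed status reported here can be ignored if all pods above are healthy.
kubectl get csm -n <alternate-namespace>
```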

Changes by Kind

Bugs

  • CSM Operator doesn't apply the fsGroupPolicy value to the CSIDriver object. (#1103)
  • CSM Operator does not calculate status correctly when a driver is deployed by itself. (#1130)
  • CSM Operator does not calculate status correctly when application-mobility is deployed by itself. (#1133)
  • CSM Operator intermittently does not calculate status correctly when deploying a driver. (#1137)
  • CSM Operator does not calculate status correctly when deploying the authorization proxy server. (#1143)
  • CSM Operator does not calculate status correctly when deploying observability with csi-powerscale. (#1146)
  • CSM Operator labels csm objects with CSMVersion 1.8.0, an old version. (#1147)

v1.9.1

Changelog since v1.9.0

Known Issues

  • For the CSM Operator released in CSM v1.9.1, a plain driver install (no modules) is always marked as failed in the csm status, even when it succeeds. The driver deployment is still usable as long as all the pods are running and healthy.
  • For the CSM Operator released in CSM v1.9.1, a standalone install of application-mobility (not as a module under the driver csm) is always marked as failed in the csm status, even when it succeeds, because the operator looks for the wrong daemonset label to confirm the deployment. The module is still usable as long as all the pods are running and healthy.
  • For the CSM Operator released in CSM v1.9.1, a driver install will rarely (~2% of the time) have a csm object stuck in a failed state for over an hour even though the deployment succeeds. This is due to a race condition in the status update logic. The driver is still usable as long as all the pods are running and healthy.
  • For the CSM Operator released in CSM v1.9.1, the authorization proxy server csm object status is always failed, even when the deployment succeeds, because the operator looks for a daemonset status when the authorization proxy server deployment has no daemonset. The module is still usable as long as all the pods are running and healthy.
  • For the CSM Operator released in CSM v1.9.1, an install of csi-powerscale with observability is always marked as failed in the csm object status, even when it succeeds, because the operator looks for the legacy name isilon in the status check. The module is still usable as long as all the pods are running and healthy.
  • For csm objects created by the CSM Operator, the CSMVersion label value is v1.8.0 when it should be v1.9.1. As a workaround, the CSM version can be cross-checked via the operator version: the v1.4.1 operator corresponds to CSM v1.9.1 (see the sketch after this list).
  • The status field of a csm object deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
  • When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the Init:CrashLoopBackOff state.
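
The CSMVersion cross-check above can be done by reading the operator's image tag. A minimal sketch; <operator-namespace> is a placeholder, and the exact deployment name depends on how the operator was installed:

```shell
# Print the container images of every deployment in the operator's
# namespace; the CSM Operator image tag gives the operator version
# (v1.4.1 corresponds to CSM v1.9.1).
kubectl get deployments -n <operator-namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'
```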

Changes by Kind

Bugs

  • Multi-controller defect: sidecars time out. (#1110)
  • Volumes fail to mount when using NVMeTCP on PowerStore. (#1108)
  • Operator crashes when deployed from OpenShift with OLM. (#1117)
  • Skip Certificate Validation is not propagated to the Authorization module in CSM Operator. (#1120)
  • CSM Operator does not calculate status correctly when a module is deployed with a driver. (#1122)

v1.9.0

Changelog since v1.8.0

Known Issues

  • For CSM PowerMax, automatic SRDF group creation fails with "Unable to get Remote Port on SAN for Auto SRDF" on PowerMax 10.1 arrays. As a workaround, create the SRDF group manually and add it to the storage class.
  • For the CSM Operator released in CSM v1.9.0, a driver install will rarely (~2% of the time) have a csm object stuck in a failed state for over an hour even though the deployment succeeds. This is due to a race condition in the status update logic.
  • For csm objects created by the CSM Operator, the CSMVersion label value is v1.8.0 when it should be v1.9.0. As a workaround, the CSM version can be cross-checked via the operator version: the v1.4.0 operator corresponds to CSM v1.9.0.
  • The status field of a csm object deployed by CSM Operator may, in limited cases, display an incorrect status for a deployment. As a workaround, the health of the deployment can be determined by checking the health of the pods.
  • When CSM Operator creates a deployment that includes secrets (e.g., application-mobility, observability, cert-manager, velero), these secrets are not deleted on uninstall and will be left behind. For example, the karavi-topology-tls, otel-collector-tls, and cert-manager-webhook-ca secrets will not be deleted. This should not cause any issues on the system. All secrets present on the cluster can be listed with `kubectl get secrets -A`, and any unwanted secrets can be deleted with `kubectl delete secret -n <secret-namespace> <secret-name>`.
  • When the PowerFlex CSI driver is deployed on a host that already has SDC installed, or on a host that does not support automatic SDC installation (non-CoreOS, non-RHEL), the SDC container is unable to detect the existing scini driver. As a result, the powerflex-node pod is stuck in the Init:CrashLoopBackOff state (see the sketch after this list).
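
The SDC symptom above can be confirmed from the node pod's init container. A minimal sketch; <driver-namespace> and the pod name are placeholders, and the init container name (sdc here) is an assumption that may vary by driver version:

```shell
# Find the node pods; with this issue they sit in Init:CrashLoopBackOff.
kubectl get pods -n <driver-namespace>

# Inspect the failing init container to confirm the scini detection failure.
# The container name "sdc" is an assumption; check the kubectl describe
# output for the actual init container name.
kubectl describe pod -n <driver-namespace> <powerflex-node-pod-name>
kubectl logs -n <driver-namespace> <powerflex-node-pod-name> -c sdc
```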

Changes by Kind

Features

  • Support for PowerFlex 4.5. (#1067)
  • Support for OpenShift 4.14. (#1066)
  • Support for Kubernetes 1.28. (#947)
  • CSM PowerMax: support for PowerMax v10.1. (#1062)
  • Update to the latest UBI Micro image for CSM. (#1031)
  • Dell CSI to Dell CSM Operator migration process. (#996)
  • Remove linked proxy mode for PowerMax. (#991)
  • Add support for CSI Spec 1.6. (#905)
  • Helm chart enhancement: container images configurable in values.yaml. (#851)

Bugs

  • Documentation links are broken in a few places. (#1072)
  • Symmetrix APIs are not getting refreshed. (#1070)
  • CSM Doc page: update the link to PowerStore for the Resiliency card. (#1065)
  • Golint is not installing with the go get command. (#1061)
  • cert-csi: cannot configure image locations. (#1059)
  • CSI Health Monitor for Node is missing for CSM PowerFlex in the Operator samples. (#1058)
  • CSI driver: issue with volume creation from one of the worker nodes. (#1057)
  • Missing runtime dependencies reference in the PowerMax README file. (#1056)
  • The PowerFlex Dockerfile incorrectly labels the 2.8.0 version as 2.7.0. (#1054)
  • make gosec errors out in the PowerMax, PowerStore, and PowerScale repos even though gosec is installed. (#1053)
  • The make docker command fails with an error. (#1051)
  • The NFS export gets deleted when one pod is deleted from the multiple pods consuming the same PowerFlex RWX NFS volume. (#1050)
  • Is cert-csi expansion expected to run successfully with enableQuota: false on PowerScale? (#1046)
  • Documentation instructions update: either multipath or the PowerPath software should be enabled for PowerMax. (#1037)
  • Comment out duplicate entries in the sample secret.yaml file. (#1030)
  • Provide more detail about what cert-csi is doing. (#1027)
  • The CSM Installation Wizard issues warnings that are false positives. (#1022)
  • CSI PowerFlex: SDC rename fails when configuring multiple arrays in the secret. (#1020)
  • The karavi-metrics-powerscale pod gets a segmentation violation error during startup. (#1019)
  • Missing error check for the os.Stat call during volume publish. (#1014)
  • PowerFlex RWX volumes have no option to configure the NFS export host access IP address. (#1011)
  • cert-csi: an invalid path in go.mod prevents installation. (#1010)
  • Cert-CSI from release v1.2.0 downloads the wrong version, v0.8.1. (#1009)
  • Too many login sessions in the gopowerstore client cause unexpected session termination in the UI. (#1006)
  • CSM Replication - secret file requirement for both sites not documented. (#1002)
  • Volume health fails because it looks at the wrong path. (#999)
  • X_CSI_AUTH_TYPE cannot be set in CSM Operator. (#990)
  • Allow volume prefix to be set via CSM operator. (#989)
  • CSM Operator fails to install CSM Replication on the remote cluster. (#988)
  • storageCapacity can be set for CSI PowerMax with CSM Operator even though it is unsupported. (#983)
  • Update resource limits for controller-manager to fix an OOMKilled error. (#982)
  • Unable to take volume snapshots. (#975)
  • Gopowerscale unit test fails. (#771)