This repository has been archived by the owner on Feb 8, 2024. It is now read-only.

CORTX-29979: Multiple Data pod deployment: Support Node group in CDF #2116

Merged
merged 1 commit into Seagate:main from CORTX-29979_1 on Jun 22, 2022

Conversation

SwapnilGaonkar7
Contributor

@SwapnilGaonkar7 commented Jun 14, 2022

Solution:
Added support for 'node_group' in CDF format
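
For illustration, a minimal fragment showing where the new field sits in the CDF (the values here are placeholders; the full CDF used for testing is shown in the test results below):

nodes:
  - hostname: localhost                      # [user@]hostname
    node_group: node-group-1.example.com     # new field: node group this node belongs to
    data_iface: eth1                         # remaining node fields are unchanged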

Test:
Tested 3-node deployment and single-node bootstrap

Signed-off-by: Swapnil Gaonkar swapnil.gaonkar@seagate.com

cla-bot added the cla-signed label on Jun 14, 2022
@SwapnilGaonkar7 force-pushed the CORTX-29979_1 branch 2 times, most recently from 9c6895e to a77c293 on June 14, 2022 13:06
@SwapnilGaonkar7 marked this pull request as ready for review June 14, 2022 14:10
auto-assign bot requested review from d-nayak and vaibhavparatwar June 14, 2022 14:10
@vaibhavparatwar
Contributor

Sanity has one known failure that is not related to this PR. @mssawant, please review when you get a chance.

Contributor

@vaibhavparatwar left a comment

@SwapnilGaonkar7 please add all unit/dev testing results for this PR.

@SwapnilGaonkar7 force-pushed the CORTX-29979_1 branch 3 times, most recently from 0f5ff4f to 8d5f1c9 on June 15, 2022 12:21
@SwapnilGaonkar7
Contributor Author

SwapnilGaonkar7 commented Jun 15, 2022

@SwapnilGaonkar7 please add all unit/dev testing results for this PR.

[root@ssc-vm-g2-rhev4-2221 cortx-hare]# make test
--> Testing cfgen
(.py3venv) [root@ssc-vm-g2-rhev4-2221 miniprov]# python ./setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
running egg_info
writing hare_mp.egg-info/PKG-INFO
writing dependency_links to hare_mp.egg-info/dependency_links.txt
writing entry points to hare_mp.egg-info/entry_points.txt
writing requirements to hare_mp.egg-info/requires.txt
writing top-level names to hare_mp.egg-info/top_level.txt
reading manifest file 'hare_mp.egg-info/SOURCES.txt'
writing manifest file 'hare_mp.egg-info/SOURCES.txt'
running build_ext
test_empty_source_results_empty (test.test_systemd.TestHaxUnitTrasform) ... ok
test_not_everything_commented (test.test_systemd.TestHaxUnitTrasform) ... ok
test_restart_commented (test.test_systemd.TestHaxUnitTrasform) ... ok
test_invalid_machine_id (test.test_validator.TestValidator) ... ok
test_is_cluster_first_node (test.test_validator.TestValidator) ... ok
test_allowed_failure_generation (test.test_cdf.TestCDF) ... ok
test_both_dix_and_sns_pools_can_exist (test.test_cdf.TestCDF) ... ok
test_disk_refs_can_be_empty (test.test_cdf.TestCDF) ... ok
test_dix_pool_uses_metadata_devices (test.test_cdf.TestCDF) ... ok
test_iface_type_can_be_null (test.test_cdf.TestCDF) ... ok
test_invalid_storage_set_configuration_rejected (test.test_cdf.TestCDF)
This test case checks whether exception will be raise if total ... ok
test_it_works (test.test_cdf.TestCDF) ... ok
test_md_pool_ignored (test.test_cdf.TestCDF) ... ok
test_metadata_is_hardcoded (test.test_cdf.TestCDF) ... ok
test_multiple_nodes_supported (test.test_cdf.TestCDF) ... ok
test_provided_values_respected (test.test_cdf.TestCDF) ... ok
test_template_sane (test.test_cdf.TestCDF) ... ok
test_disks_empty (test.test_cdf.TestTypes) ... ok
test_m0clients (test.test_cdf.TestTypes) ... ok
test_m0server_with_disks (test.test_cdf.TestTypes) ... ok
test_maybe_none (test.test_cdf.TestTypes) ... ok
test_pooldesc_empty (test.test_cdf.TestTypes) ... ok
test_protocol (test.test_cdf.TestTypes) ... ok

----------------------------------------------------------------------
Ran 23 tests in 0.495s

OK

Tested single-node bootstrap with the following CDF:

nodes:
  - hostname: localhost     # [user@]hostname
    node_group: ssc-vm-g2-rhev4-2221.colo.seagate.com
    data_iface: eth1        # name of data network interface
    #data_iface_type: o2ib  # type of network interface (optional);
                            # supported values: "tcp" (default), "o2ib"
    transport_type: libfab
    m0_servers:
      - runs_confd: true
        io_disks:
          data: []
      - io_disks:
          #meta_data: /path/to/meta-data/drive
          data:
            - path: /dev/loop0
            - path: /dev/loop1
            - path: /dev/loop2
            - path: /dev/loop3
            - path: /dev/loop4
            - path: /dev/loop5
            - path: /dev/loop6
            - path: /dev/loop7
            - path: /dev/loop8
            - path: /dev/loop9
    #m0_clients: null
create_aux: false # optional; supported values: "false" (default), "true"
pools:
  - name: the pool
    type: sns  # optional; supported values: "sns" (default), "dix", "md"
    disk_refs:
      - { path: /dev/loop0, node: localhost }
      - { path: /dev/loop1, node: localhost }
      - { path: /dev/loop2, node: localhost }
      - { path: /dev/loop3, node: localhost }
      - { path: /dev/loop4, node: localhost }
      - { path: /dev/loop5, node: localhost }
      - { path: /dev/loop6, node: localhost }
      - { path: /dev/loop7, node: localhost }
      - { path: /dev/loop8, node: localhost }
      - { path: /dev/loop9, node: localhost }
    data_units: 1
    parity_units: 0
    spare_units: 0

Tested 3-node deployment:

[root@ssc-vm-g3-rhev4-2623 ~]# kubectl get pods
NAME                                                 READY   STATUS    RESTARTS        AGE
cortx-consul-client-467rd                            1/1     Running   0               11m
cortx-consul-client-7wk88                            1/1     Running   0               10m
cortx-consul-client-t6jxd                            1/1     Running   0               11m
cortx-consul-server-0                                1/1     Running   0               9m31s
cortx-consul-server-1                                1/1     Running   0               10m
cortx-consul-server-2                                1/1     Running   0               11m
cortx-control-8d66d8c74-54kqx                        1/1     Running   0               8m16s
cortx-data-ssc-vm-g2-rhev4-2784-7f9d78b567-qfrr4     4/4     Running   0               7m33s
cortx-data-ssc-vm-g2-rhev4-2785-7fdb47bbd8-mvm8h     4/4     Running   0               7m32s
cortx-data-ssc-vm-g3-rhev4-2623-849787f8cd-cb6dt     4/4     Running   0               7m31s
cortx-ha-6bf986fd8f-dvfrj                            3/3     Running   0               4m44s
cortx-kafka-0                                        1/1     Running   1 (11m ago)     12m
cortx-kafka-1                                        1/1     Running   0               12m
cortx-kafka-2                                        1/1     Running   1 (11m ago)     12m
cortx-server-ssc-vm-g2-rhev4-2784-5fd9ffd85d-cw2cv   2/2     Running   1 (2m52s ago)   6m10s
cortx-server-ssc-vm-g2-rhev4-2785-7c984677d8-xlh26   2/2     Running   1 (2m54s ago)   6m9s
cortx-server-ssc-vm-g3-rhev4-2623-6f7d444b7b-x7kjq   2/2     Running   2 (2m43s ago)   6m8s
cortx-zookeeper-0                                    1/1     Running   0               12m
cortx-zookeeper-1                                    1/1     Running   0               12m
cortx-zookeeper-2                                    1/1     Running   0               12m
[root@ssc-vm-g3-rhev4-2623 ~]# kubectl exec -it cortx-data-ssc-vm-g2-rhev4-2784-7f9d78b567-qfrr4 -c cortx-hax -- /bin/bash
[root@cortx-data-headless-svc-ssc-vm-g2-rhev4-2784 /]# hctl status
Bytecount:
    critical : 0
    damaged : 0
    degraded : 0
    healthy : 0
Data pool:
    # fid name
    0x6f00000000000001:0x0 'storage-set-1__sns'
Profile:
    # fid name: pool(s)
    0x7000000000000001:0x0 'Profile_the_pool': 'storage-set-1__sns' 'storage-set-1__dix' None
Services:
    cortx-data-headless-svc-ssc-vm-g3-rhev4-2623  (RC)
    [started]  hax                 0x7200000000000001:0x0          inet:tcp:cortx-data-headless-svc-ssc-vm-g3-rhev4-2623@22001
    [started]  ioservice           0x7200000000000001:0x1          inet:tcp:cortx-data-headless-svc-ssc-vm-g3-rhev4-2623@21001
    [started]  ioservice           0x7200000000000001:0x2          inet:tcp:cortx-data-headless-svc-ssc-vm-g3-rhev4-2623@21002
    [started]  confd               0x7200000000000001:0x3          inet:tcp:cortx-data-headless-svc-ssc-vm-g3-rhev4-2623@22002
    cortx-data-headless-svc-ssc-vm-g2-rhev4-2785
    [started]  hax                 0x7200000000000001:0x4          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2785@22001
    [started]  ioservice           0x7200000000000001:0x5          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2785@21001
    [started]  ioservice           0x7200000000000001:0x6          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2785@21002
    [started]  confd               0x7200000000000001:0x7          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2785@22002
    cortx-data-headless-svc-ssc-vm-g2-rhev4-2784
    [started]  hax                 0x7200000000000001:0x8          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2784@22001
    [started]  ioservice           0x7200000000000001:0x9          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2784@21001
    [started]  ioservice           0x7200000000000001:0xa          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2784@21002
    [started]  confd               0x7200000000000001:0xb          inet:tcp:cortx-data-headless-svc-ssc-vm-g2-rhev4-2784@22002
    cortx-server-headless-svc-ssc-vm-g2-rhev4-2785
    [started]  hax                 0x7200000000000001:0xc          inet:tcp:cortx-server-headless-svc-ssc-vm-g2-rhev4-2785@22001
    [started]  rgw_s3              0x7200000000000001:0xd          inet:tcp:cortx-server-headless-svc-ssc-vm-g2-rhev4-2785@21001
    cortx-server-headless-svc-ssc-vm-g2-rhev4-2784
    [started]  hax                 0x7200000000000001:0xe          inet:tcp:cortx-server-headless-svc-ssc-vm-g2-rhev4-2784@22001
    [started]  rgw_s3              0x7200000000000001:0xf          inet:tcp:cortx-server-headless-svc-ssc-vm-g2-rhev4-2784@21001
    cortx-server-headless-svc-ssc-vm-g3-rhev4-2623
    [started]  hax                 0x7200000000000001:0x10         inet:tcp:cortx-server-headless-svc-ssc-vm-g3-rhev4-2623@22001
    [started]  rgw_s3              0x7200000000000001:0x11         inet:tcp:cortx-server-headless-svc-ssc-vm-g3-rhev4-2623@21001

I observed that the server pods restarted, but there is an existing issue filed for that.

@vaibhavparatwar self-requested a review June 15, 2022 13:34
@vaibhavparatwar
Contributor

retest this please

@vaibhavparatwar
Contributor

retest this please

@vaibhavparatwar
Contributor

@mssawant can we merge this today?

@SwapnilGaonkar7 force-pushed the CORTX-29979_1 branch 2 times, most recently from efa57f4 to 2ef675b on June 17, 2022 11:34
@SwapnilGaonkar7
Contributor Author

SwapnilGaonkar7 commented Jun 17, 2022

Testing:

  • Tested 3-node deployment
[root@cortx-data-headless-svc-ssc-vm-g2-rhev4-2784 /]# cat /etc/cortx/hare/config/0f3f92fede504224dd6487227933eb72/cluster.yaml | grep group
  node_group: ssc-vm-g2-rhev4-2784.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2623.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-2785.colo.seagate.com
  node_group: None
  node_group: None
  node_group: None
[root@cortx-data-headless-svc-ssc-vm-g2-rhev4-2784 /]# consul kv get --recurse | grep group
conf/node>4820590e1e0cd66a0141a14646006db8>node_group:ssc-vm-g2-rhev4-2785.colo.seagate.com
conf/node>8ca31640bf4808ec0405623546544ea1>node_group:ssc-vm-g3-rhev4-2623.colo.seagate.com
conf/node>fd09ff271dcc4710d5f8f77521b2068e>node_group:ssc-vm-g2-rhev4-2784.colo.seagate.com
  • Tried running the config stage after deleting one of the node_group keys (not the local node's), and deployment hangs. If we re-create that key at this stage, deployment proceeds and completes.

  • After deployment, to test backward compatibility, deleted the following key

conf/node>fd09ff271dcc4710d5f8f77521b2068e>node_group:ssc-vm-g2-rhev4-2784.colo.seagate.com

and ran config again. The CDF gets generated with 'node_group: None' for all the nodes:

[root@cortx-data-headless-svc-ssc-vm-g2-rhev4-2784 /]# cat /etc/cortx/hare/config/0f3f92fede504224dd6487227933eb72/cluster.yaml | grep group
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
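
For context, a hypothetical sketch (not the actual hare_mp code) of how a missing Consul key can surface as 'node_group: None' in the generated cluster.yaml. The helper name and key layout below are illustrative only; the real generator may additionally fall back to None for every node when the key set is incomplete, which would match the output above.

from typing import Optional

def node_group_for(kv: dict, machine_id: str) -> Optional[str]:
    # Look up the per-node group key; a missing key yields None, which is
    # what 'node_group: None' in the generated cluster.yaml corresponds to.
    return kv.get(f'conf/node>{machine_id}>node_group')

kv_pairs = {'conf/node>abc123>node_group': 'ssc-vm-g2-rhev4-2784.colo.seagate.com'}
print(node_group_for(kv_pairs, 'abc123'))   # ssc-vm-g2-rhev4-2784.colo.seagate.com
print(node_group_for(kv_pairs, 'missing'))  # None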

@vaibhavparatwar
Contributor

@SwapnilGaonkar7 please address the Codacy issues.

@SwapnilGaonkar7 force-pushed the CORTX-29979_1 branch 2 times, most recently from 7df124b to ca35797 on June 17, 2022 12:04

@mssawant left a comment

Please make sure 15N deployments work.

@supriyachavan4398
Contributor

Testing:
Custom build: https://eos-jenkins.colo.seagate.com/job/GitHub-custom-ci-builds/job/generic/job/custom-ci/6811/console
Tried a 15N deployment and it completed successfully: https://eos-jenkins.colo.seagate.com/job/Cortx-Automation/job/RGW/job/setup-cortx-rgw-cluster/7340/

[root@cortx-data-headless-svc-ssc-vm-g3-rhev4-2281 61dd49c59c5f8f44a11b190800164020]# cat /etc/cortx/hare/config/61dd49c59c5f8f44a11b190800164020/cluster.yaml | grep group
  node_group: ssc-vm-g3-rhev4-2278.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-1635.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-2238.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2282.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2107.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-1631.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2283.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2281.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2284.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-1630.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2279.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-2237.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2184.colo.seagate.com
  node_group: ssc-vm-g3-rhev4-2198.colo.seagate.com
  node_group: ssc-vm-g2-rhev4-1632.colo.seagate.com
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
  node_group: None
[root@cortx-data-headless-svc-ssc-vm-g3-rhev4-2281 61dd49c59c5f8f44a11b190800164020]# consul kv get --recurse | grep group
conf/node>0ff391257780f3649874161da3f48f7b>node_group:ssc-vm-g3-rhev4-2278.colo.seagate.com
conf/node>19283c8e29970cd20bf0492ec571f143>node_group:ssc-vm-g2-rhev4-1635.colo.seagate.com
conf/node>322d42222cf28f1c371a05216c9cf481>node_group:ssc-vm-g2-rhev4-2238.colo.seagate.com
conf/node>3f994de3ec2a4115f6f60bd469c4f178>node_group:ssc-vm-g3-rhev4-2282.colo.seagate.com
conf/node>40f4f5e75a3220fe68fd606286632234>node_group:ssc-vm-g3-rhev4-2107.colo.seagate.com
conf/node>5717c2e6a2b0c3fb4980da447cf10bb5>node_group:ssc-vm-g2-rhev4-1631.colo.seagate.com
conf/node>5df7321bd028a15fa1c430cf612a08c3>node_group:ssc-vm-g3-rhev4-2283.colo.seagate.com
conf/node>61dd49c59c5f8f44a11b190800164020>node_group:ssc-vm-g3-rhev4-2281.colo.seagate.com
conf/node>7a78d9f6ef7c549b72b2642f454fcfdf>node_group:ssc-vm-g3-rhev4-2284.colo.seagate.com
conf/node>bc4641183360701f1e57320e12df9df6>node_group:ssc-vm-g2-rhev4-1630.colo.seagate.com
conf/node>c10d6bfb1fa8b58e55d2cec47c60c1b4>node_group:ssc-vm-g3-rhev4-2279.colo.seagate.com
conf/node>d69ae96a95694e0a3cd7eb7e9df930e3>node_group:ssc-vm-g2-rhev4-2237.colo.seagate.com
conf/node>e3d515ced29016e4ed5e6c02c2748dc7>node_group:ssc-vm-g3-rhev4-2184.colo.seagate.com
conf/node>e6f9d062d84a0771e0b243570b5a388c>node_group:ssc-vm-g3-rhev4-2198.colo.seagate.com
conf/node>e9e08bb26ac1e385194d920b6879ac6c>node_group:ssc-vm-g2-rhev4-1632.colo.seagate.com
csm/config/MESSAGEBUS>CONSUMER>ALERTS>consumer_group:csm_alerts_group
csm/config/MESSAGEBUS>CONSUMER>STATS>perf>consumer_group:csm_perf_stat
  • After deployment, deleted the following key:
[root@cortx-data-headless-svc-ssc-vm-g3-rhev4-2281 61dd49c59c5f8f44a11b190800164020]# consul kv delete "conf/node>0ff391257780f3649874161da3f48f7b>node_group"
Success! Deleted key: conf/node>0ff391257780f3649874161da3f48f7b>node_group
  • Tried running the config stage after deleting one of the node_group keys (not the local node's), and the config stage hung.
  • If we create that key again, the config stage proceeds and completes.
[root@cortx-data-headless-svc-ssc-vm-g3-rhev4-2281 61dd49c59c5f8f44a11b190800164020]# consul kv put "conf/node>0ff391257780f3649874161da3f48f7b>node_group" "ssc-vm-g3-rhev4-2278.colo.seagate.com"
Success! Data written to: conf/node>0ff391257780f3649874161da3f48f7b>node_group
[root@cortx-data-headless-svc-ssc-vm-g3-rhev4-2281 61dd49c59c5f8f44a11b190800164020]# consul kv get --recurse | grep group
conf/node>0ff391257780f3649874161da3f48f7b>node_group:ssc-vm-g3-rhev4-2278.colo.seagate.com
conf/node>19283c8e29970cd20bf0492ec571f143>node_group:ssc-vm-g2-rhev4-1635.colo.seagate.com
conf/node>322d42222cf28f1c371a05216c9cf481>node_group:ssc-vm-g2-rhev4-2238.colo.seagate.com
conf/node>3f994de3ec2a4115f6f60bd469c4f178>node_group:ssc-vm-g3-rhev4-2282.colo.seagate.com
conf/node>40f4f5e75a3220fe68fd606286632234>node_group:ssc-vm-g3-rhev4-2107.colo.seagate.com
conf/node>5717c2e6a2b0c3fb4980da447cf10bb5>node_group:ssc-vm-g2-rhev4-1631.colo.seagate.com
conf/node>5df7321bd028a15fa1c430cf612a08c3>node_group:ssc-vm-g3-rhev4-2283.colo.seagate.com
conf/node>61dd49c59c5f8f44a11b190800164020>node_group:ssc-vm-g3-rhev4-2281.colo.seagate.com
conf/node>7a78d9f6ef7c549b72b2642f454fcfdf>node_group:ssc-vm-g3-rhev4-2284.colo.seagate.com
conf/node>bc4641183360701f1e57320e12df9df6>node_group:ssc-vm-g2-rhev4-1630.colo.seagate.com
conf/node>c10d6bfb1fa8b58e55d2cec47c60c1b4>node_group:ssc-vm-g3-rhev4-2279.colo.seagate.com
conf/node>d69ae96a95694e0a3cd7eb7e9df930e3>node_group:ssc-vm-g2-rhev4-2237.colo.seagate.com
conf/node>e3d515ced29016e4ed5e6c02c2748dc7>node_group:ssc-vm-g3-rhev4-2184.colo.seagate.com
conf/node>e6f9d062d84a0771e0b243570b5a388c>node_group:ssc-vm-g3-rhev4-2198.colo.seagate.com
conf/node>e9e08bb26ac1e385194d920b6879ac6c>node_group:ssc-vm-g2-rhev4-1632.colo.seagate.com
csm/config/MESSAGEBUS>CONSUMER>ALERTS>consumer_group:csm_alerts_group
csm/config/MESSAGEBUS>CONSUMER>STATS>perf>consumer_group:csm_perf_stat

cc: @vaibhavparatwar, @mssawant, @SwapnilGaonkar7

@pavankrishnat force-pushed the CORTX-29979_1 branch 6 times, most recently from 81d256b to 2ee9f60 on June 20, 2022 09:09
@vaibhavparatwar
Contributor

retest this please

@d-nayak force-pushed the CORTX-29979_1 branch 5 times, most recently from 8d91f20 to eb98785 on June 21, 2022 14:09
1. Multiple Data pod deployment: Support Node group in CDF

2. Hare builds are failing for main and custom-ci branches (Seagate#2122)

Solution:

1:
Added support for 'node_group' in CDF format

2:
The issue was with the version of charset-normalizer, a package that
aiohttp (a package required by HARE) was pulling in as a dependency. We have
now pinned charset-normalizer to 2.0.12, which works with aiohttp 3.8.1
as required by HARE.

Signed-off-by: Swapnil Gaonkar <swapnil.gaonkar@seagate.com>
Signed-off-by: pavankrishnat <pavan.k.thunuguntla@seagate.com>
Signed-off-by: Deepak Nayak <deepak.nayak@seagate.com>
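
For reference, the charset-normalizer pin mentioned in the commit message above might look as follows in a requirements-style constraints file (the exact file and mechanism used in the Hare repo may differ):

# keep charset-normalizer compatible with the aiohttp version Hare needs
aiohttp==3.8.1
charset-normalizer==2.0.12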
@mssawant

Looks like the k8s deployment timed out; seems like an environment issue:

00:55:46  ######################################################
00:55:46  # Deploy CORTX Local Block Storage                    
00:55:46  ######################################################
00:55:47  NAME: cortx-data-blk-data-cortx
00:55:47  LAST DEPLOYED: Tue Jun 21 18:55:46 2022
00:55:47  NAMESPACE: cortx
00:55:47  STATUS: deployed
00:55:47  REVISION: 1
00:55:47  TEST SUITE: None
00:55:47  ########################################################
00:55:47  # Generating CORTX Pod Machine IDs                      
00:55:47  ########################################################
00:55:47  ######################################################
00:55:47  # Deploy CORTX                                        
00:55:47  ######################################################
00:55:52  W0621 18:55:51.906729   59131 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
00:55:52  W0621 18:55:52.004022   59131 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
01:00:59  Error: INSTALLATION FAILED: timed out waiting for the condition
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (QA SETUP)
Stage "QA SETUP" skipped due to earlier failure(s)

@mssawant

retest this please

@vaibhavparatwar
Contributor

Yes, this looks like an environment issue; sanity used to pass previously.

@mssawant

Okay, going ahead with merging.

@mssawant merged commit 0824c85 into Seagate:main on Jun 22, 2022