This repository has been archived by the owner on May 3, 2024. It is now read-only.

Cortx monitor 3 Node VM provisioning manual

mariyappanp edited this page Apr 1, 2021 · 15 revisions

Prerequisites:

Run the below steps on all 3 nodes.

  1. Set up the yum repo (required only for installing build-specific RPMs)

    $yum-config-manager --add-repo http://cortx-storage.colo.seagate.com/releases/cortx/github/integration-custom-ci/centos-7.8.2003/custom-build-399/cortx_iso/

  2. Install 3rd party packages

    curl -s http://cortx-storage.colo.seagate.com/releases/cortx/third-party-deps/rpm/install-cortx-prereq.sh | bash

    Ensure the below 3rd-party packages are installed.

    1. RPMs

      | Package | Version |
      | --- | --- |
      | hdparm | 9.43 |
      | ipmitool | 1.8.18 |
      | lshw | B.02.18 |
      | python3 | 3.6.8 |
      | python36-dbus | 1.2.4 |
      | python36-gobject | 3.22.0 |
      | python36-paramiko | 2.1.1 |
      | python36-psutil | 5.6.7 |
      | shadow-utils | 4.6 |
      | smartmontools | 7.0 |
      | systemd-python36 | 1.0.0 |
      | udisks2 | 2.8.4 |
    2. Python

      | Package | Version |
      | --- | --- |
      | cryptography | 2.8 |
      | jsonschema | 3.2.0 |
      | pika | 1.1.0 |
      | pyinotify | 0.9.6 |
      | python-daemon | 2.2.4 |
      | requests | 2.25.1 |
      | zope.component | 4.6.2 |
      | zope.event | 4.5.0 |
      | zope.interface | 5.2.0 |
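  The two package lists above can be verified with a short loop. This is an informal sketch, not part of the official procedure: the package names are taken verbatim from the tables, and if `rpm` or `pip3` is unavailable on the node, the corresponding package is simply reported as missing rather than aborting the check.

    ```shell
    #!/bin/sh
    # Sketch: verify the 3rd-party RPMs and Python packages listed above.
    # Package names come from the tables; versions are not checked here.
    rpms="hdparm ipmitool lshw python3 python36-dbus python36-gobject \
    python36-paramiko python36-psutil shadow-utils smartmontools \
    systemd-python36 udisks2"
    pymods="cryptography jsonschema pika pyinotify python-daemon \
    requests zope.component zope.event zope.interface"

    for p in $rpms; do
        rpm -q "$p" >/dev/null 2>&1 || echo "missing RPM: $p"
    done
    for m in $pymods; do
        pip3 show "$m" >/dev/null 2>&1 || echo "missing Python package: $m"
    done
    echo "package verification complete"
    ```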
  3. Run the cortx-py-utils prerequisites

  4. Config file for cortx-py-utils

    1. Create config file for cortx-py-utils at /tmp/cortx-config-new
    2. Description of keys used in config file
  5. Install cortx-py-utils

  6. Install SSPL RPMs

SSPL mini-provisioner configs

SSPL mini-provisioner interfaces must be executed in order: post-install, prepare, config, init. Each stage must be completed on every node before proceeding to the next stage. The remaining interfaces (test, reset, cleanup) can be executed in any order.

The input config required at each mini-provisioner stage is cumulative, i.e. a superset of the current and all previous stage configs. The input config from the post-install stage is available to the prepare stage; the post-install and prepare stage configs are available to the config stage; and the post-install, prepare, and config stage configs are available to the init stage. The init stage is expected to have all configs required for the sspl-ll service to start and run smoothly.

At each mini-provisioner stage, the argument "--config <template_file>" must be passed; it is a reference to the required config backend for that stage.

The template file is a key-value mapped YAML file that maintains the config requirement template for each stage. All strings starting with TMPL_ should be replaced with valid values. If no value can be identified, the field can be left blank.
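As an illustration of how TMPL_ placeholders are filled in, the snippet below creates a small, made-up template fragment and substitutes its placeholders with sed. The fragment and its keys are illustrative only; the real template files ship with the SSPL RPM under /opt/seagate/sspl/conf/.

```shell
#!/bin/sh
# Sketch: fill TMPL_ placeholders in a copy of an illustrative template.
cat > /tmp/sspl.demo.tmpl <<'EOF'
server_node:
  TMPL_MACHINE_ID_1:
    name: TMPL_NODE_NAME_1
    hostname: TMPL_HOSTNAME_1
EOF

# Replace placeholders with real values; unresolvable ones may be left blank.
sed -e "s/TMPL_MACHINE_ID_1/$(cat /etc/machine-id 2>/dev/null || echo 0000)/" \
    -e "s/TMPL_NODE_NAME_1/srvnode-1/" \
    -e "s/TMPL_HOSTNAME_1/$(hostname -f 2>/dev/null || hostname)/" \
    /tmp/sspl.demo.tmpl > /tmp/sspl.demo.conf

grep -q "srvnode-1" /tmp/sspl.demo.conf && echo "substitution done"
```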

Template fields

  1. TMPL_{field} is an identifier that allows a consumer (e.g. a CI/CD pipeline) to update variable keys and values

    Template fields used across mini-provisioner interfaces

    n will be 1, 2, and 3 for the first, second, and third node.

    BMC information such as IP, USER, and SECRET can be left blank for a VM.

    | Stage | Field | Example Value | Description | Required |
    | --- | --- | --- | --- | --- |
    | post_install | TMPL_NODE_NAME_{n} | srvnode-1 | Server name of node | Yes |
    | post_install | TMPL_MACHINE_ID_{n} | 30512e5ae6df9f1ea02327bab45e499d | `cat /etc/machine-id` | Yes |
    | post_install | TMPL_HOSTNAME_{n} | ssc-vm-2217.colo.seagate.com | Hostname of node | Yes |
    | post_install | TMPL_BMC_IP_{n} | | BMC IP address | No |
    | post_install | TMPL_BMC_SECRET_{n} | | BMC encrypted password | No |
    | post_install | TMPL_BMC_USER_{n} | | BMC username | No |
    | post_install | TMPL_ENCLOSURE_ID_{n} | enc30512e5ae6df9f1ea02327bab45e499d | Storage enclosure logical name | Yes |
    | post_install | TMPL_SERVER_NODE_TYPE_{n} | virtual | Type of server node | Yes |
    | post_install | TMPL_CONTROLLER_PASSWORD_{n} | '!manage' | Controller access - plaintext password | Yes |
    | post_install | TMPL_PRIMARY_CONTROLLER_IP_{n} | 10.0.0.2 | Controller A IP address | Yes |
    | post_install | TMPL_PRIMARY_CONTROLLER_PORT_{n} | 22 | Controller A port | Yes |
    | post_install | TMPL_SECONDARY_CONTROLLER_IP_{n} | 10.0.0.3 | Controller B IP address | Yes |
    | post_install | TMPL_SECONDARY_CONTROLLER_PORT_{n} | 22 | Controller B port | Yes |
    | post_install | TMPL_CONTROLLER_TYPE_{n} | Gallium | Type of controller | Yes |
    | post_install | TMPL_CONTROLLER_USER_{n} | manage | Controller access - username | Yes |
    | prepare | TMPL_ENCLOSURE_NAME_{n} | enclosure-1 | Storage enclosure logical name | Yes |
    | prepare | TMPL_ENCLOSURE_TYPE_{n} | RBOD | Type of enclosure | Yes |
    | prepare | TMPL_DATA_PRIVATE_FQDN_{n} | srvnode-1.data.private.fqdn | FQDN of private data network | Yes |
    | prepare | TMPL_DATA_PRIVATE_INTERFACE_{n} | | Data network private interfaces | No |
    | prepare | TMPL_DATA_PUBLIC_FQDN_{n} | srvnode-1.data.public.fqdn | FQDN of public data network | Yes |
    | prepare | TMPL_DATA_PUBLIC_INTERFACE_{n} | | Data network public interfaces | No |
    | prepare | TMPL_MGMT_INTERFACE_{n} | eth0 | Management network interfaces | Yes |
    | prepare | TMPL_MGMT_PUBLIC_FQDN_{n} | srvnode-1.public.fqdn | FQDN of management network | Yes |
    | prepare | TMPL_NODE_ID_{n} | SN01 | Identifies the node in the cluster | Yes |
    | prepare | TMPL_RACK_ID_{n} | RC01 | Identifies the rack in the cluster | Yes |
    | prepare | TMPL_SITE_ID_{n} | DC01 | Identifies the site in the cluster | Yes |
    | prepare | TMPL_CLUSTER_ID | CC01 | UUID | Yes |
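The node-specific values in the table above can be collected on each node with a few commands. This is a sketch only: the srvnode-{n} naming and the "enc" + machine-id enclosure ID follow the example values shown in the table, and n must be set by hand on each node.

```shell
#!/bin/sh
# Sketch: collect post_install template values on the current node.
# Run on each node; set n to 1, 2, or 3 to match the node index.
n=1
machine_id=$(cat /etc/machine-id 2>/dev/null || echo unknown)

echo "TMPL_NODE_NAME_${n}=srvnode-${n}"
echo "TMPL_MACHINE_ID_${n}=${machine_id}"
echo "TMPL_HOSTNAME_${n}=$(hostname -f 2>/dev/null || hostname)"
# Enclosure ID convention follows the example values in the table above.
echo "TMPL_ENCLOSURE_ID_${n}=enc${machine_id}"
echo "TMPL_SERVER_NODE_TYPE_${n}=virtual"
```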
  2. Note

    1. Mini-provisioner interfaces must be executed in the following order

      1. post-install, prepare, config, init
      2. The remaining stages (test, reset, cleanup) have no order dependency and can be executed as needed
    2. Config required at each mini-provisioner stage is a superset of the current plus previous stage configs

Provisioning

  1. Post Install

    1. Run utils post_install on all 3 nodes

      /opt/seagate/cortx/utils/bin/utils_setup post_install --config /tmp/cortx-config-new

    2. Run SSPL post_install on all 3 nodes

      /opt/seagate/cortx/sspl/bin/sspl_setup post_install --config /opt/seagate/sspl/conf/sspl.post-install.tmpl.3-node

  2. Prepare

    1. Run SSPL prepare on all 3 nodes

      /opt/seagate/cortx/sspl/bin/sspl_setup prepare --config /opt/seagate/sspl/conf/sspl.prepare.tmpl.3-node

  3. Config

    1. Run utils config on all 3 nodes

      /opt/seagate/cortx/utils/bin/utils_setup config --config /tmp/cortx-config-new

    2. Run SSPL config on all 3 nodes

      /opt/seagate/cortx/sspl/bin/sspl_setup config --config /opt/seagate/sspl/conf/sspl.config.tmpl.3-node

  4. Init

    1. Run SSPL init on all 3 nodes

      /opt/seagate/cortx/sspl/bin/sspl_setup init --config /opt/seagate/sspl/conf/sspl.init.tmpl.3-node
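The four ordered SSPL stages above can be driven from one node with a small wrapper. This is a sketch under stated assumptions: passwordless SSH between nodes, illustrative hostnames srvnode-1..3, and the utils_setup steps are omitted for brevity. With DRY_RUN=1 the script only prints the commands it would run, in the required stage-before-node order.

```shell
#!/bin/sh
# Sketch: run each SSPL mini-provisioner stage on every node before
# moving to the next stage. Hostnames and SSH setup are assumptions.
DRY_RUN=1
nodes="srvnode-1 srvnode-2 srvnode-3"
sspl=/opt/seagate/cortx/sspl/bin/sspl_setup
conf=/opt/seagate/sspl/conf

for stage in post_install prepare config init; do
    # Template files follow the sspl.<stage>.tmpl.3-node naming used above;
    # note the hyphenated file name for the post-install stage.
    tmpl="$conf/sspl.$(echo "$stage" | tr '_' '-').tmpl.3-node"
    for node in $nodes; do
        cmd="$sspl $stage --config $tmpl"
        if [ "$DRY_RUN" = 1 ]; then
            echo "ssh $node $cmd"
        else
            ssh "$node" "$cmd" || exit 1
        fi
    done
done
```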

Start

Run the below steps on all 3 nodes.

  1. Start SSPL service

    systemctl start sspl-ll

  2. Check service status

    systemctl status sspl-ll

Note: Starting the service using systemctl is recommended only when the HA framework is not in place; otherwise the service is started automatically in the Cortx cluster by HA.

Test

Run the below step on all 3 nodes to execute tests.

Note: Tests check features on a single node only (the one on which they are executed).

  1. Start SSPL tests with plan

    /opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan [sanity|alerts|self_primary|self_secondary]

    Ex.

    /opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan sanity