CORTX Monitor 3-node VM provisioning manual
Run the steps below on all 3 nodes
- Set up the yum repo (required only for installing build-specific RPMs)
curl https://raw.githubusercontent.com/Seagate/cortx-monitor/main/low-level/files/opt/seagate/sspl/setup/sspl_dev_deploy -o sspl_dev_deploy
chmod a+x sspl_dev_deploy
./sspl_dev_deploy --setup_repo -T http://cortx-storage.colo.seagate.com/releases/cortx/github/main/centos-7.8.2003/<build_number>/prod/
Specify <build_number> in the build URL.
Example:
./sspl_dev_deploy --setup_repo -T http://cortx-storage.colo.seagate.com/releases/cortx/github/main/centos-7.8.2003/830/prod/
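Optionally, confirm that yum can see the newly added repository. These are standard yum commands, not part of the deploy script:

yum clean all
yum repolist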
- Install 3rd-party packages (an install sketch follows the version tables below)
  - RPMs
| Package | Version |
| ------- | ------- |
| hdparm | 9.43 |
| ipmitool | 1.8.18 |
| lshw | B.02.18 |
| python3 | 3.6.8 |
| python36-dbus | 1.2.4 |
| python36-gobject | 3.22.0 |
| python36-paramiko | 2.1.1 |
| python36-psutil | 5.6.7 |
| shadow-utils | 4.6 |
| smartmontools | 7.0 |
| systemd-python36 | 1.0.0 |
| udisks2 | 2.8.4 |
  - Python

| Package | Version |
| ------- | ------- |
| cryptography | 2.8 |
| jsonschema | 3.2.0 |
| pika | 1.1.0 |
| pyinotify | 0.9.6 |
| python-daemon | 2.2.4 |
| requests | 2.25.1 |
| zope.component | 4.6.2 |
| zope.event | 4.5.0 |
| zope.interface | 5.2.0 |
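One possible way to install these prerequisites is sketched below; it assumes the configured yum repos carry the listed RPM versions and that pip is available for Python 3.6:

# Sketch: install the 3rd-party RPMs listed above; exact versions come
# from whichever repos are configured on the node.
yum install -y hdparm ipmitool lshw python3 python36-dbus python36-gobject \
    python36-paramiko python36-psutil shadow-utils smartmontools \
    systemd-python36 udisks2

# Sketch: install the Python packages pinned to the versions listed above.
python3 -m pip install cryptography==2.8 jsonschema==3.2.0 pika==1.1.0 \
    pyinotify==0.9.6 python-daemon==2.2.4 requests==2.25.1 \
    zope.component==4.6.2 zope.event==4.5.0 zope.interface==5.2.0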
- Run cortx-py-utils prerequisites
- Config file for cortx-py-utils
  - Create the config file for cortx-py-utils at /tmp/cortx-config-new (a hypothetical sketch follows below)
  - Description of keys used in the config file
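A purely illustrative sketch of what /tmp/cortx-config-new could look like; every key and value below is a hypothetical placeholder, so rely on the key descriptions referenced above for the actual schema:

# /tmp/cortx-config-new -- hypothetical sketch only; the real keys and
# nesting are defined by cortx-py-utils (see the key descriptions above).
cortx:
  software:
    kafka:
      servers:
        - srvnode-1.colo.seagate.com   # hypothetical value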
- Install cortx-py-utils
- Install SSPL RPMs
SSPL mini-provisioner interfaces must be executed in order: post-install, prepare, config, init. Each stage must be completed on every node before proceeding to the next stage. The remaining interfaces (test, reset, cleanup) can be executed in any order.
The input config required at each mini-provisioner stage is cumulative, i.e. a superset of the current stage's config plus all previous stages' configs. The input config from the post-install stage is available to the prepare stage; the post-install and prepare input configs are available to the config stage; and the post-install, prepare, and config input configs are available to the init stage. The init stage is therefore expected to have all configs required for the sspl-ll service to start and run smoothly.
At each mini-provisioner stage, the argument "--config <template_file>" must be passed; it references the config backend required for that stage.
The template file is a key-value mapped YAML file that maintains the config-requirement template for each stage. Every string starting with TMPL_ should be replaced with a valid value; if no value can be identified, the field can be left blank.
- TMPL_{field} is an identifier that allows a consumer (e.g. a CI/CD pipeline) to update variable keys and values; see the replacement sketch after the table below.

Template fields used across mini-provisioner interfaces:
- {n} will be 1, 2, or 3 for the first, second, and third node.
- BMC information such as IP, USER, and SECRET can be left blank for a VM.
| Stage | Field | Example Value | Description | Required |
| ----- | ----- | ------------- | ----------- | -------- |
| post_install | TMPL_NODE_NAME_{n} | srvnode-1 | Server name of node | Yes |
| post_install | TMPL_MACHINE_ID_{n} | 30512e5ae6df9f1ea02327bab45e499d | Output of cat /etc/machine-id | Yes |
| post_install | TMPL_HOSTNAME_{n} | ssc-vm-2217.colo.seagate.com | Hostname of node | Yes |
| post_install | TMPL_BMC_IP_{n} | | BMC IP address | No |
| post_install | TMPL_BMC_SECRET_{n} | | BMC encrypted password | No |
| post_install | TMPL_BMC_USER_{n} | | BMC username | No |
| post_install | TMPL_ENCLOSURE_ID_{n} | enc30512e5ae6df9f1ea02327bab45e499d | Storage enclosure logical name | Yes |
| post_install | TMPL_SERVER_NODE_TYPE_{n} | virtual | Server node type | Yes |
| post_install | TMPL_CONTROLLER_PASSWORD_{n} | '!manage' | Controller access - plaintext password | Yes |
| post_install | TMPL_PRIMARY_CONTROLLER_IP_{n} | 10.0.0.2 | Controller A IP address | Yes |
| post_install | TMPL_PRIMARY_CONTROLLER_PORT_{n} | 22 | Controller A port | Yes |
| post_install | TMPL_SECONDARY_CONTROLLER_IP_{n} | 10.0.0.3 | Controller B IP address | Yes |
| post_install | TMPL_SECONDARY_CONTROLLER_PORT_{n} | 22 | Controller B port | Yes |
| post_install | TMPL_CONTROLLER_TYPE_{n} | Gallium | Type of controller | Yes |
| post_install | TMPL_CONTROLLER_USER_{n} | manage | Controller access - username | Yes |
| prepare | TMPL_ENCLOSURE_NAME_{n} | enclosure-1 | Storage enclosure logical name | Yes |
| prepare | TMPL_ENCLOSURE_TYPE_{n} | RBOD | Type of enclosure | Yes |
| prepare | TMPL_DATA_PRIVATE_FQDN_{n} | srvnode-1.data.private.fqdn | FQDN of private data network | Yes |
| prepare | TMPL_DATA_PRIVATE_INTERFACE_{n} | | Data network private interfaces | No |
| prepare | TMPL_DATA_PUBLIC_FQDN_{n} | srvnode-1.data.public.fqdn | FQDN of public data network | Yes |
| prepare | TMPL_DATA_PUBLIC_INTERFACE_{n} | | Data network public interfaces | No |
| prepare | TMPL_MGMT_INTERFACE_{n} | eth0 | Management network interfaces | Yes |
| prepare | TMPL_MGMT_PUBLIC_FQDN_{n} | srvnode-1.public.fqdn | FQDN of management network | Yes |
| prepare | TMPL_NODE_ID_{n} | SN01 | Identifies the node in the cluster | Yes |
| prepare | TMPL_RACK_ID_{n} | RC01 | Identifies the rack in the cluster | Yes |
| prepare | TMPL_SITE_ID_{n} | DC01 | Identifies the site in the cluster | Yes |
| prepare | TMPL_CLUSTER_ID | CC01 | Cluster UUID | Yes |
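A consumer such as a CI/CD pipeline can fill in these fields with plain text substitution. A minimal sketch for node 1, using the post-install template path that appears later in this manual; the values shown are examples only, and on a VM the BMC fields may be replaced with empty strings:

# Sketch: replace TMPL_ placeholders for node 1 in the post-install template.
sed -i \
    -e "s|TMPL_NODE_NAME_1|srvnode-1|g" \
    -e "s|TMPL_MACHINE_ID_1|$(cat /etc/machine-id)|g" \
    -e "s|TMPL_HOSTNAME_1|$(hostname -f)|g" \
    /opt/seagate/sspl/conf/sspl.post-install.tmpl.3-node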
Note:
- Mini-provisioner interfaces must be executed in the following order: post-install, prepare, config, init.
- The remaining stages (test, reset, cleanup) have no order dependency and can be executed whenever needed.
- The config required at each mini-provisioner stage is a superset of the current plus previous stage configs.
Post Install
- Run utils post_install on all 3 nodes
/opt/seagate/cortx/utils/bin/utils_setup post_install --config /tmp/cortx-config-new
- Run SSPL post_install on all 3 nodes
/opt/seagate/cortx/sspl/bin/sspl_setup post_install --config /opt/seagate/sspl/conf/sspl.post-install.tmpl.3-node
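Each stage can be run manually on each node, or driven from a single host over ssh. A minimal sketch, assuming passwordless ssh between the nodes and hypothetical hostnames srvnode-1 through srvnode-3:

# Sketch: run the post-install stage on all 3 nodes from one host.
# Hostnames are assumptions; substitute your actual node names.
for node in srvnode-1 srvnode-2 srvnode-3; do
    ssh "$node" /opt/seagate/cortx/utils/bin/utils_setup post_install --config /tmp/cortx-config-new
    ssh "$node" /opt/seagate/cortx/sspl/bin/sspl_setup post_install --config /opt/seagate/sspl/conf/sspl.post-install.tmpl.3-node
done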
Prepare
- Run SSPL prepare on all 3 nodes
/opt/seagate/cortx/sspl/bin/sspl_setup prepare --config /opt/seagate/sspl/conf/sspl.prepare.tmpl.3-node
Config
- Run utils config on all 3 nodes
/opt/seagate/cortx/utils/bin/utils_setup config --config /tmp/cortx-config-new
- Run SSPL config on all 3 nodes
/opt/seagate/cortx/sspl/bin/sspl_setup config --config /opt/seagate/sspl/conf/sspl.config.tmpl.3-node
Init
- Run SSPL init on all 3 nodes
/opt/seagate/cortx/sspl/bin/sspl_setup init --config /opt/seagate/sspl/conf/sspl.init.tmpl.3-node
Run the steps below on all 3 nodes
- Start SSPL service
systemctl start sspl-ll
- Check service status
systemctl status sspl-ll
Note: Starting the service with systemctl is recommended only when the HA framework is not in place; otherwise the service is started automatically in the CORTX cluster by HA.
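If the service fails to stay active, its logs can be inspected with standard systemd tooling (generic journald commands, not SSPL-specific):

# Sketch: follow the sspl-ll service logs via journald
journalctl -u sspl-ll -f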
Run the step below on all 3 nodes to execute tests
Note: The tests check features on a single node only (the one on which they are executed).
- Start SSPL tests with a plan
/opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan [sanity|alerts|self_primary|self_secondary]
Example:
/opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan sanity
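To exercise every plan in sequence, the same command can be looped; this is a sketch, and <config_url> stays whatever config reference you used in the earlier stages:

# Sketch: run all test plans one after another
for plan in sanity alerts self_primary self_secondary; do
    /opt/seagate/cortx/sspl/bin/sspl_setup test --config <config_url> --plan "$plan"
done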