- Quick Start (MacOS)
- Quick Start (Windows)
- Overview
- Requirements
- DevVM provisioning
- Building and running Motr
- Try single-node Motr cluster
- Vagrant basics
- Streamlining VMs creation and provisioning with snapshots
- Managing multiple VM sets with workspaces
- Executing Ansible commands manually
- VirtualBox / VMware / Libvirt specifics
## Quick Start (MacOS)

* Install
  - [Homebrew](https://brew.sh)
    ```bash
    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    ```
  - bash 4.x
    ```bash
    brew install bash
    ```
  - GNU `readlink`
    ```bash
    brew install coreutils
    ```
  - VMware Fusion or VirtualBox (VMware is recommended for a better experience)
  - Vagrant (and Vagrant VMware Utility in case of VMware)
  - Vagrant plugins (for VMware the license needs to be purchased)
    ```bash
    vagrant plugin install vagrant-{env,hostmanager,scp}
    vagrant plugin install vagrant-vmware-desktop  # for VMware
    # or
    vagrant plugin install vagrant-vbguest         # for VirtualBox
    ```
  - Ansible
    ```bash
    brew install ansible  # on Linux or macOS hosts
    ```
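  Optionally, sanity-check the toolchain from a new terminal before proceeding (a quick check, not part of the setup; version numbers will vary):
  ```bash
  bash --version | head -1     # expect 4.x or newer; brew's bash must precede /bin in PATH
  vagrant --version
  ansible --version | head -1
  ```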
* Configure
  - `m0vg` script (make sure you have `$HOME/bin` in the `$PATH`; see the snippet after this list)
    ```bash
    MOTR_SRC=~/src/motr  # use the actual Motr location on your host system
    ln -s $MOTR_SRC/scripts/m0vg $HOME/bin/
    ```
  - VMs
    ```bash
    # open virtual cluster configuration file in default editor
    m0vg env edit
    ```
    paste the following template, updating parameters as desired:
    ```bash
    # a host directory to share among all VMs,
    # by default it's a parent of $MOTR_SRC dir
    #M0_VM_SHARED_DIR=~

    # default sharing mechanism is NFS, which is recommended,
    # but in case it doesn't work for some reason, a virtual
    # provider specific sharing mechanism can be used (e.g.
    # VirtualBox shared folder)
    #M0_VM_SHARE_TYPE=provider

    # a comma-separated list of additional packages to be installed
    # on each VM (they must be available in default `yum` repositories
    # or EPEL)
    #M0_VM_EXTRA_PKGS=python36,python36-pip

    # a script executed on CMU node after provisioning is finished,
    # a `sudo` can be used in the script to gain root privileges
    #M0_VM_PROVISION_SCRIPT=~/vagrant-postinstall.sh

    # amount of RAM available on CMU node
    #M0_VM_CMU_MEM_MB=4096

    # number of client VMs
    #M0_VM_CLIENT_NR=2

    # amount of RAM available on every client node
    #M0_VM_CLIENT_MEM_MB=3072

    # number of ssu VMs
    #M0_VM_SSU_NR=3

    # amount of RAM available on every ssu node
    #M0_VM_SSU_MEM_MB=2048

    # number of data drives on every ssu node
    #M0_VM_SSU_DISKS=12

    # size of each data drive on ssu node
    #M0_VM_SSU_DISK_SIZE_GB=8
    ```
    see `m0vg params` output for the full list of supported configuration parameters
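  As noted in the `m0vg` item above, the symlink is only picked up if `$HOME/bin` is in the `PATH`; if it isn't, a line like this in `~/.bash_profile` fixes that:
  ```bash
  # prepend $HOME/bin to PATH
  export PATH="$HOME/bin:$PATH"
  ```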
* Run
  - check VMs state
    ```bash
    m0vg status
    ```
  - create cmu VM (this can take ~30 minutes depending on the internet connection, CPU and system disk speed)
    ```bash
    m0vg up cmu
    ```
  - restart cmu VM in order to activate the shared folder
    ```bash
    m0vg reload cmu
    ```
  - log in on cmu and check the contents of the `/data` dir
    ```bash
    m0vg tmux
    ls /data
    ```
  - create ssu and client VMs (can take ~40 minutes depending on the number of configured ssu and client nodes)
    ```bash
    m0vg up /ssu/ /client/
    m0vg reload /ssu/ /client/
    ```
  - stop all nodes when they don't need to be running
    ```bash
    m0vg halt
    ```
  - if a node hangs (e.g. due to a Motr crash in the kernel or a deadlock), it can be forced to shut down using the `-f` option of the `halt` command, for example:
    ```bash
    m0vg halt -f client1
    ```
## Quick Start (Windows)

* Install
  - VMware Workstation or VirtualBox (VMware is recommended for a better experience)
  - Vagrant (and Vagrant VMware Utility in case of VMware)
  - Vagrant plugins (for VMware the license needs to be purchased)
    ```bash
    vagrant plugin install vagrant-{env,hostmanager,scp}
    vagrant plugin install vagrant-vmware-desktop  # for VMware
    # or
    vagrant plugin install vagrant-vbguest         # for VirtualBox
    ```
  - Git for Windows. During installation, when asked, choose the following options (keep other options at their default settings):
    - Use Git and optional Unix tools from the Command Prompt
    - Checkout as-is, commit Unix-style line endings
    - Enable symbolic links
* Configure
  - Open a Git Bash terminal and add the CRLF configuration option to make sure that Motr/Hare scripts will work on the VM:
    ```bash
    git config --global core.autocrlf input
    ```
  - Clone the Motr repository somewhere; as an example, let's say it's in `$HOME/src/motr`:
    ```bash
    mkdir -p src
    cd src
    git clone --recursive git@github.com:Seagate/cortx-motr.git motr
    ```
  - Create a persistent alias for the `m0vg` script:
    ```bash
    cat <<EOF >> $HOME/.bash_profile
    # use the actual Motr location on your host system
    export MOTR_SRC=$HOME/src/motr
    alias m0vg="\$MOTR_SRC/scripts/m0vg"
    EOF
    ```
    Exit and re-launch the Git Bash terminal. At this point the setup should be complete.
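  To confirm the alias resolves in the new shell:
  ```bash
  type m0vg     # should print the alias definition
  m0vg --help   # should show the script's usage info
  ```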
* Run
  - Follow the steps from the Run section under Quick Start (MacOS) above.
    NOTE: during `m0vg up <node>` command execution you may be asked to enter your Windows username and password, and then to grant permission to create a Windows shared directory. To avoid manually entering the credentials for every node, set the SMB_USERNAME/SMB_PASSWORD environment variables to the corresponding values. Note: for security reasons, make sure SMB_PASSWORD is not saved in your bash history; see the snippet below.
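    One way to set these variables without leaving the password in the shell history (a minimal sketch; `read -rs` prompts silently):
    ```bash
    export SMB_USERNAME='your-windows-username'
    read -rs -p 'SMB password: ' SMB_PASSWORD && export SMB_PASSWORD
    ```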
## Overview

This directory contains scripts for the quick deployment of a "devvm" virtual machine (based on stock CentOS 7 by default), prepared for Motr development and testing on a local desktop or laptop.
The virtual machine is automatically created from the official CentOS 7 base image, which is downloaded from the Vagrant Cloud repository. After provisioning and installation of the required rpm packages, including build tools and the latest Lustre from Whamcloud's repository, it takes about 2.5GB of extra disk space per VM.
Besides the main virtual machine, which can be used as a build node, additional machines can be provisioned to provide a cluster-like environment for debugging and testing Motr on multiple nodes. The main machine is named cmu, machines with attached disks are named ssuN, and "client" machines are named clientN, where N is a natural number.
All machines are accessible by name (with the `.local` suffix) within their private network, with password-less ssh access from the main node to the other test nodes. A directory containing the Motr source code is shared with each machine over NFS. This should provide a short enough "prepare/build/test" cycle for an efficient development workflow.
Depending on the host OS, different virtualization providers are supported: on Linux those are Libvirt/KVM and VirtualBox, on macOS VMware Fusion and VirtualBox, and on Windows VMware Workstation and VirtualBox.

In order to run these scripts, additional tools have to be installed first. It's assumed that either macOS, Windows or Linux is used as the host operating system.
## Requirements

* Minimum Host OS
  - 8GB of RAM
  - 10GB of free disk space
  - 2 CPU cores

* Additional Software/Tools:
  - VMware Fusion (for macOS) or VMware Workstation (for Windows) OR VirtualBox (VMware is recommended for a better experience in terms of memory utilisation)
  - `libvirt` + `qemu-kvm` (Linux only)
  - Vagrant
  - Vagrant VMware plugin + Vagrant VMware Utility (in case of VMware)
  - Ansible (macOS and Linux only)
  - Git for Windows (Windows only)
On Ubuntu Linux all of the above prerequisites can be installed with a single command:

```bash
sudo apt install qemu-kvm libvirt-bin vagrant ansible
```
Though it's actually better to get more up-to-date versions of Vagrant and Ansible than those provided by the distribution. The procedure is the same as described below for macOS.
On macOS the easiest way to install those tools is to download the VirtualBox/VMware Fusion and Vagrant packages from their official web sites (refer to the links above), and install Ansible using the Python package manager `pip`, which is available on macOS "out of the box":

```bash
# install for current user only
# make sure that '$HOME/.local/bin' is in your PATH
pip install --user ansible

# install system-wide
sudo pip install ansible
```
Another popular alternative is to use the MacPorts or Homebrew package managers:

```bash
# install Ansible using MacPorts
sudo port install py36-ansible

# install Ansible using Homebrew
brew install ansible
```
After Vagrant is installed, a couple of plugins need to be installed as well. On Linux it is `vagrant-libvirt` (for KVM support); on macOS/Windows it's `vagrant-vbguest` when using VirtualBox, and `vagrant-vmware-desktop` when using VMware Fusion/Workstation:

```bash
# Linux with Qemu/KVM
vagrant plugin install vagrant-libvirt

# macOS with VirtualBox
vagrant plugin install vagrant-vbguest

# macOS/Windows with VMware Fusion/Workstation
vagrant plugin install vagrant-vmware-desktop
```
It's highly recommended to install a few more Vagrant plugins for a better user experience:

* `vagrant-env` -- for saving commonly used configuration variables in a config file
* `vagrant-hostmanager` -- for managing the /etc/hosts file on guest machines
* `vagrant-scp` -- for easier file copying between the host and VM
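The installed plugins can be verified with Vagrant itself:

```bash
# should list vagrant-env, vagrant-hostmanager, vagrant-scp and the
# provider-specific plugin installed above
vagrant plugin list
```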
## DevVM provisioning

After installing the required tools from the above section, all that remains is to run the `vagrant up` command in the directory containing this README file, which will do the rest of the work. But there is a more convenient way to achieve the same result:

```bash
./scripts/m0vg up
```
The `m0vg` helper script is a wrapper around Vagrant and Ansible commands that can be symlinked somewhere into the `PATH` and called from any directory. Check out `m0vg --help` for more info.
It will spawn a VM and configure it using the Ansible "playbook" `scripts/provisioning/cmu.yml`, which specifies all the Motr dependencies that should be installed in order to build and run Motr. It will install Lustre 2.10.4 from the official Whamcloud repository.
During provisioning, Vagrant might pause and ask for the user password; this is needed for NFS auto-configuration (it will add a new entry in `/etc/exports` and restart the `nfsd` service).
By default, Vagrant creates a `vagrant` user inside the VM with password-less `sudo` privileges. The user password is also `vagrant`.
When provisioning is finished it should be possible to log into the VM with the `./scripts/m0vg ssh` command. Please refer to the Vagrant basics section below for the list of other useful Vagrant commands.
If a cluster-like environment is needed, more machines can be provisioned:

```bash
./scripts/m0vg up cmu /ssu/ /client/
```

The additional parameters are also explained in the Vagrant basics section below.
It is possible to control different parameters of the `Vagrantfile` via environment variables or an `.env` file that should be placed alongside the `Vagrantfile`. For instance, the following two examples do the same thing, but with the latter there is no need to specify the env variables every time a vagrant command is executed; they will be loaded from the `.env` file:

```bash
# -1- using env variables
M0_SSU_NR=5 M0_CLIENT_NR=3 vagrant up

# -2- using env file
cat .env
M0_SSU_NR=5
M0_CLIENT_NR=3

vagrant up
```
By the way, there is no need to create the `.env` file manually: `m0vg env edit` helps with that. A complete list of supported variables is printed by the `m0vg params` command.
All additional nodes can be accessed from the main machine (cmu) by their name in the `.local` domain. For example, here is how to execute a command on ssu1 from cmu:

```bash
ssh ssu1.local <command>
```
The host directory containing the Motr sources will be mounted over NFS on each VM under `/data`.
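To confirm the share is actually mounted, a quick check can be run inside a VM (a sanity check, not part of the provisioning):

```bash
# run inside the VM
df -hT /data          # should show an nfs (or shared-folder) filesystem
mount | grep /data
```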
NOTE: one important aspect of how Vagrant works is that it creates a hidden `.vagrant` directory alongside the `Vagrantfile`, where it keeps all the configuration data related to the provisioned VMs. If that directory is lost, access to the VMs is lost as well, which can happen unintentionally as a result of `git clean -dfx`. This is another reason to use the `m0vg` script, which takes care of it by moving the `.vagrant` directory outside of the Motr source tree.
## Building and running Motr

Normally, Motr sources should be accessible over NFS (or a native VirtualBox/VMware shared folder) on each VM under the `/data` directory:

```bash
# build Motr in source tree
cd /data/motr
./scripts/m0 make
```
If, for some reason, Vagrant hasn't been able to configure the NFS share, it is still possible to copy the Motr sources to the VM with the help of the `vagrant-scp` plugin:

```bash
# on the host
tar -czf ~/motr.tar.gz $MOTR_SRC
m0vg scp ~/motr.tar.gz :~

# on the VM
cd ~
tar -xf motr.tar.gz
cd motr
./autogen.sh && ./configure && make rpms-notests
```
The resulting rpm files will be available in the `~/rpmbuild/RPMS/x86_64/` directory. To verify them, they can be installed with:

```bash
sudo yum install rpmbuild/RPMS/x86_64/*
```
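To double-check the result afterwards (assuming the package names contain "motr"):

```bash
rpm -qa | grep -i motr
```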
## Try single-node Motr cluster

To bootstrap a single-node cluster, Hare should also be installed. Here is a short script which can be run on the VM to prepare everything:

```bash
[[ -d cortx-motr ]] || {
    git clone --recursive https://github.com/Seagate/cortx-motr.git &&
    ln -s cortx-motr motr
}
cd motr
echo 'Building and installing Motr...'
./autogen.sh && ./configure --disable-expensive-checks && make -j8 &&
    ./scripts/install-motr-service
cd -

[[ -d cortx-hare ]] || {
    git clone --recursive https://github.com/Seagate/cortx-hare.git &&
    ln -s cortx-hare hare
}
cd hare
echo 'Building and installing Hare...'
make && make devinstall
cd -

# Create block devices
mkdir -p /var/motr
for i in {0..9}; do
    dd if=/dev/zero of=/var/motr/disk$i.img bs=1M seek=9999 count=1
    losetup /dev/loop$i /var/motr/disk$i.img
done

# Prepare CDF (Cluster Description File)
[[ -f singlenode.yaml ]] || cp hare/cfgen/examples/singlenode.yaml ./
sed 's/localhost/cmu/' -i singlenode.yaml
sed 's/data_iface: eth./data_iface: eth0/' -i singlenode.yaml
```
After all the above steps have completed successfully, the single-node Motr cluster is ready for its first start:

```bash
hctl bootstrap --mkfs singlenode.yaml
```
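Once it's up, the cluster state can be inspected, and the cluster stopped again, with Hare's `hctl` tool:

```bash
# show cluster processes and their states
hctl status

# stop the cluster gracefully
hctl shutdown
```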
## Vagrant basics

Vagrant can be thought of as a scriptable unification API on top of various virtualization providers, like VirtualBox, VMware, KVM etc. From a user perspective, all virtual machine configuration is done in a single `Vagrantfile`, which essentially is just a `ruby` script. It's processed every time the `vagrant` command is executed, which expects to find it in the current working directory.

Most common Vagrant commands are:
```bash
# checking status of VM(s), e.g. running, halted, destroyed
vagrant status

# creating a VM if it doesn't exist or starting it if it's stopped;
# if there are provisioning steps, they are performed only once, when the VM
# is created/started for the first time
vagrant up

# stopping (powering off) a VM in a graceful way
vagrant halt

# forcing a VM to power off, as if unplugging the power cable
vagrant halt -f

# logging into the VM
vagrant ssh

# repeating provisioning steps (VM should be running)
vagrant provision

# repeating provisioning steps (VM should be stopped)
vagrant up --provision

# destroying the VM and all associated data disks
vagrant destroy
```
Most of the Vagrant commands accept a VM name as an argument if there are multiple VMs configured in the `Vagrantfile`. It is also possible to specify a regular expression instead of a single name to operate on several VMs:

```bash
# start a 'cmu' VM
vagrant up cmu

# start all SSUs
vagrant up /ssu/

# start all VMs
vagrant up cmu /ssu/ /client/
```
NOTE: on a Windows host, Ansible runs on the guest VMs (not on the host) and all the Ansible task scripts are rsync-ed to the guests under the `/vagrant/` folder. So whenever the scripts are updated on the host, they can be rsync-ed to the guests (in order to pick up the changes) with the following command:

```bash
m0vg rsync cmu /ssu/ /client/
```
## Streamlining VMs creation and provisioning with snapshots

It might be useful to save the VM state just after provisioning, for instant access to a clean VM without re-doing the complete provisioning from scratch. Please note that it's better to power off the VMs before a snapshot is made:

```bash
# power off all VMs
m0vg halt

# create snapshots; 'clean-vm' is just a name of the snapshot and can be
# changed to your liking
m0vg snapshot save cmu clean-vm
m0vg snapshot save ssu1 clean-vm
m0vg snapshot save ssu2 clean-vm
m0vg snapshot save client1 clean-vm
```
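Saved snapshots can also be listed, so the names don't need to be remembered (assuming `list` is passed through to Vagrant's `snapshot` sub-command the same way as `save` and `restore`):

```bash
# list snapshots of the 'cmu' VM
m0vg snapshot list cmu
```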
Then later, in order to discard the current state and restore a clean VM, one may do:

```bash
m0vg snapshot restore --no-provision cmu clean-vm
```

If the `--no-provision` option is omitted, the Ansible provisioning will be repeated after the restore phase. It may come in handy for getting the latest security updates for the VM since snapshot creation.
## Managing multiple VM sets with workspaces

Workspaces are a handy little feature of the `m0vg` script. It exploits the fact that Vagrant keeps all configuration data related to a `Vagrantfile` in a single directory: if that special directory is replaced with another one, Vagrant will use it instead. So it's possible to keep around multiple such directories and switch between them, thus having multiple virtual clusters.

`m0vg` supports the following actions on workspaces:

```bash
m0vg workspace list
m0vg workspace add <NAME>
m0vg workspace switch <NAME>
```

The `workspace` sub-command can be shortened to just `ws`.
`m0vg` also maintains a dedicated `.env` file for each workspace, so when switching between workspaces each keeps its own set of environment variables.
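For example, a separate throwaway cluster can be kept alongside the main one (the workspace name here is arbitrary):

```bash
# create and switch to a new workspace for experiments
m0vg ws add experiments
m0vg ws switch experiments

# this workspace gets its own .env, so its cluster can be sized differently
m0vg env edit

# check which workspaces exist, then switch back when done
m0vg ws list
```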
## Executing Ansible commands manually

In some rare cases it can be useful to run Ansible commands against the Vagrant VMs manually. For this purpose the `m0vg` script supports the `ansible` command. Here are just a few examples:

```bash
# list all hosts present in the cluster
m0vg ansible cluster.yml --list-hosts

# list all the tasks that would be performed for the 'cmu' machine
m0vg ansible cmu.yml --list-tasks
```
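Assuming the extra arguments are passed straight through to `ansible-playbook` (as the examples above suggest), other standard flags should work too, e.g. a dry run:

```bash
# report what would change without applying anything
m0vg ansible cmu.yml --check --diff
```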
## VirtualBox / VMware / Libvirt specifics

A VM created by Vagrant doesn't normally show up in VMware's GUI; this can be achieved by starting the VM with GUI enabled:

```bash
M0_VM_ENABLE_GUI=yes m0vg up cmu
```

This needs to be done only once, and the VM will appear in VMware's VM Library.