To begin, export environment variables as shown in "Sample .bashrc" below.
These environment variables will be used to create Azure resources.
export RESOURCEGROUP=myresourcegroup
export LOCATION=westus
export VMNAME=myvmname
export ADMINUSER=azureuser
export STORAGEACCTNAME=mystorageaccount
# the following will be the same as the variables exported on the VM below
export AZURE_CONNECTION_STRING="1234567890" # use the connection string for your storage account; note the quotation marks around the string
export BUCKET_NAME=hsdstest # set to the name of the container you will be using
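If you add these exports to your ~/.bashrc, reload the file so the variables take effect in your current shell:
source ~/.bashrc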
- Set up Pip and Python 3 on your local machine if not already installed (e.g. with Miniconda: https://docs.conda.io/en/latest/miniconda.html).
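For example, on Linux the Miniconda installer can be downloaded and run as follows (a sketch; the installer filename will differ for other platforms):
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh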
- Install azure-cli:
pip install azure-cli
- Validate that the az-cli runtime version is at least 2.0.80:
az version
- Log in to your Azure subscription using az-cli:
az login
- After successful login, the list of available subscriptions will be displayed. If you have access to more than one subscription, set the proper subscription to be used:
az account set --subscription [name]
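You can confirm which subscription is active with:
az account show --output table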
- Run the following command to create an Azure resource group:
az group create --name $RESOURCEGROUP --location $LOCATION
- Create an Ubuntu Virtual Machine:
az vm create --resource-group $RESOURCEGROUP --name $VMNAME --image UbuntuLTS --admin-username $ADMINUSER --public-ip-address-dns-name $VMNAME --generate-ssh-keys
The --generate-ssh-keys parameter is used to automatically generate an SSH key and put it in the default key location (~/.ssh). To use a specific set of keys instead, use the --ssh-key-value option.
Note: To use $VMNAME as your public DNS name, it will need to be unique within the $LOCATION region where the VM is located.
- The above command will output values after the successful creation of the VM. Keep the publicIpAddress for use below.
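If you need to look up the public IP address again later, it can be queried with the CLI, for example:
az vm show -d -g $RESOURCEGROUP -n $VMNAME --query publicIps -o tsv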
- Open port 80 to web traffic:
az vm open-port --port 80 --resource-group $RESOURCEGROUP --name $VMNAME
- Create a storage account if one does not exist:
az storage account create -n $STORAGEACCTNAME -g $RESOURCEGROUP -l $LOCATION --sku Standard_LRS
- Create a container for HSDS in the storage account:
az storage container create --name $BUCKET_NAME --connection-string $AZURE_CONNECTION_STRING
Note: The connection string for the storage account can be found in the portal under Settings > Access keys, or via this CLI command:
az storage account show-connection-string -n $STORAGEACCTNAME -g $RESOURCEGROUP
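If you prefer, the variable can be set directly from the CLI output; a sketch (the --query path assumes the command's default JSON output):
export AZURE_CONNECTION_STRING=$(az storage account show-connection-string -n $STORAGEACCTNAME -g $RESOURCEGROUP --query connectionString -o tsv)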
On the VM, export environment variables as shown in "Sample .bashrc" below. IMPORTANT: If you are not adding these variables to your .bashrc, they must be exported in the "Create environment variables" step below, after Docker is installed.
These environment variables will be passed to the Docker containers on startup.
export AZURE_CONNECTION_STRING="1234567890" # use the connection string for your storage account; note the quotation marks around the string
export BUCKET_NAME=hsdstest # set to the name of the container you will be using
export HSDS_ENDPOINT=http://myvmname.westus.cloudapp.azure.com # Set to the public DNS name of the VM. Use https protocol if SSL is desired and configured
Follow these steps to set up HSDS:
- SSH to the VM created above. Replace [publicIpAddress] with the public IP displayed in the output of your VM creation command above:
ssh azureuser@[publicIpAddress]
- Install Docker and docker-compose if necessary (see "Docker Setup" below)
- Ensure the container for HSDS has been created (see the container creation step in "Set up your Azure environment" above)
- Get project source code:
git clone https://github.com/HDFGroup/hsds
- Go to the admin/config directory:
cd hsds/admin/config
- Copy the file "passwd.default" to "passwd.txt". Add any usernames/passwords you wish. Modify existing passwords (for admin, test_user1, test_user2) for security.
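For reference, a minimal sketch of this step (assuming the colon-separated username:password format used in passwd.default; the passwords shown are placeholders):
cp passwd.default passwd.txt
Each line of passwd.txt is then a username:password pair, e.g. admin:mysecretpassword.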
- Create environment variables as in "Sample .bashrc" above
- From the hsds directory, build the Docker image:
docker build -t hdfgroup/hsds .
- Start the service:
./runall.sh <n>
where n is the number of containers desired (defaults to 1)
- Run:
docker ps
and verify that the containers are running: hsds_head, hsds_sn_[1-n], hsds_dn_[1-n]
- Run:
curl $HSDS_ENDPOINT/about
and verify that "cluster_state" is "READY" (it might need a minute or two)
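For illustration, the /about response is a small JSON document; among other fields it should include a line similar to:
"cluster_state": "READY"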
The following are instructions for installing Docker on Linux/Ubuntu. Details for other Linux distros may vary.
Run the following commands:
sudo apt-get update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo groupadd docker
(only needed if the docker group doesn't already exist)
sudo gpasswd -a $USER docker
- Log out and back in again (you may also need to stop/start the Docker service)
- Run:
docker ps
to verify that Docker is running.
- Install docker-compose.
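For example, on Ubuntu docker-compose can be installed from the distribution packages (it can also be installed with pip):
sudo apt install docker-compose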
The following steps verify the installation and configure user home folders. They can be run on the server VM or on your client machine. Important: the trailing slashes in the paths below are essential.
- Install pip if not installed:
sudo apt install python3-pip
- Set an environment variable ADMIN_PASSWORD with the value used in the passwd.txt file. E.g.:
export ADMIN_PASSWORD=admin
- Set an environment variable USER_PASSWORD with the password for test_user1 in the passwd.txt file. E.g.:
export USER_PASSWORD=test
- Get the hsds project if you haven't already:
git clone https://github.com/HDFGroup/hsds
- In the hsds directory, run the integration test:
python testall.py --skip_unit
Ignore "WARNING: is test data setup?" messages for now.
- Install h5py:
pip install h5py
- Install h5pyd (Python client SDK):
pip install h5pyd
- Configure h5pyd:
hsconfigure
Server endpoint: the value of the $HSDS_ENDPOINT environment variable
Username: a username from the hsds/admin/config/passwd.txt file above
Password: the matching password from the hsds/admin/config/passwd.txt file above
- To set up test data, download the following file:
wget https://s3.amazonaws.com/hdfgroup/data/hdf5test/tall.h5
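Optionally, confirm that h5pyd can reach the server before loading any data (hsinfo is installed with h5pyd and uses the settings saved by hsconfigure):
hsinfo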
- Create a test folder:
hstouch -u test_user1 -p $USER_PASSWORD /home/test_user1/test/
- Import into hsds:
hsload -v -u test_user1 -p $USER_PASSWORD tall.h5 /home/test_user1/test/
- Verify upload:
hsls -r -u test_user1 -p $USER_PASSWORD /home/test_user1/test/tall.h5
- Rerun the integration test:
python testall.py --skip_unit
You should not see any WARNING messages now.
- Create home folders for other users if desired:
hstouch -u admin -p $ADMIN_PASSWORD -o USERNAME /home/USERNAME/
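For example, to create a home folder for a hypothetical user named jsmith:
hstouch -u admin -p $ADMIN_PASSWORD -o jsmith /home/jsmith/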
NOTE: If the initial run of testall.py (the integration test step above) fails for any reason and does not create the home directory, you can create it manually as follows:
hstouch -u admin -p $ADMIN_PASSWORD /home/
You can then add home folders for users as desired.
To get the latest code changes from the HSDS repo, do the following:
- Shutdown the service:
./stopall.sh
- Get code changes:
git pull
- Rebuild the Docker image:
./build.sh
- Start the service:
./runall.sh
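If you prefer, the update sequence can be run as one chained command (it stops at the first failure):
./stopall.sh && git pull && ./build.sh && ./runall.sh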
To change passwords or add new user accounts, do the following:
- Shutdown the service:
./stopall.sh
- Add new usernames/passwords to the hsds/admin/config/passwd.txt file
- Rebuild the Docker image:
./build.sh
- Start the service:
./runall.sh