
Commit 419074f

Update README file to match current deployment process.

1 parent f0059a0

1 file changed: dev_env/README.md (+71 −19 lines)
This software encapsulates the deployment of the Unity Algorithm Development Services (ADS).

Deploys these Unity ADS services:

* Development Environment
* EFS Shared Storage
* Jupyterhub
* EC2 Support Instance (optional)

## Development Environment

For each deployment instance (i.e. development, test, production), define the following environment variables to customize the install to the environment. For example, for the test deployment you would define the following variables:

42+
4243
```
4344
export TF_VAR_unity_instance="Unity-Test"
4445
export TF_VAR_tenant_identifier="test"
45-
export TF_VAR_cognito_base_url="https://unitysds-test.auth.us-west-2.amazoncognito.com"
4646
export TF_VAR_s3_identifier="test"
47+
export TF_VAR_efs_identifier="uads-development-efs-fs"
4748
```

The `unity_instance` variable should match the string used in the Unity instance's VPC name, as this variable is used to look up the VPC. The `s3_identifier` variable must match the instance string inserted into the names of the instance's Cumulus S3 buckets. The `efs_identifier` variable is used to create the EFS shared storage resource.

5152
Each of the following Development Environment components need to be intialized individually by changing into their respective directory and running Terraform there. The Development Enviroment base directory under the repository is `dev_env/`.
5253

53-
### Shared Storage
54+
The steps below assume that the above environment variables have already been defined.
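
Before running any of the Terraform steps, it can help to fail fast if a variable is missing. The following is a minimal sketch of such a guard, assuming the variable names from the example exports above (the `check_tf_vars` helper name is hypothetical, not part of the repository):

```shell
# Hypothetical guard: fail fast if any required TF_VAR_* variable is
# unset or empty before invoking Terraform.
check_tf_vars() {
  for var in TF_VAR_unity_instance TF_VAR_tenant_identifier \
             TF_VAR_s3_identifier TF_VAR_efs_identifier; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
}

# Example use before a Terraform run:
# check_tf_vars && terraform apply
```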

### EFS Shared Storage

Shared storage must be installed prior to initializing Jupyterhub. The Terraform scripts in this directory create an EFS server for Jupyterhub intended to host files common to all users. These scripts are separated out from the Jupyterhub installation scripts to enable removing and rebuilding the Jupyterhub instance without deleting the EFS-stored files.

5760
1. Change to the `dev_env/shared_storage` directory
5861
2. Run `terraform init`
5962
3. Run `terraform apply`
6063

### Cognito Initial Setup

The connection of the Jupyterhub instance to the Unity Cognito authentication requires running commands from the `cognito` directory twice.

The initial setup will generate a Cognito application client along with the client id and secret necessary for feeding into the Jupyterhub deployment.

1. Change to the `dev_env/cognito` directory
2. Run `terraform init`
3. Run `terraform apply`

Once Terraform has finished successfully, run the following to bring the Cognito id and secret into the environment variables required in the next step:

```
$ eval $(./cognito_config_envs.sh)
```

Run the following to verify that the environment variables were successfully set:

```
$ env | grep TF_VAR_cogn
```

Note that the Cognito resource could exist in a separate venue from the Jupyterhub instance.
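
The `eval $(...)` pattern above works because the helper script writes `export` statements to standard output, which `eval` then applies to the current shell. A minimal illustration of the mechanism, using a stand-in script with placeholder values (not the real output of `cognito_config_envs.sh`):

```shell
# Stand-in for cognito_config_envs.sh: emits export statements on stdout.
# The values here are placeholders for illustration only.
cat > /tmp/example_config_envs.sh <<'EOF'
#!/bin/sh
echo 'export TF_VAR_cognito_client_id="example-client-id"'
echo 'export TF_VAR_cognito_client_secret="example-secret"'
EOF
chmod +x /tmp/example_config_envs.sh

# Apply the emitted exports to the current shell.
eval "$(/tmp/example_config_envs.sh)"
```

Without the `eval`, the script's exports would run in a child process and the variables would never reach the calling shell.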

### Jupyterhub

Jupyterhub must be installed after the EFS shared storage Terraform scripts and the Cognito initial setup have been run.

1. Change to the `dev_env/jupyterhub` directory
2. Run `terraform init`
3. Run `terraform apply`

For the above steps it is recommended to keep the `KUBE_CONFIG_PATH` environment variable unset. Otherwise, if this is the first time this particular cluster has been set up and you have multiple clusters listed in your Kubernetes config file, the EKS handling within Terraform might get confused by trying to access a non-existent cluster.

Three useful variables are output from the Terraform execution:

* `jupyter_base_uri` - The URL used to log into the Jupyterhub cluster
* `eks_cluster_name` - The name of the generated EKS cluster
* `kube_namespace` - The namespace used with `kubectl` for investigating the EKS cluster

After successfully running the Terraform scripts in this directory for the first time, you must define the `KUBE_CONFIG_PATH` environment variable for subsequent runs that apply changes:

```
export KUBE_CONFIG_PATH=$HOME/.kube/config
```

Run the `update_kube_config.sh` script to use the generated EKS cluster name from Terraform to create a new entry in the Kubernetes config file, allowing use of the `kubectl` command for querying the cluster.
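
The contents of `update_kube_config.sh` are not reproduced here, but a script with that job would plausibly combine the `eks_cluster_name` Terraform output with the AWS CLI. A hedged sketch of the idea (the function name is hypothetical and the `us-west-2` region is an assumption):

```shell
# Hypothetical sketch of the update_kube_config.sh logic: read the
# generated EKS cluster name from Terraform state and register it in
# the Kubernetes config via the AWS CLI.
update_kube_config() {
  cluster_name=$(terraform output -raw eks_cluster_name) || return 1
  aws eks update-kubeconfig --region us-west-2 --name "$cluster_name"
}
```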

Now you can query the status of the cluster pods as follows:

```
$ kubectl --namespace=$kube_namespace get pods
```

The status for all pods should be `Running`. If not, query the log for the failing pod:

```
$ kubectl --namespace=$kube_namespace logs $pod_id
```

Where `$pod_id` comes from the output of the `get pods` command.
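
To check the "all pods `Running`" condition mechanically rather than by eye, the `get pods` output can be filtered. A small sketch, assuming the standard column layout of `kubectl get pods` output (the helper name is illustrative):

```shell
# Count pods whose STATUS column (third field) is not "Running" in
# `kubectl get pods` output read from stdin; 0 means all pods are up.
not_running_count() {
  tail -n +2 | awk '$3 != "Running"' | wc -l | tr -d ' '
}

# Typical use:
# kubectl --namespace=$kube_namespace get pods | not_running_count
```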

### Cognito Final Setup

Change back to the `cognito` directory and run the following sequence to publish the Jupyterhub callback URL to Cognito:

```
$ eval $(./jupyter_uri_env.sh)
$ env | grep TF_VAR_jupyter_base_url
$ terraform apply
```
136+
Now that the `TF_VAR_jupyter_base_url` variable has been defined the Terraform process will update the Cogntio client to allow connection from the Jupyterhub instance.
137+
138+

### Test Jupyterhub

Now test the Jupyterhub installation by navigating to the URL from the `jupyter_base_uri` output of the `jupyterhub` directory.

### EC2 Support Instance

The Support EC2 instance can optionally be used to manage the EFS shared storage. It must be installed after the EFS shared storage Terraform scripts have run. These scripts require the creation of a private key that will be used for logging into the instance.

1. Change to the `dev_env/support_instance` directory
2. Generate the `private_key.pem` file: `$ openssl genrsa -out private_key.pem 2048`
