[WIP] Towards automation of deployment creation/teardown #926

Closed · wants to merge 1 commit
10 changes: 6 additions & 4 deletions .travis.yml
@@ -19,10 +19,12 @@ addons:
- gettext

before_install:
- openssl aes-256-cbc -K $encrypted_ead445d7a1e2_key -iv $encrypted_ead445d7a1e2_iv
-in gcp-credentials.json.enc -out gcp-credentials.json -d
- openssl aes-256-cbc -K $encrypted_ead445d7a1e2_key -iv $encrypted_ead445d7a1e2_iv
-in application_secrets.json.enc -out application_secrets.json -d
- (cd deployment/dev ; openssl aes-256-cbc -K $encrypted_ead445d7a1e2_key -iv $encrypted_ead445d7a1e2_iv
-in gcp-credentials.json.enc -out gcp-credentials.json -d)
- (cd deployment/dev ; openssl aes-256-cbc -K $encrypted_ead445d7a1e2_key -iv $encrypted_ead445d7a1e2_iv
-in application_secrets.json.enc -out application_secrets.json -d)
- rm deployment/active
- (cd deployment ; ln -s dev active)
- source environment

install:
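
The encrypted credential files decrypted above would typically be produced with the Travis CLI. The sketch below is illustrative only; re-encrypting generates fresh `$encrypted_*_key`/`$encrypted_*_iv` variable names, which must then be referenced in `before_install`:

    gem install travis
    (cd deployment/dev && travis encrypt-file gcp-credentials.json)
    (cd deployment/dev && travis encrypt-file application_secrets.json)
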
7 changes: 7 additions & 0 deletions Makefile
@@ -61,8 +61,15 @@ scaletest:

deploy: deploy-chalice deploy-daemons

components := $(notdir $(patsubst %/,%,$(wildcard deployment/active/*/)))
deploy-infra:
scripts/enable_gs_services.sh
$(MAKE) -C deployment apply
scripts/set_event_relay_parameters.py

deploy-chalice:
$(MAKE) -C chalice deploy
scripts/set_apigateway_base_path_mapping.py

deploy-daemons: deploy-daemons-serial deploy-daemons-parallel

111 changes: 22 additions & 89 deletions README.md
@@ -34,15 +34,6 @@ The tests require certain node.js packages. They must be installed using `npm`,

Tests also use data from the data-bundle-examples subrepository. Run: `git submodule update --init`

#### Environment Variables

Environment variables are required for test and deployment. The required environment variables and their default values
are in the file `environment`. To customize the values of these environment variables:

1. Copy `environment.local.example` to `environment.local`
2. Edit `environment.local` to add custom entries that override the default values in `environment`

Run `source environment` now and whenever these environment files are modified.

Review comment (Member): Add some instructions on things that need to be set up prior to running config, for example certificates or Route 53.

#### Configuring cloud-specific access credentials

@@ -51,47 +42,34 @@ Run `source environment` now and whenever these environment files are modified.
1. Follow the instructions in http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html to get the
`aws` command line utility.

2. Create an S3 bucket that you want DSS to use and in `environment.local`, set the environment variable `DSS_S3_BUCKET`
to the name of that bucket. Make sure the bucket region is consistent with `AWS_DEFAULT_REGION` in
`environment.local`.

3. Repeat the previous step for

* DSS_S3_CHECKOUT_BUCKET
* DSS_S3_CHECKOUT_BUCKET_TEST
* DSS_S3_CHECKOUT_BUCKET_TEST_FIXTURES

4. If you wish to run the unit tests, you must create two more S3 buckets, one for test data and another for test
fixtures, and set the environment variables `DSS_S3_BUCKET_TEST` and `DSS_S3_BUCKET_TEST_FIXTURES` to the names of
those buckets.

Hint: To create S3 buckets from the command line, use `aws s3 mb --region REGION s3://BUCKET_NAME/`.
2. To configure your account credentials and named profiles for the `aws` cli, see
https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html and
https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html
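
Hint: a named profile can be created and selected from the command line; the profile name, keys, and region below are placeholders, not values required by DSS:

    aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID --profile hca-dev
    aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY --profile hca-dev
    aws configure set region us-east-1 --profile hca-dev
    export AWS_PROFILE=hca-dev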

##### GCP

1. Follow the instructions in https://cloud.google.com/sdk/downloads to get the `gcloud` command line utility.

2. In the [Google Cloud Console](https://console.cloud.google.com/), select the correct Google user account on the top
right and the correct GCP project in the drop down in the top center. Go to "IAM & Admin", then "Service accounts",
then click "Create service account" and select "Furnish a new private key". Under "Roles" select "Project – Owner",
"Service Accounts – Service Account User" and "Cloud Functions – Cloud Function Developer". Create the account and
download the service account key JSON file.
2. Run `gcloud auth login` to authorize the `gcloud` CLI.
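
Hint: a minimal sketch of the `gcloud` setup, with a placeholder project ID (`configure.py` later reads the active project via `gcloud config get-value project`):

    gcloud auth login                         # authorize the gcloud CLI
    gcloud config set project MY_GCP_PROJECT  # select the GCP project to deploy into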

#### Terraform

3. In `environment.local`, set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path of the service
account key JSON file.
Some cloud assets are managed by Terraform, including the storage buckets and the Elasticsearch domain.

4. Choose a region that has support for Cloud Functions and set `GCP_DEFAULT_REGION` to that region. See
https://cloud.google.com/about/locations/ for a list of supported regions.
1. Follow the instructions in https://www.terraform.io/intro/getting-started/install.html to get the
`terraform` command line utility.

5. Run `gcloud auth activate-service-account --key-file=/path/to/service-account.json`.
2. Run `configure.py` to prepare the deployment.

6. Run `gcloud config set project PROJECT_ID` where PROJECT_ID is the ID, not the name (!) of the GCP project you
selected earlier.
3. Infrastructure deployment definitions may be further customized by editing the Terraform scripts in the
`deployment/active` subdirectories.

7. Enable required APIs: `gcloud services enable cloudfunctions.googleapis.com`; `gcloud services
enable runtimeconfig.googleapis.com`
Now you may deploy the cloud assets with:

    make deploy-infra
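
Under the hood this runs a couple of helper scripts plus `make -C deployment apply`, which drives Terraform once per component. A roughly equivalent manual run for a single component (the `buckets` component is used for illustration; the `--backend-config` flag applies only if `backend_config.hcl` is present for the stage):

    cd deployment/active/buckets
    terraform init --backend-config=../backend_config.hcl
    terraform apply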

8. Generate OAuth application secrets to be used for your instance:
##### GCP Application Secrets

3. Generate OAuth application secrets to be used for your instance:

1) Go to https://console.developers.google.com/apis/credentials (you may have to select Organization and Project
again)
@@ -107,22 +85,14 @@

6) Click the edit icon for the new credentials and click *Download JSON*

7) Place the downloaded JSON file into the project root as `application_secrets.json`

9. Create a Google Cloud Storage bucket and in `environment.local`, set the environment variable `DSS_GS_BUCKET` to the
name of that bucket. Make sure the bucket region is consistent with `GCP_DEFAULT_REGION` in `environment.local`.

10. Repeat the previous step for
7) Place the downloaded JSON file into the active stage root as `deployment/active/application_secrets.json`

* DSS_GS_CHECKOUT_BUCKET
* DSS_GS_CHECKOUT_BUCKET_TEST
* DSS_GS_CHECKOUT_BUCKET_TEST_FIXTURES
#### Environment Variables

11. If you wish to run the unit tests, you must create two more buckets, one for test data and another for test
fixtures, and set the environment variables `DSS_GS_BUCKET_TEST` and `DSS_GS_BUCKET_TEST_FIXTURES` to the names of
those buckets.
Environment variables are required for test and deployment. The required environment variables and their default values
are in the file `environment`. To customize the values of these environment variables, run `configure.py`.

Hint: To create GCS buckets from the command line, use `gsutil mb -c regional -l REGION gs://BUCKET_NAME/`.
Run `source environment` now and whenever `configure.py` is executed.
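
For example, to switch to (or refresh) a deployment and reload its variables into the current shell, a sketch along these lines works (stage name illustrative; re-running `configure.py` with `--accept-defaults` against an existing stage keeps its stored values):

    python3 configure.py --stage dev --accept-defaults   # re-points deployment/active at the stage
    source environment
    echo $DSS_DEPLOYMENT_STAGE                           # confirm the expected stage is active
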
Review comment (Member): Is this how you change deployments?


##### Azure

@@ -158,47 +128,10 @@ Run `make test` in the top-level `data-store` directory.
Assuming the tests have passed above, the next step is to manually deploy. See the section below for information on
CI/CD with Travis if continuous deployment is your goal.

The AWS Elasticsearch Service is used for metadata indexing. Currently, the AWS Elasticsearch Service must be configured
manually. The AWS Elasticsearch Service domain name must either:

* have the value `dss-index-$DSS_DEPLOYMENT_STAGE`

* or, the environment variable `DSS_ES_DOMAIN` must be set to the domain name of the AWS Elasticsearch Service instance
to be used.

For typical development deployments the t2.small.elasticsearch instance type is more than sufficient.
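
Hint: a rough sketch of creating such a domain from the command line (instance sizing and storage flags are illustrative only):

    aws es create-elasticsearch-domain \
        --domain-name dss-index-dev \
        --elasticsearch-cluster-config InstanceType=t2.small.elasticsearch,InstanceCount=1 \
        --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10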

Now deploy using make:

make deploy

Set up AWS API Gateway. The gateway is automatically set up for you and associated with the Lambda. However, to get a
friendly domain name, you need to follow the
directions [here](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html). In summary:

1. Generate a HTTPS certificate via AWS Certificate Manager (ACM). See note below on choosing a region for the
certificate.

2. Set up the custom domain name in the API gateway console. See note below on the DNS record type.

3. In Amazon Route 53 point the domain to the API gateway

4. In the API Gateway, fill in the endpoints for the custom domain name e.g. Path=`/`, Destination=`dss` and
`dev`. These might be different based on the profile used (dev, stage, etc).

5. Set the environment variable `API_DOMAIN_NAME` to your domain name in the `environment.local` file.

Note: The certificate should be in the same region as the API gateway or, if that's not possible, in `us-east-1`. If the
ACM certificate's region is `us-east-1` and the API gateway is in another region, the type of the custom domain name
must be *Edge Optimized*. Provisioning such a domain name typically takes up to 40 minutes because the certificate needs
to be replicated to all involved CloudFront edge servers. The corresponding record set in Route 53 needs to be an
**alias** A record, not a CNAME or a regular A record, and it must point to the CloudFront host name associated with the
edge-optimized domain name. Starting November 2017, API gateway supports regional certificates i.e., certificates in
regions other than `us-east-1`. This makes it possible to match the certificate's region with that of the API
gateway and cuts the provisioning of the custom domain name down to seconds. Simply create the certificate in the same
region as that of the API gateway, create a custom domain name of type *Regional*, and in Route 53 add a CNAME recordset
that points to the gateway's canonical host name.
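
As a rough CLI sketch of that regional path (domain name, ARN, and IDs are placeholders):

    # Request and validate a certificate in the API gateway's region
    aws acm request-certificate --domain-name dss.example.com --validation-method DNS

    # Create a regional custom domain name backed by that certificate
    aws apigateway create-domain-name \
        --domain-name dss.example.com \
        --regional-certificate-arn arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERT_ID \
        --endpoint-configuration types=REGIONAL

    # Map the API stage under the custom domain
    aws apigateway create-base-path-mapping \
        --domain-name dss.example.com --rest-api-id REST_API_ID --stage dev

    # Finally, add a Route 53 CNAME record pointing the domain at the
    # regionalDomainName returned by create-domain-name.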

If successful, you should be able to see the Swagger API documentation at:

https://<domain_name>
1 change: 1 addition & 0 deletions chalice/Makefile
@@ -6,5 +6,6 @@ deploy:
cp -R ../dss ../dss-api.yml chalicelib
cp "$(GOOGLE_APPLICATION_CREDENTIALS)" chalicelib/gcp-credentials.json
cp "$(GOOGLE_APPLICATION_SECRETS)" chalicelib/application_secrets.json
chmod -R ugo+rX chalicelib
./build_deploy_config.sh
../scripts/dss-chalice deploy --no-autogen-policy --stage $(DSS_DEPLOYMENT_STAGE) --api-gateway-stage $(DSS_DEPLOYMENT_STAGE)
107 changes: 107 additions & 0 deletions configure.py
@@ -0,0 +1,107 @@
#!/usr/bin/env python3

import os
import sys
import copy
import enum
import click
import subprocess
import dss_deployment


pkg_root = os.path.abspath(os.path.dirname(__file__)) # noqa


class Accept(enum.Enum):
all = enum.auto()
all_but_none = enum.auto()
nothing = enum.auto()


def run(command):
out = subprocess.run(command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding='utf-8')
try:
out.check_returncode()
except subprocess.CalledProcessError:
raise Exception(f'\t{out.stderr}')
return out.stdout.strip()


def request_input(info, key, stage, accept):
if info[key]['default'] is not None:
default = info[key]['default'].format(stage=stage)
else:
default = None

if Accept.all == accept:
print(f'setting {key}={default}')
info[key]['default'] = default
elif Accept.all_but_none == accept and default is not None:
print(f'setting {key}={default}')
info[key]['default'] = default
else:
print()
if info[key]['description']:
print(info[key]['description'])
val = click.prompt(f'{key}=', default)
if 'none' == val.lower():
val = None
info[key]['default'] = val


def get_user_input(deployment, accept):
if not deployment.variables['gcp_project']['default']:
deployment.variables['gcp_project']['default'] = run("gcloud config get-value project")

if not deployment.variables['gcp_service_account_id']['default']:
deployment.variables['gcp_service_account_id']['default'] = f'service-account-{deployment.stage}'

print(deployment.variables['API_DOMAIN_NAME'])

skip = ['DSS_DEPLOYMENT_STAGE']
for key in deployment.variables:
if key in skip:
continue
request_input(deployment.variables, key, deployment.stage, accept)


@click.command()
@click.option('--stage', prompt="Deployment stage name")
@click.option('--accept-defaults', is_flag=True, default=False)
def main(stage, accept_defaults):
deployment = dss_deployment.DSSDeployment(stage)
exists = os.path.exists(deployment.root)

if exists and accept_defaults:
accept = Accept.all
elif accept_defaults:
accept = Accept.all_but_none
else:
accept = Accept.nothing

get_user_input(deployment, accept)

deployment.write()
dss_deployment.set_active_stage(stage)

print()
print('Deployment Steps')
print('\t1. Customize Terraform scripting as needed:')
for comp in os.listdir(deployment.root):
path = os.path.join(deployment.root, comp)
if not os.path.isdir(path):
continue
print(f'\t\t{path}')
print('\t2. run `scripts/create_config_gs_service_account.sh`')
print('\t3. Visit the Google console to acquire `application_secrets.json`')
print('\t4. run `source environment`')
print('\t5. run `make deploy-infra`')
print('\t6. run `make deploy`')


if __name__ == "__main__":
main()
3 changes: 0 additions & 3 deletions daemons/Makefile
@@ -32,9 +32,6 @@ $(SERIAL_AWS_DAEMONS) $(PARALLEL_AWS_DAEMONS) scheduled-ci-build:
../tests/daemons/sample_s3_bundle_created_event.json.template \
../tests/daemons/a47b90b2-0967-4fbf-87bc-c6c12db3fedf.2017-07-12T055120.037644Z; \
fi
@if [[ $@ == "dss-s3-copy-sfn" || %@ == "dss-s3-copy-write-metadata-sfn" || $@ == "dss-checkout" || $@ == "dss-scalability-test" ]]; then \
$(DSS_HOME)/scripts/deploy_checkout_lifecycle.py; \
fi

dss-gs-event-relay:
$(DSS_HOME)/scripts/deploy_gcf.py $@ --entry-point "dss_gs_bucket_events_$(subst -,_,$(DSS_GS_BUCKET))"
11 changes: 11 additions & 0 deletions deployment/.gitignore
@@ -0,0 +1,11 @@
*
!.gitignore
!Makefile
!active
!*/
!/dev/**
!/prod/**
**/gcp-credentials.json
**/application_secrets.json
**/.terraform
**/local_variables.tf
51 changes: 51 additions & 0 deletions deployment/Makefile
@@ -0,0 +1,51 @@
COMPONENT=
STAGEPATH=${shell cd active && pwd -P}
STAGE=${shell basename $(STAGEPATH)}
DIRS=${shell find $(STAGE)/* -not -path "*/\.*" -type d}
COMPONENTS=$(notdir $(DIRS))
AWS_PROFILE=${shell cat $(STAGE)/local_variables.tf | jq -r .variable.aws_profile.default}

all: init

init:
@echo $(STAGE)
@echo $(COMPONENTS)
@for c in $(COMPONENTS); do \
$(MAKE) init-component STAGE=$(STAGE) COMPONENT=$$c; \
done

apply:
@echo $(STAGE)
@for c in $(COMPONENTS); do \
$(MAKE) apply-component STAGE=$(STAGE) COMPONENT=$$c; \
done

destroy:
@echo $(STAGE)
@for c in $(COMPONENTS); do \
$(MAKE) destroy-component STAGE=$(STAGE) COMPONENT=$$c; \
done

clean:
@echo $(STAGE)
@for c in $(COMPONENTS); do \
$(MAKE) clean-component STAGE=$(STAGE) COMPONENT=$$c; \
done

init-component:
@if [[ -e $(STAGE)/backend_config.hcl ]]; then \
cd $(STAGE)/$(COMPONENT); AWS_PROFILE=$(AWS_PROFILE) terraform init --backend-config=../backend_config.hcl; \
else \
cd $(STAGE)/$(COMPONENT); AWS_PROFILE=$(AWS_PROFILE) terraform init; \
fi

apply-component: init-component
cd $(STAGE)/$(COMPONENT); AWS_PROFILE=$(AWS_PROFILE) terraform apply

destroy-component: init-component
cd $(STAGE)/$(COMPONENT); AWS_PROFILE=$(AWS_PROFILE) terraform destroy

clean-component:
-cd $(STAGE)/$(COMPONENT) && rm -rf .terraform

.PHONY: init apply destroy clean init-component apply-component destroy-component clean-component
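
Assuming `deployment/active` points at the desired stage, this Makefile can also be driven directly; a sketch for creation and teardown (component name illustrative):

    make -C deployment init                               # terraform init for every component
    make -C deployment apply                              # create or update all components
    make -C deployment destroy                            # tear the whole stage back down
    make -C deployment apply-component COMPONENT=buckets  # act on a single component
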
1 change: 1 addition & 0 deletions deployment/active
11 changes: 11 additions & 0 deletions deployment/dev/buckets/backend.tf
@@ -0,0 +1,11 @@
{
"terraform": {
"backend": {
"s3": {
"bucket": "org-humancellatlas-dss-config",
"key": "dss-buckets-dev.tfstate",
"region": "us-east-1"
}
}
}
}
1 change: 1 addition & 0 deletions deployment/dev/buckets/dss_variables.tf