Part of Notify.gov
The Supplementary Service Broker (SSB) manages the lifecycle of services, filling gaps in cloud.gov's brokered services. The SSB is compliant with the Open Service Broker API (OSBAPI) specification. Using this API, the service broker advertises a catalog of service offerings and service plans, and interprets calls for provision (create), bind, unbind, and deprovision (delete). What the broker does with each call can vary between services. In general, `provision` reserves resources on a service and `bind` delivers the information an app needs to access the resource. The reserved resource is called a service instance.

What a service instance represents can vary by service: for example, a single database on a multi-tenant server, a dedicated cluster, or even just an account on a web app. Clients, often platforms in their own right, interact with the SSB to provision and manage instances of the services offered. The broker provides all the information that an application or container needs to connect to the service instance, regardless of how or where the service is running.
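For orientation, each of those lifecycle calls is just an authenticated HTTP request to the broker, as defined by the OSBAPI specification. The sketch below is illustrative only: the hostname, credentials, and GUIDs are placeholders, and in practice Cloud Foundry (or another platform) makes these calls on your behalf.

```sh
# Illustrative OSBAPI provision request (all values are placeholders)
curl -X PUT "https://<broker-host>/v2/service_instances/<instance-guid>?accepts_incomplete=true" \
  -u "<broker-user>:<broker-password>" \
  -H "X-Broker-API-Version: 2.16" \
  -H "Content-Type: application/json" \
  -d '{
        "service_id": "<service-offering-guid>",
        "plan_id": "<plan-guid>",
        "organization_guid": "<org-guid>",
        "space_guid": "<space-guid>"
      }'
```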
The SSB can also be used from the command line with eden, or integrated into other platforms that make use of the OSBAPI.
The SSB currently provides SMTP and SMS services.
Services are defined in brokerpaks: bundles of Terraform and YAML that specify how each service should be advertised, provisioned, bound, unbound, and deprovisioned.
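If you ever need to inspect or rebuild a brokerpak locally, the `cloud-service-broker` binary (downloaded by the app-setup scripts described later in this README) includes `pak` subcommands for working with these bundles. Treat the following as a sketch to verify against the version you have; the path is a placeholder.

```sh
# Sketch: build a .brokerpak from a brokerpak source directory
# (verify the exact subcommand against your cloud-service-broker version)
./cloud-service-broker pak build <path-to-brokerpak-source>
```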
- Credentials for an S3 bucket that will store the state of the broker deployment. This ensures that multiple people who manage the state of the broker will not conflict with each other. See the Terraform documentation for more information.

  For example, you can create an S3 service instance on cloud.gov using the `basic` plan, then extract the credentials for use. Running

  ```sh
  SERVICE_INSTANCE_NAME=<servicename> ./s3creds.sh
  ```

  will create the service key and provide the necessary environment variables.
- Cloud Foundry credentials with permission to register the service broker in the spaces where it should be available.

  For example, you can create a `space-deployer` cloud.gov Service Account in one of the spaces, then grant the `SpaceDeveloper` role to the service account for additional spaces as needed:

  ```sh
  cf create-service cloud-gov-service-account space-deployer ci-deployer
  cf create-service-key ci-deployer ssb-deployer-key
  cf service-key ci-deployer ssb-deployer-key
  cf set-space-role <accountname> <orgname> <additional-spacename> SpaceDeveloper
  ```
- Credentials to be used for managing resources in AWS. To configure domains, set quotas, and create service accounts with the correct permissions, deployment requires an AWS access key ID and secret for a user with at least IAM and Route53 policies, and the ability to make support requests. (A quick way to sanity-check these credentials is sketched just after this list.)
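Before deploying, you may want to confirm that the AWS credentials you gathered resolve to the intended user. This check is not part of the repo's tooling; it is a sketch that assumes you have the AWS CLI installed locally and uses placeholder values.

```sh
# Confirm which IAM identity the access key pair belongs to (sketch; requires the AWS CLI)
AWS_ACCESS_KEY_ID=<deployment-user-key> \
AWS_SECRET_ACCESS_KEY=<deployment-user-secret> \
aws sts get-caller-identity
```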
The broker deployment is specified and managed using Docker.
- Download the broker binary, desired brokerpaks, and prerequisite binaries into the respective `/app` directories by running these two shell scripts:

  ```sh
  ./app-setup-smtp.sh
  ./app-setup-sms.sh
  ```
- Copy the `backend.tfvars-template` and edit in the non-sensitive values for the S3 bucket.

  ```sh
  cp backend.tfvars-template backend.tfvars
  ${EDITOR} backend.tfvars
  ```

  Paste in the GUID (into `bucket`) and the `region` of the S3 bucket that holds Terraform state. These should be the same as used in the notifications-api repo.
- Copy the `.backend.secrets-template` and edit in the sensitive values for the S3 bucket.

  ```sh
  cp .backend.secrets-template .backend.secrets
  ${EDITOR} .backend.secrets
  ```

  The file must contain credentials for the S3 bucket that holds Terraform state. If you have previously set yourself up to run Terraform in Notify's other repos, you may have these credentials in your local `~/.aws/credentials` file and can copy them from there. Otherwise, use these instructions.
- Set a variable with the name of the environment you want to work with:

  ```sh
  export ENV_NAME=[environment_name]
  ```

  If you are new to the project, you will probably start with the environment name `development`.
- You don't need to do this unless you are creating a brand-new environment. If you are doing that, copy the `terraform.tfvars-template` and edit in any variable customizations for the target environment.

  ```sh
  cp terraform.tfvars-template terraform.${ENV_NAME}.tfvars
  ${EDITOR} terraform.${ENV_NAME}.tfvars
  ```
- Copy the `.env.secrets-template` and edit in the values for the Cloud Foundry service account and your AWS deployment user.

  ```sh
  cp .env.secrets-template .env.${ENV_NAME}.secrets
  ${EDITOR} .env.${ENV_NAME}.secrets
  ```

  `TF_VAR_aws_access_key_id` and `TF_VAR_aws_secret_access_key` are created within the AWS console and are associated with an IAM user or role that has the appropriate permissions. `TF_VAR_cf_username` and `TF_VAR_cf_password` refer to the SpaceDeployer credentials output by the `cf service-key` command in the prerequisites. `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the same values you saved in `.backend.secrets` in a previous step. (See the sketch after these steps for an example of the file's layout.)
- Run Terraform init to set up the backend.

  ```sh
  docker-compose --env-file .backend.secrets run --rm terraform init -backend-config backend.tfvars
  ```

  If you get the error `Failed to query available provider packages`, that's a Zscaler problem.
- Create a Terraform workspace for your environment and switch to it:

  ```sh
  docker-compose --env-file=.backend.secrets run --rm terraform workspace new ${ENV_NAME}
  docker-compose --env-file=.backend.secrets run --rm terraform workspace select ${ENV_NAME}
  ```
- Run Terraform plan and review the output:

  ```sh
  docker-compose --env-file=.env.${ENV_NAME}.secrets run --rm terraform plan -var-file=terraform.${ENV_NAME}.tfvars
  ```
- If everything looks good, run this command:

  ```sh
  docker-compose --env-file=.env.${ENV_NAME}.secrets run --rm terraform apply -var-file=terraform.${ENV_NAME}.tfvars
  ```

  Review the plan again, and answer `yes` when prompted.
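For reference, the `.env.${ENV_NAME}.secrets` file used in the steps above is a plain `KEY=value` env file read by docker-compose. The sketch below only shows the layout; every value is a placeholder for the credentials you gathered in the prerequisites.

```sh
# Sketch of .env.<environment>.secrets (all values are placeholders)
TF_VAR_aws_access_key_id=<aws-deployment-user-key>
TF_VAR_aws_secret_access_key=<aws-deployment-user-secret>
TF_VAR_cf_username=<spacedeployer-username>
TF_VAR_cf_password=<spacedeployer-password>
AWS_ACCESS_KEY_ID=<terraform-state-bucket-key>
AWS_SECRET_ACCESS_KEY=<terraform-state-bucket-secret>
```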
- Delete any instances managed by the brokers. (This will prevent orphaning of backend resources.)
- There's a safeguard in place to make sure you really mean it before you delete the broker: enable deletion of the databases by changing the `prevent_destroy` attribute in `broker/main.tf` from `true` to `false`.
- Run Terraform destroy and answer `yes` when prompted.

  ```sh
  docker-compose --env-file=.env.${ENV_NAME}.secrets run --rm terraform destroy -var-file=terraform.${ENV_NAME}.tfvars
  ```
This repository includes a GitHub Action that can continuously deploy the `main` branch for you. To configure it, fork this repository in GitHub, then follow these steps.

Set up a new workspace in the Terraform state for the staging environment:

```sh
docker-compose run --rm terraform workspace new staging
```

Enter the following into GitHub's Settings > Secrets page on your fork:
| Secret Name | Description |
| --- | --- |
| AWS_ACCESS_KEY_ID | the S3 bucket key for Terraform state |
| AWS_SECRET_ACCESS_KEY | the S3 bucket secret for Terraform state |
Create "staging" and "production" environments in GitHub's Settings > Environments page on your fork. In each environment, enter the following secrets:

| Secret Name | Description |
| --- | --- |
| TF_VAR_AWS_ACCESS_KEY_ID | the key for brokering resources in AWS |
| TF_VAR_AWS_SECRET_ACCESS_KEY | the secret for brokering resources in AWS |
| TF_VAR_cf_username | the username for a Cloud Foundry user with SpaceDeveloper access to the target spaces |
| TF_VAR_cf_password | the password for a Cloud Foundry user with SpaceDeveloper access to the target spaces |
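If you prefer the command line, the same secrets can be set with the GitHub CLI instead of the web UI. This is only a sketch; it assumes `gh` is installed, authenticated, and run from a clone of your fork, and each command prompts you to paste the secret value.

```sh
# Repository-level secrets for the Terraform state bucket
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY

# Environment-level secrets; repeat with --env production
gh secret set TF_VAR_AWS_ACCESS_KEY_ID --env staging
gh secret set TF_VAR_AWS_SECRET_ACCESS_KEY --env staging
gh secret set TF_VAR_cf_username --env staging
gh secret set TF_VAR_cf_password --env staging
```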
Finally, edit the `terraform.staging.tfvars` and `terraform.production.tfvars` files to supply the target orgs and spaces for the deployment.
Once these secrets are in place, the GitHub Action should be operational.
- Any pull request will (these checks can also be approximated locally; see the sketch after this list):
  - test the Terraform format
  - test the Terraform validity for the staging environment
  - test the Terraform validity for the production environment
  - post a summary of the planned changes for each environment on the pull request
- Any merge to the `main` branch will:
  - deploy the changes to the staging environment
  - run tests on the broker in the staging environment
  - (if successful) deploy the changes to the production environment
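To approximate the pull-request checks locally before pushing, something along these lines should be close. It is a sketch using the same docker-compose wrapper shown earlier; the Action's exact invocations may differ, and `validate` assumes you have already run `terraform init` for that workspace.

```sh
# Check Terraform formatting (mirrors the format check)
docker-compose --env-file=.backend.secrets run --rm terraform fmt -check -recursive

# Validate the configuration for one environment (repeat per environment)
docker-compose --env-file=.env.staging.secrets run --rm terraform validate
```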
| SSB Environment | U.S. Notify Environments | AWS Account | AWS Region |
| --- | --- | --- | --- |
| Production | prod | GovCloud prod | us-gov-west-1 |
| Staging | demo, staging, & sandbox | Commercial prod | us-west-2 |
| Development | sandbox | Commercial dev | us-west-2 |
If the broker fails to provision or bind a service unexpectedly, Cloud Foundry's error handling of the situation is not great. You might end up in a situation where the broker has provisioned resources or created a binding that Cloud Foundry doesn't know about. Or, you may find that Cloud Foundry knows about the broken service (e.g., its last operation shows "create failed"), but you're unable to `cf delete-service` without then landing in a "delete failed" state.
In those situations, don't panic. Here's how you can clean up!
1. Get the GUID for the problem service instance.

   ```sh
   $ cf service <servicename> --guid
   ```
2. Get a shell going inside an application instance.

   ```sh
   $ cf ssh ssb-<brokerpakname>
   $ /tmp/lifecycle/shell
   ```
3. If this upstream bug has not been fixed, make sure the client uses a URL-encoded version of the password (one way to produce the encoded value is sketched after these steps):

   - URL-encode the value of the `SECURITY_USER_PASSWORD` environment variable, then
   - set that encoded result as the new value:

     ```sh
     export SECURITY_USER_PASSWORD=${the-encoded-value}
     ```
4. Invoke the deprovision operation directly.

   ```sh
   $ ./cloud-service-broker client deprovision --serviceid <serviceid> --planid <planid> --instanceid <instanceid>
   ```

   - The `instanceid` is the GUID you extracted in step 1.
   - The `serviceid` and `planid` are the GUIDs from the service catalog.
5. Log out of the SSH session.

   ```sh
   $ exit
   ```
6. Locally, purge the Cloud Foundry-side record of the service.

   ```sh
   $ cf purge-service-instance <servicename>
   ```
7. Purge any resources that the broker provisioned which are now orphaned in the backend service. (For example, you may need to manually delete resources that were created in AWS.)

   The set of resources will vary by brokerpak and service/plan. See the README for the brokerpak for the appropriate steps to take.
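For step 3 above, the exact encoding method is up to you. One possibility, assuming Python 3 happens to be available inside the application container (an assumption, not something this repo guarantees), is:

```sh
# URL-encode the current SECURITY_USER_PASSWORD and export the encoded value
# (sketch; assumes python3 exists in the container)
export SECURITY_USER_PASSWORD=$(python3 -c "import os, urllib.parse; print(urllib.parse.quote(os.environ['SECURITY_USER_PASSWORD'], safe=''))")
```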
See CONTRIBUTING for additional information.
This project is in the worldwide public domain. As stated in CONTRIBUTING:
This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.
Error creating SES domain identity verification
The error may say the verification is stuck in `Pending` or `Failed`.
This indicates AWS is unable to verify a domain used for emailing. This problem arises when provisioning resources with the SMTP Brokerpak. It may be caused by a DNS misconfiguration.
The needed DNS records are described in the output of:

```sh
docker-compose --env-file=.env.${ENV_NAME}.secrets run --rm terraform show
```

(The value of `ENV_NAME` comes from § Creating and installing the broker.)
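To confirm those records are actually visible in DNS, you can query them directly. The record names below are placeholders; use the exact names and values from the `terraform show` output.

```sh
# Check the SES domain verification TXT record and a DKIM CNAME record
# (sketch; substitute the names reported by terraform show)
dig +short TXT _amazonses.<your-domain>
dig +short CNAME <token>._domainkey.<your-domain>
```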
For more, refer to the Troubleshooting section of the SMTP Brokerpak provisioning module.
Either of the following errors indicates a mismatch between the Terraform plan created and stored when a pull request was first made and the output when `plan` is run again later:

- `Performing diff between the pull request plan and the plan generated at execution time`
- `Plan not found on PR`
This may be caused by a change in deployed resources or a change in Terraform's state that took place after a PR was first created. You could check that the CI/CD pipeline is still working by creating and merging a new, trivial PR.