OwnPath is an accessible, searchable online directory of behavioral and mental health care providers and services operating in Colorado. More information about the site can be found in this press release: https://bha.colorado.gov/blog-post/ownpath-launches-in-colorado
This application leverages libraries and frameworks that are built with accessibility in mind. By building on USWDS and react-bootstrap, development starts from a foundation of accessibility by default.
Automated aXe tests run on every change to the application to check for adherence to WCAG guidelines and other best practices. However, automated testing will only catch a percentage of accessibility and usability problems. To address this, we attempt to test new features and interactions with assistive technologies such as screen readers. Additionally, we have conducted research and testing with folks who are regular users of assistive technologies, to attempt to understand where real pain points exist. Ideally, continued research and testing with disabled users will be a regular part of the application development cycle.
Specifically, we have taken these steps to make the app accessible to all users:
- Human translation and localization of the entire application into Spanish, with more languages to come.
- Meaningful labels/titles/descriptions of all elements are present in the DOM for screen readers, even when they are not visible.
- Logical ordering and grouping of content, with appropriate heading levels throughout.
- Correct use of button and anchor elements based on functionality.
- Visual-only elements (e.g. maps and map markers) are hidden from screen readers.
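As a sketch of what a couple of these practices look like in markup (the element names and labels below are illustrative assumptions, not OwnPath's actual components):

```javascript
// Illustrative only: helpers that emit the kinds of markup described above.
// Names and labels are hypothetical, not OwnPath's real components.

// A visual-only map marker, hidden from screen readers entirely.
function mapMarkerHtml(lat, lng) {
  return `<div class="marker" aria-hidden="true" data-lat="${lat}" data-lng="${lng}"></div>`;
}

// A real <button> (not a styled <a>), with a label present in the DOM for
// screen readers even though only an icon is visible on screen.
function iconButtonHtml(label) {
  return `<button type="button" aria-label="${label}"><svg aria-hidden="true" focusable="false"></svg></button>`;
}
```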
This sets up a dev environment within a container to reduce "it works on my machine" problems and to create a clean, consistent environment (but adds some complexity compared to the non-container setup below). Optionally, it pairs really well with VSCode's Remote Containers extension pack.
Run these instructions whenever you change the Dockerfile or want to reset your environment.
- Clone this repo
- Navigate into its base directory
- Run: `docker build -t coloradodigitalservice/co-care-directory .`
- Then, you can jump into the container's command line: `docker run -p 3000:3000 -it -v $PWD:/app --rm coloradodigitalservice/co-care-directory bash`
  - Note: `$PWD` expands to the full path of the base directory. Change it if you need something different.
- Download dependencies: `npm install`
  - Run whenever your `package.json` changes
- Start the debug server: `npm start`
- Access the app at http://localhost:3000
This is a simple, non-containerized setup, but it might be impacted by other things installed on your machine (i.e. other devs might experience things differently than you).
Install these items first:
- Node.js at version 16.11.x LTS
- Clone this repo
- Navigate into its base directory
- Download dependencies: `npm install`
  - Run whenever your `package.json` changes
- Start the debug server: `npm start`
- Access the app at http://localhost:3000
OwnPath serves data from a flat CSV file located in this repository in the `raw_data` folder. Currently, this data is pulled from a single BHA data source, LADDERS. In the future, the application data may be updated to include multiple sources.
To update the data served in OwnPath, the CSV file in this repository must be updated. Currently, this process consists of manual steps:
- [data owner with LADDERS access] runs a process to create an updated export of LADDERS data in the format expected by OwnPath.
- [github contributor] creates a Pull Request in the OwnPath repository with the updated data file.
- [code owner or product owner] reviews the Pull Request by confirming:
- search results in associated review environment appear correctly (suggestion: spot check by comparing a search in review environment with production site)
- data structure has not been changed unexpectedly (e.g. new columns added to export, new value added to list field)
- If a breaking change has been made to the data export (a change that requires updates to the application code or data processing script (see below) to maintain application functionality), those changes can be addressed in a commit added to the data update Pull Request. Once any necessary changes are made, or data spot check is complete, [code owner or product owner] approves the Pull Request
- [github contributor, code owner, or product owner] merges the approved Pull Request, and the updated data is automatically deployed to the production site.
Because data extract creation, Pull Request creation, review, and merge are all manual steps, this process can take time. As a result, we have set an SLA of a maximum of 8 days for data updates from LADDERS to appear in OwnPath. This allows for weekends, holidays, and any other unforeseen delays that may occur.
We use a standalone script to transform a CSV export into a cleaned JSON file. To process data, run:

`npm run processdata`

This happens automatically as part of `npm run build` and `npm run start`, so you don't typically have to run this unless you're debugging the data transformation itself.
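The shape of that transformation can be pictured roughly like this (a minimal sketch only; the real script in this repo does considerably more cleaning, and the column names here are made up):

```javascript
// Minimal sketch of a CSV-to-JSON transform. The real `processdata` script
// does much more cleaning; the column names below are hypothetical.
function csvToJson(csv) {
  const [header, ...rows] = csv
    .trim()
    .split("\n")
    .map((line) => line.split(","));
  return rows.map((cells) =>
    Object.fromEntries(header.map((h, i) => [h.trim(), (cells[i] ?? "").trim()]))
  );
}

// Example with two hypothetical provider rows:
const json = csvToJson("name,city\nClinic A,Denver\nClinic B,Pueblo");
// → [{ name: "Clinic A", city: "Denver" }, { name: "Clinic B", city: "Pueblo" }]
```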
Translations are maintained in a Google Sheet, which a standalone script transforms into JSON for the application to use. To generate updated translations JSON, download the sheet to your local machine and run:

`npm run generatetranslations [path to file]`

The downloaded sheet needs to be an `xlsx` file since there are several tabs contained in the translation doc.

This process must be run manually whenever content changes or updates are made in the Google Sheet. The changes then need to be committed to the repo and merged to be reflected in the application. Any rows missing translations will be printed to the console where the script is run, to help avoid accidentally adding untranslated content.
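The missing-row warning can be sketched like this (the row shape `{ key, en, es }` is an assumption for illustration; the real script parses the xlsx tabs rather than plain objects):

```javascript
// Sketch of the "warn on missing translations" check. The row shape here
// ({ key, en, es }) is an assumption; the real script reads xlsx tabs.
function findMissingTranslations(rows) {
  // A row is "missing" if it has English content but no Spanish translation.
  return rows.filter((row) => row.en && !row.es).map((row) => row.key);
}

// Any keys returned here would be printed to the console as a warning.
const missing = findMissingTranslations([
  { key: "search.title", en: "Find care", es: "Buscar atención" },
  { key: "search.button", en: "Search", es: "" },
]);
// → ["search.button"]
```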
For more information on the search filters implemented in the application, see [this Google Doc](https://docs.google.com/document/d/1yEdo7IpHCtEKNNF6FMLapBUgcPw_8CY7wGwxKTdtJrU/edit)
For more information on Google Analytics and the events tracked by the tool, see this Google Doc
Using SVGs in React apps is straightforward. To make them fully customizable via component props and CSS, ensure you do these things:
- Only set `width` or `height` (not both) on the main `<svg>` element. This allows you to scale the size of the SVG by setting the `width`/`height` prop on the component.
- Set the `fill` prop on the `<svg>` element to `"currentColor"`. This allows you to color the SVG with the CSS `color` property.
- DON'T set the `fill` prop on any inner elements within the `<svg>`. Doing so prevents you from dynamically setting the color with the CSS `color` property.
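Put together, a conforming SVG might look like this (a hypothetical icon, shown as a string for brevity rather than as a React component):

```javascript
// Hypothetical icon following the rules above: only height is set (so width
// scales with it via the viewBox), fill="currentColor" sits on the root
// element only, and no inner element sets its own fill.
function circleIconSvg(height = 24) {
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" ` +
    `height="${height}" fill="currentColor">` +
    `<circle cx="12" cy="12" r="10" /></svg>`
  );
}
```

Because the fill is `currentColor`, setting `color: red` on any ancestor via CSS recolors the icon with no changes to the SVG itself.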
- Create a pipeline
- Connect to GitHub
- Enable Review Apps, choosing these options:
- Automatically create review apps for new PRs
Good for on-demand environments in Heroku running in debug mode.
To do this, first install and log in to the Heroku CLI.
- Create a new app
- From the app's settings, go to the `Deploy` tab
- Go to either `Automatic deploys` or `Manual deploys`, choose the branch you'd like deployed, and trigger a deployment.
- The deployment will initially fail because it doesn't understand that it is a container-based app. Issue this command from the Heroku CLI: `heroku stack:set container -a (app name)`
- Trigger another deployment
The first time you ever deploy the site to an AWS Account from any computer, run this. In other words, if an AWS Account has already been set up to store Terraform state centrally, you shouldn't run this (i.e. only for full Account recovery after a catastrophic problem or for setting up a new dev AWS Account).
- Set up a new AWS Account or log in to an existing one that you have admin rights on
- Create a user
- User name: `terraform`
- Select AWS credential type:
- ✅ Access key
- Next
- Attach existing policies directly
- ✅ AdministratorAccess (TODO: This grants anything. Remove this and specify only what's needed)
- Set permission boundary: Create user without a permissions boundary (TODO Reduce this)
- Next
- Next
- Create user
- Copy the user ID, access key ID, and secret access key
- Close
Next, we need to create storage for the Terraform state.
- Build the dev/deploy tools Docker container: `docker build -t coloradodigitalservice/co-care-directory-deploy -f Dockerfile.Deploy .`
- Launch a terminal in the dev container from the root of the code base: `docker run -it -v $PWD:/app --rm coloradodigitalservice/co-care-directory-deploy bash` (TODO: Remove directory mapping after state is stored centrally)
- Navigate to: `cd infra/aws/state`
- Set `export TF_VAR_bucket_name="<S3 bucket name>"` with a valid name of the S3 bucket where built app files will be stored. This must be unique across all of AWS.
- Set `export AWS_ACCESS_KEY_ID="<your AWS user's access key ID>"`
- Set `export AWS_SECRET_ACCESS_KEY="<your AWS secret access key>"`
- Set up Terraform: `terraform init`
- Build the infrastructure: `terraform apply` and then type `yes`
- Save a backup of the `terraform.tfstate` file
  - This is a state file containing only the S3 storage for the deployment's state file and the DynamoDB table that locks Terraform runs to one user at a time. There is no central backup, so put it somewhere safe even though you'll probably never need it again.
  - If you do lose this state file, you can manually modify/remove the S3 bucket and DynamoDB table, both named `${TF_VAR_bucket_name}-terraform-state`
These steps might need to be run if an automatic deployment fails, a prior state of the application needs to be restored, or if you're setting up a dev AWS Account. These assume that the First Time instructions have been run on the AWS Account from any computer once before this (i.e. they most likely have been).
- Clone the repo and set to the tag or branch you want to deploy.
- Build the dev/deploy tools Docker container: `docker build -t coloradodigitalservice/co-care-directory-deploy -f Dockerfile.Deploy .`
- Launch a terminal in the dev container from the root of the code base: `docker run -it --rm coloradodigitalservice/co-care-directory-deploy bash`
- Set `export TF_VAR_bucket_name="<S3 bucket name>"` with a valid name of the S3 bucket where built app files will be stored. This must be unique across all of AWS.
- (optional) Set `export TF_VAR_domains='["domain1.com","domain2.org"]'` with the domains, primary domain first
  - If no domains are specified, it'll just use a CloudFront-generated domain
  - The order of the domains needs to be the same every time
- Set `export AWS_ACCESS_KEY_ID="<your AWS user's access key ID>"`
- Set `export AWS_SECRET_ACCESS_KEY="<your AWS secret access key>"`
- Run the deployment: `sh ci/publish_build.sh`