
How to populate the Golden Beneficiary with PAC data using the RDA Bridge


Steps to Generate and Process Synthea PAC data for Golden Beneficiary

1. Generate Data with Synthea

Run Synthea to generate beneficiary and claims data for multiple years. Synthea can be executed via Jenkins; for instructions, refer to the How to Run Synthea Automation wiki page. The output is a set of beneficiary and claims files stored in an S3 bucket, which you will need to download for further processing.
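For example, the generated files can be pulled down with the AWS CLI (the bucket name and run prefix below are placeholders; use the values from your Synthea run):

aws s3 cp s3://<synthea-output-bucket>/<run-prefix>/ ./synthea-output/ --recursive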

2. Select a Beneficiary for Transformation

Choose a beneficiary from the generated files that has a variety of claims (preferably all or most claim types). Transform this beneficiary into the Golden Beneficiary by editing their bene_id and MBI across all relevant files.
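A minimal sketch of the ID swap, assuming GNU sed and placeholder IDs (substitute the real generated values and the Golden Beneficiary's bene_id and MBI):

# Placeholder IDs; replace with the actual values before running
for f in *.csv; do
   sed -i 's/<old_bene_id>/<golden_bene_id>/g; s/<old_mbi>/<golden_mbi>/g' "$f"
done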

3. Edit Data Manually

Review and manually adjust the beneficiary data in files like beneficiaries.csv, claims.csv, etc., to ensure it aligns with the Golden Beneficiary's details.
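One quick check that the edits took effect everywhere (same placeholder IDs as above):

grep -l '<golden_bene_id>' *.csv    # should list every file that references the beneficiary
grep -l '<old_bene_id>' *.csv       # should print nothing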

4. Run the RDA Bridge

Once the data is ready and the Golden Beneficiary details are updated, run the RDA Bridge to generate the necessary NDJSON files for the MCS and FISS claims. Follow the documentation on How to Run rda-bridge for setup and execution.

The following parameters are enough to generate the NDJSON files; the other parameters are optional:

./run_bridge.sh path/to/rif/ \
   -o output/ \
   -s 10000 \
   -z 8000 \
   -b beneficiaries.csv \
   -f inpatient.csv \
   -f outpatient.csv \
   -f hospice.csv \
   -f snf.csv \
   -f hha.csv \
   -m carrier.csv \
   -m dme.csv

Note: In the command to run the RDA Bridge, ensure that the -s and -z parameter values (the starting sequence numbers) are not already in use in the rda.claim_message_meta_data table.
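One way to see which sequence numbers are already in use before choosing -s and -z (a sketch; the sequence_number column name is an assumption, so verify it against the actual rda.claim_message_meta_data schema):

psql -h localhost -p 5432 -d fhirdb -c 'SELECT MAX(sequence_number) FROM rda.claim_message_meta_data;'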

5. Gzip the NDJSON Files

Once the NDJSON files are generated, gzip them using the following command:

gzip <filename>
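If the bridge produced several output files, they can all be compressed in one pass (assuming the output/ directory and .ndjson extension from the bridge command above; adjust to the actual filenames):

gzip output/*.ndjson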

6. Set Up LocalStack for S3

Install LocalStack to simulate AWS services locally, including S3.

Install the dependencies (if not already installed), then verify that awslocal is working by running:

awslocal
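If LocalStack and the awslocal wrapper are not installed yet, a minimal setup (assuming Python/pip and Docker are available) is:

pip install localstack awscli-local   # LocalStack CLI plus the awslocal wrapper
localstack start -d                   # run LocalStack in the background (requires Docker)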

7. Create S3 Bucket in LocalStack

Create a new bucket in LocalStack for storing the files:

awslocal s3 mb s3://my-bucket

8. Upload Files to S3

Create the rda_api_messages/ prefix in the bucket (S3 has no real directories, so the "directory" is just a key prefix; it can be created explicitly with an empty object, or it will be created implicitly when the first file is copied in the next step):

awslocal s3api put-object --bucket my-bucket --key rda_api_messages/

Copy the gzipped NDJSON files (for MCS and FISS claims) into the rda_api_messages/ directory:

awslocal s3 cp <file>.json.gz s3://my-bucket/rda_api_messages/

9. Run the BFD Pipeline

Execute the BFD pipeline script from your local machine to process the files and insert data into your local FHIR database (localhost:5432/fhirdb).

./run-bfd-pipeline -s local:my-bucket:rda_api_messages -d localhost:5432 -Z s3

10. Verify Data Insertion

Check if the claims data was successfully inserted into the local database by running the following SQL queries:

SELECT * FROM rda.mcs_claims;

SELECT * FROM rda.fiss_claims;
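The same checks can be run from the command line with psql, pointing at the database from step 9 (add whatever user/password flags your local setup needs):

psql -h localhost -p 5432 -d fhirdb -c 'SELECT COUNT(*) FROM rda.mcs_claims;'
psql -h localhost -p 5432 -d fhirdb -c 'SELECT COUNT(*) FROM rda.fiss_claims;'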

11. Upload the gzipped Files to the Appropriate Bucket in Each Environment and Run the Pipeline

After generating and gzipping the NDJSON files and testing locally, upload them to the appropriate S3 bucket for each environment:

Test

Prod-SBX

Prod

Change SSM Parameter for Test Environment:

If you're working in the test environment, update the SSM parameter:

Set /bfd/test/pipeline/nonsensitive/rda/grpc/server_type to InProcess.

After updating the SSM parameter, restart the pipeline service to apply the changes.
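A sketch of updating the parameter with the AWS CLI (assuming your credentials and region are already pointed at the test account):

aws ssm put-parameter \
   --name /bfd/test/pipeline/nonsensitive/rda/grpc/server_type \
   --value InProcess \
   --overwrite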

Update rda_api_progress Table:

In the rda_api_progress table, set the FISS and MCS values to the starting sequence numbers used when generating the NDJSON files (the -s and -z values from step 4). This ensures the pipeline starts scanning at the correct sequence.
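A sketch of that update (the claim_type and last_sequence_number column names are assumptions, so check the actual rda.rda_api_progress schema first; <db-host> and the sequence values are placeholders for your environment and the -s/-z values used in step 4):

psql -h <db-host> -p 5432 -d fhirdb -c "UPDATE rda.rda_api_progress SET last_sequence_number = <fiss_sequence> WHERE claim_type = 'FISS';"
psql -h <db-host> -p 5432 -d fhirdb -c "UPDATE rda.rda_api_progress SET last_sequence_number = <mcs_sequence> WHERE claim_type = 'MCS';"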

12. Restore SSM Parameter and rda_api_progress Table to Their Previous Values

After the pipeline job has completed successfully, it’s important to restore the SSM parameter and the rda_api_progress table to their original values to maintain system consistency.
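To confirm the SSM parameter is back to its original value after restoring it (the restore itself mirrors the put-parameter command from step 11, with the previous value substituted):

aws ssm get-parameter --name /bfd/test/pipeline/nonsensitive/rda/grpc/server_type --query 'Parameter.Value' --output text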
