How to populate the Golden Beneficiary with PAC data using the RDA Bridge
Run Synthea to generate beneficiary and claims data for multiple years. Synthea can be executed via Jenkins; for instructions, refer to How to Run Synthea Automation. The output is a set of beneficiary and claims files stored in an S3 bucket, which you will need to download for further processing, as sketched below.
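A minimal sketch of pulling the Synthea output down from S3; the bucket name and run prefix are placeholders, so substitute the values from your Jenkins run:

```bash
# Download the generated files from the Synthea output bucket
# (bucket name and prefix below are hypothetical placeholders).
aws s3 cp s3://<synthea-output-bucket>/<run-prefix>/ ./synthea-output/ --recursive
```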
Choose a beneficiary from the generated files that has a variety of claims (preferably all or most claim types). Transform this beneficiary into the Golden Beneficiary by editing their bene_id and MBI across all relevant files.
Review and manually adjust the beneficiary data in files like beneficiaries.csv, claims.csv, etc., to ensure it aligns with the Golden Beneficiary's details.
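One way to rewrite the identifiers is a bulk find-and-replace across the generated CSV files. This is only a sketch: the IDs below are hypothetical, and the exact set of files to touch depends on your Synthea output.

```bash
# Hypothetical IDs: swap the chosen beneficiary's bene_id and MBI
# for the Golden Beneficiary's values in every CSV file.
OLD_BENE_ID="-10000010254618"
NEW_BENE_ID="-88888888888888"
OLD_MBI="1S00E00AA00"
NEW_MBI="1S00E00GB00"
for f in *.csv; do
  # GNU sed; on macOS use `sed -i ''` instead of `sed -i`.
  sed -i "s/${OLD_BENE_ID}/${NEW_BENE_ID}/g; s/${OLD_MBI}/${NEW_MBI}/g" "$f"
done
```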
Once the data is ready and the Golden Beneficiary details are updated, run the RDA Bridge to generate the necessary NDJSON files for the MCS and FISS claims. Follow the documentation in How to Run rda-bridge for setup and execution.
The following parameters are enough to generate the NDJSON files; all other parameters are optional:
```bash
./run_bridge.sh path/to/rif/ \
  -o output/ \
  -s 10000 \
  -z 8000 \
  -b beneficiaries.csv \
  -f inpatient.csv \
  -f outpatient.csv \
  -f hospice.csv \
  -f snf.csv \
  -f hha.csv \
  -m carrier.csv \
  -m dme.csv
```
Note: in the command above, make sure the -s and -z parameter values (the starting sequence numbers) are not already in use in the rda.claim_message_meta_data table; a query for checking this is sketched below.
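A quick way to check, assuming the table carries claim_type and sequence_number columns (verify these against the actual schema):

```sql
-- Highest sequence numbers already present, per claim type, so the
-- new -s and -z values can be chosen above them.
-- Column names here are assumptions; adjust to the real schema.
SELECT claim_type, MAX(sequence_number) AS max_sequence
FROM rda.claim_message_meta_data
GROUP BY claim_type;
```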
Once the NDJSON files are generated, gzip them using the following command:

```bash
gzip <filename>
```
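If the bridge produced several files, they can be compressed in one pass, assuming it wrote files with an .ndjson extension into the output/ directory used above:

```bash
# Compress every NDJSON file emitted by the RDA Bridge.
gzip output/*.ndjson
```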
LocalStack
Install LocalStack to simulate AWS services locally, including S3, along with its dependencies if they are not already installed (one possible setup is sketched below). Verify that awslocal is working by running:
```bash
awslocal
```
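If LocalStack is not installed yet, one possible setup, assuming Python/pip and a running Docker daemon are available:

```bash
# Install LocalStack plus the awslocal CLI wrapper, then start
# LocalStack in detached mode (it runs inside Docker).
pip install localstack awscli-local
localstack start -d
```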
Create a new bucket in LocalStack for storing the files (named my-bucket here to match the pipeline command further below):

```bash
awslocal s3 mb s3://my-bucket
```
Create the rda_api_messages/ prefix in the bucket. S3 has no real directories and the aws s3 CLI has no mkdir subcommand; an empty object with a trailing slash acts as one, and the prefix is in any case created implicitly by the copy in the next step:

```bash
awslocal s3api put-object --bucket my-bucket --key rda_api_messages/
```
Copy the gzipped NDJSON files (for MCS and FISS claims) into the rda_api_messages/ directory:

```bash
awslocal s3 cp <file>.json.gz s3://my-bucket/rda_api_messages/
```
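To confirm the objects landed where the pipeline expects them:

```bash
awslocal s3 ls s3://my-bucket/rda_api_messages/
```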
Execute the BFD pipeline script from your local machine to process the files and insert data into your local FHIR database (localhost:5432/fhirdb):

```bash
./run-bfd-pipeline -s local:my-bucket:rda_api_messages -d localhost:5432 -Z s3
```
Check if the claims data was successfully inserted into the local database by running the following SQL queries:

```sql
SELECT * FROM rda.mcs_claims;
SELECT * FROM rda.fiss_claims;
```
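Since each line of an NDJSON file is one claim message, the row counts should roughly line up with the line counts of the files you uploaded:

```sql
-- Row counts to compare against `zcat <file>.json.gz | wc -l`.
SELECT COUNT(*) FROM rda.fiss_claims;
SELECT COUNT(*) FROM rda.mcs_claims;
```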
After generating and gzipping the NDJSON files and testing them locally, upload them to the appropriate S3 bucket for each environment:
- Test
- Prod-SBX
- Prod
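A sketch of the upload for one environment; the file names and bucket name are placeholders, so substitute the real per-environment buckets:

```bash
# Hypothetical file and bucket names for the test environment;
# repeat with the corresponding buckets for Prod-SBX and Prod.
aws s3 cp fiss.ndjson.gz s3://<bfd-test-bucket>/rda_api_messages/
aws s3 cp mcs.ndjson.gz s3://<bfd-test-bucket>/rda_api_messages/
```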
Change SSM Parameter for Test Environment:
If you're working in the test environment, update the SSM parameter /bfd/test/pipeline/nonsensitive/rda/grpc/server_type and set it to InProcess.
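For example, via the AWS CLI:

```bash
# Point the pipeline at the in-process RDA server for the test environment.
aws ssm put-parameter \
  --name /bfd/test/pipeline/nonsensitive/rda/grpc/server_type \
  --value InProcess \
  --overwrite
```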
After updating the SSM parameter, restart the pipeline service to apply the changes.
Update rda_api_progress Table:
In the rda_api_progress table, set the values for FISS and MCS to start at the sequence numbers used when generating the NDJSON files (the -s and -z values above). This ensures the pipeline starts scanning at the correct sequence.
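A sketch of that update, using the example sequence numbers from the run_bridge.sh command. The column names are assumptions, and which of -s/-z corresponds to FISS versus MCS should be confirmed against the rda-bridge documentation before running this:

```sql
-- Assumed columns: claim_type and last_sequence_number; verify first.
UPDATE rda.rda_api_progress SET last_sequence_number = 10000 WHERE claim_type = 'FISS';
UPDATE rda.rda_api_progress SET last_sequence_number = 8000  WHERE claim_type = 'MCS';
```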
After the pipeline job has completed successfully, restore the SSM parameter and the rda_api_progress table to their original values to maintain system consistency.
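For instance, if the parameter was previously Remote (an assumption; record the actual original value before changing it):

```bash
# Restore the RDA server type to its prior value.
aws ssm put-parameter \
  --name /bfd/test/pipeline/nonsensitive/rda/grpc/server_type \
  --value Remote \
  --overwrite
```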