A repository for the combined pre-award stores (currently empty, but will eventually encompass Fund, Application and Assessment stores).
The recommended and supported way of running this service is using the docker runner. Please see that repository for instructions on running using docker-compose.
You will likely still want to create a local virtual environment for Python dependencies, but you should only run the application using the provided docker-compose file.
We use uv to manage Python versions and local virtual environments. To install dependencies:
uv sync
To update requirements, use uv add or uv remove.
This service depends on:
- A postgres database
- Install pre-commit locally. Pre-commit hooks can be installed either with pip (pip install pre-commit) or with Homebrew for Mac users (brew install pre-commit).
- From your checkout directory, run pre-commit install to set up the git hook scripts.
General instructions for local db development are available here: Local database development
Whenever you make changes to database models, please run:
uv run flask db migrate -m <message>
The message should be a short description of the DB changes made. Don't specify a revision id (using --rev-id) - it will be generated automatically.
The migration file for your changes will be created in ./db/migrations/versions. Commit and push this to GitHub so that the migrations run in the pipelines and correctly upgrade the deployed db instances with your changes.
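For illustration, a model change of the kind that needs a migration might look like the sketch below; the model and column names are made up for this example, not ones from this repo.
# Illustrative only - a made-up Flask-SQLAlchemy model change that would require
# a new migration; the model and column names are not from this repo.
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


class ExampleRound(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # adding a column like this changes the schema, so afterwards run:
    #   uv run flask db migrate -m "add contact email to example round"
    contact_email = db.Column(db.String(), nullable=True)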
Details on how our pipelines work and the release process are available here
Paketo is used to build the docker image which gets deployed to our test and production environments. Details available here
Copilot is used for infrastructure deployment. Instructions are available here, with the following values for the fund store:
- service-name: fsd-pre-award-stores
To seed fund & round data to the db for all funds and rounds, use the fund/round loader scripts.
If running against a local postgresql instance:
python -m fund_store.scripts.load_all_fund_rounds
Further details on the fund/round loader scripts, and how to load data for a specific fund or round can be found here
This script allows you to open/close rounds using their dates to test different functionality as needed. You can also use the keywords 'PAST', 'FUTURE' and 'UNCHANGED' to save typing dates.
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates -q update-round-dates --round_id c603d114-5364-4474-a0c4-c41cbf4d3bbd --application_deadline "2023-03-30 12:00:00"
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates -q update-round-dates -r COF_R3W3 -o "2022-10-04 12:00:00" -d "2022-12-14 11:59:00" -ad "2023-03-30 12:00:00" -as NONE
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates -q update-round-dates -r COF_R3W3 -o PAST -d FUTURE
For an interactive prompt where you can supply (or leave unchanged) all dates:
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates update-round-dates
To reset the dates for a round to those in the fund loader config:
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates -q reset-round-dates -r COF_R4W1
And with an interactive prompt:
docker exec -ti $(docker ps -qf "name=pre-award-stores") python -m fund_store.scripts.amend_round_dates reset-round-dates
As part of the application submission workflow, we use a FIFO AWS SQS queue to automate our application export to assessment.
We export the application as a 'fat' payload. This includes all application data (including metadata/attributes), which ensures assessment does not need to call application_store for additional information.
We can simulate SQS locally when using our docker runner instance. Our docker runner uses localstack to simulate these AWS services, see here.
If messages are not consumed and deleted, they will be moved to the dead-letter queue, where we can inspect the message for faults and retry.
The SQS queues have a number of configuration options; we are using the AWS SDK for Python (Boto3), see docs here.
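For illustration, sending an application message to a FIFO queue with Boto3 looks roughly like the sketch below, pointed at localstack. The queue URL, the fields in the body and the group/deduplication ids are illustrative assumptions, not this service's actual values.
import json

import boto3

# Hedged sketch only - the queue URL, payload fields and ids below are assumptions.
sqs = boto3.client("sqs", endpoint_url="http://localhost:4566", region_name="eu-west-2")

application_id = "11111111-2222-3333-4444-555555555555"
fat_payload = {
    "application_id": application_id,
    "fund_id": "example-fund-id",
    "round_id": "example-round-id",
    "forms": [],  # all completed form data, so assessment never calls application_store
}

sqs.send_message(
    QueueUrl="http://localhost:4566/000000000000/example-application-queue.fifo",
    MessageBody=json.dumps(fat_payload),
    MessageGroupId="import-applications",     # FIFO queues require a message group id
    MessageDeduplicationId=application_id,    # FIFO deduplication key
)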
There is an API endpoint on this service to send a submitted application to assessment:
/queue/{queue_name}/{application_id}
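For example, assuming the service is exposed locally by the docker runner and the endpoint accepts POST (the port, queue name and application id below are illustrative assumptions):
import requests

# Hypothetical values for illustration - substitute your real queue name and application id.
queue_name = "example-queue"
application_id = "11111111-2222-3333-4444-555555555555"

response = requests.post(f"http://localhost:3002/queue/{queue_name}/{application_id}")
print(response.status_code, response.text)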
You can seed test data to use in the running application (separate from unit test data seeding). The seeding process needs a running fund-store to retrieve fund/round form section config, so it runs within the docker container for application-store within the docker runner. To run the seeding script:
- Make sure your local docker-runner is running
- Find the container ID of pre-award-stores by using docker ps
- Use docker exec to get into that container: docker exec -it <container_id> bash
- Execute the script: python application_store/scripts/seed_db_test_data.py. You will be prompted for inputs: fund, round, account_id (the UUID, not the email address), the status of the seeded applications and how many to create.
Unit tests exist in test_seed_db. They are marked as skipped because they require a running fund-store to retrieve form config (no point in duplicating this for tests), so they won't run in the pipeline but are fine locally. If your local fund store runs on a non-standard port etc., edit the local_fund_store fixture in that test file. If you want to run the tests, just comment out the skip marker.
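As a rough idea only (the real fixture in the test file may configure more than this; the URL and port are assumptions), the sort of fixture you would edit might look like:
import pytest


# Minimal sketch only - check the actual local_fund_store fixture in the test file.
@pytest.fixture
def local_fund_store():
    # change the port here if your local fund store runs somewhere non-standard
    return "http://localhost:3001"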
To seed applications, we need the completed form json. If you have that, skip to the end of part 1 and put that form json into the required file.
- Get a submitted application into your local DB. You can either do this manually or by running the automated tests against your local docker runner.
- Find the application_id of that submitted application.
- Edit the tests file to un-skip test_retrieve_test_data and then set target_app to be the application_id you just submitted.
- Update your unit test config to point at the same DB as the docker runner. Update pytest.ini so that D:DATABASE_URL points at the docker runner application store db: D:DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:5433/application_store # pragma: allowlist secret
- Run the single test test_retrieve_test_data - this should output the json of all the completed forms for that application into funding-service-design-store/forms.json.
- Copy this file into seed_data and name it <fund_short_code>_<round_short_code>_all_forms.json.
- IMPORTANT: Change the config in pytest.ini back to what it was so you don't accidentally wipe your docker runner DB next time you run tests!
- In seed_db there is a constant called FUND_CONFIG - update this following the existing structure for your new fund/round (if it's a new round on an existing fund, just add it as another key to the rounds item in that fund); a rough sketch follows this list. You will need to know the name of the form that contains the field used to name the application/project.
- In the same file, update the click.option choice values for fund/round as required, to allow your new options.
- Test it - update the unit tests to use this new config and check it works.
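The sketch below shows roughly the shape FUND_CONFIG follows; the keys and values are illustrative assumptions, so copy an existing entry in seed_db rather than this. The click.option choices in the same file limit which fund/round short codes the script accepts, so remember to extend those too.
# Illustrative structure only - the real FUND_CONFIG in seed_db may use
# different keys; follow the existing entries in the file.
FUND_CONFIG = {
    "COF": {
        "id": "<fund uuid>",
        "rounds": {
            "R3W1": {
                "id": "<round uuid>",
                # form containing the field used to name the application/project
                "name_form": "project-information",
            },
            # a new round on an existing fund goes in as another key here
        },
    }
}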
To import the submitted applications of a round into assessment, execute the below command.
docker exec -it <container_id> python -m scripts.import_from_application --roundid=c603d114-5364-4474-a0c4-c41cbf4d3bbd --app_type=COF
or with the short name:
docker exec -it <container_id> python -m scripts.import_from_application --fundround=COFR3W1
If using VsCode, select the launch config "Import Applications to Assessment" to import the application.
invoke seed_dev_db
Running the above command will prompt you to enter the number of applications, funds & rounds you would like to create as mock data within the database.
This will also work for the DB within the docker runner. Find the ID of the docker container running assessment-store (docker ps), then execute:
docker exec -it <container_id> invoke seed_dev_db
Alternatively, to avoid the interactive prompt, the fund-round and application count can be provided as arguments, for example:
invoke seed_dev_db --fundround COFR2W2 --appcount 1
If using VsCode, select the launch config "Seed Applications in assessment-store" to start seeding.
Test data is created on a per-test basis to prevent test pollution. To create test data for a test, request the seed_application_records fixture in your test. That fixture then provides access to the inserted records and will clean up after itself at the end of the test session.
More details on the fixtures in utils: https://github.com/communitiesuk/funding-service-design-utils/blob/dcc64b0b253a1056ce99e8fe7ea8530406355c96/README.md#fixtures
Basic example:
@pytest.mark.apps_to_insert(
    [
        # Assessment_Records data
        # For convenience, a set of these is loaded into the variable test_input_data in conftest.py
        test_input_data[0]
    ]
)
def test_stuff(seed_application_records):
    app_id = seed_application_records[0].id
    # do some testing
If you need your test assessment records to be flagged, you can supply flag config as part of the apps_to_insert data by including an array of flag configs under the property flags. Some example flag configs are contained in test_data/flags.py
@pytest.mark.apps_to_insert(
    [
        test_input_data[0],
        {**test_input_data[1], "flags": [flag_config[2]]},
        {**test_input_data[2], "flags": [flag_config[1]]},
    ]
)
def test_stuff(seed_application_records):
    flag = retrieve_flag_for_application(seed_application_records[1].id)
    assert flag == flag_config[2]
If you need all your test data to use the same fund and round ids, but be different from all other tests, use unique_fund_round in your test. This generates a random ID for fund and round and uses this when creating test applications.
@pytest.mark.apps_to_insert([test_input_data[0]])
@pytest.mark.unique_fund_round(True)
def test_some_reports(seed_application_records):
    result = get_by_fund_round(
        fund_id=seed_application_records[0]["fund_id"], round_id=seed_application_records[0]["round_id"]
    )
You've deleted your unit test db or done something manually, so pytest's cache is confused.
Run pytest --cache-clear to fix your problem.
If you are using VsCode, we have prepared frequently used scripts in the launch configuration that can be handy for quick development. Below are some launch configurations that you will find in the launch.json file.
Import applications for the provided round from application_store to assessment_store. Please provide the --roundid & --app_type in the arguments as shown below.
{
"name": "Import Applications to Assessment",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/scripts/import_from_application.py",
.
.
.
// modify the args accordingly
"args": ["--fundround", "COFR3W1"]
},
Creates the mock assessment data for the provided round via an interactive prompt.
{
"name": "Seed Applications in assessment-store",
"type": "python",
"request": "launch",
.
.
.
"justMyCode": false,
"args": ["seed_dev_db"]
},
Feed location in assessment-store - Populates the location data in the assessment records for the provided round.
Please provide the --fund_id, --round_id and any additional arguments as shown below.
{
"name": "Feed location in assessment-store",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/scripts/populate_location_data.py",
.
.
.
// modify the args accordingly
"args": ["--fundround", "NSTFR2",
"--update_db", "True",
"--write_csv", "False"]
},