
Adding lateral flow test intervention code. #194

Open · wants to merge 10 commits into master
Conversation


@mattea (Contributor) commented Jan 27, 2021

Includes base code to test on symptoms and tracing, as well as a number of related parameters.

Note that the specific parameters (g/b) are currently untuned and not ready for production use.

This could be implemented in a few alternate ways, and I'm open to tweaks as desired, especially with regard to how the infectiousness curve is created and housed. I'm also fine with deleting the extra variables from one_time_step_results, but they were useful to have.

I have also added a bit of extra cleanup and some parameters. If you need this separated out, I can do that.

  • new_trace_token -> create_trace_token (for stricter builds & C++ compatibility)
  • misc whitespace cleanup
  • quarantine_compliance_positive parameter - defaults to 1 for backwards compatibility, although surveys show compliance to be lower in practice.

@mattea marked this pull request as ready for review January 28, 2021 00:05
@roberthinch (Collaborator) left a comment:

Given the size of this change, can you please perform some performance profiling analysis? (We really need to add this as a regression test as well.) There are a few places which look like they may be adding overhead (even when the feature is turned off), so we need to understand that.

src/disease.c (outdated; resolved)
src/interventions.c (outdated; resolved)
src/model.c (outdated; resolved)
model.one_time_step()
results = model.one_time_step_results()

np.testing.assert_equal( results[ "n_lateral_flow_tests" ] > 0, True, "Expected lateral flow tests not found." )
Collaborator:

Can this have a better estimate of the number of tests that we would expect? Non-zero is a pretty low bar to check.
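A tighter check might compare against a rough expectation rather than just non-zero. A minimal sketch, with all names hypothetical (not taken from this PR), assuming each symptomatic individual who accepts a test orders one:

```python
def expected_lft_count(n_symptomatic, test_fraction):
    """Rough expectation for the number of lateral flow tests ordered:
    symptomatic individuals times the fraction who accept a test."""
    return n_symptomatic * test_fraction

def check_lft_count(n_tests, n_symptomatic, test_fraction, rel_tol=0.5):
    """Assert the observed count falls within a relative tolerance of the
    expectation, instead of only checking that it is non-zero."""
    expected = expected_lft_count(n_symptomatic, test_fraction)
    assert abs(n_tests - expected) <= rel_tol * expected, (
        f"n_lateral_flow_tests={n_tests}, expected ~{expected}"
    )
```

The tolerance absorbs stochastic variation between runs while still catching an off-by-an-order-of-magnitude bug.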

"""
end_time = test_params[ "end_time" ]

params = utils.get_params_swig()
Collaborator:

You should be able to replace these 4 lines now with simply
model = Model( params = test_params )
and add Model to the from model import statement at the top.


# check the sensitivity at the peak.
df_test[ "symptomatic" ] = ( df_test["time_symptomatic"] > 0 )
true_pos = df_test[ ( df_test[ "time_since_inf" ] == 3 ) & ( df_test[ "symptomatic" ] == True ) & ( df_test[ "lateral_flow_status" ] == 1 ) & ( df_test[ "test_sensitive_inf" ] == True ) ].groupby( [ "time_since_inf" ] ).size()
Collaborator:

Why 3? It would be good if this were not hard-coded but linked back to the code in the model.
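One way to link it back, as a sketch: recover the peak day from the infectiousness curve itself rather than hard-coding it. The array below is a placeholder, not the model's actual curve:

```python
import numpy as np

# Placeholder infectiousness-by-day values; in the real test these would
# be read from the model's infectiousness curve, not hard-coded here.
infectiousness_curve = np.array([0.1, 0.5, 0.9, 1.0, 0.8, 0.4])

# Day of peak infectiousness, to use in the filter in place of the literal 3.
peak_day = int(np.argmax(infectiousness_curve))
```

The test then filters on `time_since_inf == peak_day`, so it keeps tracking the model if the curve's shape changes.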

@@ -1219,12 +1366,21 @@ void intervention_on_traced(

parameters *params = model->params;

int lateral_flow_test = params->lateral_flow_test_on_traced && gsl_ran_bernoulli( rng, params->lateral_flow_test_fraction );
Collaborator:

If params->lateral_flow_test_on_traced=FALSE does the gsl_ran_bernoulli() get called?
Probably best to move to inside the if statement below.

Contributor Author (@mattea):

No, C short-circuits boolean expressions, so gsl_ran_bernoulli() is never called when params->lateral_flow_test_on_traced is FALSE.

And I think it makes sense to order lateral flow tests even if quarantine_on_traced is off, unless you don't want that to happen. Or maybe you meant moving the rng call inside the lateral flow if? But I think whether or not the quarantine exclusion happens should depend on whether they accepted the lateral flow test.
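The short-circuit behaviour being discussed can be demonstrated in Python, whose `and` evaluates lazily the same way C's `&&` does. The function below is a stand-in for the GSL call, not real model code:

```python
calls = []

def bernoulli_draw():
    # Stand-in for gsl_ran_bernoulli(); records that it was actually invoked.
    calls.append(1)
    return 1

flag_off = False  # stand-in for params->lateral_flow_test_on_traced == FALSE
result = flag_off and bernoulli_draw()

# The right-hand side was never evaluated, so no draw was recorded.
assert calls == []
```

Because no random draw happens when the flag is off, the rng stream is also unaffected, which matters for reproducibility of seeded runs.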

src/interventions.c (outdated; resolved)

}

/*****************************************************************************************
Collaborator:

This function does not seem to be used (unless I've missed its usage). Please remove.

Contributor Author (@mattea):

It's used in the Python. I'm also fine with removing it explicitly from the Python, but I want to be able to at least call it manually in Python each time step to understand the sensitivity.

}
intervention_quarantine_until( model, indiv, NULL, time_event, TRUE, NULL, model->time, 1 );
Collaborator:

Shouldn't this remain inside the if block?

Also, if the individual does not quarantine, then the household members should not quarantine either.

Contributor Author (@mattea):

I want to create the trace token whether or not quarantine_compliance_positive is true so tracing can still happen even if they don't quarantine (since the upload fraction is a separate parameter).

And both their quarantine and the household will be based on the time_event, so their quarantining should remain the same, no?
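The ordering being argued for can be sketched as follows. This is a hypothetical Python sketch of the control flow, not the PR's C code, and the names are illustrative only:

```python
import random

def on_positive_result(compliance_positive, rng=random.random):
    """Hypothetical sketch: always create the trace token so tracing can
    proceed (upload fraction is a separate parameter), and gate only the
    individual's own quarantine on quarantine_compliance_positive."""
    create_trace_token = True                  # tracing happens regardless
    quarantines = rng() < compliance_positive  # quarantine only if compliant
    return create_trace_token, quarantines
```

With the default compliance of 1 the individual always quarantines, preserving backwards compatibility; lowering it drops quarantine without breaking tracing.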

Will clean up spacing soon, just wanted to get this out for PR.
* Breaking out lfa test helpers.
* Gating computation on performing LFA.
* Condensing redundancy.
* Making test cleanup event driven.
* Additional tests for number of tests performed.
@mattea (Contributor Author) commented Mar 8, 2021

For some reason I can't respond to some comments, so adding it here:

  • I added additional tests for specifically computing the number of tests performed.
  • Switched the hardcoded peak to the calculated one.
  • I couldn't switch to the new Model system, it broke file writing (e.g. model.write_individual_file())
