
Calculation of [Initial I] flawed by use of user-input hospitalization rate. #291

Closed
btm4554 opened this issue Mar 27, 2020 · 4 comments
Labels
models Correct/improve the underlying models

Comments

btm4554 commented Mar 27, 2020

Summary

Use of the user-entered estimate of hospitalization rate leads to poor estimates of [Initial I] and an optimistically high rate of detection.

This can be best demonstrated by running the model using publicly available data.
For example, Philadelphia on 3/24 had 24 hospitalized and 252 confirmed; on 3/26, 40 hospitalized and 675 confirmed.

With the model's fixed hospitalization rate, this implies that over the course of 48 hours, while performing a small number of tests only on the at-risk and healthcare workers, the rate of detection rose to an astonishing 40%.

Given the circumstances, especially limited testing, it seems only reasonable that rising confirmed positive cases should increase [Initial I] and decrease [Hospitalization Rate].
There is no evidence that a detection rate anywhere close to 30 or 40% is even a remote possibility given extremely limited testing; the likelier explanation is that the user simply input too high a hospitalization rate to realistically explain the data.
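To make the arithmetic concrete, here is a minimal Python sketch of the detection rate a fixed hospitalization rate implies. The 2.5% value and the function name are assumptions for illustration, not the project's actual code:

```python
# Illustration only: detection rate implied by a fixed hospitalization rate.
HOSP_RATE = 0.025  # assumed fixed 2.5% hospitalization rate

def implied_detection_rate(confirmed, hospitalized, hosp_rate=HOSP_RATE):
    """Estimated infected = hospitalized / hosp_rate;
    detection rate = confirmed / estimated infected."""
    estimated_infected = hospitalized / hosp_rate
    return confirmed / estimated_infected

# Philadelphia figures quoted above:
print(implied_detection_rate(252, 24))  # 3/24 -> 0.2625 (~26%)
print(implied_detection_rate(675, 40))  # 3/26 -> 0.421875 (~42%)
```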

### Suggested fix

Eliminate hospitalization rate as a user input. Neither the user nor, realistically, anyone else can provide this parameter at a population level with any accuracy. Meanwhile, use of the default value of 2.5% results only in the simplistic and meaningless assumption that [Initial I] = 40 × (currently hospitalized).

[Initial I] could be calculated by any number of methods that do not assume a hospitalization rate. I see no reason to suggest one; that choice would be better made by an infectious disease professional.

The hospitalization rate for the simulation would then be inferred from the ratio of currently hospitalized to estimated infected.
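A sketch of that inversion, assuming some independent estimate of current infections is available. The function name and the 5,000 figure below are purely hypothetical:

```python
# Sketch of the suggested fix: infer the hospitalization rate from data
# instead of asking the user for it. `estimated_infected` would come from
# whatever independent method estimates [Initial I].
def inferred_hosp_rate(currently_hospitalized, estimated_infected):
    """Hospitalization rate = currently hospitalized / estimated infected."""
    return currently_hospitalized / estimated_infected

# e.g. if an independent estimate put current infections at 5,000 (hypothetical):
print(inferred_hosp_rate(40, 5000))  # 0.008, i.e. 0.8%
```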

@btm4554 btm4554 added the models Correct/improve the underlying models label Mar 27, 2020

btm4554 commented Mar 27, 2020

Potentially addressed by #257

@PhilMiller
Collaborator

It looks like #255 is actually getting deployed.


btm4554 commented Mar 27, 2020

These seem like unrelated issues: #255 extrapolates the doubling time from the time it takes hospitalizations to double, which absolutely makes sense.

Assume this reasonable scenario: a segment of the population is tested and known positive cases double, but hospitalizations do not instantaneously change.

This is very possible, especially in places with early outbreaks, and essentially breaks the simulation.

There's no reason to think that going from 1,000 to 2,000 known positive cases without a corresponding doubling in hospitalizations implies a high rate of detection; if anything, it implies the exact opposite.
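A numeric sketch of this scenario under an assumed fixed 2.5% hospitalization rate (all numbers hypothetical):

```python
# Hypothetical numbers: hospitalizations stay flat while a testing push
# doubles known positives. With the hospitalization rate fixed, the
# estimated infection count cannot move, so the implied detection rate
# doubles instead of [Initial I] rising.
HOSP_RATE = 0.025
hospitalized = 50
estimated_infected = hospitalized / HOSP_RATE  # 2000, pinned by the fixed rate

detection_before = 1000 / estimated_infected  # 0.5
detection_after = 2000 / estimated_infected   # 1.0: the model now claims 100% detection
print(detection_before, detection_after)
```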

@PhilMiller
Collaborator

The Tested cases input and Detection rate output have been removed from the code, as they are irrelevant to the hospital-forecasting use case.

We may consider re-introducing confirmed tests as a lower-bound constraint on what the fit parameters say about the number of infected on a given day, but that's a very separate issue. It would only really come up if testing were happening at far higher rates than it is in most places.
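If that constraint were ever added, it could look like the following sketch (names are assumptions, not the project's code): confirmed positives on a day act as a floor on the fitted infection estimate.

```python
# Sketch of confirmed tests as a lower bound on fitted infections:
# a day's true infections can never be fewer than its confirmed positives.
def constrained_infected(fitted_infected, confirmed):
    """Fitted infection estimate, clipped from below by confirmed cases."""
    return max(fitted_infected, float(confirmed))

# If the fit says 800 infected but 1,000 are already confirmed, use 1,000:
print(constrained_infected(800.0, 1000))   # 1000.0
print(constrained_infected(1500.0, 1000))  # 1500.0
```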

Fitting the hospitalization rate to data is specifically called for in #452.
