Use of a user-entered estimate of the hospitalization rate leads to poor estimates of [Initial I] and an optimistically high rate of detection.
This can best be demonstrated by running the model with publicly available data.
For example, Philadelphia on 3/24 had 24 hospitalized and 252 confirmed; on 3/26, 40 hospitalized and 675 confirmed.
With a fixed hospitalization rate, the model implies that over the course of 48 hours, with only a small number of tests performed on at-risk individuals and healthcare workers, the rate of detection rises to an astonishing 40%.
Given the circumstances, especially limited testing, it seems only reasonable that rising confirmed positive cases should increase [Initial I] and decrease [Hospitalization Rate].
There is no evidence that a detection rate anywhere close to 30-40% is even a remote possibility given extremely limited testing; the likely explanation is that the user simply input a hospitalization rate too high to realistically explain the data.
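The arithmetic behind those numbers can be reproduced in a short sketch. This assumes, as described above, that the model backs out total infections as hospitalized divided by the hospitalization rate (default 2.5%); the function name is mine, not the model's:

```python
HOSP_RATE = 0.025  # default user-entered hospitalization rate


def implied_detection_rate(hospitalized, confirmed, hosp_rate=HOSP_RATE):
    """Back out total infections from hospitalizations, then compute
    what fraction of those infections the confirmed count represents."""
    estimated_infected = hospitalized / hosp_rate
    return confirmed / estimated_infected


# Philadelphia figures from above:
print(implied_detection_rate(24, 252))  # 3/24: 0.2625 -> ~26% detection
print(implied_detection_rate(40, 675))  # 3/26: 0.421875 -> ~42% detection
```

With the hospitalization rate pinned, the only way the model can reconcile the jump in confirmed cases is to inflate the detection rate.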
### Suggested fix
Eliminate hospitalization rate as a user input. Neither the user, nor realistically anyone at all, can provide this parameter at a population level with any accuracy. Meanwhile, use of the default value of 2.5% results only in the simplistic and meaningless assumption that [Initial I] = 40 * (currently hospitalized).
[Initial I] could be calculated by any number of methods that do not assume a hospitalization rate. I see no reason to suggest one; that would be better decided by an infectious disease professional.
The hospitalization rate for the simulation would then be inferred from the ratio of currently hospitalized to estimated infected.
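A minimal sketch of that inversion, assuming [Initial I] has already been estimated by some independent method (the function name and the example infected count are hypothetical placeholders):

```python
def infer_hospitalization_rate(currently_hospitalized, estimated_infected):
    """Infer the hospitalization rate from observed hospitalizations and
    an independently estimated infected count, rather than asking the
    user to supply the rate as an input."""
    return currently_hospitalized / estimated_infected


# e.g., if an independent method estimated 2,000 currently infected
# alongside the 40 observed hospitalizations:
rate = infer_hospitalization_rate(40, 2000)
print(rate)  # 0.02, i.e. a 2% hospitalization rate
```

This reverses the current direction of inference: hospitalizations remain an observed input, and the rate becomes a derived quantity.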
These seem like unrelated issues: #255 extrapolates the doubling time from the time it takes hospitalizations to double, and that absolutely makes sense.
Assume this reasonable scenario: a segment of the population is tested and known positive cases double, but hospitalizations do not instantaneously change.
This is very possible, especially in places with early outbreaks, and essentially breaks the simulation.
There's no reason to think that going from 1,000 to 2,000 known positive cases without a corresponding doubling in hospitalizations implies a high rate of detection; if anything, it implies the exact opposite.
Tested cases input and Detection rate output have been removed from the code, as irrelevant to the hospital forecasting use case.
We may consider re-introducing confirmed tests as a lower-bound constraint on what the fit parameters say about the number of infected on a given day, but that's a very separate issue. It would only really come up if testing were happening at much higher rates than it is in most places.
Fitting the hospitalization rate to data is specifically called for in #452.