6 lessons learned to get ready for the next wave of COVID

Interesting perspective from DJ Patil: "6 lessons learned to get ready for the next wave of COVID" (by dj patil, on Medium)

In particular, lesson 4: Why Modeling is Hard

1. Resolution — they work mostly at the country or state level, and few are at the county level (let alone city, neighborhood, or block).
2. Timeliness of data used — they have limited ability to leverage observed data.
3. Model sophistication — at this time, they have limited ability to consider age distributions or socio-economic conditions.

Management of NPIs at the county and city level is mismatched with today's best models. It's akin to needing a scalpel when today's models are a baseball bat.
We have some of the leading experts on modeling Covid-19 in this community; curious whether people agree with DJ's description of the challenge.

Basically he is saying that our best models are not useful tools for local government decision makers.

I’ll bite.

Modeling is hard. It is hard for the reasons mentioned, and also because we are modeling a system dependent on human behavior. This consortium is full of social scientists who know how challenging humans are to model.

Deterministic compartmental models operate on population means, which describe small populations poorly when those populations are subject to stochastic shocks that propagate through time. If a model misses a shock today, its future predictions will drift further off.
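A minimal sketch of the point about means versus shocks, using made-up parameters: a deterministic SIR model produces one smooth trajectory, while a chain-binomial (stochastic) version of the same model, run on a small population, produces a wide spread of outcomes, including runs where the outbreak dies out early. All numbers here are illustrative, not calibrated to Covid-19.

```python
import random

def deterministic_sir(n, i0, beta, gamma, steps):
    # Mean-field SIR (discrete time): fractions evolve smoothly, no randomness.
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(steps):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r  # final epidemic size (total ever infected)

def stochastic_sir(n, i0, beta, gamma, steps, rng):
    # Chain-binomial SIR: infection and recovery counts are random draws,
    # so small populations can diverge sharply from the mean trajectory.
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        p_inf = 1 - (1 - beta / n) ** i  # per-susceptible infection probability
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

# Illustrative parameters: small county-sized population, 2 seed cases.
n, i0, beta, gamma, steps = 200, 2, 0.3, 0.1, 100
rng = random.Random(0)
det = deterministic_sir(n, i0, beta, gamma, steps)
runs = [stochastic_sir(n, i0, beta, gamma, steps, rng) for _ in range(200)]
print(f"deterministic final size: {det:.0f}")
print(f"stochastic final sizes:   {min(runs)}..{max(runs)}")  # wide spread
```

The deterministic model reports a single number; the stochastic runs show how far a real small population can land from it, which is exactly why a missed early shock compounds into large forecast errors.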

IMO the value of modeling is to assess qualitative differences (and sometimes magnitudes) between policy interventions (closures, etc.). I strive to communicate the fragility of model predictions and to emphasize the differences across interventions.

I think models are useful when we are open about their limitations.

Modeling is hard, definitely; more so when it's about behavior, as @Jude_Bayham_Colorado_State_U states above. But maybe we are asking models to do too much: models are more effective when they have a narrow purpose, and one can accept shortcomings if they do one thing very well. For example, one can forgo forecasting the number of cases in favor of a model that rank-orders risk very well, in a forward-looking manner, to identify high-risk geographic areas or population segments and prioritize scarce testing resources. So we need to be OK with having a multitude of models, and point each at what it does best.
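To make the rank-ordering point concrete, here is a toy example with invented county names and numbers: a model whose case-count predictions are off by roughly an order of magnitude can still rank counties in exactly the right order, which is all that matters for allocating scarce testing capacity.

```python
# Hypothetical counties: "true" future case counts vs a model whose
# magnitudes are ~10x too low but whose ordering is informative.
true_cases  = {"A": 500, "B": 120, "C": 80, "D": 30, "E": 10}
model_preds = {"A": 55,  "B": 20,  "C": 11, "D": 6,  "E": 2}

def rank(d):
    # Map each county to its rank (1 = highest predicted burden).
    ordered = sorted(d, key=d.get, reverse=True)
    return {k: i + 1 for i, k in enumerate(ordered)}

true_rank, pred_rank = rank(true_cases), rank(model_preds)

# Allocate two units of scarce testing capacity to the top-ranked counties.
top2 = sorted(pred_rank, key=pred_rank.get)[:2]
print(top2)            # ['A', 'B']
print(true_rank == pred_rank)  # True: priorities match despite bad magnitudes
```

Judged on predicted case counts this model looks terrible; judged on the narrow purpose it was built for, it is doing its one job very well.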

Another thing to consider is that the benchmark used to judge the performance or accuracy of the variety of models out there is whether they do a good job of predicting confirmed cases. That is natural, since these are the readily available figures, but confirmed cases are not in fact the target of the models, and certainly not what we should model to. Modeling covid-19 is hard because there is no ground truth: we don't yet have a clear idea of its prevalence that we could in turn use to build models and evaluate predictions across scenarios. If your model over-predicts confirmed cases by 5x or 10x, it may be closer to the truth than the confirmed counts are, certainly at the beginning of the pandemic.
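The arithmetic behind that last claim is simple; the numbers below are purely illustrative, and the ascertainment rate is an assumption, not an estimate. If only a fraction of infections are ever confirmed by testing, a model of true infections should over-predict confirmed cases by roughly one over that fraction.

```python
# Illustrative numbers only: "ascertainment" is the assumed fraction of
# true infections that ever show up as confirmed cases.
confirmed_cases    = 10_000
ascertainment      = 0.10                 # assumed: 1 in 10 infections confirmed
implied_infections = confirmed_cases / ascertainment

model_prediction   = 95_000               # hypothetical model output (infections)
ratio_vs_confirmed = model_prediction / confirmed_cases

print(ratio_vs_confirmed)                      # 9.5: looks like 9.5x over-prediction...
print(model_prediction / implied_infections)   # ...but within 5% of the implied truth
```

Under these assumptions, a "9.5x over-prediction" against confirmed cases is actually a near-miss against the quantity the model was built to estimate.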