Social Influence in the COVID-19 Pandemic- Community Establishments’ Closure Decisions Follow Those of Nearby Chain Establishments

:rotating_light::rotating_light: New working paper alert!! :rotating_light::rotating_light:

Hello all,

Excited to share a new working paper based on SG data. In this paper, we show that the closing decisions of large, national brands don't just affect their own franchisees. They also influence smaller, independent stores in the same industry and zip code to close. The main takeaway is that leaders of large brands have an important role to play in curtailing the spread of COVID-19, not just through their own stores but also by influencing those around them.

Our full paper can be downloaded here: OSF

We would love any comments! We also devised and validated a method using SG data to determine whether or not an establishment is closed on a given date (check out appendix A) – this might be useful to others.
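As a rough illustration of the general flavor of closure detection from visit data (the validated method is in Appendix A and is more careful than this), one could flag an establishment as closed on a day when its visits fall to zero despite a positive pre-pandemic baseline. The baseline window, threshold, and column names below are illustrative assumptions, not the paper's parameters:

```python
import pandas as pd

def flag_closed(daily_visits: pd.DataFrame, baseline_end="2020-02-29",
                threshold=0.0) -> pd.DataFrame:
    """Flag establishment-days as 'closed' when visits fall to the
    threshold despite a positive pre-period baseline.

    daily_visits: columns ['placekey', 'date', 'visits'].
    The baseline window and threshold are illustrative choices,
    not the paper's validated parameters (see Appendix A for those).
    """
    df = daily_visits.copy()
    df["date"] = pd.to_datetime(df["date"])
    # Mean daily visits per establishment during the pre-period.
    base = (df[df["date"] <= baseline_end]
            .groupby("placekey")["visits"].mean()
            .rename("baseline")
            .reset_index())
    df = df.merge(base, on="placekey")
    # Closed = no visits on a day, for places that had traffic before.
    df["closed"] = (df["visits"] <= threshold) & (df["baseline"] > 0)
    return df
```

A real implementation would also want to handle weekly seasonality and short data gaps, which a single-day zero can't distinguish from true closure.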

Social Influence in the COVID-19 Pandemic- Community Establishments’ Closure Decisions Follow Those of Nearby Chain Establishments
As conveners that bring various stakeholders into the same physical space, firms can powerfully influence the course of pandemics such as COVID-19. Even when operating under government orders and health guidelines, firms have considerable discretion to keep their establishments open or closed during a pandemic. We examine the role of social influence in the exercise of this discretion at the establishment level. We theorize that the decisions of chain establishments—which are associated with national brands—to stay open (or closed) will compel proximate, same-industry community establishments—which are independently owned or managed—to make the same choice. We further propose that the magnitude of this effect will diminish when community establishments are more socially embedded in their local environment. To evaluate these propositions, we use cellphone location tracking data from SafeGraph (www.safegraph.com) on daily visits to 230,403 community establishments that are co-located with chain establishments affiliated with 319 brands in the United States. We tease apart the effect of social influence from other factors by using an instrumental variables strategy that relies on a community establishment’s exposure to national variation in the timing of closure decisions by brands and find support for our propositions. We discuss implications of these findings for research on social influence and for policies to manage a pandemic such as COVID-19.

cc @Adit_IITG @Mathijs_De_Vaan_Berkeley @Sameer_Srivastava_Berkeley @Saqib_Choudhary

short thread: https://twitter.com/abhishekn/status/1289318184676458507

This is very cool, and the results certainly make sense to me! Loved the Oklahoma example. Some notes:

  1. I’m skeptical that the IV satisfies the exclusion restriction (I’m a labor economist; that’s my duty). Coming from an ed background, the IV reminds me of the peer effects literature and the many problems therein. Specifically, there’s a clear Manski-style reflection problem. If you’re saying that other CrossFits closing nationally is associated with (or causes) my specific CrossFit closing, which impacts my local Jim’s Gym closing, then national CrossFit closings affect local gyms in those locations, which may themselves affect Jim’s Gym, violating the exclusion restriction. At the least you’re making some assumptions about how Jim’s Gym responds to the national gym scene.
  2. If that’s too picky, you might still want to go through the assumptions about Bartik-style instruments that the Goldsmith-Pinkham et al. paper you cite makes clear. Also, that paper is now forthcoming (or maybe already published?) at AER. Some of the trickier assumptions seem like they might hold for you, like exogeneity of the initial share.
  3. It sounds like in the appendix you were using raw visit counts to compare February to shut-down times. This could cause you some problems, as the size of the sample itself changes a fair amount over time, and the sample growth rate differs across regions. It’s probably a good idea to scale by the size of the sample in a given region at a given time. Also, most of the time when SG is being used to make predictions about individual establishments, it’s ideal to use some sort of shrinkage using more-aggregated data. Since you’re trying to pick out differences between local and national establishments, that might not work for you. But you might want to consider downweighting or otherwise being real lax about the tiny establishments, as changes in open/closed for them are likely to be highly noisy.
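For concreteness, a common leave-one-out construction for this kind of brand-exposure instrument looks roughly like the sketch below (illustrative column names; not necessarily the paper's exact construction). Excluding the focal zip from the national closure share is one standard way to blunt the reflection concern above:

```python
import pandas as pd

def leave_out_brand_closure_rate(chain_days: pd.DataFrame) -> pd.DataFrame:
    """Leave-one-out closure rate per (brand, date, zip): the share of
    a brand's establishments closed nationally on a date, excluding
    those in the focal zip code.

    chain_days: columns ['brand', 'zip', 'date', 'closed'] with
    'closed' in {0, 1} at the establishment-day level. Column names
    and the exact construction are illustrative assumptions.
    """
    g = chain_days.groupby(["brand", "date"])
    nat = g["closed"].transform("sum")      # national closures of the brand
    nat_n = g["closed"].transform("size")   # national establishment count
    gl = chain_days.groupby(["brand", "date", "zip"])
    loc = gl["closed"].transform("sum")     # closures in the focal zip
    loc_n = gl["closed"].transform("size")  # establishments in the focal zip
    out = chain_days.copy()
    # NaN when the brand has no establishments outside the focal zip.
    out["iv"] = (nat - loc) / (nat_n - loc_n)
    return out
```

This share would then be aggregated to the community establishment's exposure across co-located brands before entering the first stage.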

Hi @Nick_H-K_Seattle_University – thanks so much for engaging with this. I’m new to shift-share/Bartik designs and still learning all the intricacies and trying to catch up on the lit, so this is very helpful.

  1. I agree that the exclusion restriction is always a matter of debate, but we do have industry-date FEs to account for some level of industry-wide trends. We should make clear that the channel you describe (a national closing that affects a local gym far away, which then affects me) might be a concern in our setting.

  2. Still digesting the Goldsmith-Pinkham paper – we implement one “balance check” type test in the paper, but there’s clearly a lot more to do. Thanks. If there is any particular test that you would find convincing or think is particularly important, I would love to hear it.

  3. The sample issue is a good one. We should check how stable the estimates are after accounting for it, although we focus on establishments with continuous coverage over six weeks. We also did a lot of validation on the visit counts, and you are right that smaller places are an issue – larger places turn out to be an issue too, and our methods account for this imperfectly. We show that one simpler alternate way of counting visits doesn’t affect our results, but this is an important point and we could do more.

Thanks again!! I’ve enjoyed following you on Twitter, so it’s nice to see you engage with our work and offer feedback!

  1. I think your setting is pretty good when it comes to the Goldsmith-Pinkham issues, at least on a theoretical level, since “shutting down” is something that didn’t really exist in the same way in February and is pretty plausibly random across brands in initial share. Pointing that out might be good, and it makes the design more plausible. Where trouble might come in is if the way you calculate Open picks up some prior-share differences that are endogenous, for example (to make something up) if it’s way easier for your calculation to notice that big things shut down than small things. You could probably test this using all-prior data, perhaps a Feb-to-early-March comparison, or Jan-to-Feb, maybe focusing just on areas where the Rotemberg weights are biggest.
  2. The issue isn’t so much change in the sample of businesses, but change in the sample of phones. If there are 100 phones in the sample in my neighborhood in February, and 100 phones in the sample in your neighborhood in February, and by April my neighborhood has 200 phones in the sample but yours still has 100, then “Open” is going to mean something pretty different in your and my neighborhood. This can be adjusted for using the home-panel-summary files.
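The panel-size adjustment described above can be sketched minimally as follows, assuming visit counts and home-panel device counts have already been aggregated to some region-month level (all column names here are illustrative, not SafeGraph's actual schema):

```python
import pandas as pd

def normalize_visits(visits: pd.DataFrame, panel: pd.DataFrame) -> pd.DataFrame:
    """Scale raw visit counts by the size of the device panel in the
    establishment's region and month, so that 'Open' means the same
    thing in a neighborhood whose panel doubled as in one that didn't.

    visits: ['placekey', 'region', 'month', 'visits']
    panel:  ['region', 'month', 'devices']  # e.g. from home-panel-summary
    Region level and column names are illustrative assumptions.
    """
    df = visits.merge(panel, on=["region", "month"], how="left")
    # Visits per 1,000 panel devices in that region-month.
    df["visits_adj"] = df["visits"] / df["devices"] * 1000
    return df
```

With this scaling, a neighborhood whose panel grew from 100 to 200 devices no longer looks like it got twice as busy when raw visits merely kept pace with panel growth.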

Happy to look at this! It’s very cool work

really interesting!