Using Data to Fight COVID-19

Dec 07 2020

Article cover image

On November 17, XPRIZE launched the $500,000 Pandemic Response Challenge: a competition for data scientists to build models that can better predict how policy interventions will change the trajectory of COVID-19.

I sit on the advisory board for this competition and I also run the Oxford COVID-19 Government Response Tracker (OxCGRT). We track the policy responses of (almost) every country in the world, and our dataset forms a core part of the competition.

But government policy is just the tip of the iceberg when trying to understand the pandemic. We already know that similar policies can have vastly different outcomes in different countries. Compare Singapore and France, two countries with similar overall response as measured by our OxCGRT Stringency Index, but with quite different outcomes.
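To give a feel for what a composite measure like this looks like, here is a simplified sketch of a stringency-style index: each policy indicator is an ordinal value rescaled to 0–100, and the index is their average. This is an illustration only, not the actual OxCGRT formula, which also accounts for whether each measure is targeted or applies nationwide.

```python
# Simplified sketch of a stringency-style index. Each indicator is recorded
# as an ordinal value (0 = no measure, up to some maximum severity); we
# rescale each to 0-100 and average. NOT the exact OxCGRT methodology.
def stringency_index(indicators: dict) -> float:
    """indicators maps name -> (ordinal value, maximum possible value)."""
    scores = [100 * value / maximum for value, maximum in indicators.values()]
    return sum(scores) / len(scores)

# Hypothetical example values, for illustration only.
example_country = {
    "school_closing": (3, 3),   # all levels closed
    "stay_at_home": (2, 3),     # leave home only for essentials
    "gatherings": (4, 4),       # gatherings of fewer than 10 people banned
}
print(round(stringency_index(example_country), 1))  # -> 88.9
```

Two countries can land on nearly identical index values through quite different combinations of measures, which is one reason the same headline number can correspond to very different outcomes.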

We simply cannot say that any specific policy will work in every case: it is too easy to find a real-world counterexample where it didn't.

What's more, even if we think a certain policy worked in the past, that doesn't necessarily mean it will work again in different future contexts. Predicting the impact of today's policy choices is a tall order – complicated further over the next 12 months as vaccines slowly start to roll out. No one has succeeded yet at predicting which policies will work, and there's no guarantee anyone ever will. That's why we need the challenge. Context matters, and so data on the context is crucial.

Competitors have until 8 December to register, with final entries to be submitted later this month. Their models will be judged in an isolated virtual environment, without access to real-time external data (apart from our OxCGRT data). But teams could certainly use existing data from the last year in training their models – assigning weights to interventions based on past patterns, or the current state immediately before the competition starts. XPRIZE is further encouraging teams to share their datasets, to create public goods for others. I want to describe a few areas where I think additional data will make a big difference.


If we think about the causal chain of viral transmission, individual behaviour is the holy grail.

Indeed, you can think of government interventions solely as a way to influence individual behaviour, reducing the volume of movement and interaction throughout the community. Fewer large-group interactions mean fewer opportunities for viral transmission. It sounds trivial to say, but when we see that stay-at-home orders correlate with lower transmission, that is only because stay-at-home orders affect individual activity. So what if we could measure this activity directly?

There are many factors here that we know are important: population density, contact time, mixing between regions. Data on individual behaviour – whether from cell phones, public transit companies, or elsewhere – could shed light on the extent to which restrictions actually impact behaviour. In some countries, similar levels of government restrictions have much less impact on individual behaviour. Tracking how actual activities change over time will help models to better predict what will work tomorrow, rather than what did work yesterday.


Another factor is compliance and enforcement: our OxCGRT database might report a mask mandate or a stay-at-home order, but are people actually complying with these rules? And how far will governments go to enforce them?

We know for a fact that compliance varies. One study found that 75% of people with COVID-19 symptoms did not adhere to self-isolation rules. In other countries, self-isolation was akin to house arrest, ankle bracelets and all. In South Africa, the military was deployed during a lockdown to prevent travel, whereas in the UK, the prime minister's adviser took his family on an outing to Barnard Castle, 250 miles from London, during lockdown.

There's also reason to believe compliance decays over time – the longer a policy is in place, the less of an impact it has on people's behaviour. Indeed, the effectiveness of a policy is heavily path dependent on what has been happening in that country before the policy is implemented.
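One way a model could encode this kind of decay is to discount a policy's raw stringency by how long it has been in place. The sketch below is purely illustrative: the exponential form and the 60-day half-life are assumptions for demonstration, not empirical findings.

```python
import math

# Illustrative sketch: discount a policy's effect by its time in force,
# using an assumed exponential decay. The 60-day half-life is an arbitrary
# assumption for illustration, not an estimate from real data.
def effective_stringency(raw_stringency: float, days_in_place: int,
                         half_life_days: float = 60.0) -> float:
    decay = math.exp(-math.log(2) * days_in_place / half_life_days)
    return raw_stringency * decay

print(round(effective_stringency(80.0, 0)))    # day one: full effect, 80
print(round(effective_stringency(80.0, 60)))   # one half-life later: 40
```

A real model would want to learn the decay rate from mobility or case data rather than assume it, and it might differ sharply between countries, which is exactly the path dependence described above.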


Another indicator of actual behaviour could be public sentiment at a population scale. One of my untested hypotheses is that public attitudes and sentiment – the public zeitgeist – play a massive role in overall outcomes.

In countries where there is strong community acceptance of policy measures, we can (likely) assume that these measures will be more effective. In countries where there are large segments of the community opposed to government restrictions, we might expect interventions to have less of an impact.

In Australia, the country watched as Melbourne spent months in lockdown from July to October to eliminate the virus. In November, when a small cluster of 20 cases appeared in a different part of the country, there was a robust (and willing) response from the public: the city went into lockdown two days after the cases were identified; tens of thousands of people queued for testing. They did not want to be the next Melbourne.


Public sentiment is of course shaped, at least in part, by the messages and communications from government leaders. Indeed, few things are more important.

The OxCGRT describes the rules made by governments. But few citizens will read the text of an Emergency Declaration or Executive Order. People take their cues from press conferences, from the news, and from mass public communications.

In some countries, the messaging of senior government leaders has been in direct opposition to the country's containment and health policies. For example, the president of Madagascar promoted a herbal tea as a cure for COVID-19 (disputed by the WHO) at the same time as public health officials were trying to enforce restrictions on gatherings. The president of the United States publicly rejected mask wearing at the same time that many states were trying to enforce mask mandates.  


The goal of Pandemic Response Challenge competitors is to accurately predict how many cases will be reported by countries. This is the real-world outcome their predictions will be tested against.

Of course, the number of cases reported is not the same thing as the actual number of cases in a country. As with everything in COVID-19, there is extreme variation: estimates suggest that in some places the reported statistics have captured just 5% of real transmission, while in others it is close to 100%. And of course, this changes over time: as cases rise in a country and capacity for testing and contact tracing is overwhelmed, it becomes much more likely that cases go unreported.

A model could be more successful if it is based upon an underlying estimate of "true" epidemiological transmission – something we can never directly measure or know. Data like testing rates or hospital admissions may be useful proxies here.
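As a very crude illustration of this idea, a model might inflate reported cases by an ascertainment estimate derived from test positivity: when a high share of tests come back positive, testing is likely saturated and more cases are being missed. The mapping from positivity to ascertainment below is an assumption invented for illustration, not an established epidemiological formula.

```python
# Illustrative sketch: adjust reported cases using a crude ascertainment
# estimate based on test positivity. The linear heuristic and the 5% floor
# are assumptions for illustration only.
def estimate_true_cases(reported_cases: int, test_positivity: float) -> float:
    # Assumed heuristic: higher positivity -> more saturated testing ->
    # smaller fraction of true infections captured in official counts.
    ascertainment = max(0.05, 1.0 - test_positivity * 4)
    return reported_cases / ascertainment

print(round(estimate_true_cases(1000, 0.02)))   # ample testing: ~1087
print(round(estimate_true_cases(1000, 0.20)))   # saturated testing: 5000
```

Serious approaches use richer signals (hospital admissions, deaths, seroprevalence surveys) to back out the same hidden quantity, but the principle is identical: model the unobserved transmission, then map it back to what countries will actually report.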


None of these ideas is easy. If they were, someone else would already be doing them.

With the exception of aggregate-level mobility data (such as that published by Google and Apple) and testing statistics (such as those collected by Our World in Data), I don't know of any good data sources on these issues that cover every country in the world. That is part of the challenge, and any contribution of additional data would be meaningful.

But I am optimistic that teams will rise to the challenge! I could imagine creative teams scraping social media posts to analyse public sentiment and 'willingness' to comply with stricter measures. Or perhaps using satellite imagery to track patterns of activity (cars on the road; night-time lights) or creating maps of population density over time. Air traffic and import records might show how international flows correlate with transmission. Financial transaction microdata might indicate the extent to which people in a village will comply with a lockdown. Global survey responses might tell us how values differ around the world, or how much the general population actually comprehends public health campaigns.

The Pandemic Response Challenge isn't about figuring out what worked in the past. It is much harder than that. It is about trying to predict what will work in the future.

I am convinced that the best entrants won't just have the most efficient ML algorithm to crunch through our OxCGRT policy data. They will find the most novel, relevant, and precise additional data to strengthen their models.

Toby Phillips is Executive Director of the Oxford COVID-19 Government Response Tracker at the Blavatnik School of Government, Oxford University.