By Mark Schneider, director of the Institute of Education Sciences (IES)
On Monday, March 22, IES announced its first sponsorship of a challenge with XPRIZE. The Digital Learning Challenge is designed to incentivize developers of digital learning platforms to build, modify, and then test an infrastructure to run rigorous experiments that can be implemented and replicated faster than traditional on-the-ground randomized controlled trials (RCTs). The long-term goal of the competition is to modernize, accelerate, and improve the ways in which we identify effective learning tools and processes that improve learning outcomes.
IES is committed to identifying what works for whom under what circumstances. This consists of multiple tasks: the first is to identify what works—and we all know that by itself is a large hurdle. But knowing that an experiment worked in one or two places for one or two different types of students will never get us to a repertoire of effective practices that fit the multitude of different student populations found in the 100,000+ schools across the nation. That requires systematic replication.
IES has run several rounds of systematic replication requests for applications (RFAs) for interventions that have evidence of effectiveness. But just like the original research they build on, these replications take a long time and vary only a few parameters. Fulfilling our commitment to learning what works for whom using these traditional methods is like watching paint dry. It happens, but…
The goal of the challenge is to speed up the process of testing, replicating, retesting, and replicating again across a wider range of demographics than has been done using traditional RCTs.
Here's the key idea behind the competition:
The winning team of the Digital Learning Challenge sponsored by IES will (a) build the best infrastructure to conduct rapid, reproducible experiments and (b) demonstrate the resilience and rigor of this infrastructure in a formal learning context. At a minimum, the winning team must demonstrate its ability to (a) conduct an RCT or quasi-experimental design (QED) study using any meaningful and substantive educational intervention, and (b) systematically replicate that experiment at least five times in no more than 30 days across (c) at least three distinct demographics. While teams will not be judged on the effectiveness of the intervention they are testing, the intervention cannot be trivial. The judges will award bonus points to teams that conduct tests with difficult-to-reach populations.
Consider the challenges this competition poses. Conduct an experiment, learn from it, replicate it across different demographics—including difficult-to-reach ones.
As the rules of the competition lay out, competitive solutions will be those best able to demonstrate the robustness of their platform to host a variety of experiments on education interventions, including randomly assigning students, teachers, classrooms, or schools to groups; collecting relevant, high-quality data; and conducting reproducible analyses based on those data that demonstrate the capabilities of the platform. Ideally, the winning team will also be able to provide comprehensive measures, multi-dimensional representations of learner engagement, measures that are robust with respect to the constructs they are meant to capture, and contextual, granular data.
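To make the random-assignment piece concrete, here is a minimal, hypothetical sketch in Python of the kind of reproducible, stratified assignment step such a platform would need. The function name, the classroom identifiers, and the demographic strata are illustrative assumptions, not part of the competition rules or any team's actual system.

```python
# Illustrative sketch only: reproducible, stratified random assignment of
# clusters (e.g., classrooms) to study arms. All names and data are hypothetical.
import random
from collections import defaultdict

def assign_clusters(clusters, demographics, arms=("treatment", "control"), seed=2021):
    """Assign each cluster to an arm, stratifying by demographic group so every
    group is represented in every arm. A fixed seed makes the assignment
    reproducible, which supports replication and auditing."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for cluster_id in clusters:
        by_group[demographics[cluster_id]].append(cluster_id)

    assignment = {}
    for group, members in sorted(by_group.items()):
        rng.shuffle(members)                      # random order within each stratum
        for i, cluster_id in enumerate(members):
            assignment[cluster_id] = arms[i % len(arms)]  # alternate arms within the stratum
    return assignment

# Example: six classrooms drawn from three demographic strata.
classrooms = ["c1", "c2", "c3", "c4", "c5", "c6"]
strata = {"c1": "rural", "c2": "rural", "c3": "urban",
          "c4": "urban", "c5": "tribal", "c6": "tribal"}
print(assign_clusters(classrooms, strata))
```

The same logged seed and inputs always yield the same assignment, which is one simple way a platform can make an experiment auditable and repeatable when it is replicated with new populations.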
In this competition, we are not focused on whether the intervention itself works. Rather, we are rewarding the ability to deliver high-quality experiments and to collect high-quality data that can help refine the intervention as it is tested in new groups. Speed, accuracy, refinement based on data: all of this in the service of constantly improving the intervention, in particular by measuring how well it fits the needs of different students, especially traditionally underserved ones.
At the core of this challenge is the ability to learn rapidly from both success and failure and to adjust interventions quickly in light of those data.
Is this a Big, Hairy, Audacious Goal—a BHAG? You bet it is.