The Cato Institute hosted an excellent event last week on government preschool programs. The panel pitted two skeptics, Russ Whitehurst and David Armor, against two believers, Deborah Phillips and William Gormley.
I’m firmly in the skeptic camp, and I still have trouble understanding the source of Phillips and Gormley’s optimism. But perhaps this is part of the reason the two sides disagree: each is asking a different question.
The relevant public policy question – the one with which skeptics concern themselves – is not whether early education in general has value, but whether government preschool provides any additional value. For the clearest illustration, imagine a new government-funded preschool in which all the kids who attend have simply switched over from a private preschool of equal quality. In that case, the supposed public benefit of government preschool – fostering a more educated citizenry – would be non-existent.
Randomized experiments are designed to answer the value-added question. Both the “treatment” and “control” groups are made up of people who apply for placement in the government preschool, but only the children in the treatment group are selected (by lottery) to attend. Crucially, parents from the control group are still allowed to pursue alternative pre-K education for their kids. It may take the form of private preschool, individual tutoring, informal instruction at home – anything other than the government program under evaluation. When the experiment concludes, the difference in outcomes between the two groups reflects not the value of early education per se, but the value added by providing the government school.
At the Cato event, the two sides debated a methodology used by the believers called regression discontinuity design, or RDD. While the skeptics raised other technical concerns, such as sample dropout, to me the most important issue is one they have noted in the past: RDD cannot answer the value-added question. Without getting too far into the weeds, RDD essentially compares the test scores of students who have completed government preschool with the age-adjusted scores of students who have just started the program.
But that’s not measuring value added. With RDD, parents in the “control” group knew that their children would soon enter government preschool. They would not feel nearly as compelled to seek alternative educational arrangements as parents who do not have access to government preschool. That’s part of the reason that RDD shows large effects of government preschool, while randomized experiments show much smaller impacts that fade to nothing within a few years.
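To make the point concrete, here is a minimal, purely illustrative simulation. Every number in it — effect sizes, substitution rates, the score scale — is an assumption of mine chosen for illustration, not an estimate from any study. It contrasts a randomized experiment, where most control-group parents arrange alternative early education, with an RDD-style comparison, where few parents bother because their children will soon enter the program anyway:

```python
import random

random.seed(0)

# Hypothetical (assumed) effect sizes on a test-score scale:
# government preschool raises scores by 5 points over no early
# education; alternative arrangements (private preschool, tutoring,
# home instruction) raise them by 4 points.
GOV_EFFECT = 5.0
ALT_EFFECT = 4.0
BASELINE = 100.0
N = 10_000

def score(effect):
    """One child's test score: baseline + education effect + noise."""
    return BASELINE + effect + random.gauss(0, 10)

def mean(xs):
    return sum(xs) / len(xs)

# Randomized experiment: control parents know their child was NOT
# admitted, so most (assume 80%) seek alternative early education.
rct_treat = [score(GOV_EFFECT) for _ in range(N)]
rct_ctrl = [score(ALT_EFFECT if random.random() < 0.8 else 0.0)
            for _ in range(N)]

# RDD-style comparison: the "control" children are about to enter
# the program, so few parents (assume 10%) arranged alternatives.
rdd_treat = [score(GOV_EFFECT) for _ in range(N)]
rdd_ctrl = [score(ALT_EFFECT if random.random() < 0.1 else 0.0)
            for _ in range(N)]

# The RCT difference reflects value added over what parents would
# otherwise provide; the RDD difference is close to the full effect
# of early education versus little or none.
rct_estimate = mean(rct_treat) - mean(rct_ctrl)
rdd_estimate = mean(rdd_treat) - mean(rdd_ctrl)
```

Under these assumptions the RCT estimate lands near 1.8 points (5 minus 80% of 4) while the RDD estimate lands near 4.6, even though the program's true value added over parents' alternatives is identical in both cases — the gap comes entirely from what the comparison group does in the meantime.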