After 20 Years of Reform, Are America’s Schools Better Off?

President George W. Bush collects letters from students during his visit to a second grade class at General Philip Kearny School in Philadelphia, Pa., January 8, 2009. (Kevin Lamarque/Reuters)
On the surface, statistics show significant improvement. But if you dig a bit deeper, the status quo begins to look a lot less desirable.

Twenty years ago this spring, George W. Bush announced that he was forming an exploratory committee as a precursor to his first run for the presidency. In the announcement, he pledged to improve America’s schools, “set high standards, and insist on results” so as to “make sure that not one single child gets left behind.” An era of ambitious education reform had begun.

Two decades later, after sweeping efforts that included No Child Left Behind, Race to the Top, and the Common Core, are our schools better off? The answer is less reassuring than one would hope. On the whole, it’s certainly possible to find some evidence of improvement — but progress is easiest to find in the metrics most amenable to manipulation.

State tests in reading and math do appear to show that schools have improved significantly over the last 20 years. Between 2005 and 2009, as No Child Left Behind took full effect, the share of students rated proficient on state tests rose by one to two percentage points per year. Over the next six years, state assessments were too varied to allow for meaningful comparison. But the same trend of one-to-two-point annual gains in proficiency reemerged after 2015, when standardized Common Core tests came into wide use. High-school graduation rates, meanwhile, skyrocketed, from 71 percent in 1997 to 85 percent in 2017.

Good news, right? Not exactly. The politicos and state education officials claiming credit for these gains are the same ones who choose state tests, define what qualifies as “proficient,” and monitor graduation rates to guard against funny business. Those results feed into state accountability systems, where lousy numbers can produce practical and political headaches. Thus, policymakers have both the means and the incentive to inflate the numbers any way they can.

Fortunately, the U.S. also regularly administers the National Assessment of Educational Progress (NAEP) to a random, nationally representative set of schools. Because the NAEP isn’t linked to state accountability systems, it’s a good way to check the seemingly positive results of state tests. From 2000 to 2017 (the most recent year for which data is available), NAEP scores showed that fourth-grade math results increased 14 points, which reflects a bit more than one year of extra learning. Eighth-grade math results also demonstrated significant improvement, increasing 10 points in the same period. Fourth- and eighth-grade reading scores, meanwhile, barely budged. And almost all of the math gains were made in the decade from 2000 to 2010; performance has pretty much flatlined since then.

Put another way, the NAEP results raise hard questions about those cheery state-test results and graduation rates. George Washington University’s Center on Education Policy, for instance, compared the annual gains in the share of students rated proficient on state tests with the corresponding gains on NAEP from 2005 to 2009. It found that, depending on the subject and grade, average gains on state assessments outpaced NAEP gains by 50 to over 100 percent. (If state proficiency rates rose two points a year while NAEP proficiency rose one, for example, state gains would have outpaced NAEP by 100 percent.) In other words, state-reported gains vastly exceeded those on NAEP. Similarly, high-school-graduation scandals and analyses of “credit recovery” programs have raised serious concerns about the validity of the dramatic graduation-rate gains.

Given the disparity between state tests and independent national results, it’s useful to see what the results look like on international tests. The Programme for International Student Assessment (PISA) is the only major international assessment of both reading and math performance. While PISA has its share of limitations, it offers a wholly independent view of American education and accountability systems.

From the time PISA was first administered in 2000 to the most recent results, from 2015, U.S. scores have actually declined, while America’s international ranking has remained largely static. Average American reading scores fell from 504 to 497, and average math scores fell from 483 to 470. Over the same span, the U.S. ranking dropped from 15th to 23rd in reading and from 19th to 39th in math. (The number of participating nations also increased significantly over that time, from 43 to 72, so it’s fair to say that relative American performance has remained about the same.)

The PISA results should concern anyone eager to insist that 20 years of accountability-based school reform has obviously “worked,” even if the discussion is limited to K–8 math instruction, the one area where NAEP showed meaningful gains.

Evaluating the success of any reform effort starts with a careful accounting. And a fair assessment of the two decades since President Bush’s bold challenge would admit that there has been a lot of action, but not much in the way of demonstrated improvement. Just why this is the case remains an open question. But going forward, education-reform proposals must start by acknowledging that the status quo appears deeply flawed the minute one looks below the surface of the numbers.

Frederick M. Hess is the director of education-policy studies at the American Enterprise Institute. RJ Martin is a research assistant at AEI.
