The New SAT Results Aren’t Pretty

The Class of 2015 SAT results are out, and they’re ugly. The College Board reported this week that scores on the SAT have sunk to the lowest point since the venerable college-admission test was revamped in 2005.

Just how bad were the results? The 1.7 million test-takers in the Class of 2015 posted a combined average score of 1490 (out of 2400) across the three tests in math, critical reading, and writing. That's down 28 points from the 1518 posted by the Class of 2006. The 2015 results extend a steady downward slide, one that has held for the past decade on each of the three tests.

The story gets more interesting, though. That’s because the past decade has also been a time of steady improvement in the performance of fourth- and eighth-grade students in reading and math on the National Assessment of Educational Progress (NAEP). In 2013, fourth- and eighth-graders posted their best NAEP math performance since 1990, and their best reading performance since 1992 (except for 2011, when fourth-graders did even better).

The question is why these gains in elementary and middle school aren't showing up at the end of high school. The conventional response in education circles is to conclude that we're continuing to get high school "wrong": that all the frenzied efforts to adopt new teacher-evaluation systems, standards and curricula, digital tools, and the rest have had a big impact in K–8 schools but not in high schools. (As the Washington Post headline had it: "Sliding SAT scores prompt an alarm over high schools.") That diagnosis may be right. Whether it is or not, it has the added appeal of giving various pundits and advocates an excuse to trot out their pet remedies and call for more dollars to fund them.

Perhaps, though, we should be asking how valid those prized elementary- and middle-school gains are if they're melting away before students are even out of high school. After all, we've dismissed 40 years of NAEP data showing no gains for 17-year-olds because the results don't matter for the students, and we suspect those 17-year-olds just aren't taking the test very seriously. That logic doesn't apply to the SAT, a high-stakes test used for college admissions.

It could be, instead, that our data on fourth- and eighth-grade performance are misleading. The No Child Left Behind (NCLB) accountability systems that states introduced starting in 2002 focused almost entirely on how students fared in reading and math in grades three through eight. It wouldn't be surprising, then, if schools found ways to boost those reading and math results by cannibalizing other instruction, reassigning teachers, shifting time and resources from other grades and subjects, emphasizing test preparation, and the like. In fact, we know schools have done these things.

The state tests in question are different from the federally administered NAEP fourth- and eighth-grade reading and math tests, but it's no great stretch to imagine that the things that help students do well on a state's fourth-grade reading test also help them do well on the NAEP fourth-grade reading test. What's been less clear is whether those results reflect meaningful learning. The acid test is whether they carry over to what matters: success in high school, college, and beyond. A decade of diminishing SAT returns might be telling us that they do not.

If this is the case, we're worse off than we recognize: Not only are our schools continuing to flounder, but NCLB's command-and-control effort to improve schooling has produced results that give a false sense of progress. (Such a result would not much surprise anyone who recalls how the Soviet Union's reported grain production always grew miraculously in accord with Soviet five-year plans, even as actual production lagged spectacularly behind.)

Now, we should be careful about how much weight we give these SAT results. After all, the SAT is a voluntary test, and the ACT is now taken by slightly more students. Changes in SAT results over time may also partly reflect changes in who takes the test. But it's troubling that a decade of declining SAT scores tells a very different story from the test results that advocates and educators point to as proof that our schools are on the right course.

— Frederick M. Hess is director of education policy studies at the American Enterprise Institute.
