The Testing Mess
The fastest way to "improve" students' performance: Lower your standards.

The only thing surprising about last week’s revelation that the fraction of New York City students passing the state’s reading and math tests had dropped by an average of 25 percentage points is that anyone was surprised at all. Student pass rates dropped precipitously all across New York State for one reason, and one reason only: State education commissioner David Steiner and Board of Regents chancellor Merryl Tisch decided to make the tests less predictable this year, and to raise the “cut scores” required for students to reach each of four designated achievement levels — below basic, basic, proficient (i.e., “passing”), and advanced. Student achievement levels had risen spectacularly from 2007 to 2009 because a different group of Albany education authorities decided to lower the bar for proficiency by reducing the cut scores.

Such elasticity in the definition of student achievement is one of the nation’s most serious education problems. The No Child Left Behind (NCLB) Act of 2001 left the door wide open to massive test inflation by stipulating that all American students “will be proficient” by the year 2014 — and imposing a series of increasingly onerous sanctions on districts and schools that do not move fast enough toward that goal — yet allowing each state to develop its own tests and set its own standard for “proficiency.” Since men are not angels, it was inevitable that state and local education authorities would lower the proficiency bar to make themselves look good politically and avoid federal sanctions.

The best evidence of test-score inflation is the wide gap between the share of students that states deem proficient on their own tests and the share deemed proficient by the National Assessment of Educational Progress (NAEP), often referred to as the “nation’s report card.” The NAEP tests are the gold standard in student assessment because they can’t be gamed by educators: the federal tests are given to only a sample of students in each state, so teachers can’t “teach to the test” and schools can’t drill students on practice exams.

The problem of test inflation has been particularly acute in New York. As two separate state-comptroller reports, one in 1991 and another last year, have shown, the state’s education department has historically failed to maintain the integrity of the testing system, neglecting, for example, to establish a standardized scoring system and verify its use. The situation became far worse in 2002, when NCLB came into effect and mandated reading and math exams for grades three through eight. The state education department should have hired a highly qualified director of assessment, someone committed to creating an honest and transparent testing regime. Instead, the job went to David Abrams, a high-school English teacher who had spent ten years as an administrator in an Albany-area school district. Abrams lacks professional credentials in the field of educational testing. One member of the Regents told me that the testing director “has no qualifications for the job, and he’s responsible for many of our blunders on the tests.”

Abrams’s most consequential blunder was ignoring a warning from assessment experts Daniel Koretz and Howard Everson about the integrity of the state tests. In a September 2008 memorandum to Abrams, they cited growing public skepticism about the reported score gains and requested the education department’s “support for a program of validation studies” to measure the extent of “score inflation and the undesirable instructional activities that produce it.” The inflation was produced not only at the state level, through lowered standards, but also locally, through such practices as “teaching to the test,” having teachers grade their own students, and possibly even cheating.

