Wise Words from Rick Hess on ‘The Consequentialist Gambit’

Rick Hess of AEI, one of my favorite writers and thinkers, has just published The Same Thing Over and Over (Harvard University Press), a brilliant book on the limits of school reform. Hess’s writing on education has had an enormous influence on how I think about a wide range of public policy questions, and I was impressed by the following passage from the book — so impressed that I’ve typed it out for you:

In the past, education debates have too often seemed frustratingly impervious to evidence, but when combating the status quo and faddism, today’s would-be reformers have leaned heavily on evidence to make their case. This has had real benefits. Research documenting the efficacy of different approaches to reading instruction has been invaluable. However, data have also been stretched far beyond the breaking point, often in an attempt to promote new orthodoxies. President Obama illustrated this tendency in 2009 when he touted his administration’s Race to the Top proposals for charter schooling and improved data systems as “evidence-based,” despite the glaring lack of compelling research to support any of the various measures.

Consider the case of merit pay for teachers. Most research seeks to determine whether test scores go up after merit pay is adopted. This consequentialism asks whether linking salaries to student test results produces a predictable change in teacher behavior. Often it does. (No surprise there.) However, that Pavlovian strategy — which is not a whole lot more sophisticated in its vision of incentives than training a hamster to hit a lever to release food pellets — has little or nothing to do with the more fundamental argument for rethinking teacher pay. Assuming that the goal is to attract and keep talent in schooling, studies examining whether merit pay is associated with short-term bumps in student test scores on reading and math tests may be fundamentally misleading — none of the results reveal much about how altering pay and professional norms could help attract and retain more talented educators.

Indeed, the notion that rewarding performance ought to be subject to scientific validation before adoption is akin to suggesting that the National Institutes of Health should determine permissible compensation systems for doctors. If we applied that logic elsewhere in state government, presuming that states should only embark on reforms whose merits have been “scientifically” validated, we may well never have automated revenue departments, streamlined departments of motor vehicles, or adopted measures to control urban sprawl. A healthy concern for the impact of reforms is desirable. The risk is that proponents use short-term or partial outcomes that can serve as a way to short-circuit honest debate or to promote easy “guarantees” rather than more problem-oriented thinking.

In the end, absent a more coherent rethinking of salary structure, most merit pay proposals merely stack new bonuses atop entrenched pay scales while celebrating a new orthodoxy. All the existing commitments are taken as a given — meaning that such reform only comes by piling new dollars atop the old. In an era of tighter purse strings and overextended government, this is hardly a recipe for bold change.  

Hess is not questioning the importance of gathering good, reliable data. But not all important propositions can be tested in the way that we test Pavlovian strategies. Last month, I wrote a post on “the Steve Jobs method,” and I highlighted this slice of a blog post by Leander Kahney:

3. No focus groups — “Steve said: ‘How can I possibly ask somebody what a graphics-based computer ought to be when they have no idea what a graphics-based computer is? No one has ever seen one before.’ He believed that showing someone a calculator, for example, would not give them any indication as to where the computer was going to go because it was just too big a leap.”

I added the following:

While it’s important to understand “best practices,” best practices won’t do you much good at the edge — at the point where you’re trying to do things that are genuinely new. This is the space where forward-looking, case-by-case judgment must come to the fore, as Amar Bhidé has argued. A dynamic market economy is always changing, and assumptions based on historical experience will often prove faulty. The ability to make good educated guesses about where consumers want to go, or where they might want to go given the right product and the right brand, is incredibly rare and, as Steve Jobs has demonstrated, incredibly valuable.

There are, of course, policymaking implications. Reliance on past experience convinced regulators, for example, that a national housing meltdown couldn’t happen. And it has convinced many bright people that a return to Clinton-era tax rates would be economically harmless, as evidenced by the experience of the 1990s. These analysts could be right! But a forward-looking analysis suggests that there are important differences between then and now.

Sometimes we have to rely on an understanding of psychological motivations, and we have to draw on analogies from organizations in other domains of the economy. We have to use case-by-case, narrative judgment in which we draw on empirical evidence the way we might in a criminal trial. This is not a perfect approach. But it has the advantage of being more honest than the alternative about our limitations, and more likely to take into account unintended consequences. 

Rest assured, I wish there were always an obvious “right answer.” But we don’t live on that planet. 

Reihan Salam is executive editor of National Review and a National Review Institute policy fellow.
