The Corner

More on Lead and Crime, but Really on How to Make Decisions

Kevin Drum wrote a terrific article on the connection between lead and crime. I wrote a criticism in which I argued that one of the major studies he cites makes unfounded assumptions, and Drum graciously responded. I think that a simplified summary of his response is (in my words): “These assumptions are more sensible than you claim. And anyway, while no one study is definitive, what are the chances that all of these separate lines of evidence would point in the same direction if there really were no connection?”

I think that the argument for action that Drum proposes requires that an observer accept three statements:

1. Lead exposure causes brain damage.

2. This brain damage causes more crime.

3. If we execute the proposed $400 billion environmental lead remediation program, we will therefore reduce crime enough to more than justify the cost.

I think that the range of studies that Drum cites in his article can be broadly partitioned into two groups: biological studies and econometric studies. The biological studies all support statement 1. They are convincing, so let’s stipulate for the sake of argument that statement 1 is scientifically proven. As far as I can see, however, they do not address statements 2 or 3.

What we are left with for statements 2 and 3, then, is the group of econometric studies: the Reyes paper that I discussed at length in my prior post; the aggregate timing of peak violent crime in several countries; the analyses of crime versus lead exposure across six cities; the analysis of a similar relationship at the neighborhood level within New Orleans; and (most important) a study that tracked a group of kids with higher and lower lead levels into adulthood to see whether those who had more lead in their bloodstreams as kids were more likely to commit crimes as adults. What is common to all of these is that the obvious question of correlation versus causation arises. Whether we are looking at countries, states, cities, neighborhoods, or individuals, people who are exposed to more lead will tend to have different characteristics than those who are exposed to less. How do we know that these other characteristics are not the real cause of higher criminality? In each of the econometric studies, the authors built a regression model to “hold other factors equal.”

I’ve read Drum’s response as to why the assumptions in the Reyes paper are better founded than I believe them to be, but I find it entirely unconvincing. Further, I find the “accumulation of evidence” argument equally unconvincing when applied to these allegedly independent strands of evidence, because (in my view) they all share the same systematic underlying weakness: the inability of regression models to properly control for potential confounding variables.
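To see why I keep harping on this, consider a toy simulation. This is a minimal sketch, not anything from the papers at issue: every variable and coefficient below is invented, and it assumes Python with numpy and statsmodels. The point is simply that when an unmeasured factor drives both lead exposure and crime, a regression that omits it will confidently report a lead “effect” that isn’t there.

```python
# Toy illustration of omitted-variable bias (all numbers invented).
# An unmeasured neighborhood factor z drives both lead exposure and
# crime; lead's true causal effect on crime is exactly zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                 # unmeasured confounder
lead = 0.8 * z + rng.normal(size=n)    # exposure partly driven by z
crime = 0.5 * z + rng.normal(size=n)   # crime driven by z, not by lead

# Omit the confounder: lead appears to "cause" crime.
naive = sm.OLS(crime, sm.add_constant(lead)).fit()
print(naive.params[1])   # about 0.24, despite a true effect of zero

# Control for the confounder: the spurious effect disappears.
full = sm.OLS(crime, sm.add_constant(np.column_stack([lead, z]))).fit()
print(full.params[1])    # about 0.0
```

The catch, of course, is that in real econometric work you can only “hold equal” the factors you have actually measured; the second regression is available here only because the simulation hands us z.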

I don’t want to use rhetoric to push Drum out on a limb. His view on the broad question of whether accumulated regression-type analyses suffice to justify expensive or coercive interventions is much closer to the center of gravity of academic social science than my skepticism is.

I am the one out on a limb here. But that is right where I want to be. I’ve written a somewhat technical book about why I think I’m right and most social scientists are wrong: they are badly over-confident about their ability to predict the effects of proposed interventions. Typically, each peer-reviewed study carries appropriate qualifiers, but when broader conclusions are drawn across a body of work, researchers, regulators, activists, and others make the case for action as if those qualifiers no longer applied.

Ultimately, the debate about lead and crime is really a debate about what evidence we should rationally require before taking action. It is structurally the same as the debate over any proposed program in school policy, crime policy, welfare policy, Social Security, Medicare/Medicaid, or whatever, whenever the argument is that the program is justified because “studies show” that it will work.

I’d ask those who support the orthodox view that accumulated econometric analyses can make reliable, non-obvious predictions about the success of interventions this: Why is it that when subjected to replicated, controlled experimental tests, on the order of 90 percent of all social-intervention programs fail to create benefits greater than costs? That ought to be pretty humbling.

Most programs that get to the stage of being so tested are backed by significant social-science theory and evidence. If econometric and other social-science methods tend to identify programs that work only about one time in ten, shouldn’t we be quite skeptical about relying on those methods to make decisions? And if a program cannot practically be tested, that is no free pass to assume these methods will somehow work better on such a topic. If anything, such questions tend to turn on even more complex, society-wide, and long-term effects, which should lead to even greater skepticism about assertions of analytically derived knowledge.
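Here is a back-of-envelope way to see how damning that base rate is. The one-in-ten figure is the experimental track record I just cited; the other two numbers below are assumptions I have invented purely for illustration (how often the econometric literature looks favorable for programs that truly work, and for those that don’t). Even on those fairly generous assumptions, a program backed by a favorable literature remains far more likely to fail than to succeed:

```python
# Back-of-envelope Bayes on program success (illustrative numbers).
p_works = 0.10               # base rate: ~1 in 10 programs survive RCTs
p_favorable_if_works = 0.80  # assumed: favorable studies given a real effect
p_favorable_if_not = 0.50    # assumed: favorable studies with no real effect

posterior = (p_favorable_if_works * p_works) / (
    p_favorable_if_works * p_works + p_favorable_if_not * (1 - p_works)
)
print(f"{posterior:.0%}")    # ~15%: favorable evidence moves the needle only a little
```

You can quarrel with my invented inputs, but the structure of the calculation is the point: when the base rate of success is this low, observational evidence has to be far more discriminating than regression studies plausibly are before “the studies show it works” becomes a good bet.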

As I describe in Uncontrolled, this is not unique to social policy. For example, most new business programs fail when subjected to the same kind of testing. I think the rational conclusion is that figuring out how to improve human social organizations is much harder than analyzing them or talking about them. When evaluating proposed interventions, it is usually not the case that social scientists or other experts have built some knowably correct consensus about what needs to be done, if only we could mobilize the political momentum to overcome short-sighted, ignorant, or selfish interests (though this is sometimes true). In fact, we should typically be quite humble about the ability of analytical and “scientific” methods to predict the non-obvious effects of our actions in social policy. This doesn’t mean that regression and similar analyses are useless, just that they are better thought of as theory-building tools than as tests of beliefs.

I believe this nerdy perspective has profound implications for public policy. There is no bumper-sticker answer for how to make decisions in the face of what I assert to be deep and pervasive uncertainty. Broadly speaking, I think it points in a Hayekian and Popperian direction. I wrote Uncontrolled to justify and describe this perspective, because I don’t know how to compress it into a blog post.

Jim Manzi is CEO of Applied Predictive Technologies (APT), an applied artificial intelligence software company.