The Corner


No, a Single Study Didn’t Magically End the Right-to-Carry Debate

Lately, liberals (most recently at The Nation) have been pretty enamored with a study from John J. Donohue, Abhay Aneja, and Kyle D. Weber purporting to show that right-to-carry laws increase violent crime. I'm a longtime skeptic of claims on both sides of this debate — my general take is that if these laws have an effect one way or another, that effect is too small to reliably detect given how chaotic crime trends can be — and while this study is a welcome addition to the literature, it doesn't really move the needle much for me. Here's a quick(ish) explanation.

For starters, there have been a ton of right-to-carry studies reaching a bunch of different results since the mid ’90s — crime goes down, crime goes up, no detectable effect. The studies vary in countless ways in terms of the data they rely on and the methodology they use, and their authors are constantly bickering about whose methods are better. Here’s a recent response to a different paper Donohue co-authored, for instance, and here is his response to their response.

I’ve been following this debate for more than a decade, and I can assure you this never ends. The idea that a single study finally uncovered the definitive way to analyze this topic is absurd. And we should be especially skeptical when the lead author is someone who’s been arguing against right-to-carry for so long that he claimed to put “The Final Bullet in the Body of the More Guns, Less Crime Hypothesis” 15 years ago. This is just one volley of statistical results in a very long war.

But again, it is a study worth taking seriously, so let’s take a quick tour. The most striking thing about it is this claim from the abstract, about what happens when you add new years of data to statistical models that pro-right-to-carry authors have used in the past:

Our preferred panel data regression specification (the “DAW model”) and the Brennan Center (BC) model, as well as other statistical models by Lott and Mustard (LM) and Moody and Marvell (MM) that had previously been offered as evidence of crime-reducing RTC laws, now only generate statistically significant estimates showing RTC laws increase overall violent crime and/or murder when run on the most complete data.

This is a carefully phrased sentence. What it actually means is that most of the results are statistically insignificant, but the results that are significant are in the direction of more guns/more crime.

There are a ton of models here, mostly covering the late ’70s through 2014, but some starting in 2000. Each model has a “dummy” version and a “spline” version, each of which is run on two different measures of murder in addition to measures of violent and property crime. There are so many models that I had trouble tallying them all up accurately on the back of a dinosaur picture my three-year-old had scribbled on.
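To make the dummy/spline distinction concrete, here is a toy sketch in Python — entirely made-up panel data with no true law effect built in, not the paper's data or its full model. The "dummy" specification estimates a one-time level shift in crime when a law takes effect; the "spline" specification estimates a change in trend afterward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 20 "states" observed for 30 "years," each adopting a
# law in some year. All names and numbers here are invented for illustration.
n_states, n_years = 20, 30
adopt = rng.integers(5, 25, size=n_states)  # adoption year for each state
rows = []
for s in range(n_states):
    for t in range(n_years):
        post = 1.0 if t >= adopt[s] else 0.0   # "dummy": one-time level shift
        since = float(max(0, t - adopt[s]))    # "spline": post-adoption trend change
        crime = 5.0 - 0.02 * t + rng.normal(0.0, 1.0)  # noisy trend, no true law effect
        rows.append((crime, 1.0, float(t), post, since))
data = np.array(rows)
y = data[:, 0]

def ols_last_coef(y, X):
    """Plain OLS; return the estimate and t-statistic for the last column of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[-1], beta[-1] / np.sqrt(cov[-1, -1])

# Identical data, two specifications of the "law effect":
b_dummy, t_dummy = ols_last_coef(y, data[:, [1, 2, 3]])    # intercept, trend, post dummy
b_spline, t_spline = ols_last_coef(y, data[:, [1, 2, 4]])  # intercept, trend, years since law
print(f"dummy:  effect={b_dummy:+.3f}, t={t_dummy:+.2f}")
print(f"spline: effect={b_spline:+.3f}, t={t_spline:+.2f}")
```

Run on noise like this, the two versions will routinely disagree on size, sign, and significance — part of why tallies across specifications are so sensitive to modeling choices.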

But by my count, of the 32 models covering the whole time period for murder, only three generate statistically significant results. Of the 16 for violent crime, seven do. And for the models starting in 2000, the numbers are four in 16 and one in eight, respectively. Even their “preferred” model produces no significant results for murder when applied to the full time period, and a significant result for violent crime in only one of its two versions. When applied to the data since 2000, two of four murder results are significant but neither of the violent-crime ones is.

So the models that “only generate statistically significant estimates showing RTC laws increase overall violent crime and/or murder” turn out to be pretty underwhelming, at least if you come to the study with a general skepticism, rather than a conviction that right-to-carry reduces crime. All they show is that it’s easy to make the results appear and disappear by changing the model.

I also find it rather weird that the results for violent crime are generally stronger than the results for murder (bigger effects and more likely to pass significance tests), when the argument against guns usually stems from the claim that they turn ordinary crimes into murders. It's also worth noting that there's little evidence that permit holders themselves are committing many violent crimes. (The authors propose a variety of other mechanisms by which the laws could increase crime, such as guns getting stolen.)

To be fair, the authors also provide some extra tests to buttress their argument — including a technique called the LASSO, which is used to help researchers figure out which control variables to use in their analyses. I won’t pretend to be able to critique the concept on a technical level, but it certainly doesn’t address the various big-picture reasons I have for being skeptical of this kind of work in general, whatever the set of control variables included. We can simply never be sure we’ve properly accounted for all the various things that might affect crime rates aside from right-to-carry laws. Heck, with violent crime, we can’t even be sure we’re measuring it accurately: The numbers here come from police departments (via the FBI), meaning they cover only crimes that are actually reported, and even the broad, national trends in those numbers don’t always track estimates based on victimization surveys of the general public.
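For readers unfamiliar with the technique, here's a minimal sketch of what the LASSO does, on entirely synthetic data (not the paper's): the L1 penalty shrinks the coefficients on candidate control variables, pushing the irrelevant ones to exactly zero, which is why it can serve as a variable-selection device.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten candidate "control variables," of which only the first two actually
# drive the outcome. Everything here is made up for illustration.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0.0, 0.5, size=n)

def lasso(X, y, lam, iters=200):
    """Bare-bones coordinate-descent LASSO via soft thresholding."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            rho = X[:, j] @ r
            z = X[:, j] @ X[:, j]
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z  # soft threshold
    return beta

beta = lasso(X, y, lam=50.0)
kept = np.flatnonzero(np.abs(beta) > 1e-8)
print("selected controls:", kept)  # the irrelevant ones should drop to exactly zero
```

Note that this only mechanizes the choice among whatever controls you feed it — it can't conjure up a confounder that was never measured, which is the deeper worry.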

The paper concludes with an incredibly extensive analysis of violent-crime rates using “synthetic controls.” This is when, instead of comparing a state’s crime rate (or what have you) with national trends or rates in neighboring states, you calculate mathematically which other states have had the most similar crime trends in the past, and rely on those states as a comparison group. This has some intuitive appeal, but it often produces results that are absurd on their face. Here we are told that Texas, for example, can be thought of as 57.7 percent California, 9.7 percent Nebraska, and 32.6 percent Wisconsin. And since that combination of states had a steeper violent-crime decline than Texas did in the ten years after Texas enacted its right-to-carry law in 1996, it’s concluded that the law increased Texas violent crime 16.9 percent. This implies that one in seven violent crimes in Texas wouldn’t happen without the law. Color me unconvinced.
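The weight-finding step behind those percentages can be sketched as a small optimization: pick nonnegative donor-state weights summing to one that best reproduce the treated state's pre-period crime series. The state names, series, and numbers below are invented; this only illustrates the mechanics, not the paper's actual calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: a "treated" state's 15 pre-period observations are (by
# construction) roughly a 60/10/30 blend of donors A, B, and C plus noise.
donors = ["A", "B", "C", "D", "E"]
T_pre = 15
X = rng.normal(5.0, 1.0, size=(T_pre, len(donors)))      # donor pre-period series
true_w = np.array([0.6, 0.1, 0.3, 0.0, 0.0])
treated = X @ true_w + rng.normal(0.0, 0.05, size=T_pre)

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# Projected gradient descent on ||treated - X w||^2 over the simplex.
w = np.full(len(donors), 1.0 / len(donors))
lr = 1.0 / np.linalg.norm(X.T @ X, 2)
for _ in range(2000):
    grad = X.T @ (X @ w - treated)
    w = project_simplex(w - lr * grad)
print({d: round(wi, 3) for d, wi in zip(donors, w)})
```

The machinery always produces *some* weights; whether "57.7 percent California plus 9.7 percent Nebraska plus 32.6 percent Wisconsin" is a meaningful counterfactual for Texas is exactly the question the math can't answer.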

So again: This is a welcome addition to the literature. It adds new data to old models, and it introduces some new techniques to the crime data we’ve been arguing about for nearly a quarter century. I’m interested in seeing what criticisms it generates from the other side. But I’m far too cynical at this point to be bowled over by it.
