“Affirmative action” was the logical sequel to the civil-rights legislation of the 1960s. The initial reasoning was attractive enough. New guarantees of equality of opportunity were insufficient to achieve the promised social parity, given the legacy of slavery and the existence of ongoing racial bias. Therefore, to counteract the effects of historical discrimination, the race of individuals must be weighed into contemporary hiring and admissions practices. The key was to avoid the word “quota.” That did not sound very “affirmative” for a program that supposedly was about growing (or “enriching”) the pie, not a crass zero-sum game of taking a college spot or a job from one person and giving it to another on the basis of race.
Second, although slavery was confined to the Confederacy, there was the general assumption that, as blacks in the postbellum era had migrated northward, they were subjected to all sorts of bias, and so the recompense was to be a national, not just a southern, obligation.
Third, it was soon clear that all sorts of groups other than blacks could lodge historical claims against the supposedly dominant “white” culture. Soon Latinos, Native Americans, and Asians likewise petitioned for inclusion in set-aside and compensatory programs. The subtext was that these groups, given racial bias, would not intermarry and assimilate as quickly or to the same degree, and would not do as well economically, as had other terribly persecuted minorities like Jews, Italians, and the Irish, who after decades of discrimination seemingly had morphed into the so-called white majority.
As these original victimized groups experienced success (though at differing rates), and as a legion of other cadres sought inclusion in the preference industries, “affirmative action” insidiously was replaced by a new euphemism, “diversity” — which apparently denoted that almost anyone who was not a white male heterosexual Christian could qualify for preference on the basis of “difference.”
A university, for example, might highlight its “rich diversity” by pointing to gay students, female students, Punjabi students, Arab students, Korean students, and disabled students — even should they all come from quite affluent families and backgrounds. Key here was that “diversity” was admittedly cosmetic, or at least mostly distinguishable by the eye — skin color, gender, etc. — rather than internal and predicated on differences in political ideology or values. A Brown or an Amherst worried not at all that its classes included very few Mormons, libertarians, or ROTC candidates; instead, if the students looked diverse, even while holding identical political and social views, then in fact they were diverse.
But as we near a half-century of racial preferences, the entire industry is now obsolete, as illiberal as it is counterproductive. Quite simply, there were inherent flaws in affirmative action/diversity that were never addressed. And they now have come back to haunt the entire experiment as something as corrupt and unworkable as our far briefer trial with Prohibition. Here are five good reasons to dismantle the Diversity Industry, and simply attempt to judge Americans on the “content of our character” rather than by the way we look.
1. Who is a minority? There were never clear, established rules for racial set-asides. To make such rules would by definition require some sort of racial-purity protocols, whose history, whether in the antebellum South or National Socialist Germany, has been frightening. But America is still the melting pot where intermarriage and assimilation continue — far more so than during the 1960s, when quotas were being established. Thus, are Barack Obama’s children (of one-half African-American, one-fourth Kenyan, and one-fourth white heritage) “black”? Who is Mexican-American? Does one qualify by having a half-Mexican-American father, which ensures a one-quarter-Mexican racial line and a Hispanic last name?