Over at Public Discourse, Mark Regnerus responds to the Australian study of 500 children of same-sex couples, which compared a convenience, volunteer sample of gay families with the average Australian child:
To compare the results from such an unusual sample with that of a population-based sample of everyone else is just suspect science. And I may be putting that too mildly.
Non-Random Samples and Social Desirability Bias
It’s not the first time this approach has met with considerable publication and media success. The ACHESS study is a lot like the National Longitudinal Lesbian Family Study (NLLFS), except that it’s larger and newer. I realize that 500 cases is not a number to scoff at, and that such populations are a small minority to begin with. But until social scientists decide to do the difficult, expensive work of locating same-sex attracted parents (however defined) through random, population-based sampling strategies—preferably ones that do not “give away” the primary research question(s) up front, as ACHESS did—we simply cannot know whether claims like “no differences” or “happier and healthier than” are true, valid, and on target. Why? Because this non-random sample reflects those who actively pursued participating in the study, personal and political motivations included. In such a charged environment, the public—including judges and media—would do well to demand better-quality research designs, not just results they approve of.
He goes on to make an analogy to political polling:
“Snowball sampling doesn’t cut it. When I want to know who’s most apt to win the next election, I don’t ask my friends whom they support. Nor do I field a survey asking interested people to participate. No, I want a random sample of the sort often conducted by Gallup, NORC, or Knowledge Networks.”
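The self-selection problem Regnerus describes can be sketched with a toy simulation. Everything here is an illustrative assumption, not ACHESS or NLLFS data: a hypothetical population with some “wellbeing” score, and a volunteer pool in which people with higher scores are more motivated to sign up. The volunteer sample's average then drifts well above both the true population average and what a random sample would report.

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people, each with a "wellbeing"
# score. The distribution and numbers are made up for illustration.
population = [random.gauss(50, 15) for _ in range(100_000)]

# Random (probability) sample: every person is equally likely to be chosen.
random_sample = random.sample(population, 500)

# Volunteer (convenience) sample: assume people with higher scores are
# more motivated to participate, so the chance of opting in rises
# steeply with the score. The (x/100)**3 rule is an arbitrary stand-in
# for that motivation, not an estimate of any real opt-in rate.
volunteer_sample = [x for x in population
                    if random.random() < (x / 100) ** 3][:500]

def mean(xs):
    return sum(xs) / len(xs)

print(f"population mean:       {mean(population):.1f}")
print(f"random sample mean:    {mean(random_sample):.1f}")
print(f"volunteer sample mean: {mean(volunteer_sample):.1f}")
```

Under these assumptions the volunteer sample overshoots the population mean by roughly ten points, while the random sample lands close to it. That gap is the whole objection: comparing a self-selected group against a population-based benchmark measures who volunteered, not how the underlying population is doing.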
Qualitative research has its place, but that place is not making sweeping scientific claims with an air of certainty. That may not be the scholars’ fault, but it is certainly the media’s.