Anti-Sex Ed Curriculum Makes the List: Don't Blame Obama, Blame the System
Written by Norman A. Constantine, Eva S. Goldfarb, Danny Ceballos, and Carmen Rita Nevarez for RH Reality Check. This diary is cross-posted; commenters wishing to engage directly with the author should do so at the original post. See all our coverage of Heritage Keepers Abstinence Education here.
A recently updated list of federally approved “evidence-based” teen pregnancy prevention programs has been causing a stir. This list specifies the programs that are eligible for federal funds and serves as the cornerstone of President Obama’s Teen Pregnancy Prevention Initiative. Among the three programs making the list for the first time is the Abstinence-Only-Until-Marriage program Heritage Keepers Abstinence Education. Our friends and fellow advocates in the adolescent sexual health promotion field have denounced this program as medically inaccurate, biased, fear- and shame-based, and otherwise inappropriate for the classroom. Here we all agree, completely. A program like this has no place in our schools and communities, and especially not with government funding.
But we take issue with criticisms of the Obama administration for “backroom deals and secrecy,” “political expediency,” and “blatant hypocrisy,” among other barbs and arrows recently launched by understandably frustrated advocates. Rather than blaming Obama for this unfortunate development, we’d all do better to recognize that it was the result of a fundamentally flawed system operating according to explicit agreed-upon rules—a system sorely in need of review and repair.
What’s wrong with this system? Simply put, it is based on a fundamental misunderstanding of the nature of scientific evidence and its appropriate use. To earn a place on the list, a program needs to produce only one statistically significant outcome in one evaluation study, no matter how many outcomes were tested across how many studies. Yet it is a well-known principle of research statistics that the likelihood of a false finding increases as the number of outcomes tested increases. In fact, if a program has no effect, for every twenty outcomes tested one outcome can be expected to be incorrectly identified as a statistically significant effect purely by chance. Even testing just two outcomes raises the probability of a false finding of effectiveness beyond the traditionally tolerated level of less than five percent. The technical name for taking advantage of this principle to obtain a statistically significant finding is “fishing for significance.”
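The arithmetic behind this point is straightforward. As a rough sketch (assuming each outcome test is independent with the conventional five percent false-positive rate, which real, correlated evaluation outcomes only approximate), the chance of at least one spurious “significant” finding grows quickly with the number of outcomes tested:

```python
# Illustration of the multiple-comparisons ("fishing for significance") problem.
# Assumption: each outcome test is independent with a 5% false-positive rate;
# correlated outcomes in real evaluations would change the exact numbers.

ALPHA = 0.05  # conventional significance threshold

def familywise_error(num_outcomes: int, alpha: float = ALPHA) -> float:
    """Probability of at least one false positive across num_outcomes
    independent tests of a program that has no true effect."""
    return 1 - (1 - alpha) ** num_outcomes

for k in (1, 2, 5, 20):
    print(f"{k:2d} outcomes tested -> "
          f"{familywise_error(k):.1%} chance of a spurious finding")
# ->  1 outcomes tested -> 5.0% chance of a spurious finding
# ->  2 outcomes tested -> 9.8% chance of a spurious finding
# ->  5 outcomes tested -> 22.6% chance of a spurious finding
# -> 20 outcomes tested -> 64.2% chance of a spurious finding
```

With just two outcomes the false-finding probability is already about 9.8 percent, nearly double the tolerated five percent; with twenty outcomes, a no-effect program still averages one spurious “significant” result, which matches the one-in-twenty figure above.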