The Gold Standard
There is a widespread belief that the best way to improve education is to get practitioners to adopt practices that "scientific" methods have proven to be effective. These increasingly sophisticated methods are required by top research journals and by federal improvement initiatives such as the Investing in Innovation (i3) fund as a condition for funding further research or dissemination efforts. The US DOE established the What Works Clearinghouse (WWC) to identify the scientific gold-standard methods and apply them to certify for practitioners which programs "work." The federal "gold standard" is the Randomized Comparative Trial (RCT). In addition, the US DOE has periodically implemented policies that require practitioners to spend government funds only on practices that the US DOE has certified to be effective.
However, an important new article published in Education Policy Analysis Archives concludes that these gold-standard methods misrepresent the actual effectiveness of interventions, and that advocating or requiring their use therefore misleads practitioners. The article is entitled “The Failure of the U.S. Education Research Establishment to Identify Effective Practices: Beware Effective Practices Policies.”
The Fool's Gold
Earlier published work by the author, Professor Stanley Pogrow of San Francisco State University, found that the most research-validated program, Success for All, was not actually effective. Quite the contrary! In the new article, Pogrow goes further and analyzes why these gold-standard methods cannot be relied on to guide educators to more effective practice.
The Need For a New Standard
Key problems with the Randomized Comparative Trial include (1) that the RCT almost never tells you how the experimental students actually performed, (2) that the difference between groups that researchers use to declare a program effective is typically so small that it is “difficult to detect” in the real world, and (3) that the data are statistically manipulated to the point that the numbers being compared are mathematical abstractions with no real-world meaning, which researchers then try to make intelligible with hypothetical extrapolations, such as claiming that the difference favoring the experimental students is the equivalent of moving from the 50th to the 58th percentile, or of an additional month of learning. The problem is that we do not know whether the experimental students actually scored at the 58th or the 28th percentile. So in the end, we do not know how students in the intervention actually performed, and any benefits that are found are highly exaggerated.
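To see why such percentile claims are hypothetical extrapolations rather than reports of actual performance, consider the conventional conversion that typically lies behind them. The sketch below is illustrative only and is not taken from Pogrow's article: the effect size of 0.20 is an assumed value, and Python's scipy library is used simply as a convenient way to evaluate the normal distribution.

```python
# Minimal sketch of the usual effect-size-to-percentile translation.
# The effect size of 0.20 is an illustrative assumption, not a figure
# from the article.
from scipy.stats import norm

effect_size = 0.20  # assumed standardized mean difference between groups

# Conventional translation: place the average control student at the 50th
# percentile and ask where the average experimental student would fall on
# the control group's (assumed normal) distribution of scores.
percentile = norm.cdf(effect_size) * 100
print(f"Average experimental student at about the {percentile:.0f}th percentile")
# -> about the 58th percentile

# Note what the calculation never uses: the actual scores of either group.
# Both groups could sit at the 28th percentile of national norms and the
# extrapolation would read exactly the same.
```

The calculation depends only on the relative gap between the two groups, which is why it can never tell a practitioner how the experimental students actually performed.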
Pogrow also shows that the notion that science requires the use of RCTs is wrong. Even the medical profession, which does use gold-standard experimental techniques to test the effectiveness of medicines, uses much simpler scientific methods, without controlled experiments, in other areas such as obstetrics and in improving health care delivery in complex organizations such as hospitals. The latter methods, called “improvement science,” appear more relevant to identifying scalable effective practices in the complex settings of schools.
The “rigorous” gold-standard WWC-type methods derived from the traditions of the psychology lab are also producing misleading results for clinical practice in psychology and psychiatry. There is increasing criticism of the practicality and applicability of the RCT for guiding professional practice in all complex organizations.
Dr. Pogrow concludes that current federal initiatives to disseminate programs “proven to be effective” are of little value, and that a recent call to establish a new round of expanded “Effective Practices Policies” is a dangerous and unwarranted intrusion into local decision-making. He urges the profession to resist seductive calls and bureaucratic pressure from the US DOE to adopt such policies. His message for the Feds and his colleagues in the professoriate is to (1) suspend the activities of the What Works Clearinghouse and the i3 dissemination efforts, and (2) develop methods that can certify which interventions consistently produce clearly observable benefits in real-world contexts. The latter requires an expanded conception of what science is and of how we teach applied research methods.
Gene V Glass
Arizona State University ~ University of Colorado Boulder
National Education Policy Center ~ San José State University
The opinions expressed here are those of the author and do not represent the official position of the National Education Policy Center, Arizona State University, University of Colorado Boulder, nor San José State University.