(From the ISHN Member information service) A constant refrain in practitioner and policy-maker commentary concerns research based on randomized controlled trials (RCTs), which usually leads to systematic reviews and other conclusions that favour artificially "controlled" conditions over the real world. An article in Issue #3, 2014 of Child Development explains how the statistical methodology used in these studies (frequentist methods) often dictates the nature of the investigation. Although the "gentle introduction" to Bayesian methods provided in the article is hardly such, the different methodology may help us all to get out of the RCT box. The authors note that "Conventional approaches to developmental research derive from the frequentist paradigm of statistics. This paradigm associates probability with long-run frequency. The canonical example of long-run frequency is the notion of an infinite coin toss. A sample space of possible outcomes (heads and tails) is enumerated, and probability is the proportion of the outcome (say heads) over the number of coin tosses. The Bayesian paradigm, in contrast, interprets probability as the subjective experience of uncertainty (De Finetti, 1974b). Bayes’ theorem is a model for learning from data. In this paradigm, the classic example of the subjective experience of uncertainty is the notion of placing a bet. Here, unlike with the frequentist paradigm, there is no notion of infinitely repeating an event of interest. Rather, placing a bet—for example, on a baseball game or horse race—involves using as much prior information as possible as well as personal judgment. Once the outcome is revealed, then prior information is updated. This is the model of learning from experience (data) that is the essence of the Bayesian method."
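For readers who want to see the authors' coin-toss example in concrete terms, here is a minimal sketch of Bayesian updating. The numbers (a uniform prior, 7 heads and 3 tails) are our own illustrative assumptions, not from the article; it uses the standard Beta-Binomial conjugate update, in which the posterior simply adds the observed counts to the prior's parameters.

```python
def update_beta(prior_a, prior_b, heads, tails):
    """Conjugate Beta-Binomial update: the posterior is again a Beta
    distribution whose parameters add the observed counts to the prior."""
    return prior_a + heads, prior_b + tails

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution -- our current belief about
    the probability of heads."""
    return a / (a + b)

# Start from a weakly informative Beta(1, 1) prior (a uniform belief),
# then observe 7 heads and 3 tails (illustrative data).
a, b = update_beta(1, 1, heads=7, tails=3)
print(beta_mean(a, b))  # belief shifts from 0.5 toward roughly 0.67
```

This is the "learning from experience" the authors describe: the prior belief is revised by the data, and the posterior becomes the prior for the next round of observation.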
The authors go on to explain that "the Bayesian paradigm offers a very different view of hypothesis testing (e.g., Kaplan & Depaoli, 2012, 2013; Walker, Gustafson, & Frimer, 2007; Zhang, Hamagami, Wang, Grimm, & Nesselroade, 2007). Specifically, Bayesian approaches allow researchers to incorporate background knowledge into their analyses instead of testing essentially the same null hypothesis over and over again, ignoring the lessons of previous studies. In contrast, statistical methods based on the frequentist (classical) paradigm (i.e., the default approach in most software) often involve testing the null hypothesis. In plain terms, the null hypothesis states that “nothing is going on.” This hypothesis might be a bad starting point because, based on previous research, it is almost always expected that “something is going on.”" It is this faulty assumption of "nothing going on" that may force RCT-type studies to compare a new program/intervention to a control condition, which is assumed to represent the null hypothesis (nothing going on) but which may actually have a lot going on. Researchers using frequentist statistics then conclude that the new program works (or does not) when, in fact, they are really comparing it to conditions in which very similar programs, or similar but disorganized activities, are actually taking place. We leave it to others more schooled in statistics to respond, but from our vantage point, the increased use of Bayesian statistical methods deserves our consideration. (Full text of the article can be accessed) Read more>>
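To make the contrast concrete: instead of testing the null hypothesis that the two arms of a trial are identical, a Bayesian analysis can estimate the probability that the new program outperforms the control directly. The sketch below assumes hypothetical trial counts of our own invention (36 of 50 improved under the new program, 28 of 50 under the control) and uses uniform Beta(1, 1) priors with Monte Carlo draws from each arm's posterior.

```python
import random

random.seed(0)

# Hypothetical counts for illustration only: improvements/non-improvements
# in a "new program" arm and a "control" arm that may itself have a lot going on.
new_s, new_f = 36, 14   # new program: 36 of 50 improved
ctl_s, ctl_f = 28, 22   # control:     28 of 50 improved

# With Beta(1, 1) priors, each arm's posterior is Beta(successes + 1,
# failures + 1). Draw from both posteriors and count how often the new
# program's improvement rate exceeds the control's.
draws = 20_000
wins = sum(
    random.betavariate(new_s + 1, new_f + 1)
    > random.betavariate(ctl_s + 1, ctl_f + 1)
    for _ in range(draws)
)
print(wins / draws)  # estimated P(new program rate > control rate)
```

The output is a direct probability statement about the comparison the commentary raises, rather than a verdict on a "nothing is going on" null that nobody believed in the first place.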