An article in Volume 233 (2019) of Social Science & Medicine discusses the potential weaknesses of systematic reviews (SRs), even when they follow the PRISMA guidelines for such reviews. The authors "advocate that SR teams consider potential moderators (M) when defining their research problem, along with Time, Outcomes, Population, Intervention, Context, and Study design (i.e., TOPICS + M). We also show that, because the PRISMA reporting standards only partially overlap dimensions of methodological quality, it is possible for SRs to satisfy PRISMA standards yet still have poor methodological quality. As well, we discuss limitations of such standards and instruments in the face of the assumptions of the SR process, including meta-analysis spanning the other SR steps, which are highly synergistic: Study search and selection, coding of study characteristics and effects, analysis, interpretation, reporting, and finally, re-analysis and criticism". Read more...
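To make the TOPICS + M framing concrete, here is a minimal sketch (in Python, with entirely invented field names and example values, not the authors' own instrument) of how a review team might record its problem definition as a structured record, with moderators captured explicitly alongside the other dimensions:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of a TOPICS + M problem definition; every field
# name and example value below is an illustrative assumption.
@dataclass
class ReviewProblem:
    time: str                   # T: follow-up period for outcomes
    outcomes: list[str]         # O: outcomes of interest
    population: str             # P: who the interventions target
    intervention: str           # I: what is being evaluated
    context: str                # C: settings in which studies ran
    study_design: str           # S: eligible study designs
    moderators: list[str] = field(default_factory=list)  # + M

problem = ReviewProblem(
    time="outcomes measured at 12 months or later",
    outcomes=["physical activity", "emotional wellbeing"],
    population="students aged 11-16",
    intervention="whole-school health promotion programs",
    context="publicly funded secondary schools",
    study_design="cluster-randomized trials",
    moderators=["implementation fidelity", "school socioeconomic mix"],
)
```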
Three articles in Issue #1, 2019 of the Journal of the American Medical Association debate whether randomness is maintained when samples from different studies are combined in a meta-analysis of their data. Essentially, the concern is that "a long-term concern in meta-analysis has been the apples-and-oranges comparison problem when studies that are too different are combined into a single estimate of effect. Random-effects meta-analysis can exacerbate this problem". Further, there is concern that "risk estimates from small trials are overweighted. However, small trials are known to be biased toward reporting an effect (typically, why they were originally published) and are often of lower quality. This is the opposite of how one would disproportionally weight trials if such weighting were thought necessary" jamanetwork.com/journals/jama/issue/322/1 Read more...
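To see why random-effects weighting can overweight small trials, here is a minimal sketch (all effect sizes and variances below are synthetic, chosen only for illustration) of the widely used DerSimonian-Laird random-effects model: each study is weighted by 1/(v_i + tau^2), and because the between-study variance tau^2 is added to every study's own variance v_i, it erodes the weight advantage of large, precise trials.

```python
import numpy as np

# Entirely synthetic example: one large, precise trial showing little effect
# and three small, noisy trials showing large effects (e.g., log odds ratios).
effects = np.array([0.05, 0.80, 0.70, 0.90])
variances = np.array([0.01, 0.20, 0.25, 0.30])

# Fixed-effect weights: plain inverse variance.
w_fixed = 1.0 / variances

# DerSimonian-Laird estimate of the between-study variance tau^2.
mean_fixed = np.average(effects, weights=w_fixed)
q = np.sum(w_fixed * (effects - mean_fixed) ** 2)
c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects weights: inverse of within- plus between-study variance.
w_random = 1.0 / (variances + tau2)

for label, w in (("fixed-effect  ", w_fixed), ("random-effects", w_random)):
    print(f"{label}: weight shares = {np.round(w / w.sum(), 2)}, "
          f"pooled effect = {np.average(effects, weights=w):.2f}")
```

Run as written, the large trial's weight share drops from roughly 89% under fixed-effect weighting to under half under random effects, and the pooled estimate shifts sharply toward the small, noisy trials.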
This blog has been identifying the limits of RCTs and systematic reviews. RCTs are problematic when seeking to assess complex, multiple interventions that develop over several years across several systems, agencies and professionals. An editorial in the October 2015 issue of the Journal of Epidemiology & Community Health continues this discussion by arguing for more use of "pragmatic, formative evaluations" when implementing a complex intervention. The editorial begins: "Recently published guidance on process evaluations by the Medical Research Council's (MRC's) Population Health Sciences Research Network (PHSRN) marks a significant advance in the evaluation of complex public health interventions. In presenting programmes as not just a set of mechanisms of change across multiple socioecological domains, but as an interaction of theory, context and implementation, the guidance extends the remit of evaluation. Process evaluations have emerged as vital instruments in these changing needs through modelling causal mechanisms; identification of contextual influences and monitoring fidelity & adaptations." The authors go on to say: "One particular conceptual space that needs to be carved out is pragmatic formative process evaluation, defined as the application of formative process evaluation criteria to interventions that have ostensibly been formulated, and are likely in routine practice, but have not been subjected to rigorous evaluation. Moreover, even where some understanding of the theory of change is present, it is unlikely that the unintended consequence of interventions will have been sufficiently explored. For example, our recent evaluation of a school-based social & emotional learning intervention, which had been recommended by the Welsh school inspectorate, indicated a number of potential iatrogenic effects due to a stigmatising process". Read more>> (An item from the ISHN Member information service)
Since about 2005, our attention in school health and social development has included a focus on the country, community and neighbourhood contexts as a key factor in selecting the issues to be addressed, the programs to be used and the capacities to be strengthened. ISHN has worked with others to develop frameworks for indigenous communities, for disadvantaged communities in high-resource countries and, more recently, for low-resource countries. But those efforts are ahead of the good research required to guide practice. So we were pleased to note the article in the September 2015 issue of Implementation Science describing a project to identify the domains of context important to implementation science. "This research program will result in a framework that identifies the domains of context and their features that can facilitate or hinder: (1) healthcare professionals’ use of evidence in clinical practice and (2) the effectiveness of implementation interventions. The framework will increase the conceptual clarity of the term “context” for advancing implementation science, improving healthcare professionals’ use of evidence in clinical practice, and providing greater understanding of what interventions are likely to be effective in which contexts." Read more>> (An item from the ISHN Member information service)
Several articles in Issue #5, 2015 of Administration and Policy in Mental Health and Mental Health Services Research provide insights and methods for significantly changing the research in school health promotion and social development. Most studies, often replicated again and again, measure the linear impact of a selected intervention (instructional program, policy or service), delivered at the school level only, on a behaviour or problem. Sometimes, combinations of interventions delivered at the school level, usually aimed at one or two behaviours or a combination of problems, are evaluated for a short period of time. Rarely do we see attempts to group these singular or limited intervention studies into a health promoting schools (HPS) model and assess whether the HPS model works. (Again, the assessments almost never extend beyond the school to include clinics or other agencies, or upwards into the health, education and other systems.) Implementation research, a new type of work in health promotion and social development, has also been limited to this narrow, singular and front-line scope because of the costs and complexity of multi-level research in large systems. As noted in the introduction to this set of articles: "Implementing evidence-based and innovative practices, treatments, and services in large systems is highly complex, and has not, until recently, been guided by empirical or theoretical knowledge. Mixed method designs and approaches have been proposed to promote a more complete and efficient way of understanding the full range of factors that influence the dissemination and implementation of evidence-based innovations in large systems. This special issue provides both an overview of mixed methods designs and approaches, as well as applications and integration of sophisticated sampling, statistical methods and models (borrowed from various fields such as anthropology, statistics, engineering and computer science) to increase the range of solutions for handling the unique challenges of design, sampling, measurement, and analysis common in implementation research. In the six papers in this special issue, we describe conceptual issues and specific strategies for sampling, designing, and analyzing complex data using mixed methods. The papers provide both theoretically-informed frameworks, but also practical and grounded strategies that can be used to answer questions related to scaling-up new practices or services in large systems." Read more>> (An item from the ISHN Member information service)
(From the ISHN Member information service) ISHN has recently published several commentaries on the value of randomized controlled trials (RCTs) in health promotion/social development. Two articles in Issue #1, 2015 of Global Health: Science & Practice continue this discussion. The first article "takes issue with Shelton's (the editor of the journal) previous article that “randomized controlled trials (RCTs) have limited utility for public health”". The authors argue that "setting RCTs in opposition to other systematic approaches for generating knowledge creates a false dichotomy, and it distracts from the more important question that Shelton addresses—namely, which research method is best suited for the question at hand?" They conclude with the traditional defence of RCTs: "It is imperative for public health practitioners to answer definitively, “Does it work?” before asking, “How can it be made to work practicably at scale?”" In the rebuttal article, the author states that "“Does it work” is always affected by context. The paradigm Hatt et al. put forward asserts that one strength of randomized trials is to answer definitively, “Does it work?” But for the kinds of complex programs public health must muster, there is generally no absolute answer to that question". He states that "RCTs are only one piece of the picture in triangulating evidence for public health programming," but also goes on to say: "While I really do appreciate randomized studies, perhaps my biggest concern is the “hierarchy” whereby some colleagues place controlled trials at the top of a pyramid as manifestly the best evidence. For understanding public health programming, I see that as quite misguided. Randomized studies help us to understand some things, but they are only one piece of the picture in “triangulating” evidence for programming. And evidence from real-world programming is especially key." Read more>>
(From the ISHN Member information service) Recently we have been discussing the value and limits of randomized controlled trials (RCTs) and the systematic reviews based on those trials. Although it does not discuss school-related programs directly, an article in the February 2015 issue of Children and Youth Services Review continues this discussion, noting the need to control, or at least accurately describe, the control groups used in comparison with the intervention being tested. The authors ask: "Does allocation to a control condition in a Randomized Controlled Trial affect the routine care foster parents receive?" And, of course, the answer is yes. "‘Strengthening Foster parents in Parenting’ (SFP) is a support program for foster parents who care for foster children with externalizing problem behavior. Its effectiveness was examined with a Randomized Controlled Trial (RCT). In this paper, we examine the treatment as usual (TAU) that was offered in the control condition of this RCT. For this purpose, the TAU from the SFP control group was compared with TAU provided to a similar group of foster parents outside any RCT. Our results show that TAU is diverse and varies widely. Furthermore, being part of the control condition was positively associated with both the counseling frequency from the foster care services and with external help-seeking behavior (finding and using additional support). In order to prevent condition contamination in future trials, TAU should be clearly described and standardized, and treatment fidelity should be carefully monitored." We suspect that this natural inclination to improve performance while being part of a research study will also affect school-based studies. Further, as the authors note (and as we have commented before), the "treatment as usual" condition, even as is, may actually be quite close to the intervention being tested. Read more>>
(From the ISHN Member information service) An editorial in the January 2015 issue of the Cochrane Database of Systematic Reviews discusses the challenges of reviewing complex interventions such as school health promotion. Although the editorial discusses coordinated case management of dementia patient care, the comments will likely apply to the complexity of reviewing the variable, multiple, coordinated interventions required in school health promotion.
The authors suggest that "Guidelines have recommended the use of case management but are cautious about the evidence, judged as at least partially inconclusive. There is also uncertainty about the most suitable components of case management interventions. This is no surprise as case management is a prototypical example of a complex intervention. There is complexity in the intervention components as well as in the theoretical background of the intervention, the implementation context, and the targeted outcomes. As with many complex interventions, case management also targets more than one recipient: people with dementia and/or their carers. The challenges of synthesising the evidence for complex interventions have been acknowledged by Cochrane, with a recent series of articles forming the basis for an upcoming new chapter in the Cochrane Handbook for Systematic Reviews of Interventions." The authors laud this particular review of dementia care with comments that could be applied to the variations in school health promotion: "Comprehensive tables allow readers to compare the goals of case management interventions, components of case management and control interventions, methods of intervention implementation, tasks and components of case management, and outcome measures used. Interventions are also categorised into three different approaches to case management. Still, for many studies there is not enough information to clearly describe what has been done. Also, case management interventions were often implemented as a part of wider health system changes, making it more difficult to attribute observations to case management, let alone to distinct components of case management interventions." The authors also make suggestions for reviews of complex interventions that apply equally to school health and other school-related strategies: "Guidance on conducting systematic reviews of complex interventions often demands the inclusion of further studies to allow for in-depth descriptions of study components and the context and process of implementing the intervention. This frequently requires the inclusion of mixed-method or qualitative studies that could help to disentangle the intervention components and their distinct roles. While this undoubtedly adds to Cochrane authors' already demanding workload, it seems essential if the most meaningful use is to be made of the data. Reporting is a problem, and information is often difficult or even impossible to acquire. Recent reporting guidelines may help authors look for important aspects concerning the intervention (TIDieR guideline) or the whole process of complex intervention development and evaluation (CReDECI guideline)". They also mention other problems: "Apart from the problems described above, the present review suffers from the fact that most studies are fairly small, with fewer than 100 participants per group in all but one study". We would add that the time period for assessing school health approaches is also problematic. A truly comprehensive, ecological and systems-based approach to SH does more than examine a few schools or some selected interventions. It is an approach that is developed over several years at a national or state level, with the delivery of multiple policies, funding, personnel and programs from several ministries, local agencies/school boards and then local professionals, as well as the people working in the school building.
Indeed, reviews of school health promotion and social development are actually far more complex than the one discussed in this editorial, which examines coordinated case management of a single health problem. It is in the light of this January 2015 Cochrane editorial that we can turn to two major recent and previous reviews of school health promotion (Langford et al., 2014; Stewart-Brown, 2006) and better understand why both of these reviews, as well as others, conclude that SH promotion is promising but the evidence is insufficient. For further discussion, readers might want to listen to our October 23, 2014 ISHN webinar with the authors of the most recent review, which discusses the limits of RCT studies and the ensuing systematic reviews even further. We hereby challenge researchers and research funding organizations to take up this problem, perhaps beginning with the impending Cochrane Handbook chapter on complex interventions.

(From the ISHN Member information service) A constant refrain in practitioner and policy-maker commentary concerns research based on randomized controlled trials, which usually leads to systematic reviews and other conclusions that favour artificially "controlled" conditions over the real world. An article in Issue #3, 2014 of Child Development explains how the statistical methodology used in these studies (frequentist methods) often dictates the nature of the investigation. Although the "gentle introduction" to Bayesian methods provided in the article is hardly such, the different methodology may help us all to get out of the RCT box. The authors note that "Conventional approaches to developmental research derive from the frequentist paradigm of statistics. This paradigm associates probability with long-run frequency. The canonical example of long-run frequency is the notion of an infinite coin toss. A sample space of possible outcomes (heads and tails) is enumerated, and probability is the proportion of the outcome (say heads) over the number of coin tosses. The Bayesian paradigm, in contrast, interprets probability as the subjective experience of uncertainty (De Finetti, 1974b). Bayes’ theorem is a model for learning from data. In this paradigm, the classic example of the subjective experience of uncertainty is the notion of placing a bet. Here, unlike with the frequentist paradigm, there is no notion of infinitely repeating an event of interest. Rather, placing a bet—for example, on a baseball game or horse race—involves using as much prior information as possible as well as personal judgment. Once the outcome is revealed, then prior information is updated. This is the model of learning from experience (data) that is the essence of the Bayesian method." The authors go on to explain that "the Bayesian paradigm offers a very different view of hypothesis testing (e.g., Kaplan & Depaoli, 2012, 2013; Walker, Gustafson, & Frimer, 2007; Zhang, Hamagami, Wang, Grimm, & Nesselroade, 2007). Specifically, Bayesian approaches allow researchers to incorporate background knowledge into their analyses instead of testing essentially the same null hypothesis over and over again, ignoring the lessons of previous studies. In contrast, statistical methods based on the frequentist (classical) paradigm (i.e., the default approach in most software) often involve testing the null hypothesis.
In plain terms, the null hypothesis states that “nothing is going on.” This hypothesis might be a bad starting point because, based on previous research, it is almost always expected that “something is going on.”" It is this faulty assumption of "nothing going on" that may lead RCT-type studies to compare a new program/intervention to a control condition that is assumed to represent the null hypothesis (nothing going on) but that may actually have a lot going on. Researchers using "frequentist" statistics then conclude that the new program works (or not) when, in fact, they are really comparing the new program to settings in which very similar programs, or similar but disorganized activities, are actually taking place. We leave it to others more schooled in statistics to respond, but from our vantage point, the increased use of Bayesian statistical methods deserves our consideration. (Full text of the article can be accessed) Read more>>
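As a concrete illustration of this contrast (the trial counts and the priors below are invented for the example, not drawn from the article), the sketch compares a frequentist test of the null of "nothing going on" with a Bayesian update that folds prior studies into the analysis as Beta pseudo-counts:

```python
import numpy as np
from scipy import stats

# Hypothetical new trial: successes out of n in each arm (numbers invented).
n, x_treat, x_ctrl = 100, 62, 50

# Frequentist view: test the null hypothesis of "no difference" from scratch,
# as if nothing were known from earlier studies.
_, p_value = stats.fisher_exact([[x_treat, n - x_treat],
                                 [x_ctrl, n - x_ctrl]])

# Bayesian view: encode earlier studies as Beta pseudo-counts (illustrative
# priors worth ~20 observations, centred on 60% and 50% success), then update.
post_treat = stats.beta(12 + x_treat, 8 + (n - x_treat))
post_ctrl = stats.beta(10 + x_ctrl, 10 + (n - x_ctrl))

# Posterior probability that the intervention beats the control condition,
# approximated with Monte Carlo draws from the two posteriors.
rng = np.random.default_rng(0)
treat_draws = post_treat.rvs(100_000, random_state=rng)
prob_better = np.mean(treat_draws > post_ctrl.rvs(100_000, random_state=rng))

print(f"frequentist p-value (null: no difference): {p_value:.3f}")
print(f"Bayesian P(treatment > control | data, priors): {prob_better:.3f}")
```

The point is not the particular numbers but the workflow: the Bayesian analysis starts from what earlier studies suggested and reports how the new data shift that belief, rather than re-testing "no difference" from scratch.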
(An item from ISHN Member information service) An article in the January 2013 issue of Preventing Chronic Disease tries to move us toward a more applied and practical approach to intervention research by developing a framework that goes beyond simply identifying significant differences in effects. It takes a more in-depth look at factors and results, such as identifying the "preventable burden" as well as the burden of the disease, then the economic value, and it moves beyond "efficacy" (i.e., does the intervention work in a controlled setting?) towards studies that demonstrate impact in actual practice (effectiveness and generalizability). Read more.
(An item from ISHN Member information service) An article in the December 2012 issue of Social Science & Medicine suggests that realist perspectives should be integrated within randomized controlled trials in order to better understand the complexity of interventions and how their components and characteristics interact with the local context. The authors suggest that "Randomized trials of complex public health interventions generally aim to identify what works, accrediting specific intervention ‘products’ as effective. This approach often fails to give sufficient consideration to how intervention components interact with each other and with local context. ‘Realists’ argue that trials misunderstand the scientific method, offer only a ‘successionist’ approach to causation, which brackets out the complexity of social causation, and fail to ask which interventions work, for whom and under what circumstances. We counter-argue that trials are useful in evaluating social interventions because randomized control groups actually take proper account of rather than bracket out the complexity of social causation. Nonetheless, realists are right to stress understanding of ‘what works, for whom and under what circumstances’ and to argue for the importance of theorizing and empirically examining underlying mechanisms." The authors also propose that ‘realist’ trials should aim to: examine the effects of intervention components separately and in combination; explore mechanisms of change, analysing how pathway variables mediate intervention effects; use multiple trials across contexts; draw on qualitative and quantitative data; and be oriented towards building theories setting out how interventions interact with context. This last suggestion resonates with recent suggestions that, in delivering truly ‘complex’ interventions, fidelity is important not so much in terms of precise activities but, rather, key intervention ‘processes’ and ‘functions’. Read more
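As a small illustration of one of those proposals, analysing how pathway variables mediate intervention effects, here is a minimal sketch on synthetic data (the variable names and effect sizes are invented) using the classic product-of-coefficients decomposition, which splits a total effect into an indirect path through the mediator and a direct remainder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data: a randomized intervention shifts a pathway variable
# (say, school climate), which in turn shifts the outcome of interest.
treat = rng.integers(0, 2, n)                  # random assignment (0/1)
mediator = 0.8 * treat + rng.normal(size=n)    # intervention -> mediator
outcome = 0.5 * mediator + 0.1 * treat + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares coefficients for y on an intercept plus predictors."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(mediator, treat)[1]                    # path: treatment -> mediator
b, direct = ols(outcome, mediator, treat)[1:]  # mediator path, direct effect
total = ols(outcome, treat)[1]                 # total effect of treatment

print(f"total effect: {total:.2f}")
print(f"indirect (a*b): {a * b:.2f}, direct: {direct:.2f}")
```

With these synthetic coefficients, a total effect of about 0.5 decomposes into roughly 0.4 flowing through the mediator and 0.1 acting directly, which is exactly the kind of mechanism evidence the ‘realist trial’ agenda calls for.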
(An item from ISHN Member information service) We have been following articles that discuss behavioural intentions, and this latest one indicates that the transparency of BI research may need improvement. An article in the November 2012 issue of Addictions reports on an analysis of BI research studies that "used the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) Statement to develop the 59-question Adapted TREND Questionnaire (ATQ). Each ATQ question corresponds to a transparency guideline and asks how clearly a study reports its objectives, research design, analytical methods and conclusions". The authors noted that "The average report adhered to 38.4 (65.1%) of the 59 ATQ transparency guidelines. Each of the 59 ATQ questions received positive responses from an average of 16.9 (63.8%) of the reports." They conclude that "Gambling intervention reports need to improve their transparency by adhering to currently neglected and particularly relevant guidelines. Among them are recommendations for comparing study participants who are lost to follow-up and those who are retained, comparing study participants with the target population, describing methods used to minimize potential bias due to group assignment, and reporting adverse events or unintended effects." Given the potential challenges associated with BI and its importance as a tool for school health studies, where behavioural outcomes are expensive to track for more than a few months after the intervention, this article appears very relevant. Read more.
(An item from ISHN Member information service) Earlier this month, we noted a debate about the usefulness of RCTs in real-world conditions that was initiated by a leading author of systematic reviews, Sarah Stewart-Brown. Two articles in Issue #6, 2012 of the American Psychologist continue these revolutionary thoughts. One article questions the wisdom of basing scholarship and knowledge development on an ever-increasing number of research reviews that examine different interventions in different contexts, often clumped together in inappropriate ways. The second article suggests that, rather than trying to reframe systems in the light of accumulated evidence from research, we seek to identify "disruptive innovations", such as micro-clinics in retail chain drug stores, $2 generic eyeglasses and automatic teller machines, that fit into real-world situations and offer practical convenience to the intended users. Read more...