Contrasting contrasts: Reasons why and ways to compare indirect effects in mediation models

Date

2022-03

Abstract

Establishing a cause–effect relationship between two variables is a fundamental goal of scientific research, and knowing that one variable causally influences another is valuable in its own right. For instance, knowing that participation in an Alcoholics Anonymous (AA) program reduces or eliminates alcohol consumption is important to ensure that those who struggle with alcohol addiction receive the treatment they need to take back control of their lives. Although establishing this "if X, then Y" relationship is critical, it can be just as important to understand how, or through what mechanism(s), the effect operates. What is it specifically about AA that causes one to drink less? Do AA programs first cause an individual to believe in themselves more, and is this increased self-efficacy ultimately what causes one to drink less? That is, does participation in an AA program indirectly cause one to reduce their drinking because of this increased self-efficacy? Mediation analysis is a popular method used to answer such questions by testing whether an indirect effect (e.g., the effect of AA on drinking through self-efficacy) is statistically significant. Although mediation models are popular in many areas of study (e.g., nursing, psychology, public health), it is unlikely that the complexity of the real world can be adequately modeled by just one mechanism. In the AA example above, the program likely changes more than just self-efficacy to reduce how much one drinks after its completion. Perhaps AA programs also cause an individual to be less depressed, and this decrease in depression is also responsible for the reduction in drinking. In this instance, it is theoretically important to know (a) whether both self-efficacy and depression mediate this relationship and (b) whether one of these effects is stronger than the other. Time and resources in an AA program are limited.
If focusing on alleviating depressive symptoms has more impact on drinking than focusing on self-efficacy, practitioners would likely devote more attention to treating depression in the program. Although comparing indirect effects is theoretically important, in practice it is relatively rare. Additionally, there is little-to-no methodological research identifying the best way to compare indirect effects. That is the purpose of this study. I conducted a simulation study to identify the best way to compare indirect effects and obtain inference for this difference. Specifically, I compared indirect effects through their raw difference (i.e., A − B), the difference of the absolute values (i.e., |A| − |B|), the ratio (i.e., A/B), and the proportion of the absolute values (i.e., |A|/(|A| + |B|)). Data were simulated for a wide range of sample and effect sizes common in research. In each of these simulated conditions, indirect effects were compared using one of the above methods. Additionally, percentile, bias-corrected (BC), and bias-corrected and accelerated (BCa) bootstrap confidence intervals were constructed for inference about this difference. These confidence intervals are commonly used for inference in mediation research and have trade-offs regarding Type I error rate and power. Thus, the combination of contrast method and inferential method with the greatest power at the specified Type I error rate (i.e., .05) will be recommended for substantive researchers (to mimic the idea of a "most powerful test" in statistical theory). The results indicate that percentile bootstrap confidence intervals had optimal performance in comparison to BC and BCa confidence intervals. Consistent with prior literature, percentile confidence intervals had acceptable coverage (always at or above nominal levels, ≥ .95) and moderate power (only 5–10% lower power than BC and BCa confidence intervals at small sample and effect sizes).
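As a hypothetical illustration (not code from the study), the four contrast methods can be written as simple functions of two indirect effects A and B:

```python
# Hypothetical sketch of the four contrast methods described above,
# applied to two indirect effects a and b; not the study's own code.

def raw_difference(a, b):
    """Raw difference: A - B."""
    return a - b

def abs_difference(a, b):
    """Difference of the absolute values: |A| - |B|."""
    return abs(a) - abs(b)

def ratio(a, b):
    """Ratio: A / B (undefined when B = 0)."""
    return a / b

def abs_proportion(a, b):
    """Proportion of the absolute values: |A| / (|A| + |B|)."""
    return abs(a) / (abs(a) + abs(b))

# With opposing-sign effects, e.g. A = 0.20 and B = -0.10, the raw
# difference is 0.30 while the difference of absolute values is 0.10,
# which is why opposing signs are treated as a separate case below.
```

Note how the raw difference conflates sign and magnitude when the two effects point in opposite directions, while the absolute-value contrasts compare magnitudes only.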
The raw difference performed the best across all sample and effect sizes. Its Type I error rate (set at .05) never fell outside .05 ± .025, whereas the difference of the absolute values, the proportion of the absolute values, and the ratio each exceeded this threshold at small effect sizes (which may be common in areas such as the behavioral sciences). When indirect effects were of opposing signs, the difference of the absolute values had the best performance (exceeding a Type I error rate of .05 ± .025 only at the smallest effect size). In contrast, the ratio and proportion of absolute values consistently had inflated Type I error rates (≥ .10) unless the sample size was moderate to large. There were no noticeable differences in power between the contrast methods. These results have real-world implications for those interested in comparing indirect effects. Whether one wants to make an intervention program (e.g., AA) more efficacious or to compare competing mechanisms to enhance theory in a substantive area, comparing indirect effects—with tests that have adequate statistical properties—is essential. I recommend that researchers and clinicians compare indirect effects of opposing signs using the difference of the absolute values of the effects and compare indirect effects of the same sign using the raw difference. I also recommend constructing percentile bootstrap confidence intervals for inference about the difference.
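To illustrate the recommended approach, the sketch below (a minimal NumPy implementation with assumed model and variable names, not the study's code) computes a percentile bootstrap confidence interval for the raw difference of two indirect effects in a parallel mediation model X → {M1, M2} → Y:

```python
# Hypothetical sketch (not the study's code): percentile bootstrap CI
# for the raw difference of two indirect effects in a parallel
# mediation model X -> {M1, M2} -> Y, estimated with OLS.
import numpy as np

rng = np.random.default_rng(0)

def ols_coefs(design, y):
    """Least-squares coefficients of y on a design matrix (with intercept)."""
    return np.linalg.lstsq(design, y, rcond=None)[0]

def indirect_effects(x, m1, m2, y):
    """a*b products for each mediator; b-paths control for X and the
    other mediator, as in a parallel mediation model."""
    ones = np.ones_like(x)
    a1 = ols_coefs(np.column_stack([ones, x]), m1)[1]
    a2 = ols_coefs(np.column_stack([ones, x]), m2)[1]
    b = ols_coefs(np.column_stack([ones, x, m1, m2]), y)
    return a1 * b[2], a2 * b[3]

def percentile_ci_raw_difference(x, m1, m2, y, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the raw-difference contrast A - B."""
    n = len(x)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        ie1, ie2 = indirect_effects(x[idx], m1[idx], m2[idx], y[idx])
        diffs[i] = ie1 - ie2                 # raw-difference contrast
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

# Toy data: X affects M1 strongly and M2 weakly; both mediators affect Y,
# so the true raw difference is 0.6 * 0.5 - 0.2 * 0.5 = 0.2.
n = 300
x = rng.normal(size=n)
m1 = 0.6 * x + rng.normal(size=n)
m2 = 0.2 * x + rng.normal(size=n)
y = 0.5 * m1 + 0.5 * m2 + rng.normal(size=n)

lo, hi = percentile_ci_raw_difference(x, m1, m2, y)
print(f"95% percentile CI for A - B: [{lo:.3f}, {hi:.3f}]")
```

The percentile interval simply takes the empirical 2.5th and 97.5th percentiles of the bootstrapped contrast, which is what gives it the conservative coverage described above; swapping in a different contrast function changes only the line that computes `diffs[i]`.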

Description

Social and Behavioral Sciences: 3rd Place (The Ohio State University Edward F. Hayes Graduate Research Forum)

Keywords

mediation, indirect effect, contrasts, parallel mediation, bootstrapping