RECENT RESEARCH & VISUAL REPRESENTATION
Statistical power analysis centers on the relationships between four factors: sample size (n), the significance criterion (α), population effect size, and statistical power (1 − β) (Cohen 1992a). Statistical power increases with larger sample sizes per group, larger α-values, and larger effect sizes. Conducted during the design phase of research projects, the aim of power analysis is to calculate how large a sample needs to be for the expected findings (effect sizes) to be statistically significant. Statistical power is the long-term probability that the null hypothesis will be rejected when it is false (Cohen 1992b); that is, it is the probability of obtaining a statistically significant result.
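The relationship among the four factors can be made concrete numerically. The following is a minimal sketch, not taken from the papers reviewed here, that uses the standard normal approximation for a two-sided, two-sample comparison; the function name `required_n` and the default α = 0.05 and power = 0.80 are illustrative assumptions.

```python
import math
from scipy.stats import norm

def required_n(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison, via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance criterion
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Cohen's conventional small/medium/large d of 0.2/0.5/0.8:
# larger effect sizes require smaller samples, and vice versa.
for d in (0.2, 0.5, 0.8):
    print(d, required_n(d))
```

Note how the same formula exhibits all four relationships: tightening α (say, to 0.01) or demanding more power raises the required n, while a larger effect size lowers it.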
This study was conducted using the papers published in the last two volumes (19 and 20) of the International Journal of Mental Health Nursing. Papers were included in the analysis if the authors had reported at least one inferential test. Papers were excluded if they were editorials, guest editorials, letters to the editor, published in supplementary issues (i.e. conference proceedings), or book reviews. Papers were also excluded if they included statistical tests for which power tables are not currently available. Following the exclusion of editorials, guest editorials, letters to the editor, conference proceedings published in supplementary issues, and book reviews, 101 papers were reviewed to see if they met the inclusion criteria. Of these papers, 67 were excluded because they did not contain at least one inferential test. These papers were critiques (n = 3), commentaries (n = 11), literature reviews (n = 8), qualitative research (n = 43), and descriptions of practices (n = 2). In the remaining 34 papers, quantitative (n = 29) or mixed methods (n = 5) were used. Of these papers, six were excluded because the authors did not report using any inferential statistics and five were excluded because the researchers used tests for which power tables are not available. The remaining 23 papers provided the data for this study (Sterne & Smith 2001).
The power analyses presented here are based on 23 papers, in which 484 inferential tests are reported (χ2 tests, n = 98; correlations, n = 109; simple linear regression, n = 10; logistic regression, n = 172; t-tests, n = 44; ANOVA/MANOVA, n = 51). The mean average power of the 23 studies to detect small, medium, and large effect sizes was 0.34 (SD = 0.23), 0.79 (SD = 0.24), and 0.94 (SD = 0.13), respectively.
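The pattern in those mean power figures, low power for small effects but high power for large ones, can be illustrated by inverting the sample-size calculation: fix a group size and compute the power to detect each conventional effect size. This sketch again uses the normal approximation for a two-sided, two-sample test; the group size of 30 is a made-up illustration, not a figure drawn from the reviewed papers.

```python
import math
from scipy.stats import norm

def achieved_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test:
    power ~ Phi(d * sqrt(n/2) - z_{1-alpha/2})."""
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_size * math.sqrt(n_per_group / 2) - z_alpha)

# With 30 participants per group, power climbs steeply with effect size:
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power = {achieved_power(d, 30):.2f}")
```

Under this assumption the power for d = 0.2, 0.5, and 0.8 comes out near 0.12, 0.49, and 0.87 respectively, echoing the ordering (though not the exact values) of the averages reported above.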
According to these results, the mean power of the 23 studies was only 0.34 for small effects and 0.79 for medium effects, indicating that many of the inferential tests carried out were underpowered to detect smaller effects. The p-value reported by such a test is the probability, given that the null hypothesis holds for the population, of observing a test statistic at least as extreme as the one obtained from the sample. Statistical power, by contrast, is calculated under the alternative hypothesis, and for these tests it showed no corresponding increase.
Regarding the dependent variable (DV), the tests carried out were drawn from different sources and relied on different measurements, which will have influenced the effect sizes obtained. This indicates that mental health nursing research is affected by a lack of sufficient resources for such studies. Regarding the independent variable (IV), the conclusions that can be drawn are constrained by the inferential tests that went unreported, which limits how fully research on mental health can be evaluated.
One limitation of this study is the assumption that small, medium, and large effects, as Cohen (1988) defined them, have relevance to mental health nursing research. An effect size of a certain magnitude might have huge relevance in some fields and for the measurement of some variables, but hold minimal importance in others (Stoové & Andersen 2003). Within mental health nursing research itself, there will be wide-ranging expectations about the magnitudes of effect sizes between studies. The problem, however, is that without access to a priori power analysis calculations, researchers undertaking analyses such as this one have no knowledge of what effect sizes the studies were powered to find.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum.
Cohen, J. (1992a). A power primer. Psychological Bulletin, 112, 155–159.
Sterne, J. A. C. & Smith, G. D. (2001). Sifting the evidence – what's wrong with significance tests? BMJ (Clinical Research Ed.), 322, 226–231.
Stoové, M. A. & Andersen, M. B. (2003). What are we looking at, and how big is it? Physical Therapy in Sport, 4, 93–97.