Misestimation arising from small sample sizes, small effect sizes, and noisy measurement may be particularly problematic in biomarker studies, where the high cost of data collection can adversely affect design decisions. This simulation study used real study designs reported in a meta-analysis of psychosocial correlates of the cortisol awakening response to investigate the probability that these designs would yield misestimates in a cross-sectional study.
For each of the 212 designs, 100,000 simulated data sets were produced, and two error percentages were calculated: the percentage of estimated effects in the wrong direction (sign errors) and the percentage differing from the true effect (b = 0.10) by more than 0.10 (misestimates).
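The simulation procedure above can be sketched as follows. This is a minimal illustration assuming a simple bivariate normal generating model with a standardized slope of b = 0.10 and varying only sample size; it is not the study's actual generating model, which also incorporated design features such as measurement noise. The function name and parameters are illustrative.

```python
import numpy as np

def error_rates(n, b=0.10, reps=10_000, seed=0):
    """For a cross-sectional design with sample size n, estimate the
    probability of a sign error (effect in the wrong direction) and of a
    misestimate (estimate differing from the true b by more than 0.10)."""
    rng = np.random.default_rng(seed)
    sign_errors = 0
    misestimates = 0
    for _ in range(reps):
        # Standardized predictor and outcome: slope b equals the correlation.
        x = rng.standard_normal(n)
        y = b * x + np.sqrt(1 - b**2) * rng.standard_normal(n)
        b_hat = np.corrcoef(x, y)[0, 1]
        sign_errors += b_hat < 0
        misestimates += abs(b_hat - b) > 0.10
    return sign_errors / reps, misestimates / reps
```

With a true effect this small, a sample of n = 50 yields sign-error rates near 25% and misestimation rates near 50%, while n = 400 reduces both substantially, mirroring the sample-size pattern reported below.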
As expected, small samples (n < 100) and noisy measurement contributed to a higher probability of error. The average probability of an effect being in the wrong direction was around 20%, with some designs reaching 40%; misestimation probabilities averaged around 40%, with some designs reaching 80%. These patterns held across all simulated studies as well as in the subset reporting statistically significant effects.
These results call for better study designs, and this article provides suggestions for how to achieve more accurate estimates.