Sun Sentinel Palm Beach Edition
Understanding the terms in study results
Dear Dr. Roach: In a recent column you made reference to the results of a study that were neither “statistically significant” nor “clinically meaningful.” I am not familiar with the latter concept. — P.J.B.
Statistical significance is a concept central to understanding medical and other scientific studies. Often, one group (an experimental group, which may get a new treatment, for example) is compared with another group (the control group, which got some other treatment, usually the standard treatment or a placebo), and the difference in outcomes between the two groups is examined. A statistician employs one of several methods to calculate the likelihood that the difference between the two groups could have happened by chance; this likelihood is called the p-value. If the p-value is less than 5 percent, the result is usually considered statistically significant. The lower the p-value, the less likely it is that the observed difference between the two groups could have happened by chance if the two treatments were identically effective.

Clinical significance, or clinical meaningfulness, refers to the real-world usefulness of the intervention. While the term “clinical significance” is subjective and therefore a matter of opinion, it is nevertheless useful. For example, in a very large trial, a small difference in effectiveness between the two treatments could have a very significant p-value, as low as 0.0001, meaning only a 1 in 10,000 chance of seeing so large a difference if the two treatments were truly equivalent. However, the effectiveness may be 50.1 percent in one group and 49.9 percent in the other group. Although statistically significant, the clinical significance is marginal.

Conversely, if a result is not statistically significant, it cannot be considered clinically meaningful, because there is not enough evidence to reject the possibility that the difference occurred by chance.
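For readers who want to see the arithmetic behind the column's example, here is a minimal sketch of a pooled two-proportion z-test. The trial size (2,000,000 patients per group) is a hypothetical number chosen for illustration, not from any real study; the success rates are the column's 50.1 percent and 49.9 percent. It shows how a clinically trivial difference of 0.2 percentage points can still produce a tiny p-value when the groups are large enough.

```python
import math

# Hypothetical illustration (not a real trial): two groups of
# 2,000,000 patients each, with success rates of 50.1% and 49.9%.
n = 2_000_000
p1, p2 = 0.501, 0.499

# Pooled two-proportion z-test: standard error under the null
# hypothesis that both groups share the same true success rate.
p_pool = (p1 + p2) / 2
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution,
# computed with the complementary error function.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, p-value = {p_value:.1e}")
# prints: z = 4.00, p-value = 6.3e-05
```

A p-value around 0.00006 is far below the usual 0.05 threshold, yet the absolute difference between the groups remains a fraction of a percentage point, which is exactly the gap between statistical and clinical significance the column describes.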