National Post (National Edition)
Insignificant statistics
Proofs based on probabilities are usually meaningless
[Photo caption: A projection at the ‘Universe of Particles’ exhibition in Geneva, at the Large Hadron Collider (LHC), the world’s largest atom smasher, which is searching for the Higgs boson.]

Statistical significance tells us what to say but not what to do. Think of the he-said, she-said quality of the debate about “age” as a “significant” factor in mammogram testing. Young women could be forgiven for thinking the need to test is a real coin flip.
Or consider the “significance” of damage done in a case involving thousands of humans, some of them dead. In the early 2000s quite a few Vioxx takers experienced the wrath of the so-called 5% rule of statistical significance.
The clinical trial was conducted in 2000 and published in the Annals of Internal Medicine (2003). The sponsoring company, Merck, reported that five patients taking Vioxx suffered heart trouble – fatal and not – during the clinical phase, compared with only one bad result in the control group, “a difference [in bad outcomes] that did not reach statistical significance.” The erroneous belief, common among junk scientists, is that failing to reach statistical significance is the same as finding no important difference between the two outcomes. On top of that, investigators later discovered that three of the eight total bad outcomes had gone unreported – to achieve an insignificant difference, it seems – the error opposite of the one committed by the whalers.
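A quick way to see how a five-to-one disparity in bad outcomes can still fail the 5% rule is to run the arithmetic directly. The sketch below is a minimal one-sided Fisher exact test; the trial-arm sizes of 1,000 patients each are a hypothetical assumption for illustration, since the article does not give them:

```python
from math import comb

def fisher_one_sided(events_a, events_b, n_a, n_b):
    """One-sided Fisher exact test: the probability, under the null
    hypothesis of no real difference, of seeing at least `events_a`
    bad outcomes in group A, given that events_a + events_b outcomes
    occurred in total among n_a + n_b patients."""
    total_events = events_a + events_b
    n = n_a + n_b
    denom = comb(n, n_a)
    p = 0.0
    for k in range(events_a, total_events + 1):
        if k > n_a or (total_events - k) > n_b:
            continue  # impossible allocation of outcomes
        p += comb(total_events, k) * comb(n - total_events, n_a - k) / denom
    return p

# 5 bad outcomes on the drug vs. 1 in the control group,
# with hypothetical arms of 1,000 patients each (an assumption).
p = fisher_one_sided(5, 1, 1000, 1000)
print(f"one-sided p = {p:.3f}")  # roughly 0.11, above the 5% cutoff
```

So the test duly reports “not statistically significant” even though one group suffered five times as many bad outcomes as the other – exactly the gap between what the 5% rule lets a company say and what patients would want it to do.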
Banishing this significance junk seems possible. Even the U.S. Supreme Court agrees. On March 22, 2011, in Matrixx Initiatives, Inc. v. Siracusano, an important securities-law case, the Supreme Court unanimously rejected the use of bright-line rules of statistical significance as a way of hiding adverse information from investors.
The case involved a homeopathic medicine called Zicam, a zinc-based common-cold remedy produced by Matrixx Initiatives, Inc. When applied through the nose, the drug causes some users to experience burning sensations