Understanding the science
Designing an experiment to truly measure outcomes and responses can be a quagmire, so how do you assess scientific claims about a product? The key over-arching questions are:
• does the research replicate real-world use?
• is the experimental methodology appropriate?
• does the experimental design measure the correct parameters?
• do we know what to measure?
• is the experiment run over a long enough time frame?
A recent review paper on biostimulants (see the reference on page 29) listed a wide range of experiments, but the majority of these were not undertaken in ‘real world’ conditions, and this is an important caveat.
Scientists often start work in the laboratory ‘in vitro’ (meaning ‘in glass’), mostly because it is quick and cheap and they can get a research publication out of the work. If the lab work looks promising (or not), they then proceed to pot trials, growing plants in pots in a glasshouse. This is more expensive than the lab work and more realistic, and it produces another paper.
But experienced scientists know that neither of these methods usually bears any relation to performance on a farm or crop field so they often start ‘real world’ trials as soon as possible.
Research, even high-quality research, may not be relevant to your operation unless it was conducted under real-world conditions that match your crop and farm: the same crop species (and for some species, such as grapes, even the same cultivar), on similar soils and in a similar climate. In New Zealand, Canterbury and Hawkes Bay results could be considered comparable, but other regions would not.
This all means you should pretty much ignore lab and pot-based experiments. Results from experiments that sound like they could have been done on your block are the ones you should pay the closest attention to.
Can you trust how the experiment is done?
‘Experimental methodology’ is scientific jargon for how an experiment was done. It covers things such as the treatments used, the amount and type of fertiliser used, the untreated ‘null’ controls, the statistical analysis, the general setup (eg in-vitro lab experiments, pot experiments, field experiments), and all the details like soil type, soil tests, soil moisture, weather for the whole experiment, plant species and cultivar, age when planted, etc.
Determining whether the experimental methodology is appropriate is unfortunately where the quagmire gives way to the snake pit. It is surprisingly easy for scientists to set experiments up to get the results they want, and it is even easier for scientists without the right expertise to set up an experiment that fools them into thinking they have an accurate result.
Then there is interpretation, because scientists can disagree over what the results mean. Just because a paper has been “published in a peer-reviewed journal” does not mean that the information is inviolable. Scientists often undertake ‘meta-analysis’, where they take all the journal papers that have researched a particular topic, then combine the results into one giant statistical analysis. But they often throw out 10–40% of those papers because of invalid methodology, judging the results of those trials to be unreliable.
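For readers curious about the mechanics, a common way a meta-analysis combines many trials is inverse-variance weighting: precise trials (small standard errors) get more weight in the pooled estimate. The sketch below (in Python, with invented trial numbers purely for illustration) shows the idea in its simplest, fixed-effect form; real meta-analyses involve many more checks and model choices.

```python
# Illustrative sketch only: fixed-effect (inverse-variance) pooling.
# The trial figures below are invented, not from any real study.

def pooled_estimate(trials):
    """Each trial is (effect_size, standard_error).
    More precise trials (smaller standard error) get more weight."""
    weights = [1 / se ** 2 for _, se in trials]
    effects = [eff for eff, _ in trials]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical yield responses (t/ha) and standard errors from five trials:
trials = [(0.4, 0.2), (0.1, 0.3), (0.6, 0.25), (-0.1, 0.4), (0.3, 0.15)]
est, se = pooled_estimate(trials)
```

Note how a single negative trial does not overturn the pooled result on its own, which is exactly why the amassed evidence matters more than any individual paper.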
It is also pretty common for different experiments to give contrary results due to the vagaries of nature and agricultural science. As an example, in the European Union, cultivar comparison experiments have to comply with the 5 × 5 Rule: the comparisons have to be done in at least five locations for a minimum of five years for the data to be considered reliable so you get 25 repeats of the same field experiment.
It takes a lot of scientific training and even more experience to make a good call, and in the end it is still a subjective decision. There is little chance that a layperson can make that judgement – if you want a view on a particular experiment then you need to find an independent scientist experienced in the same specialism, but even then they can only give you their opinion.
At the end of the day, individual papers count for little. It is the amassed results from a large number of experiments, across many years, plus the experiences of farmers and growers using products and techniques for real, that eventually determine whether an effect is real or not. Until such broad consensus is built up, caveat emptor applies.
Are they measuring the right parameters?
From a farmer and grower perspective, it may seem pretty obvious what parameters