NZ Lifestyle Block

Understanding the science

Designing an experiment to truly measure outcomes and responses can be a quagmire, so how do you assess scientific claims about a product? The key over-arching questions are:

• does the research replicate real-world use?
• is the experimental methodology appropriate?
• does the experimental design measure the correct parameters?
• do we know what to measure?
• is the experiment run over a long enough time frame?

Real-world use

A recent review paper on biostimulants (see the reference on page 29) covered a wide range and large number of experiments, but the majority of these were not undertaken in ‘real world’ conditions, and this is an important caveat.

Scientists often start work in the laboratory ‘in vitro’ (meaning ‘in glass’), mostly because it is quick and cheap and they can get a research publication out of the work. If the lab work looks promising (or even if it doesn’t), they then proceed to pot trials, growing plants in pots in a glasshouse. This is more expensive than the lab work and more realistic, and it produces another paper.

But experienced scientists know that neither of these methods usually bears much relation to performance on a farm or in a crop field, so they often start ‘real world’ trials as soon as possible.

Even high-quality research may not be relevant to your operation if it is not conducted under real-world conditions that match your crop and farm: the same crop species (and for some species, such as grapes, even the same cultivar), on similar soils, in a similar climate. In New Zealand, results from Canterbury and Hawke’s Bay could be considered comparable, but other regions would not.

This all means you should pretty much ignore lab and pot-based experiments. Results from experiments that sound like they could have been done on your block are the ones you should pay the closest attention to.

Can you trust how the experiment is done?

‘Experimental methodology’ is scientific jargon for how an experiment was done. It covers things such as the treatments used, the amount and type of fertiliser used, the untreated ‘null’ controls, the statistical analysis, the general setup (eg in-vitro lab experiments, pot experiments, field experiments), and all the details like soil type, soil tests, soil moisture, weather for the whole experiment, plant species and cultivar, age when planted, etc.

Determining whether the experimental methodology is appropriate is unfortunately where the quagmire gives way to the snake pit. It is surprisingly easy for scientists to set up experiments to get the results they want, and it is even easier for scientists who lack the right expertise to set up an experiment that fools them into thinking they have an accurate result.

Then there is interpretation, because scientists can disagree over what the results mean. Just because a paper has been “published in a peer-reviewed journal” does not mean the information is inviolable. Scientists often undertake ‘meta-analysis’, where they take all the experiments in journal papers that have researched a particular topic, then combine the results into one giant statistical analysis. But they often throw out 10-40% of the papers due to invalid methodology, because they consider the results of those trials unreliable.
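
For the curious, a meta-analysis in its simplest form is just a weighted average, where trials with more precise results carry more weight. Below is a minimal sketch in Python; the effect sizes and standard errors are invented for illustration and are not from any real trials.

# Minimal fixed-effect meta-analysis sketch using inverse-variance weighting.
# All numbers are hypothetical, for illustration only.
effects = [0.30, 0.10, -0.05, 0.22]    # yield responses from four imaginary trials
std_errors = [0.10, 0.08, 0.15, 0.12]  # their reported standard errors

# Weight each trial by the inverse of its variance: precise trials count more.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")

Real meta-analysts use more sophisticated models than this, but the principle is the same: one combined estimate, with the most trustworthy trials weighted most heavily.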

It is also pretty common for different experiments to give contradictory results, due to the vagaries of nature and agricultural science. As an example, in the European Union, cultivar comparison experiments have to comply with the 5 × 5 Rule: the comparisons must be done in at least five locations for a minimum of five years for the data to be considered reliable, so you get 25 repeats of the same field experiment.
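
To see why those 25 repeats matter, here is a toy Python simulation. It assumes a made-up true yield advantage of 0.5 t/ha and plausible season-to-season noise; a single trial can easily point the wrong way, while the average over 25 site-years settles much closer to the truth.

import random

random.seed(1)       # fixed seed so the example is repeatable
true_diff = 0.5      # assumed true yield advantage (t/ha); hypothetical
season_noise = 1.0   # assumed site/season variability (t/ha); hypothetical

one_trial = true_diff + random.gauss(0, season_noise)
site_years = [true_diff + random.gauss(0, season_noise) for _ in range(25)]

print(f"Single trial estimate:  {one_trial:.2f} t/ha")
print(f"Mean of 25 site-years:  {sum(site_years) / 25:.2f} t/ha")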

It takes a lot of scientific training and even more experience to make a good call, and in the end it is still a subjective decision. There is little chance that a layperson can make that judgement. If you want a view on a particular experiment, you need to find an independent scientist experienced in the same specialism, but even then they can only give you their opinion.

At the end of the day, individual papers count for little. It is the amassed results from a large number of experiments, across many years, plus the experiences of farmers and growers using products and techniques for real, that eventually determine whether an effect is real or not. Until such a broad consensus is built up, caveat emptor (buyer beware) applies.

Are they measuring the right parameters?

From a farmer and grower perspective, it may seem pretty obvious what parameters
