Understanding the science


DESIGNING AN experiment to truly measure outcomes and responses can be a quagmire, so how do you assess scientific claims about a product? The key over-arching questions are:
• does the research replicate real-world use?
• is the experimental methodology appropriate?
• does the experimental design measure the correct parameters?
• do we know what to measure?
• is the experiment run over a long enough time frame?

Real-world use

A recent review paper on biostimulants (see the reference on page 29) listed a wide range of experiments, but the majority were not undertaken in ‘real world’ conditions – an important caveat.

Scientists often start work in the laboratory ‘in vitro’ (meaning ‘in glass’), mostly because it is quick and cheap and they can get a research publication out of the work. If the lab work looks promising (or not), they then proceed to pot trials, growing plants in pots in a glasshouse. This is more expensive than the lab work and more realistic, and it produces another paper.

But experienced scientists know that neither of these methods usually bears any relation to performance on a farm or crop field, so they often start ‘real world’ trials as soon as possible.

If research, even high-quality research, is not conducted under real-world conditions that match your crop and farm – the exact crop species (for some species, eg grapes, even the same cultivar), on similar soils and in a similar climate – the results may not be relevant to your operation. In New Zealand, Canterbury and Hawke’s Bay results could be considered comparable, but other areas would not.

This all means you should pretty much ignore lab and pot-based experiments. Results from experiments that sound like they could have been done on your block are the ones you should pay the closest attention to.

Can you trust how the experiment is done?

‘Experimental methodology’ is scientific jargon for how an experiment was done. It covers things such as the treatments used, the amount and type of fertiliser used, the untreated ‘null’ controls, the statistical analysis, the general setup (eg in-vitro lab experiments, pot experiments, field experiments), and all the details like soil type, soil tests, soil moisture, weather for the whole experiment, plant species and cultivar, age when planted, etc.

Determining if the experimental methodology is appropriate is unfortunately where the quagmire gives way to the snake pit. It is surprisingly easy for scientists to set experiments up to get the results they want, and it is even easier for scientists who don’t have the right expertise to set up an experiment that fools them into thinking they have an accurate result.

Then there is the interpretation, because scientists can disagree over what the results mean. Just because a paper has been “published in a peer-reviewed journal” does not mean the information is inviolable. Scientists often undertake ‘meta-analysis’, where they take all the journal papers that have researched a particular topic, then combine the results into one giant statistical analysis. But they often throw out 10-40% of the papers due to invalid methodology, because they consider the results of those trials to be unreliable.
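For readers curious what ‘combining the results’ actually involves, the sketch below shows the simplest form, inverse-variance (fixed-effect) pooling, where more precise trials get more weight. The trial names, effect sizes and standard errors are invented purely to illustrate the arithmetic; real meta-analyses are considerably more involved.

import math

# Hypothetical trials: (name, effect size, standard error).
# All numbers are made up for illustration only.
trials = [
    ("Trial A", 0.30, 0.10),
    ("Trial B", 0.10, 0.15),
    ("Trial C", 0.25, 0.08),
]

# Inverse-variance weighting: a precise trial (small standard
# error) counts for more in the pooled estimate.
weights = [1 / se**2 for _, _, se in trials]
pooled = sum(w * eff for w, (_, eff, _) in zip(weights, trials)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")

Notice that Trial C, with the smallest standard error, pulls the pooled estimate towards itself – which is exactly why a meta-analysis must first throw out trials whose methodology (and therefore whose stated precision) cannot be trusted.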

It is also pretty common for different experiments to give contrary results due to the vagaries of nature and agricultural science. As an example, in the European Union, cultivar comparison experiments have to comply with the 5 × 5 Rule: the comparisons have to be done in at least five locations for a minimum of five years before the data is considered reliable, so you get 25 repeats of the same field experiment.

It takes a lot of scientific training and even more experience to make a good call, and in the end it is still a subjective decision. There is little chance that a layperson can make that judgement – if you want a view on a particular experiment then you need to find an independent scientist experienced in the same specialism, but even then they can only give you their opinion.

At the end of the day, individual papers count for little. It is the amassed results from a large number of experiments, across many years, plus the experiences of farmers and growers using products and techniques for real, that eventually determine if an effect is real or not. Until such broad consensus is built up, caveat emptor applies.

Are they measuring the right parameters?

From a farmer and grower perspective, it may seem pretty obvious what parameters
