Competition and conflict in Victorian waterfowl management

In this paper Associate Professor Graham Hall and Alison Cash from the University of New England question where Victoria is heading with waterfowl hunting.

Field and Game – GAME MANAGEMENT

In the long and proud history of waterfowl hunting in Victoria, 2016 will surely go down as a year of poor decision making and very poor communication.

Much of the confusion can be directly attributed to the way the animal rights groups seduced the politicians and bureaucracy with misinformation, half-truths and downright lies, but some of the confusion also arose from hunters.

For example, during the annual collection of head and wing samples from harvested ducks, we often observed that hunters viewed management, research and monitoring as distinct activities.

‘Management’ was typically viewed as the bureaucratic aspects of conservation: preserving and managing habitats, regulating harvest, and other aspects of government work.

‘Research’, although recognised as important, was often viewed as less important than monitoring, and certainly less important than management: something of a luxury of outside academics, to be performed only if sufficient time and funding permit.

‘Monitoring’ was viewed as a way of assessing the status of populations, communities and ecosystems, but typically was not formally connected to conservation decisions.

Here, we suggest that management, research and monitoring are actually complementary, not competitive, activities. All three are important to successful conservation, and the loss of any one of the three disrupts the other two.

Management is simply taking an action to obtain some desired outcome. It requires a range of alternative actions that can be taken, and specification of an objective that we are trying to achieve. Examples of management include: the application of prescribed fire to increase or improve habitats and, presumably, sustain larger populations; the setting of harvest regulations to provide hunting opportunities; and the declaration and maintenance of game reserves to maintain species diversity and hunting opportunities.

Research is a process of inquiry that includes description of natural systems, but also involves addressing questions about how these systems function. Thus, research could include testing and quantifying the relationships between waterfowl numbers and water levels.

Monitoring involves the observation of various locations over time. It may be simply oriented toward establishing trends in waterfowl numbers, but may also be connected directly to research (by providing answers to testable predictions) or management (by providing feedback about the results of management actions).

Field & Game Australia is involved in all three activities through the waterfowl season-setting process (management), the duck head and wing program (research), and the twice-yearly waterfowl counts (monitoring).

The reality is that uncertainty nearly always confounds a simple decision. That is, the manager can never have 100 per cent certainty that any given decision will result in the desired outcome.

Management uncertainty comes in four basic types: environmental uncertainty, partial controllability, partial observability and structural uncertainty. One basic but important form of uncertainty arises because habitats and populations are influenced by factors that may not be under management control.

For example, managers may decide to declare a hunting season, but an unusually severe summer may occur that results in a lower than predicted number of ducks.

Likewise, even if managers don’t declare a hunting season, favourable factors may cause the population to perform better than predicted.

The influence of factors in the environment that are unpredictable, and that add to the influence of the management decisions, is termed environmental uncertainty. A similar result can occur because the management itself is only partially controllable; for instance, as we saw in Victoria in 2016, the appearance of non-game species of ducks caused much anguish to hunters when bureaucrats closed some Game Reserves.

This is referred to as partial controllability. In addition to these ‘real’ sources of uncertainty, monitoring programs generally will not be able to measure perfectly the system’s response to any management. Especially when we are monitoring abundance and other population or habitat attributes, these measures will usually be based on some type of statistical sample, and thus subject to error. This is referred to as partial observability or, sometimes, statistical uncertainty.

Finally, in addition to all the above sources of uncertainty, current knowledge is based on past observation and research. However, this past knowledge is seldom completely accurate, and is often very incomplete. Unless we are absolutely certain about the basic mechanisms that determine our system, we should be honest and admit that our knowledge of how the system works is not perfect. We refer to this last source of uncertainty as structural uncertainty. Seen in this light, structural uncertainty is both a research issue (it occurs because our understanding is imperfect) and a management issue (resolving or reducing it leads to better decision making).

Although there are several possible ways of dealing with uncertainty, ignoring uncertainty can have severe consequences. Failing to deal with uncertainty may lead to a false sense of security in decision making and ultimately compromises our ability to reach our conservation objectives. But the converse is also true: if we allow uncertainty to paralyse or stop decision making, this too will lead to poor conservation outcomes.

An excellent example of poor decision making because of uncertainty occurred in Victoria in 2016, when the appearance of protected blue-billed ducks resulted in the hasty closure of Lake Elizabeth Game Reserve to hunting.

Instead of allowing the lake to remain open to hunting, and monitoring any effect of hunting on disturbing non-game ducks, the lake was closed, thus depriving managers of valuable information for future management. Alternatively, an exclusion zone could have been established around the deep-water part of Lake Elizabeth where the blue-billed ducks had congregated and where hunting or disturbance would have been specifically prohibited.

Such adaptive and proactive management would have been preferable to the heavy-handed response of a hunting ban and closure of the entire Lake Elizabeth State Game Reserve.

Some types of uncertainty, such as environmental uncertainty, are essentially impossible to control. These must be considered in decision making, but in all likelihood cannot be reduced. Others can be at least partially reduced: better field techniques may reduce (but likely not eliminate) partial controllability, and better survey methods may reduce partial observability.

The use of unmanned aerial vehicles to monitor waterfowl populations is a good example of better, safer and cheaper survey methods.

Special attention should be devoted to structural uncertainty, because it is the one source of uncertainty that 1) is very frequently ignored, and 2) can be reduced through time via an adaptive approach.

Before discussing adaptive approaches, two other major approaches to reducing structural uncertainty deserve mention: the public is likely more familiar with them, they have been used more frequently, and they continue to have merit.

Experiments, which are defined as involving control, randomisation and replication of independent subjects, are the “gold standard” of scientific inquiry. Experiments are ideally capable of reducing uncertainty very quickly, and thus are attractive. However, realistic experiments at any meaningful scale are difficult or impossible to conduct in most conservation systems.

In addition, because experiments are directed at scientific hypotheses rather than management objectives, they are not necessarily efficient means of reducing uncertainty for decision making.

In contrast to experiments, retrospective studies are based on an examination of patterns in data that have been collected in the past; thus they are analysed “retrospectively”.

These often can provide a good initial basis for the construction of alternative hypotheses and predictive models used in conservation.

Without denying the importance of both experimentation and retrospective analysis, we advocate a third approach, called adaptive resource management (ARM), as being generally more suited to conservation decision making.

ARM can be implemented in virtually any resource system, and has the advantage of being directed at meeting the conservation objective, not at meeting a scientific objective. ARM has been the method advocated for Victorian waterfowl management since 2009, with the publication of Developing a sustainable harvest model for Victorian waterfowl (Arthur Rylah Institute for Environmental Research Technical Report Series No. 195).

Hunters are entitled to ask why, after the passage of seven years and the establishment of the Game Management Authority, ARM has not yet been implemented in Victoria.

ARM consists of three essential components. The first is explicit predictions of the effect of management actions on population size and harvest under two or more models. These models provide the means for comparing the relative support for different management actions. Here, structural uncertainty is expressed in the form of alternative models. Predictions are made under each alternative model, weighted by the relative support for the model, and decisions then are made based on comparing the predictions associated with each management action.
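The idea of weighting predictions by model support can be sketched numerically. The sketch below is purely illustrative: the two population models, their growth rates, their weights and the candidate harvest rates are invented for the example, and are not drawn from any Victorian data or from the Arthur Rylah Institute model.

```python
# Hypothetical sketch: comparing harvest actions under structural uncertainty.
# Two alternative models of how a duck population responds to harvest, each
# carrying a weight that reflects its current relative support.

def predicted_population(model, current_pop, harvest_rate):
    """Predict next-season population under one model (illustrative dynamics)."""
    return current_pop * (1 + model["growth_rate"]) * (1 - harvest_rate)

models = [
    {"name": "strongly density-limited", "growth_rate": 0.10, "weight": 0.6},
    {"name": "weakly density-limited",   "growth_rate": 0.30, "weight": 0.4},
]

def weighted_prediction(current_pop, harvest_rate):
    """Average the models' predictions, weighted by their relative support."""
    return sum(m["weight"] * predicted_population(m, current_pop, harvest_rate)
               for m in models)

# Each candidate action gets one support-weighted prediction; the decision is
# then made by comparing these predictions against the management objective.
current_pop = 100_000
for rate in (0.0, 0.1, 0.2):
    print(f"harvest rate {rate:.1f} -> predicted population "
          f"{weighted_prediction(current_pop, rate):,.0f}")
```

The point of the sketch is that no single model is trusted outright: each action is evaluated against all plausible models at once, in proportion to how well each model is currently supported.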

Sequential decision making is another requirement of ARM, and involves tracking a population or habitat through time and making decisions based, in part, on the observed status of the population or habitat condition.

The set of management objectives and actions is usually constant, so that the same decisions are continuously revisited. In theory, this is the season-setting process in Victoria, whereby duck numbers and waterbodies are monitored.

Unfortunately, politics and a weak bureaucracy often inhibit such an obvious and crucial process!

Sequential decision making need not take place on an annual basis, and can occur in space as well as in time. The spatial case is particularly useful in situations where decisions will not be revisited at a particular site on a short time scale but are made over a number of sites (for example, the whole of Victoria).

Information feedback, in this sense, is used to improve future decisions at sites that have yet to be managed. Regardless of whether sequential decision making is through space or time, the key is to provide feedback on the effects of management actions in a timely manner to improve future decision making.

Monitoring is the third required component of ARM. It provides information that is used to resolve key uncertainties. To resolve this uncertainty, it is important to determine the model that best approximates the system dynamics and then update the model to reflect newfound knowledge.

Operationally, this is accomplished by comparing model predictions to subsequent observations of the population size, and shifting support toward the models that predicted well. This is important, both because it gives a more honest picture of the rates of learning under ARM, and because it helps to direct research and monitoring priorities for reducing uncertainty.
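This updating step can be sketched as a simple application of Bayes' rule. Again, every number below (the predictions, the observed count, the survey error) is hypothetical and chosen only to make the mechanics visible; real programs would use properly estimated survey variances.

```python
# Hypothetical sketch of the ARM updating step: after a decision, the observed
# population count is compared with each model's prediction, and model weights
# are revised in proportion to how well each model predicted.

import math

def likelihood(predicted, observed, sd):
    """Normal likelihood of the observed count given a model's prediction."""
    z = (observed - predicted) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def update_weights(weights, predictions, observed, sd=5_000.0):
    """Bayes' rule: new weight is proportional to prior weight x likelihood."""
    posterior = [w * likelihood(p, observed, sd)
                 for w, p in zip(weights, predictions)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two alternative models predicted different post-season counts (made-up
# figures); the survey result sits much closer to the second prediction,
# so support shifts toward the second model.
weights = [0.5, 0.5]
predictions = [95_000, 110_000]
observed = 108_000
new_weights = update_weights(weights, predictions, observed)
print([round(w, 3) for w in new_weights])
```

Repeated over successive seasons, this is how ARM "learns": the weights carried into the next season's decision reflect everything the monitoring program has revealed so far.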

Although ARM is a useful approach to managing gamebirds, to our knowledge ARM has, curiously, only been formally applied to waterfowl harvest decision making in non-Australian jurisdictions.

The failure to implement ARM in Australia is partly due to institutional resistance, but we think it is also attributable to widespread misconceptions.

Perhaps the most common misunderstanding is that ARM is research. First and foremost, it is management.

The primary objective of ARM is to make the best decision with respect to management objectives. Learning occurs as a by-product of management rather than experimentation. In fact, experimentation can be suboptimal because the population or habitat can be driven to a state that is undesirable. In ARM, the goal of learning is to reduce the uncertainty that has the greatest direct impact on decision making. Thus, learning is targeted on those key components that result in improved decision making and, presumably, greater gains.

Another common ARM myth is that it is too risky. We contend that natural resource decision making is inherently risky, and decision making is always fraught with uncertainty. Hence, all management actions (or inactions) can have unintended and unanticipated consequences. Uncertainty can be reduced by the acquisition of greater knowledge through study and experimentation, which can take considerable time. Management decisions, however, should not be delayed until sufficient knowledge has been acquired. This is the fallacy of the Precautionary Principle, which bureaucrats often hide behind, claiming that without perfect knowledge no management decision can be made.

Beliefs that ARM is costly and complicated are also unfounded. Given the right attitudes, most agencies can already perform most of the tasks required for ARM, so it should not require additional expenditure.

Management, research and monitoring are all crucial for natural resource conservation, and the loss of any one of these elements reduces the effectiveness of the others. The elimination of research often results in stagnation, where new scientific ideas do not become part of management. This also perpetuates a false separation of “management” from “science”, thereby reducing the effectiveness of the former and eliminating the context for the latter.

Similarly, the elimination of monitoring reduces the effectiveness of management because decision makers no longer have a basis for judging the effectiveness of different management actions. Without the feedback provided by monitoring, there is no ability to assess model predictions with data, which eliminates the potential for learning about how systems operate.

Management, research and monitoring programs should be viewed as mutually supportive of conservation goals, where the loss of any one of the three is disruptive to the remaining two.

Management explicitly includes the goals of the decision maker and other stakeholders in evaluating the possible consequences of any potential action. Research allows us to explore the possible consequences of management actions, which can be used to compare alternatives and select the most appropriate action. This leads to a decision that appears most likely (taking into account uncertainty) to achieve the desired outcomes. Monitoring provides information to assess whether stated goals are being achieved or a divergence has occurred. It also provides information feedback that allows the predictions of decision models to be tested, and uncertainty to be reduced through time.

This “closed loop” process, known as ARM, formally integrates management, research and monitoring for more effective natural resource decision making.

All three of these legs (management, research and monitoring) are essential to sound conservation. Removal of any one of the legs is disruptive to conservation, and ultimately counterproductive. In particular, action-oriented management is sometimes pitted against research and monitoring in the competition for limited funds. This sets up a false choice, a bit like asking whether children need food or education in order to become productive adults.

In contrast, under ARM, research and monitoring have explicit value for their contributions to decision making. Conversely, we “learn by doing”, with management actions providing the grist for the testing of critical assumptions, ultimately reducing uncertainty and improving decision making.

Associate Professor Graham Hall observing ducks at Lake Elizabeth near Kerang
