Gonski 2.0: A controlled flight into terrain

Dr Ken Gannicott, AQ: Australian Quarterly

There are many reasons why aircraft crash and burn. Often the explanation lies in a combination of mechanical problems and the response of the flight crew. One of the most baffling types of crash is a controlled flight into terrain. This occurs when an aircraft not experiencing any mechanical problem, and under complete control by the pilot, is flown into the ground. The latest Gonski Report (Through Growth to Achievement, Report of the Review to Achieve Educational Excellence in Australian Schools, and inevitably christened Gonski 2.0) provides a spectacular example of a crash and burn after flying straight into terrain.

The release of the Gonski 2.0 report in early 2018 provoked a chorus of criticism, much of it derisive, itemising its reliance on platitudes and clichés and its failure to address the terms of reference in any meaningful way.

Particularly baffling is that the Review was established with everything in working order. It had just one job, which was to provide advice on how funding should be used to improve student achievement. It was in the blissful position of not needing to argue the case for extra funding because $24.5 billion over ten years had already been committed by government. The ‘pilot’ enjoyed enviable public esteem. And, not least, there is now an extensive literature, drawing on evidence from high-performing countries, on the policies required for improved educational performance.

What, as they say, could possibly go wrong?

Then-minister Simon Birmingham promised that the Gonski recommendations would be implemented. Little of substance has happened since then, but it would be rash to conclude that Gonski 2.0 has been shelved and can be safely left to collect dust.

The Australian Curriculum, Assessment and Reporting Authority (ACARA) has started work on a proposed new curriculum that appears heavily influenced by Gonski 2.0. Agreement on a new method of funding Catholic and independent schools has prompted Opposition promises of an additional $15 billion for public schools, but these ‘funding wars’ have been unaccompanied by any evidence that the money will promote better performance.

These developments provide renewed impetus for checking whether Gonski 2.0 was on the right track for improved achievement by our schools. We start by summarising the worrisome levels of student performance that provided the rationale for Gonski 2.0. We then review the international evidence on school performance and follow this by asking whether Gonski 2.0 was consistent with that evidence.

Australia's school performance: same old story but a fresh perspective

Australia's poor academic achievement is neither fake news nor mere politicking over funding. It is a well-known story that has been given fresh and dramatic reinforcement in new work from the World Bank.1 We see from Figure 1 that in quantitative terms Australia is performing well. An Australian child can expect to complete 13.8 years of schooling by his/her eighteenth birthday. This quantity of schooling is higher than would be predicted for our income level and it puts us well into the top rank of countries.

The story becomes less benign when we turn to the issue of how much our children learn in school. A major feature of the World Bank's recent work is the production of a globally comparable database of learning outcomes. Conversion factors are used to put the considerable number of international and regional assessment tests on a common, or harmonised, scale.2 Figure 2 shows these harmonised scores for the same sample as Figure 1.

We can see that, in contrast to its high quantitative rank, Australia's performance in the quality of learning puts it scarcely better than middle of the pack.

Australia's position in a league table of assessment scores may be of no great importance in itself. No one suggests that standardised assessments such as PISA fully measure the multiple aspects of learning outcomes. Psychometrically designed achievement tests have, however, become very sophisticated and can reasonably be regarded as a proxy for learning outcomes.

Whether we view learning mainly in educational and social terms, or in the economic terms of human capital and future productivity, there is not much point in sitting behind a school desk if learning falls short. We can measure this directly by combining Figures 1 and 2 to estimate expected years of learning-adjusted schooling.

From Figure 1, children in Australia can expect to complete 13.8 years of schooling. When adjusted for relative performance on international achievement tests, the amount of effective schooling drops to only 11.6 years, a learning gap of 2.2 years. This is equivalent to saying that Australian students lose more than 2 years of learning due to the inadequate quality of their schooling, effectively learning less than students in Asia and Europe, despite similar years of education.
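For readers who want the arithmetic, the figures can be reconstructed on the assumption that the World Bank's learning-adjusted years of schooling (LAYS) measure is being used, which scales expected years of schooling by the ratio of a country's harmonised test score to a benchmark score of 625. On that assumption, the reported 13.8 and 11.6 years imply a harmonised score for Australia of roughly 525; both the benchmark and the implied score are inferences from the figures above rather than numbers quoted in the report.

\[
\text{LAYS} = \text{Expected years} \times \frac{\text{Harmonised score}}{625}
\qquad\Rightarrow\qquad
13.8 \times \frac{525}{625} \approx 11.6 \ \text{years.}
\]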

Severe practical consequences follow from the disparity between our quantitative and qualitative performance. There is clear evidence that learning outcomes, as measured by such tests, matter substantially for economic well-being.3

The sources of performance improvement

There is no mystery about the factors conducive to better educational performance. Since the ‘effective schools’ literature of the 1980s, there has been a steady accumulation of evidence drawing upon international experience of “what works”. There is no uniquely correct recipe, but a core of agreed performance evidence has emerged.

This evidence is widely accessible, whether it is pathbreaking academic work on the importance of cognitive skills by Hanushek and Woessmann, management insights from consultants McKinsey & Co, or the OECD's statistical analysis drawing upon the massive PISA database. Six key issues make up this core evidence.

1. Effective teachers. Teachers are the most important factor by which policy makers can directly improve student achievement. Today, all teachers in OECD countries are qualified, but just like any other occupational group there is a distribution of effectiveness. The difference between good and bad teachers is very large. On UK evidence, during one year with a very effective maths teacher, pupils gain 40% more in their learning than they would with a poorly performing teacher.5 The effects of high-quality teaching are especially significant for pupils from disadvantaged backgrounds. Over a school year, these pupils gain 1.5 years' worth of learning with very effective teachers, compared with 0.5 years with poorly performing teachers. In short, for pupils from a low socio-economic background the difference between a good teacher and a bad teacher is a whole year's learning.

2. Adaptive instruction. Terminology varies, but the basic concept is direct instruction, sometimes called explicit teaching. With explicit teaching, the teacher shows students what to do and how to do it. Recognising that learning is a cumulative and systematic process, the teacher decides the learning intentions and success criteria, demonstrates them, and evaluates whether students understand what they have been told. To this basic concept must be added whole-class teaching, which self-evidently means that a teacher teaches an entire class of children at one and the same time. Whole-class teaching is typically delivered through direct instruction. Adaptive instruction describes the process of incorporating individual help into explicit, whole-class teaching when a student has difficulties understanding a topic or task. Kirschner, Sweller and Clark insist that “the empirical research [supporting explicit teaching is] overwhelming and unambiguous.”6 This conclusion is confirmed in the OECD's list of 38 factors associated with science performance on the 2015 PISA tests: adaptive instruction and direct instruction rank second and third on that list.

3. The importance of cognitive skills. Parents have always known that basic skills are crucial, and economists have long known that a country's development is connected to the skills of its workers. The problem is that we did not have research-based evidence about the contribution of basic cognitive skills. There was, understandably, a tendency to think that raising enrolment rates or keeping children in school for longer would be sufficient. Recent research has filled that gap. Evidence from Hanushek and Woessmann (a precursor to the World Bank's work on the Human Capital Index cited earlier) is both clear and highly influential.7 From East Asia, with high educational performance and high economic growth, down to sub-Saharan Africa with low scores on each measure, there is a clear and consistent correlation between mastery of basic cognitive skills and educational/economic performance.

4. High expectations: High expectations are linked with higher performance. Most teachers would assert that they already have high expectations of their students, but research demonstrates that in practice there is a wide range of attitudes. A persistent research finding is that students from disadvantaged backgrounds may achieve less than their potential due to low expectations of their ability.

5. Measurement of effective learning and feedback: Continuing data-driven analysis, assessment and evaluation often underpin school performance. They are critical in monitoring the impact of policy and they provide a basis for teacher feedback (formative assessment) to the student.

6. Collaboration: Collaborative practices between teachers within and across schools are important features of many high-performing schooling systems. In countries such as Finland and Japan teachers are encouraged to work together, through joint lesson planning, observing each other's lessons, and helping each other improve. In China, teaching and development teams, or Jiaoyanzu, work together within and across schools to plan how the curriculum will be taught, to share learning, and observe each other's practice.

Performance evidence in a nutshell

These core factors are not separate items on a shopping list: they overlap and reinforce each other. The NSW Centre for Education Statistics and Evaluation points out that providing timely and effective feedback to students (item 5 above) is another element of explicit teaching (item 2). Focusing students' attention on the task at hand and the way they are processing that task are two effective types of direct feedback.

Similarly, being explicit about the learning goals of a lesson and the criteria for success (item 2) gives high expectations (item 4) a concrete form that students can understand and aim for. As a further example, the literature indicates that teachers are more likely to make effective use of student data (item 5) when working collaboratively (item 6) than when working alone.

No doubt, some highly effective teachers (item 1) are born teachers, but many more can be trained to become effective by adopting better pedagogical practices (item 2) or learning from their colleagues (item 6).

Once we allow for such interactions, it becomes clear that the core performance factors offer an integrated and coherent model of education. It is a model consisting of objective achievement standards, high expectations of all students, a focus on cognitive skills, and explicit teaching by effective and collaborating teachers using direct and adaptive instruction. It is a model grounded in the international evidence and experience.

Performance evidence and the Gonski approach

The core performance factors and their interaction should have provided baseline evidence for Gonski 2.0, but virtually no such evidence appears in the report. Instead of at least reviewing the evidence, Gonski makes the extraordinary claim (page ix) that there is “a lack of research-based evidence on what works best in education”. The consequence is that in place of a judicious discussion of the international evidence, Gonski 2.0 offers a model in which (to paraphrase the line of argument):

1. The major constraint is the rigidity of curriculum delivery because all students receive the same fixed year-level diet of knowledge, skill and understanding.

2. Lockstep delivery of the year-level based curriculum makes it difficult to develop teaching programs for students who are above or below year-level expectations. Australian students from low socio-economic backgrounds are less likely to have growth mindsets, that is, a belief they can succeed if they work hard. At the other end of the spectrum, some students may not be challenged enough.

3. Many students in our schools are not realising their full potential because our school system prevents teachers from putting individualised, growth-focused teaching and learning into practice.

4. Therefore (so the argument runs) Australia should move from a year-based curriculum to a curriculum expressed as learning progressions independent of year or age. Instead of content and achievement standards, Australia would adopt a structured roadmap of long-term learning progress.

5. To support this, a new online formative assessment system would give teachers the tools with which to identify individual learning growth.

6. Shifts in technology and jobs are changing the balance of the skills our students need to develop, so there should be increased emphasis on general capabilities in the curriculum.

Some of this argument is incontestable. It has long been understood that socio-economic differences have a major bearing on academic performance. Teaching a class of widely differing abilities is very demanding. These strengths acknowledged, for the most part the Gonski model is either wrong or not supported by any evidence.

Teachers and pedagogy

Aside from much verbiage (“teachers deserve greater recognition and higher esteem”) Gonski has little to say about the role of teachers or pedagogy in explaining Australia's low performance. Despite extensive international evidence about the role of teachers and pedagogy, the report does not explore the issue of alternative approaches to teaching. This leaves the report advocating a gee-whiz technological fix for the assessment system, but nowhere outlining the pedagogical approaches to be used for the ensuing interventions.

This is a serious omission, because in this country we persist with a teaching approach that is known to be ineffective. For many years inquiry-based teaching has been the predominant approach in Australia's schools.

Inquiry learning is a constructivist, student-centred approach, with the teacher as facilitator and the students themselves making meaning. Guided inquiry may set parameters for class activity, but the essence is that students are actively involved, often through small-group activity, in constructing their own understanding and learning.

It might be thought that a small-group, student-centred approach was ideally suited to engaging groups of varying abilities in a class, thereby extracting maximum performance. Nothing could be further from the truth. Inquiry-based learning ranks near the bottom (34th out of 38) of the OECD's performance-enhancing factors, and in fact has a strong negative association with performance scores.

Gonski is simply wrong in asserting that “it is impractical to expect that the same curriculum content can adequately cater to each student's different learning needs”. Whole-class direct instruction has been the dominant style in most Asian countries, and a major feature of East Asian test results is that they do not in general have the long tail of non-performing students seen in so many other countries.

The spectacular results achieved by Noel Pearson with direct instruction for indigenous students are entirely consistent with the international evidence. That evidence is unambiguous: whereas adaptive and direct instruction rank near the top in their measured impact on educational effectiveness, inquiry-based teaching ranks near the bottom.

There is abundant evidence that socio-economic (dis)advantage is a major determinant of educational performance. But hand-wringing about social disadvantage achieves nothing. There is not much that any of us can do to change our parents, and planned policy changes in socio-economic structure take years to materialise. Direct and adaptive instruction are effective means of doing something about the problem – indeed, on the OECD's ranking they are by far the most important means of doing something useful to overcome social disadvantage.

Mention direct instruction in any faculty of education and the tea-room will erupt as though you are advocating a return to Dotheboys Hall and Wackford Squeers. With adaptive and direct instruction ranking second and third in the OECD's list of 38 factors associated with science performance, we are long past the time for a grown-up discussion of explicit instruction in Australia.

General capabilities

The recommendation to emphasise general capabilities rather than specific cognitive skills is back to the future with a vengeance. In 2013, the Draft National Curriculum had to be rewritten because it consisted largely of unsupported rhetoric about general capabilities and cross-curriculum themes.8

It was not difficult to see that a national curriculum which filtered maths, science and literacy subjects through a perspective of general capabilities would not come close to giving our students the necessary level of achievement. Zombie-like, general capabilities have risen from the dead in Gonski 2.0, with claims that “general capabilities need to be at the core of our curriculum” (page 38).

No sensible person wants an education system that lacks, say, the study of history, or which pays no attention to wider personal development in drama, music and art. The importance of critical and creative thinking, personal and social capability, and ethical understanding is well understood. That much accepted, there is unambiguous evidence (cited earlier as item 3) that what counts for our long-term well-being is high performance in the mainstream subject areas.

Subject content knowledge is sometimes dismissed as rote learning and set in opposition to critical thinking, but general capabilities need to build upon specific subject knowledge, not replace it. Critical thinking processes depend on some knowledge of the topic. Schwartz has pointed out, waspishly but accurately, that “children cannot learn to be critical thinkers until they have actually learned something to think about”.9

This is exactly the approach taken by Singapore in its recently announced reform package. Singapore has long had a reputation for academic excellence, but the system is not known for encouraging critical thinking processes. Singapore now wants to produce more well-rounded students.

Crucially, this will not be at the expense of continued high performance in specific subjects: in some grades students “will be exposed to new subjects and/or higher content rigour and expectation”.10 Despite these requirements, students will have more time for self-directed thinking and to “develop 21st century competencies” because a substantial reduction in the number of school-based assessments and high-stakes examinations will make available much class time presently taken up with cramming for the tests.

This careful balancing of assessments, specific subjects and general capabilities, taken from a position of great educational strength, is a far cry from waffle about giving general capabilities pride of place in the Australian curriculum. Doing so will further reduce our students' achievement.

A new system of formative assessment

There is convincing evidence that data-driven assessment and feedback are vital for student performance. There can of course be too much testing as well as too little. There are indications that Australian parents welcome both the diagnostic information about their child and the school performance data provided by NAPLAN, but there is constant debate.

The United States and Israel have reduced the amount of testing and, as we have seen, Singapore is following suit from 2019. The problem is not so much the frequency of testing as such: the dilemma is that assessments often do double duty, partly as formative assessment for each student but also as high-stakes performance indicators for each school.

It follows that the frequency of assessments should depend on a judicious appraisal of the evidence, so it is extraordinary that Gonski proposed a new and more elaborate formative assessment tool based on no evidence at all. This new tool would switch from NAPLAN's measurement of achievement to measuring a student's learning progression, or growth.

There is not a shred of evidence that the rigidity of curriculum delivery is the major explanation of low academic performance and that assessment geared to more flexible learning progressions will fix the problem. Gonski 2.0 nowhere poses, let alone answers, the question why many Asian and European countries operate a traditional curriculum based on year-by-year assessment, yet score in the top 10 on the 2015 PISA.

Putting ‘snapshot’ achievement data online is one thing. Assessment of growth, or learning progressions, for the entire curriculum is quite another. Having it useable by teachers is yet another. Assessment scales can be hard to interpret: if Year 5 students in one school score 50 scale points below students in another school, this means very little to most teachers or parents.

Comparisons are further complicated because assessment scales are non-linear: in general, students show greater increases in scores in earlier rather than later years of schooling. Comparing the relative progress of different groups of students can be misleading unless we know the starting point for each group.

There are ways to solve these technical issues. It has become standard in the research literature to measure student progress by converting assessment scores to equivalent years of progress. The Grattan Institute has used this technique with NAPLAN data, but a glance at the technical calculations demonstrates that this is indeed a research tool.11 Its interpretation needs more statistical finesse than the average school teacher or parent is likely to possess. And, like most statistical calculations, it works well when we compare groups, whereas measurement errors limit its applicability to individual students.
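To see what ‘equivalent years of progress’ involves in practice, the following is a minimal sketch of the general idea, not the Grattan Institute's actual method: a scale score is mapped onto a reference curve of typical scores by year level, and progress between two tests is read off as the change in equivalent year level. The reference medians used here are invented for illustration.

```python
import numpy as np

# Illustrative (made-up) median scale scores by year level. Real reference
# curves flatten at higher year levels, which is the non-linearity noted above.
REFERENCE_YEARS = np.array([3.0, 5.0, 7.0, 9.0])
REFERENCE_MEDIANS = np.array([420.0, 490.0, 540.0, 580.0])

def equivalent_year_level(score: float) -> float:
    """Map a scale score to an 'equivalent year level' by linear interpolation
    along the reference curve (scores beyond the curve are clipped to its ends)."""
    return float(np.interp(score, REFERENCE_MEDIANS, REFERENCE_YEARS))

def years_of_progress(score_then: float, score_now: float) -> float:
    """Express a gain in scale points as equivalent years of progress."""
    return equivalent_year_level(score_now) - equivalent_year_level(score_then)

# The same 50-point gain counts for more 'years' later in school,
# because the reference curve flattens.
print(round(years_of_progress(440.0, 490.0), 2))  # about 1.4 years
print(round(years_of_progress(530.0, 580.0), 2))  # about 2.4 years
```

The point of the sketch is simply that ‘years of progress’ only has meaning relative to a reference curve, which is why the starting point of each group matters when comparisons are made.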

The Gonski vision of teachers arriving in the classroom and jumping nimbly online to look up curriculum-wide “achievement data calibrated against learning progressions, to diagnose a student's current level of knowledge, skill and understanding, [and] to identify the next steps in learning to achieve the next stage in growth”, has good entertainment value. It is, for the most part, fantasy.

It goes without saying that a student who is not academically gifted but who is doing his/her damnedest to make progress needs to be encouraged and supported. This is quite different from substituting progress for objective attainment standards across the entire system. Learning progressions can be characterised as a proposal to supplant objective standards of attainment with the notion of the personal best.

Nowhere in the report is there any recognition of the paradox that a focus on relative progress can worsen measured performance. Relative measures can lead you not to expect enough of your students by accepting a ceiling on achievement that is far below what is possible. Low expectations then become self-fulfilling.

Salvaging the wreckage

So where does the Government go from here? A further lengthy inquiry is probably not the answer. Perhaps we could seek world's best practice by holding a competition. That sounds flippant, but the marketing slogan writes itself: "We sought the world's best for the architecture of the Opera House and got one of the great buildings of the 20th century; now we want the best educational architecture for the new century".

One vital issue needs to be considered in any future review of the allocation of school funding. Gonski 2.0 seems to have depended heavily on submissions from the State departments of education, with many of its proposals apparently originating in the State administrations. It is entirely proper that State thinking should figure prominently, but the problem lies in what is revealed about that thinking.

As Hewett has noted, Gonski 2.0 has unwittingly revealed that most State departments of education remain “devoted to education fads long since discarded in other countries”.12 Proposals for general capabilities, learning progressions and a new system of formative assessment all appear to be based on State submissions.

This is where the single best recommendation from Gonski 2.0 comes in. A research and evidence institute to provide practical advice for teachers, school leaders and decision makers to drive better practice should be established as urgently as possible. It should operate at the national level.

It is clear from the report that we cannot rely on State-based administrations to develop the necessary policies for evidence-based performance improvement. At State level, only the NSW Centre for Education Statistics and Evaluation ‘has form’. The model could be the Productivity Commission in Canberra, whose recommendations are not always accepted but which has a reputation for analytical, evidence-based work.

Finally, barely tackled in the Review is the question of how we actually deliver programs for performance improvement. It's clear from the core performance factors that improvements must be made at the school level, with a focus on teachers and pedagogy. That much is obvious, but there is an arithmetic ‘wrinkle’.

In 2017 there were some 282,000 full-time equivalent teachers in Australia. Annual entry into the profession varies, but between 2016 and 2017 an additional 5,600 were employed. Improvement in teaching methods, such as adaptive instruction or phonics for reading, will be painfully slow if we rely on changes to what is taught in pre-service teacher education. Without major investment in professional development for existing teachers, it will take many years for proven better ways of teaching to percolate through the system.
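The scale of that wrinkle can be illustrated with a rough calculation, assuming for simplicity that better methods reach only the roughly 5,600 additional teachers employed each year. Actual annual recruitment, which also replaces teachers who leave, is higher, so the figure overstates the lag, but the order of magnitude is the point.

\[
\frac{282{,}000 \ \text{teachers}}{\approx 5{,}600 \ \text{additional teachers per year}} \;\approx\; 50 \ \text{years.}
\]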

Australia already offers a wide variety of professional development courses, but survey evidence indicates that Australian teachers are less likely than teachers elsewhere to report favourably on the classroom benefits. From the variety of courses offered, it seems likely that much professional development in Australia lacks focus and has little relevance to the core business of performance-oriented classroom teaching.

An important element of the additional expenditure promised by Canberra should be a reform of professional development, making such development the umbrella for updating existing teachers on adaptive instruction, collaboration with colleagues, the importance of cognitive skills, phonics, classroom management, and inculcating evidence-based approaches.
