AQ: Australian Quarterly

Technology at the crossroads

- PROF RAFAEL A. CALVO AND JOHN C. HAVENS

We engineers pride ourselves on solving problems. But what happens when we don't know the questions? We are great at minimising one cost function and maximising another. But what if those are the wrong functions to measure? As our technologies change society, these dilemmas force us to consider our values, our ethical frameworks, and the design methodologies that can genuinely decrease harm and increase wellbeing.

The Good Life At Risk

The board room was already full when I walked in. Someone went out to get an extra chair while I listened to the round of introductions. I knew most of the academics in the room, but did not know much about their interest in the fourth industrial revolution or in engineering ethics, the theme of our meeting.

The specific goal of the meeting was to prepare a response to Australia's Human Rights Commissioner, who was seeking comments on the Commission's human rights and technology issues paper.¹ Technologies, like the ones we were about to discuss, impinge on our perceived rights², for example around privacy, free speech, workplace technologies and government surveillance.

Not everyone talked. Some had probably been asked to be there by their managers, but the majority felt a sense of urgency uncommon among academics. Most of us explained what brought us there. There was a manufacturing researcher who described how 3D printing was being taken to an industrial scale, and how they could not predict what manufacturing would look like in 10 years.

Then there was a biomedical engineer who described how brain stimulation could be used to suppress the secretion of adrenaline, and therefore the fear response that stops soldiers from going into a battlefield (the same emotion that many of us were feeling).

Then it was my turn to describe how social networks have been used by companies and governments to manipulate the attention of people like us, pushing both commercial and political agendas, and how companies like Cambridge Analytica were putting at risk the core democratic values of the Enlightenment.

In all these cases, technologies were reshaping the concept of what constitutes a good life. They were redefining work, basic emotions, and even volition. These are not 'academic' questions; they have daily consequences for citizens, and for engineers like us.

Technologies are designed to shape what we spend time on, and the outcome is that we spent one billion human hours watching YouTube videos in 2017, and millions of hours sharing selfies and making lip-sync videos on Musical.ly. Technologies are transforming, or altogether eliminating, jobs (according to some estimates, 800 million jobs will disappear by 2030³).

No human activity is safe from disruption: even caring for others could be the purview of robots (consider that in Japan, 80% of the elderly will be taken care of by robots in 2020⁴).

As reflective professionals and engineers we should be asking: is this the sort of world we want?

I think the academic who called the meeting, a world leader in robotics, did not expect the general sense of unease many of us felt. He told the story of a public panel where he had discussed his vision for the future alongside a journalist and a philosopher. At the beginning of the event the host asked the audience how many thought that AI would make the world a better place. Roughly half raised their hands. An hour into the event, once the three speakers had presented their arguments, the host asked again. This time only twenty percent raised their hands.

My colleague thought it had been his fault – a poor presentation of the benefits. But I was there and knew he had done an excellent job presenting his views. Could it be that it was not him, but us (engineers in general), who lack a compelling description of 'the good life'? Could it be that our vision of what technology can provide no longer satisfies people's ideas of what they want?

As technologies become part of our lives, their implicit values (including those of their designers) raise moral dilemmas. Should technology be designed to drive us away from pain and towards the hedonic pleasures offered by the market economy? Or should it be designed to support what Aristotle called the eudaimonic life, where the good life consists of realising one's potential?

When I (Rafael) wrote Positive Computing: Technology for Wellbeing and Human Potential, we could already see technologies having positive and negative effects on the human psyche. We discussed a new discipline concerned with how technologies could (and should) be designed to support psychological wellbeing. That is still the case.

Design engineers cannot leave the unintended impacts of technologies to chance. In the same way that we design products to be safe and to respect our physical health, we need to design them to respect our psychological health, and that of our environment.

Engineers are often stereotyped as naïve technophiles. Of course some are, but the actual perspectives of most engineers do not reflect this attitude. Colleagues from around the world are expressing their concerns, questioning long-standing ideas, and raising alarms.

In such situations it may seem easy to despair, but we can also find strength in numbers, with a growing awareness that emerging technologies require a new level of due diligence to protect and increase the wellbeing of humanity at large.

What has become increasingly clear is that the definitions of 'good' and 'bad' are more complicated than we might expect. Anyone looking only at today's media would think that moral norms vary amongst age groups, political parties, even genders. So what are the universal values that should drive our technology design?

Reframing Design in the Age of The Algorithm

One way to answer this question is through open, multidisciplinary and multistakeholder debate. This is what the Institute of Electrical and Electronics Engineers (IEEE) has been doing since 2016 in relation to the ethics of autonomous and intelligent systems.

IEEE is the largest professional organisation of its kind. Over 420,000 engineers from 160 countries are members, who go to IEEE conferences, participate in forums, and read (and write) articles in its many journals.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems⁵ (A/IS) was launched in April 2016 to move beyond both the paranoia and the uncritical admiration surrounding autonomous and intelligent technologies. Its goal is to align technology development and use with ethical values, advancing innovation in a way that truly serves humanity while diminishing fear in the process.

The IEEE Global Initiative also aims to incorporate ethical aspects and values relating to human wellbeing in ways that may not automatically be considered in the current design and manufacture of A/IS technologies.

This is why the Mission Statement of The IEEE Global Initiative is, "to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity."

By training all engineers and manufacturers to utilise applied ethical frameworks (or what is often called 'values-based design') before projects are sent to be developed, progress will not be measured only in terms of materialistic criteria, but can include the intentional prioritisation of individual, community, and societal flourishing, as measured by both subjective and objective criteria.

The discussions of the IEEE Global Initiative, and of the IEEE P7000™ Standards Working Groups that it inspired⁶, are open to the public or any experts who wish to join. In this process entrepreneurs, psychologists, sociologists and philosophers are sharing their expertise with engineers, data scientists and other engineering stakeholders, moving the overall work from 'principles to practice'.

The IEEE Global Initiative has several outputs, including the creation and iteration of a body of work known as Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, and the identification of multidimensional indicators of wellbeing. Ethically Aligned Design (EAD) is a Creative Commons document, so any organisation can utilise it as an immediate and pragmatic resource.

Version 1 was released in 2016 and Version 2 in 2017; both were released as Requests for Input and received over five hundred pages of aggregate feedback. Ethically Aligned Design, First Edition will be released in early 2019 and will feature over one hundred top A/IS Issues and pragmatic Recommendations.

It was created by over three hundred global experts in A/IS, and another seven hundred Initiative members were able to review it, so that EAD can be the 'go-to' resource helping technologists and policy makers prioritise ethical considerations in A/IS.

Of course, the IEEE is not alone. Other initiatives include the Ada Lovelace Institute, The Leverhulme Centre for the Future of Intelligence, The Future of Humanity Institute, the AI Now Research Institute, and many more.

Companies are also acknowledging some of the issues: John Giannandrea (who leads AI at Apple), Mustafa Suleyman (co-founder of DeepMind), and Satya Nadella (CEO of Microsoft) have, for example, acknowledged the risk of bias in AI systems.

Achieving consensus about what the important issues are is hard. Achieving consensus on how to address them would seem impossible. Even the form that guidelines should take is controversial. Our work in IEEE has seen many heated discussions. Should guidelines be narrow and strict, bordering on regulation, or should they be broad, an instrument for critical thinking?

Multiple approaches are probably needed. For example, the IEEE has also established the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)¹¹, dealing with transparency, accountability and the reduction of algorithmic bias.

Some groups are top down, with a small group of 'experts' (who may consult the public) producing their own reports and research. Some are industry based, others more academic. Consumer and civil society groups are also joining the fray, including The European Consumer Organisation (BEUC) and AlgorithmWatch.

The Good Life, Reengineered

Engineers, and those designing technologies, need to move beyond the traditional method of design to deal with the new realities facing humans in the algorithmic age. And we are beginning that critical work. We are working with others across disciplinary, cultural and political boundaries.

The engineering community, through IEEE and other professional organisations, will continue to develop these guidelines in consultation with the broader community. The development of such guidelines will have impact in at least two areas: education and professional standards.

It is expected that these guidelines will be used in engineering curricula worldwide, and by other professional organisations. This will affect what we expect from future graduates and the way they go about their work. Another way of influencing professional practices is through the certification of AI systems (similar to ISO quality standards accreditation), an initiative that IEEE will be developing in 2019.

And we are not naïve: these discussions may lead to regulation that limits what the major tech companies – those that shape our lives – can do. But we are at a crossroads, and such a transformation will need a new social contract.

Society is struggling to decide which way to go, and with a renewed focus on defining risk as the elements that keep us from realising our values or full potential, engineers can help minimise risk and increase wellbeing for the future.

IMAGE: © Keiichi Matsuda
IMAGE: © Cdbrice00-wiki
