The challenge of ethics in artificial intelligence
I watched a Sandra Bullock movie in which her character's identity was completely re-engineered by sinister forces. A new name, history and criminal record became her reality.
With the growing capability and presence of artificial intelligence-driven systems, and the increasing use of data analytics to target you, who is responsible for the implications for ethics, accountability and risk at a personal, community, national and global level?
The problem is that government and legislation are reactive.
Facebook is only now dealing with cyber-bullying and offensive language in comments; legislation and company policies are only now catching up with inappropriate social media behaviour; and, apparently, you need an import licence if you order more than three times from an online platform.
The King III report has emphasised the importance of IT governance, but does it go far enough and are we asking the right questions?
The Berkman Klein Center at Harvard is asking: “How do we ensure that AI systems serve the public good rather than exacerbate existing inequalities and biases?”
Is it time for a global code of ethics, similar to environmental and other treaties?
The three mega trends impacting AI are physical, digital and biological.
The development of robotics, 3D printing, autonomous cars, drones and virtual reality equipment is heralded as disruptive technology.
However, these technologies have serious implications for the future of labour, safety and ethical decision-making, crossing geopolitical boundaries and exposing social bias when navigating unmapped informal communities.
The use of sensors, DNA mapping and brain-scanning in the growing field of neuroscience, along with implanted chips, CCTV cameras and GPS tracking systems, is crossing the physical and psychological boundaries of personal space and privacy.
These technologies collect vast amounts of personal data, which is inevitably owned by the collector.
If you dislike a medical-aid committee making decisions about your healthcare, what if it can make these decisions based on urine samples taken from company bathrooms (yes, the technology exists) or suggest gene therapy based on your DNA profile?
What if these decisions were being made by a machine and not a human?
The Edelman Trust Barometer places trust in politics, media and business at an all-time low.
Can we trust them to rise above self-interest to communicate, legislate and provide governance frameworks on our behalf?
Ultimately, however, ethics, accountability and governance can only truly work if the locus of control is internal and not external.
It starts with us. It is about the questions we ask before we build AI technologies.
Are we protecting personal data and privacy through technologies like blockchain? Do we have clear AI governance and IT policies? Are we building digital literacy into our skills plans? Are we rewarding the right behaviour?
And, most importantly, are we consciously safeguarding our humanity?
Nanny software or a DStv code does not replace parental attention.
Digital platforms supplement human connection, but they do not replace it; an emoji hug is never the same as the real thing.
Digital ethics start at home.
Are you walking the talk with your children, your employees, your customers?
Are you preparing them to thrive not only now, but also in the future?
Ethics is not the sole responsibility of government and leadership; it is everyone’s responsibility – our very future depends on it.