Yes, AI is when a computer now thinks for itself
SO, THERE’S this South African mate of mine who happens to be a gifted polymath. He is a media-shy physics PhD who spent much of his early professional career working in finance – programming artificial intelligence (AI) driven solutions to take over complex portfolio management tasks that were traditionally handled by human beings. Indeed, today a lot of the activities that happen on the world’s leading securities exchanges are now run by machines.
However, my learned buddy is often quick to remind me how misleading the notion of things being “run by machines” can be when we forget that ultimately, the machines themselves are “run by humans”.
This is an important observation, given the media’s tendency to use human descriptors to chronicle advances in AI.
Attention-grabbing media headlines, like a recent Business Insider one which reads “Google’s DeepMind AI just taught itself to walk”, capitalise on society’s Hollywood-induced fascination with the idea of machines taking over the world.
After all, if bots can teach themselves to walk, they may well teach themselves to be us at some point, right? There’s no doubt that such social media-optimised, SEO-friendly statements are designed to capture the imagination of the vast majority of us who, all too often, do not read past the headlines to learn more about what is often a fairly unspectacular AI evolution.
I’ve come to appreciate that huge leaps in logic are only possible when we surrender ourselves to futuristic sensationalisation and conveniently forget that humans are ultimately responsible not only for seeding the “intelligence” in AI, but also for determining the parameters, or indeed the lack thereof, within which automated software ought to tirelessly toil towards optimisation.
Given all that, a recent TechCrunch headline which reads “Anyone can teach this MIT robot how to teach other robots” more accurately accounts for the influence of humans in contributing towards the proliferation of AI – particularly within the sub-field of machine learning.
Ryan Falkenberg is co-founder and co-chief executive of a South African start-up called Clevva – an AI platform that “allows non-coders to build and maintain navigation apps” which organisations can then deploy to help employees perform real-time analysis that results in sound decision-making. Falkenberg is a man who understands the value of the human factor in creating and deploying AI solutions, and he is well-placed to help us wrap our minds around the practical implications of advances in AI.
In a recent e-mail interview, I asked Falkenberg to break down the notion of AI into the simplest possible terms – an explanation a 3-year-old might understand. He told me that AI is when a computer thinks for itself, and does not simply follow a programme of pre-coded instructions. The computer does this by analysing all the data it can access to work out the highest probability of certain outcomes based on specific requests.
It then tries to learn from the outcomes and improve the probability of providing correct answers or performing desired actions the next time. Unlike humans, however, AI can learn via many computers and draw on huge batches of data to make decisions. This allows it to get better at tasks far quicker, and consider more variables than humans are typically able to do.
When asked to illustrate the difference between AI and machine learning, Falkenberg stated that AI is a broad umbrella that encompasses more than just machine learning, and is often loosely split into two areas, namely:
(1) Artificial Narrow Intelligence – which is AI focussed on a very specific area or field.
(2) Artificial General Intelligence – like IBM’s mega question-answering machine, Watson. Apparently, Nasa’s Mars Exploration Rover is an excellent example of machine learning in action. The Rover was designed to operate without human intervention. It was deployed to gather data from the surface of Mars and use that data to make decisions.
The concept of “learning” in the phrase machine learning stems from the fact that the more data is made available to the machine, the better the decisions it will make over time. The past results of decisions then inform future decisions, allowing the machine to adapt – much like a baby does by exploring the world around them.
The more the machine “tries” things, the more it “learns”, and the better it gets at “decision-making”. This form of machine learning, also dubbed cognitive computing, can be seen in things like driverless cars and pretty much anywhere human decision-making is being automated to improve efficiency.
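To make that “try, learn, improve” loop concrete, here is a minimal, purely illustrative Python sketch. The action names and outcome data are invented; the point is simply that the machine’s estimates – and hence its decisions – get better as more past results accumulate:

```python
# Fixed "historical" outcomes for two possible actions (1 = success,
# 0 = failure). In a real system these would stream in from live trials.
OUTCOMES = {
    "action_a": [1, 1, 0, 0, 0, 1, 0, 0, 0, 0],  # ~30% success overall
    "action_b": [0, 1, 1, 1, 1, 0, 1, 1, 1, 1],  # ~80% success overall
}

def best_action(n):
    """Pick the action with the best observed success rate after n trials."""
    return max(OUTCOMES, key=lambda name: sum(OUTCOMES[name][:n]) / n)

# With only two trials of data, the machine is misled by noise...
print(best_action(2))   # → action_a
# ...but with all ten trials, the genuinely better action wins out.
print(best_action(10))  # → action_b
```

With two data points the machine picks the wrong action; with ten, the better one. That gap closing as data accumulates is, in miniature, what “learning” means here.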
In today’s increasingly digitised world, AI is all around us. It has become ubiquitous, thanks in no small part to its widespread deployment by firms like Apple and Google, which use it to power virtual assistants like Siri and Google Now. It is also used by Facebook to help identify and tag people, places and things, and in gaming to make characters more lifelike and give them “personalities”.
In business, AI is used to predict consumer actions, detect fraud and pre-empt criminal activities using predictive modelling. For example, that’s how online retailers creepily “know” what you will buy before you do. They are able to send you tailored promotions and coupons based on calculated predictions they’ve made about you.
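A toy sketch makes that retail example concrete. Everything here – the customer names, purchase histories and threshold – is invented for illustration; real predictive models are far more sophisticated, but the shape is the same: score each customer from past behaviour, then act on the prediction:

```python
# Hypothetical purchase histories: items each customer bought recently.
HISTORY = {
    "thandi": ["coffee", "coffee", "milk", "coffee"],
    "sipho": ["bread", "eggs"],
}

def likely_to_buy(customer, item, threshold=0.5):
    """Crude prediction: how often the item appears in past purchases."""
    past = HISTORY[customer]
    return past.count(item) / len(past) >= threshold

def promotions(item):
    """Send a coupon only to customers predicted to buy the item."""
    return [name for name in HISTORY if likely_to_buy(name, item)]

print(promotions("coffee"))  # → ['thandi']
```

The “creepy” accuracy of real retailers comes from doing this across millions of customers and thousands of signals, not just a handful of past purchases.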
When challenged to suggest what one of the more pertinent trends within AI might be within the African context, Falkenberg cited a lesser-known kind of AI that captures known intelligence and expertise, and then, using three- or four-dimensional logic, assists humans in their decision-making.
This form of AI is called a decision navigator. Decision navigators work like a GPS, augmenting humans rather than replacing them.
In his response, Falkenberg undoubtedly took the opportunity to give a not-so-subtle nod to Clevva’s area of speciality. Nonetheless, I do rate the importance of that specific field of AI because of the real threat to African livelihoods being posed by the deployment of automated software.
It is a reality I have highlighted in this column several times before. Quite notably, decision navigators help companies use their existing workforces more efficiently to do higher-value work, rather than replacing them altogether.
Decision navigators are being deployed by banks, insurers, petro-chems, and even telcos to guide staff in their decision-making, within their product, policy and procedural environments.
Because decision navigators leave an audit trail, they frequently form part of regulation technology regimes designed to promote corporate governance.
Decision navigators also help companies onboard and train new staff in substantially less time than was previously required.
Instead of having to teach people everything they need to know by rote learning, organisations can concentrate on getting employees up to speed with the bare essentials and augment that with decision navigators that will help people navigate their daily tasks – however basic or complex.
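The “GPS for decisions” idea can be sketched in a few lines of Python. The policy questions, branches and outcomes below are wholly invented for illustration: the navigator simply walks a staff member through a decision tree one question at a time and logs each step – producing the audit trail that makes these tools attractive for governance:

```python
# Each node is either a question with yes/no branches, or a final outcome.
POLICY = {
    "question": "Is the claim under R5,000?",
    "yes": {"question": "Is the policy active?",
            "yes": "Approve claim",
            "no": "Reject: lapsed policy"},
    "no": "Escalate to senior assessor",
}

def navigate(node, answers):
    """Follow the staff member's answers through the tree, logging each step."""
    trail = []
    while isinstance(node, dict):
        answer = answers[node["question"]]
        trail.append((node["question"], answer))
        node = node[answer]
    return node, trail  # final outcome plus the audit trail

answers = {"Is the claim under R5,000?": "yes",
           "Is the policy active?": "yes"}
outcome, audit = navigate(POLICY, answers)
print(outcome)  # → Approve claim
```

The employee only ever answers the question in front of them; the product, policy and procedural knowledge lives in the tree, which experts can maintain without writing code.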
Finally, I asked Falkenberg what innovations within AI he is most enthusiastic about right now. He told me that he’s rather excited by the increasing ability of AI platforms to accurately sense and interpret the environment, allowing for more accurate decision-making.
He reckons that as we get better at natural language translations and the accurate interpretation of intent, the advances in sensors that can provide inputs across all five senses will accelerate AI towards full automation in many areas that impact daily living, including proactive monitoring of pretty much everything.
Andile Masuku is a broadcaster and entrepreneur based in Johannesburg. He is the executive producer at AfricanTechRoundup.com. Follow him on Twitter @MasukuAndile and The African Tech Round-up @africanroundup