Here’s how local firms are putting AI to work
Great risks come with the rapid spread of artificial intelligence, but some local businesses have developed tools to harness its powers for good.
Displacement of jobs, deepfake images, data security and the spread of misinformation are just a few of the concerns.
But there is no escaping it: tech powerhouse Amazon recently said it was confident the world was just a few years away from having generative AI as part of everything we do.
The Media Lab co-founder Antony Young said the hype around AI, and the negative side to it, seemed threatening to people, but it didn’t need to be that way.
There was a practical side to the equation where the technology could be used for good, and to help businesses to function effectively in an AI-enabled world, he said.
“The thing is everyone is at year 1 French with AI at the moment, and you want to get to year 8 French to make proper use of it. It is all about how businesses learn and transition to the new world, and the modes of working that are necessary to do so successfully.”
His company launched the country’s first specialist AI advertising agency, The Digital Cafe, in July, and along the way, he had encountered several local companies leveraging AI to provide solutions to the risks.
“They are all trying to get ahead of the curve a bit in this space, and once you start to do that, you start to see the opposite to the threat, you see the potential.”
Data security is one of the big risks of generative AI models, such as ChatGPT, and that prompted software development company Endgame to develop a closed platform solution for companies.
Endgame co-founder Andrew Butel said when they started to use generative AI they quickly realised it was a great personal productivity tool, but there were hurdles to overcome. The first was how to make sure they were protecting their clients’ intellectual property, as well as their own IP, while using AI, he said.
“Because OpenAI wants to train on your data, and when you do that you are making your data available to everyone on that model of AI. If there is sensitive client and employee information involved that is a problem, and that is why we are seeing companies who want to keep their data safe banning generative AI.”
That led his company to develop Hoist, a closed platform that allows companies to keep their data out of public AI systems such as ChatGPT, and to better protect their clients’ data and confidentiality.
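Hoist’s internals are not described in detail, but one common building block of such closed platforms is redacting identifying details locally before any text leaves the company’s boundary. The sketch below is purely illustrative, with made-up patterns, and is not Endgame’s implementation:

```python
import re

# Illustrative only: strip sensitive details from a prompt before it is
# sent to any external AI service. The patterns here are hypothetical
# examples (email addresses and phone numbers), not Hoist's actual rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive-data pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +64 21 555 0101."
print(redact(prompt))  # Contact Jane at [EMAIL] or [PHONE].
```

A real platform would go further, for instance by routing prompts to a self-hosted model so the raw data never leaves the organisation at all, but local redaction shows the basic idea of keeping client information out of public systems.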
Butel said AI was going to have a huge positive impact on business, but it was in the hype stage right now, and many people did not know what to do with it.
The big question was how to take a first step without leaking confidential information. “The great thing about AI is that it allows innovation to come from within the business, not just from software providers. This is the approach we’ve taken with Hoist to provide a controlled environment that allows a business to start building AI capability.”
Generative AI could be used to help solve business problems too, and one example was entrepreneur Nigel Keats’ latest venture, Prompter. Billed as “the Grammarly for policies and procedures”, it was a platform that made it easier for employees to comply with company policies and procedures, and New Zealand legislation.
It used machine learning and generative AI to read what users were writing in real time, detect potential breaches, including privacy breaches or illegal activity, and suggest mitigation.
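Prompter’s models are not public, but the detect-and-suggest loop it describes can be sketched with simple rules. The snippet below is a hypothetical illustration: a real system like Prompter would layer machine-learning classifiers and company-specific policies on top of anything this basic. The rule patterns are invented for the example:

```python
import re

# Illustrative rule-based breach detector: each rule pairs a pattern with
# the issue it flags and a suggested mitigation. Hypothetical rules only;
# this is not Prompter's actual rule set or model.
RULES = [
    (re.compile(r"\b\d{13,16}\b"), "Possible card number", "Remove or mask the number."),
    (re.compile(r"\bpassword\s*[:=]\s*\S+", re.I), "Credential in text", "Never share passwords in documents."),
]

def check(text: str) -> list[dict]:
    """Return an issue/suggestion pair for every rule that matches the text."""
    findings = []
    for pattern, issue, suggestion in RULES:
        if pattern.search(text):
            findings.append({"issue": issue, "suggestion": suggestion})
    return findings

# In a real-time setting this would run on each edit as the user types.
for f in check("Temp login: password = hunter2"):
    print(f["issue"], "->", f["suggestion"])
```

Running the check on every keystroke or save event is what makes the feedback feel like Grammarly-style inline suggestions rather than an after-the-fact audit.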
Keats said companies had multiple policies in many different areas, and while people might read them when they first started, they were often forgotten or overlooked. That meant mistakes could happen, costing organisations significantly in terms of money, time and reputation.
The tool worked well in the detection of breaches, but was now being fine-tuned before its official launch next year, he said.
“The great thing about AI is that it allows innovation to come from within the business.” Andrew Butel, of Endgame