Infrastructure biggest challenge for enterprises scaling GenAI
As companies increasingly look to incorporate AI, and its newest form, generative AI (GenAI), into their operations, industry experts discussed the "infrastructure" challenge their companies face, in a fireside chat at the Mint AI for Business Summit 2024, held in association with IBM. "We are kind of working in a dual mode, what we call as the on-prem mode of looking at GenAI, and we're also trying to see how to look at public AI in some spaces," said Ramesh Lakshminarayanan, CIO and Group Head of IT at HDFC Bank.
On-prem refers to an organisation managing hardware, software and data within its own premises. Lakshminarayanan also stressed that without the right information architecture in place, companies would not get correct AI output when converting pilot AI projects into real-time ones.

Viswanath Ramaswamy, vice-president of technology sales, India/South Asia, IBM, broadened the definition of "infrastructure" to include security principles pertaining to fraud, bias and ethics. On the hardware side, Ramaswamy said organisations have to decide whether to retrain their AI models or continue with the trained models they already have. "If you (the company) already have a trained model which you do not require a retraining for, a central processing unit (CPU) can do the job…but if you're retraining the AI model, you require a whole lot of graphic processing units (GPUs), and that's one challenge," he said, adding that explaining the outcome of an AI model without any bias formed part of having a responsible AI infrastructure.
On GenAI's uses for marketing, Arvind Iyer, head of marketing at Piramal Capital and Housing Finance Ltd, said his company is looking to employ GenAI in interactions and transactions with customers. Iyer said his company, a large part of whose marketing is hinterland-based, is also looking to develop linguistic capabilities to talk to customers with their various "cultural nuances".