FOR EMPOWERING ENTERPRISES TO MANAGE AI RISKS
AI: It’s all fun and games until a company gets hurt. As CEOs hurry to pilot and implement new generative AI tools, their chief information officers worry about how to monitor and measure those products and systems for such things as bias, safety or security gaps, and lack of compliance with company policies and regulations. Credo AI, which Navrina Singh founded in 2020 after she recognized this issue while commercializing Microsoft’s enterprise AI services in the late 2010s, provides clients with a cloud-based AI governance platform that meets this moment.

Last May, Credo AI rolled out new governance products to manage the particular risks of generative AI tools, including data leakage, toxic output, and security vulnerabilities. These services have helped Credo AI win such customers as Booz Allen, Mastercard, and Northrop Grumman, as well as major healthcare, insurance, and pharmaceutical players.

“There’s a really sad narrative that governance is a mechanism by which innovation slows down,” says Singh, whose prescience on this issue has helped her become a leading voice in guiding governments through their regulatory options, including participating in Senator Chuck Schumer’s AI Insight Forum last November. “I believe that if you are going into AI with eyes wide open, if you understand what those guardrails are, you can go really fast.”