Waterloo AI firm working with defence contractor
WATERLOO — A Waterloo tech company that specializes in explaining the sometimes inexplicable decisions made by artificial intelligence is collaborating with the world’s largest defence contractor.
DarwinAI announced the collaboration with U.S.-based Lockheed Martin last week. Chief executive officer Sheldon Fernandez said they’ve been working together for the past year and the partnership will help accelerate the development of AI technology across multiple applications for the company.
“They use AI throughout the organization. For signal analysis, for autonomous planes ... they’re really looking at AI in a serious way,” said Fernandez.
One of the biggest challenges facing artificial intelligence is the so-called “black box” problem — the inability of humans to peek inside the advanced neural networks and understand how or why the AI makes certain decisions.
DarwinAI, which first launched about two years ago, has developed technology that gives developers and engineers better insight into those decisions.
The company has also been able to significantly reduce the size and increase the efficiency of neural networks, the virtual models that simulate how the human brain makes decisions.
Lockheed Martin said it understands AI explainability is a “critical challenge” for the industry.
“Understanding how a neural network makes its decisions is important in constructing robust AI solutions that our customers can trust,” Lee Ritholtz, director and chief architect of applied artificial intelligence at Lockheed Martin, said in a news release.
Lockheed Martin is the largest defence contractor in the world, with reported total sales of $53.8 billion (US) in 2018. The company specializes in a wide range of technology, from submersibles and jets such as the F-35, to lasers and space satellites.
Its corporate website states AI-enabled autonomous systems are “changing the way militaries operate and protect their forces, the way first responders fight fires, how researchers explore the far reaches of space and the ocean’s depths.”
Lockheed Martin says it is building AI systems that will keep people in control while enabling them to be safer and more successful.
The development of AI and autonomous technology for military applications has some concerned about the rise of so-called “killer robots” that could make the final decision on what targets to destroy without human input.
Branka Marijan, a senior researcher at Conrad Grebel University College, said she isn’t surprised to see defence contractors such as Lockheed Martin taking an interest in the explainability research developed by DarwinAI.
Marijan is a researcher for Project Ploughshares, a peace research institute, and a member of the Campaign to Stop Killer Robots.
“It’s about building trust in their products that they will do the things you want them to do,” she said, adding it’s still not clear how the company will use the technology.
“It’s really important to understand how the decisions are being made, but it’s also not enough to just be able to explain why the decision is being made,” she added.
Global efforts to curtail the development of fully autonomous weapons have largely been ineffective, but Canada could be poised to take on a leadership role in developing a ban.
In December, the mandate letter for Canadian Minister of Foreign Affairs François-Philippe Champagne from Prime Minister Justin Trudeau included the directive to “advance international efforts to ban the development and use of fully autonomous weapons systems.”
Fernandez acknowledged collaborating with Lockheed Martin raised ethical questions with his team.
“We provide a general platform to be used throughout the organization; if it came down to ‘we need DarwinAI’s help with X,’ we would have to evaluate whether or not we’d be comfortable with X,” he said.
“If it was ‘we want you to design a missile system,’ we’d probably have some challenges doing that.”