Google takes AI stand
Google pledged yesterday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in technology that works in ways that contravene human rights. It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the US military to use its AI technology to analyse drone footage. The principles, spelled out by Google CEO Sundar Pichai in a blog post, commit the company to building AI applications that are “socially beneficial”, that avoid creating or reinforcing bias, and that are accountable to people. The search giant had been formulating a patchwork of policies around these ethical questions for years, but had only now put them in writing. Aside from making the principles public, Pichai did not specify how Google or its parent Alphabet would be held accountable for conforming to them. He also said Google would continue working with governments and the military on noncombat applications involving such things as veterans’ healthcare and search and rescue.