- CEO Sundar Pichai just published Google’s list of ethical artificial intelligence principles.
- He said that Google won’t use the technology for weapons or surveillance, with some caveats.
Google says it won’t use its artificial intelligence technology for weapons or surveillance, with a few caveats, according to a list of ethical principles published by CEO Sundar Pichai.
The company will still work with the government and military in other areas, including cybersecurity and training, and it will only avoid surveillance that violates “internationally accepted norms,” Pichai writes. Google also won’t work on technologies that are likely to cause harm, unless it decides that “the benefits substantially outweigh the risks.”
The guidelines come after months of internal controversy stemming from Google’s partnership with the Pentagon to use AI to analyze drone footage. Several thousand employees signed a petition urging Pichai to keep Google out of the “business of war,” and dozens resigned in protest. Google eventually told employees that it would not renew the contract when it expires next year.
Through the firestorm, Google executives reportedly promised that they would publish a list of ethical principles to guide the company’s future projects. Pichai writes that this document will act as “concrete standards” that inform its research, product development, and business decisions.