Google’s involvement in a technology project related to the US military has caused deep disquiet among the company’s employees, some of whom even resigned in protest, reports say.
A month ago, more than 3,100 employees of the tech giant signed an internal letter expressing their opposition to Google’s participation in Project Maven, an artificial intelligence (AI) program being undertaken by the Pentagon, according to the reports.
Arguing that autonomous weapons don’t gel with Google’s “Don’t Be Evil” corporate motto, the employees urged the company to stay away from military work.
About a dozen employees even resigned in protest, believing Google was refusing to sever ties with the Pentagon and halt its work on the project, reports say.
According to Gizmodo, Project Maven aims to accelerate analysis of military drone footage through automatic classification of images of objects and people.
The project is said to have sparked multiple concerns among Google employees, including ethical worries about the use of AI in drone warfare.
Several of the employees who resigned in protest spoke with Gizmodo about their decision to leave the company. They said executives have not been fully transparent with staff over controversial business decisions.
They added that management now seems less interested in listening to objections from workers than it was in the past.
“At some point, I realized I could not in good faith recommend anyone join Google, knowing what I knew. I realized if I can’t recommend people join here, then why am I still here?” a person said.
“Actions speak louder than words, and that’s a standard I hold myself to as well,” another resigning employee told Gizmodo.
According to the Wall Street Journal, the Pentagon had budgeted US$7.4 billion for AI-related projects in 2017.
On Monday, more than 90 academics in artificial intelligence, ethics, and computer science released an open letter urging Google to support an international treaty prohibiting autonomous weapons systems and halt its participation in Project Maven.
Google, meanwhile, was quoted by Gizmodo as saying that the technology being developed under Project Maven is not meant to kill, and that it can actually help save lives.
“An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations—with employees and outside experts—are hugely important and beneficial,” a Google spokesman said in a statement earlier in April.
The technology is used to flag images for “human review”, and its intent is to save lives and to spare people from having to do extremely “tedious work”, the spokesman said, while acknowledging that “any military use of machine learning naturally raises valid concerns.”
This article appeared in the Hong Kong Economic Journal on May 16
Translation by Jonathan Chong with additional reporting