Google has announced it will not use artificial intelligence (AI) for weapons or to “cause or directly facilitate injury to people.”
Google chief executive Sundar Pichai made the announcement on Thursday as he unveiled a set of principles for the company's artificial intelligence technologies, AFP reported.
He said that Google won’t use artificial intelligence for weapons, but “we will continue our work with governments and the military in many other areas,” including cybersecurity, training, and search and rescue.
This comes as Google is facing pressure from employees and others over a contract with the US military.
Last week, however, the California tech giant said it would not renew its contract with the Pentagon for artificial intelligence work.
During a weekly meeting with employees last Friday, Diane Greene, the chief executive of Google’s cloud-computing business, said there would be no follow-up to the Pentagon contract when it expires in March 2019, The Washington Post reported, citing company sources.
The 18-month project, known as Project Maven, was launched in April 2017 to find ways for the US military to use artificial intelligence to update its national security and defense capabilities “over increasingly capable adversaries and competitors,” according to a US Defense Department memo.
The program involves using machine learning and engineering talent to distinguish people and objects in drone videos.
The artificial intelligence project prompted dozens of employees, including researchers, to resign. Up to 4,000 Google employees also signed a petition in April, calling for the firm to stop the program, which they called the “business of war.”
They argued that through the project, Google was putting users’ trust at risk and ignoring its “moral and ethical responsibility.” They also feared the project could be a first step toward using artificial intelligence for lethal purposes.