Silhouettes of laptop and mobile device users are seen next to a screen projection of the Google logo in this picture illustration, March 28, 2018. REUTERS/Dado Ruvic/Illustration
After ending its involvement in a controversial Pentagon project, Google has vowed never to develop AI technologies that can be used for weapons or surveillance.
Google’s new policy around AI development focuses on building the technologies responsibly and limiting potential abuse, the company said in a blog post.
No Google AI technology will ever be used as a weapon or for surveillance that violates internationally accepted norms. In addition, the company will refuse to develop AI projects that “cause or are likely to cause overall harm.” Only where the benefits substantially outweigh the risks will the company proceed, and then with safeguards in place.
The policy comes after Google employees reportedly pressured the tech giant to cancel its existing participation in Project Maven, a Pentagon effort that uses AI to analyze aerial drone footage. The company claimed the research was for “non-offensive purposes,” but some employees feared it would one day be used in actual warfare. In response, they circulated an internal letter arguing that “Google should not be in the business of war.”
The resistance to Project Maven was so severe that at least a dozen staffers reportedly resigned in protest. To placate employees, Google promised a new ethics policy around AI development, which the company made public on Thursday.
Google Cloud CEO Diane Greene confirmed that the company will not seek to renew its government contract for Project Maven. However, the new AI ethics policy does not spell an end to Google’s involvement with the Pentagon. Far from it.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Google CEO Sundar Pichai said in a separate blog post.
“These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” he added. “These collaborations are important, and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”
Whether the tech giant can actually steer clear of weaponized technologies (or whether it even should) is up for debate. But the new policy also lays out a roadmap for Google’s AI development. For example, the company’s AI technologies will be built and tested for safety; they will also be designed with privacy in mind, a clear nod to the controversy surrounding Google Duplex, an upcoming feature in the company’s voice assistant that has the potential to make people think it is human. Under the new policy, Google’s AI technologies will provide “opportunity for notice and consent.”
“This is how we choose to approach AI, and we understand there is room for many voices in this conversation,” Pichai added in his blog post. “As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area.”
This article originally appeared on PCMag.com.