For months, Google has faced criticism from thousands of its employees over its development of AI tools for the military. Not only has the company pledged to stop that work, it has now effectively published an AI rulebook.
- Google has released what is effectively a rulebook for how the company will apply AI technology.
- The seven principles include ensuring that AI applications are socially beneficial and safe, and that they don't create unfair bias.
- This is Google's response to a conflict inside the company over management's controversial decision to build AI tools for the military. Last week, Gizmodo reported that Google Cloud chief Diane Greene told employees that Google had decided to stop providing AI tech to the Pentagon.
Google CEO Sundar Pichai published a set of "ethical principles" on Thursday that will govern the company's work with artificial intelligence.
Pichai said that while AI technology provides consumers and businesses with many benefits, Google realizes the tech "will have a significant impact on society for many years to come" and that its leaders "feel a deep responsibility to get this right."
Pichai said AI applications will be screened to make sure they are socially beneficial, avoid creating unfair bias, are built and tested for safety, are accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and are made available only for uses consistent with those principles.
The principles follow a conflict inside Google, pitting thousands of employees against management. These workers protested the company's involvement in Project Maven — the controversial collaboration between Google and the US Department of Defense. In March, news leaked that Google had quietly supplied AI technology to the Pentagon to help analyze drone video footage.
In April, more than 4,000 workers signed a petition demanding that Google's management cease work on Project Maven and promise to never again "build warfare technology." Soon after that, Gizmodo reported that a dozen or so Google employees had resigned in protest. Last Friday, Diane Greene, Google Cloud chief, reportedly told employees that Google had decided to stop working with the military on AI.
In his blog post, Pichai also made clear what sorts of applications Google will not develop. Those include weapons or technologies that cause overall harm, as well as anything that can be used for surveillance that violates "internationally accepted norms" or that conflicts with "widely accepted principles of international law and human rights."
AI spooks many people. At the extreme, the fear is that one day we might see the kind of killer robots often depicted in Hollywood films. While that might seem far-fetched now, critics have said that Google's video analysis of surveillance footage could help improve the accuracy of drone missile strikes.
Gizmodo also reported that Google sought to help build systems that would enable the Pentagon to surveil entire cities.
What this means for Google's bottom line is the potential loss of defense contracts. One of them, the Joint Enterprise Defense Infrastructure, or JEDI, contract is worth $10 billion. But according to Bloomberg, Google was a long shot to win such deals. Companies such as Oracle, Microsoft and IBM, which have far more experience working with the government, were considered the front-runners.
Story developing...