Google won't deploy AI to build military weapons: Pichai

IANS  |  San Francisco 

After facing backlash over its involvement in the artificial intelligence (AI)-powered project "Maven", Google CEO Sundar Pichai has emphasised that the company will not work on technologies that cause or are likely to cause overall harm.

About 4,000 employees had signed a petition demanding "a clear policy stating that neither Google nor its contractors will ever build warfare technology".

Following the outcry, Google decided not to renew the "Maven" AI project with the US Department of Defense after it expires in 2019.

"We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," said in a blog post late Thursday.

"We will not pursue AI in "technologies that gather or use information for surveillance violating internationally accepted norms," the Indian-born added.

"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas like cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue," noted.

Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.

In a blog post describing seven "AI principles", he said these are not theoretical concepts but "concrete standards that will actively govern our research and product development and will impact our business decisions".

"How AI is developed and used will have a significant impact on society for many years to come. As a in AI, we feel a deep responsibility to get this right," Pichai posted.

Google will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where it operates.

"We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief," Pichai noted.

Pichai said Google will design AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.

"We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our will be subject to appropriate human direction and control," he added.

--IANS


(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)

First Published: Fri, June 08 2018. 10:48 IST