Google CEO Sundar Pichai has expressed a feeling of “deep responsibility” as a leader in the development of artificial intelligence (AI).
In an interview with The Washington Post, he encouraged tech competitors such as Apple and Amazon to ‘self-regulate’ when designing AI that has the potential to cause harm.
Pichai said it was important to factor in ethics during the early stages of development rather than afterwards, accepting that concerns about AI’s potential to hurt people are “very legitimate”.
"I think tech has to realize it just can't build it, and then fix it," Pichai said. "I think that doesn't work."
In June of this year, Google published a set of internal AI principles, stating that the software it creates should first and foremost be ‘socially beneficial’. It vowed the technology would never be designed or deployed to violate human rights, to conduct surveillance outside international norms, or to be used in weapons.
Pichai’s comments follow the controversy surrounding Amazon’s Rekognition software, which has faced scrutiny over its accuracy and raised ethical concerns. CEO Jeff Bezos met with border control officials to sell the facial recognition software, which could be used to track people and return them to potentially dangerous situations overseas.
"This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation," Pichai says.
The company noted that it would continue to work with militaries and governments in areas such as cybersecurity, training, recruitment, healthcare, and search and rescue.
"As a leader in AI, we feel a deep responsibility to get this right."