Google, a subsidiary of Alphabet, moved this year to tighten control over its scientists' papers by launching a review of "sensitive topics," and in at least three cases it asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.
According to a report published by Reuters, Google's new review procedure asks researchers to consult with legal, policy and public relations teams before pursuing topics such as facial and sentiment analysis, and classifications of race, gender or political affiliation, according to internal web pages explaining the policy.
One of the research team's pages stated that "technological advances and the increasing complexity of our external environment are increasingly leading to situations in which seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues."
Reuters was unable to determine the date of publication, although three current employees said the policy began in June.
Eight current and former employees said the "sensitive topics" review adds a layer to Google's standard vetting of papers for defects or errors that must be corrected, such as the disclosure of trade secrets, and allows the company to intervene early.
For some projects, Google officials intervened at later stages. One senior Google manager who reviewed a study on content-recommendation technology told the authors to take great care to strike a positive tone, according to internal correspondence obtained by Reuters.
"This does not mean we should hide from the real challenges" posed by the software, the manager added.
Subsequent correspondence from the researcher to the reviewers states that the authors "have been instructed to remove all references to Google products."
Four researchers on the team, including senior scientist Margaret Mitchell, said they believed Google had begun to interfere with studies critical of the potential harms of its technology.
Mitchell said researchers run into a serious censorship problem when management reviews prevent them from publishing findings they believe to be sound.
Google states on its public site that its scientists enjoy "great" freedom.
Tensions erupted between Google and some of its employees this month after the abrupt exit of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence software.
Gebru says Google fired her after she questioned an order not to publish research claiming that AI capable of mimicking speech could harm marginalized populations.
Google said it had accepted and expedited her resignation. It could not be determined whether Gebru's paper underwent a "sensitive topics" review.
Google Senior Vice President Jeff Dean said in a statement released this month that Gebru's paper focused on potential harms without discussing ongoing efforts to address them.
Dean added that Google supports scholarship on AI ethics and is "actively working to improve our paper review processes, because we know that too many checks and balances can become cumbersome."
The explosion of artificial intelligence research and development across the technology industry has prompted authorities in the United States and elsewhere to propose rules for its use; some have cited scientific studies showing that facial analysis software and other AI systems can perpetuate bias or undermine privacy.
In recent years, Google has integrated artificial intelligence into all of its services, using the technology to interpret complex search queries, decide on recommendations on YouTube and automatically complete sentences in Gmail.
Studying Google's services for bias is among the "sensitive topics" under the company's new policy, according to an internal webpage.
Other listed "sensitive topics" include the oil industry, China, Iran, Israel, the coronavirus pandemic, home security, insurance, location data, religion, autonomous vehicles, telecommunications, and systems that recommend or personalize web content.