The extra round of review is reportedly part of new controls the search giant has implemented for papers on certain topics.
If true, the measures aren't going to win Google any brownie points; the company drew a lot of flak earlier this month for allegedly firing Dr. Timnit Gebru, the co-lead of Google's Ethical AI team, after supposed disagreements over her work.
Concern or censorship?
Reports now cite researchers and internal documents showing that, in at least three cases, senior managers have asked authors to refrain from casting Google's technology in a negative light.
The reports quote one such document as saying: "Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues."
In another alleged internal correspondence, a senior Google manager reviewing a study on content recommendation technology shortly before publication is quoted as asking authors to “take great care to strike a positive tone.”
The leaked documents, whose authenticity can’t be verified by TechRadar Pro, spell out some of the “sensitive” topics as “the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content.”
The reports also quote senior Google AI researcher Margaret Mitchell as saying that this sort of interference could be seen as a form of censorship:
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”