Oh Google. A mere 10 days after forming an advisory council to guide the company's development of AI, Google has shuttered the committee and gone back to the drawing board. But what went wrong, and how did it happen so quickly?
Named the Advanced Technology External Advisory Council (or ATEAC), the committee was formed to ensure the "responsible use and development" of artificial intelligence within the tech giant.
With debates raging over military uses of AI, and its use in the surveillance of citizens, there are plenty of ethical issues surrounding AI – not least data bias, which can see AI systems blindly entrench prejudices based on the information fed to them.
But the selection of ATEAC members was immediately controversial – the board included the CEO of a military drone company and a member with openly anti-trans views – and one member resigned almost immediately, leaving the rest to defend their positions on the council.
From the outset it was clear that ATEAC was poorly equipped to actually tackle concerns being raised around the ethical use of AI.
Google has a soul – its employees
One of the primary factors in closing the council may have been the response of Google's own staff. An internal group known as Googlers Against Transphobia published an open letter criticizing the appointment of Kay Cole James – president of the conservative think tank the Heritage Foundation, and a vocal opponent of LGBT+ rights.
The letter points to how "the potential harms of AI are not evenly distributed, and follow historical patterns of discrimination and exclusion. From AI that doesn't recognize trans people, doesn't 'hear' more feminine voices, and doesn't 'see' women of color, to AI used to enhance police surveillance, profile immigrants, and automate weapons — those who are most marginalized are most at risk."
Shutting the committee down entirely may have averted a PR disaster in the short term, but unless Google actually seeks counsel on the ethical problems already plaguing AI development, the next ATEAC won't be worth the effort.