The entire ethics and society team responsible for guiding Microsoft's artificial intelligence organization toward responsible product development has been sacked as part of the company's recent layoffs. The move comes at a time when AI ethics discourse is at an all-time high, thanks to the surging global popularity of bots like ChatGPT, and could spell trouble in the very near future.
Microsoft still has an active ‘Office of Responsible AI’, which creates the rules and principles that guide the company’s AI initiatives, and it says it remains committed to investing in ethical development despite the considerable downsizing of staff in the area.
The ethics and society team was at its highest capacity in 2020, with about 30 employees consisting of engineers, designers, and philosophers. The variation in expertise and skills on the team provided a larger, more varied bank of knowledge to draw from when deciding on the ‘rules’ and principles that would be reflected in future AI products. As part of a reorganization, the team was sliced down to just seven people, who have now been laid off.
As we have seen in the short time AI chatbots have been available to the public, a lot can go wrong. Microsoft already had to rein in the new Bing AI almost as soon as it was launched. We’ve touched on many shortcomings and oddities that have come out of ChatGPT and other chatbots, including blatant misinformation, emotional spirals, and of course an incredibly convincing tool for scammers of all kinds. As more companies rush to burp out more AI-enhanced products, axing the team in charge of ethics and responsible development seems like a very strange decision - one that suggests the company is prioritizing speed over safety.
According to The Verge, the terminated employees stated that “People would look at the principles coming out of our offices and say ‘I don't know how this applies’” and that it was their job to “show and create rules in areas where there were none”. The team had recently worked on a larger ‘responsible innovation toolkit’ that included a roleplaying game called ‘Judgement Call’, which helped designers think about potential issues or harm that could come about during product development.
The remaining ethics and society members have said the smaller crew has made implementing their future plans difficult.
The forecast calls for clear skies and moral droughts
Last year the ethics and society team put out a memo outlining the brand risks associated with the Bing Image Creator, which uses OpenAI’s DALL-E (OpenAI is also the company behind ChatGPT). The image generator has become incredibly popular and has proved to have a plethora of uses - we made an interesting Valentine’s Day card with it last month. The team accurately pointed out that the tech could damage artists’ livelihoods and creative integrity by allowing anyone to produce unauthorized duplicates of their work without permission.
Clearly, Microsoft is rushing to put unstable technology into the hands of the general public, and it doesn’t take a team of experts to point out what kind of damage can be done with that kind of mindset. Without taking into consideration the problems ChatGPT and DALL-E have already presented, both for direct users and people in surrounding communities (think artists, but also writers, examiners, and journalists), Microsoft is opening the floodgates for a lot of bad to come very soon.
We can turn to Elon Musk’s vision for his own ‘anti-woke’ ChatGPT competitor, which will apparently be stripped of safeguarding, anti-hate, and discrimination protections, to get a glimpse into Microsoft’s possible future. We have already seen pornographic ‘deepfakes’ of streamers, celebrities, and members of the general public, and with the possibility of video coming to ChatGPT, we can absolutely expect more to come.
If the company is not even going to pretend to care about these issues by keeping a semblance of an ethics team on board, who exactly is to blame when people face real-world consequences? If not Microsoft, who are we supposed to look to for control over such sensitive and tender technology?
The layoffs and their consequences do allow the mind to wander to many dark and scary places, and while we do not want to fearmonger, we have to accept that people will use everyday tools for malicious reasons regardless of the original product intention. Scammers, misogynists, racists, and cheaters have already made a home with AI-generated text, and if Microsoft and the other companies gunning for the AI market continue on this path of reckless abandon, we will inevitably be caught in a horror movie none of us will be able to control. How many sci-fi movies have been made - and likely will continue to be made - that start off just like this: a new, impressive, and frightening technology released to the public without any foresight? Isn’t that the plot of the first Jurassic Park movie?
It is already hard to avoid ChatGPT and AI bots as it is; if we flood the internet and our devices with unregulated, unethically developed technology we may end up changing the digital landscape irreversibly. With Microsoft laying off the ethics and society team, we could see other companies follow suit - and then we’ll be in a whole lot of trouble.
To quote the wise words of a certain fictional mathematician: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison.
Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and recently anything AI related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place.
Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).