Microsoft doesn't want us to be scared of AI - but is it doing enough?


Microsoft has become one of the biggest names in artificial intelligence and brought us the quirky and sometimes strange Bing Chat AI. The company has invested heavily in AI, and has now come up with three commitments to keep both the company and the technology in check. Laws and regulations are rushing to catch up with AI, falling so far behind where we need them to be that OpenAI's CEO has been touring government institutions to plead for regulation.

In his statement to Congress earlier this year, Sam Altman was clear that the dangers of unregulated AI and diminishing trust are a global issue, ending with the strong statement that "this is not the future we want."

To help keep AI in check, Microsoft's "AI Customer Commitments" aim to act as both self-regulation and customer reassurance. The company plans to share what it's learning about developing and deploying AI responsibly, and to assist users in doing the same.

Antony Cook, Microsoft Corporate Vice President and Deputy General Counsel, shared the following core commitments in a blog post:

“Share what we are learning about developing and deploying AI responsibly”

The company will share knowledge and publish key documents for customers to learn from, including its internal Responsible AI Standard, AI Impact Assessment Template, Transparency Notes and more. It will also roll out the training curriculum it uses for Microsoft employees, giving us insight into the 'culture and practice at Microsoft'.

As part of the information share, Microsoft says it will ‘invest in dedicated resources and expertise in regions around the world’ to respond to questions and implement responsible AI use.  

Having global 'representatives' and councils would boost not just the spread of the technology to non-Western regions, but would also remove the language and cultural barriers that come with having the technology heavily based and discussed in English. People will be able to raise their concerns in a familiar language, with people who really understand where those concerns are coming from.

"Creating an AI Assurance Program"

The AI Assurance Program exists to help ensure that however you use AI on Microsoft's platforms, it meets the legal and regulatory requirements for responsible AI. This is a key factor in getting people to use the technology safely and securely: most people wouldn't consider legality when using Bing Chat AI, so this kind of transparency helps users feel safe.

Microsoft says it will also bring customers together in "customer councils" to hear their views and receive feedback on its most recent tools and technology.

Finally, the company has committed to playing an active role in engaging with governments to promote AI regulation, presenting proposals to government bodies and its own stakeholders to support appropriate frameworks.

"Support you as you implement your own AI systems responsibly"

Under this final commitment, Microsoft plans to put together a "dedicated team of AI legal and regulatory experts" around the world as a resource for you and your business when using artificial intelligence.

It's pleasant to see Microsoft taking business users of its artificial intelligence capabilities into consideration in these commitments, as many people have now slowly incorporated the tech into their ventures and have had to figure out and balance their approach on their own.

Having resources from the company behind the tools will prove incredibly helpful for business owners and their employees in the long run, giving them steps and information they can rely on when using Microsoft's AI responsibly.

Too little too late


Microsoft publicizing its AI commitments not long after cutting its pioneering Ethics and Society team, which was involved in the early stages of its software and AI development, is a bit strange, to say the least. It doesn't fill me with much confidence that these commitments will be adhered to when the company is willing to get rid of its ethics team.

While I can acknowledge that artificial intelligence is an unpredictable technology at the best of times (we have seen Bing Chat do some very strange things, after all), the AI Customer Commitments Microsoft is now putting in place are something we should have seen a lot earlier. Putting the technology out into the world and only then discussing how to care for the people using it is a failure on Microsoft's part.

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison.

Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and, recently, anything AI-related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place.

Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).