Microsoft reins in Bing AI’s Image Creator – and the results don’t make much sense
Stricter censorship of created pics means Dall-E 3 is even censoring itself, weirdly
You may have noticed that Bing AI got a big upgrade for its image creation tool last week (among other recent improvements), but it appears that, having taken this sizeable step forward, Microsoft has now taken a step back.
In case you missed it, Bing’s image creation system was upgraded to a whole new version – Dall-E 3 – which is much more powerful. So much so that Microsoft noted the supercharged Dall-E 3 was generating a lot of interest and traffic, and so might be sluggish initially.
There’s another issue with Dall-E 3, though: as Windows Central observed, Microsoft has considerably reined in the tool since its recent revamp.
Now, we were already made aware that the image creation tool would employ a ‘content moderation system’ to stop inappropriate pics being generated, but it seems the censorship imposed is harsher than expected. This might be a reaction to the kind of content Bing AI users have been trying to get the system to create.
As Windows Central points out, there has been a lot of controversy (unsurprisingly) around an image of Mickey Mouse carrying out the 9/11 attack.
The problem, though, is that beyond those kinds of extreme asks, some users are finding that innocuous image creation requests are being denied, as the article makes clear. Windows Central tried to get the chatbot to make an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms of use for Bing AI.
That’s in contrast to last week, when the article’s author noted that they could create violent zombie apocalypse scenarios featuring popular (and copyrighted) characters without Bing AI raising a complaint.
Analysis: Random censorship
The point here is that the censorship looks like an overreaction – or at least that seems to be the case going by reports, we should add. Microsoft appears to have left the rules too slack in the initial implementation, but has now tightened things too much.
What really illustrates this is that Bing AI is even censoring itself, as highlighted by someone on Reddit. Bing Image Creator has a ‘surprise me’ button that generates an image from a random prompt (the equivalent of Google’s ‘I’m Feeling Lucky’ button, if you will). But here’s the kicker – the AI is going ahead, creating an image, and then immediately censoring its own creation.
Well, we suppose that is a surprise, to be fair – and one that would seem to aptly demonstrate that Microsoft’s censorship of the Image Creator may have gone too far, limiting its usefulness at least to some extent. As we said at the outset, it’s a case of a step forward, then a quick step back.
Windows Central observes that it was able to replicate this self-censorship, and that it’s not even a rare occurrence – it reportedly happens around a third of the time. It sounds like it’s time for Microsoft to do some more fine-tuning in this area, although in fairness, when new capabilities are rolled out, adjustments are likely to be applied for some time – so perhaps that work is already underway.
The danger of Microsoft erring too strongly on the ‘better safe than sorry’ side of the equation is that it will limit the usefulness of a tool that, after all, is supposed to be about exploring creativity.
We’ve reached out to Microsoft to check what’s going on with Bing AI in this respect, and will update this story if we hear back.