Microsoft says Copilot is for ‘entertainment’ not work, Meta’s Muse Spark and 7 other AI stories you need to catch up on
AI's rapid rise is further entangled with power, profit and growing uncertainty
I've been writing about AI for over a year now and there's still no such thing as a quiet week. There's always a lot to catch up on. Sometimes it's positive, often it's concerning and occasionally it's downright bizarre. This week is no exception, particularly as broader geopolitical tensions shape and are shaped by AI in increasingly visible ways. We're on the brink of new models, new infrastructure and, inevitably, new concerns.
This article is part of our ICYMI franchise, where we round up the biggest stories of the week — this time in AI.
Welcome to ICYMI AI, a new round-up of the biggest stories in artificial intelligence. Right now, what stands out is the deepening entanglement between tech and global politics, alongside the familiar cycle of "this changes everything" followed swiftly by a bunch of caveats. We're also starting to get a clearer picture of how much money companies like OpenAI are actually making, which feels significant when a lot of people are wondering when or if the AI bubble will burst.
This week's lead story captures something I've seen play out repeatedly over the past year: big, bold claims meeting real-world limits. Microsoft is now suggesting Copilot should be used for "entertainment purposes only", which feels like a big shift in tone. Alongside that, there's a deep and fascinating New Yorker profile of Sam Altman, fresh insight into OpenAI's revenue figures, and new concerns around Anthropic's latest model, Claude Mythos. Growing tension, growing excitement, and always the sense that there's way too much AI news to absorb, which is exactly why this round-up exists. There's also a quiz at the bottom to test your knowledge, so stay sharp.
You know the Microsoft Copilot tool you use at work? It's not for work anymore, sorry
You know Microsoft’s Copilot? The AI tool positioned as essential for the modern workplace and a flagship example of how AI can transform productivity? According to Microsoft’s official terms and conditions, it’s “for entertainment purposes only”.
OpenAI, Google and Anthropic have similar disclaimers in their own terms. But what matters is the gap between how these tools are sold and what the small print says. Microsoft wants businesses to keep using Copilot, but the language shifts responsibility back onto the user if anything goes wrong.
This is a pattern we’ve watched play out in AI therapy, AI friendship, AI life coaching, and even AI romantic companions. AI tools can play certain roles very well, but the risk is yours. So the big question here isn’t whether AI will make mistakes; we know it will. It’s who gets held responsible when it does. And right now, AI companies are doing everything they can to make sure it isn’t them.
ChatGPT maker OpenAI says it's making a lot of money — does that mean the AI bubble won't burst?
One of the biggest questions hanging over the whole AI industry right now is whether it’s actually making any money. The answer is yes, but maybe not in the way you might think. It’s less about people using ChatGPT for recipes or late-night health spirals and more about businesses paying to integrate AI into their products and workflows.
But even if you’re not using it like that at work, this is important, because revenue changes the trajectory significantly. If companies can make serious money from AI, it becomes harder to argue this is a passing hype cycle or a bubble that’s about to burst any minute now. It also points to where things are heading: more focus on business customers, which could mean higher costs or tighter limits for regular users.
Iran threatens to bomb $30 billion Stargate AI data center backed by OpenAI, Nvidia, and other tech giants
Reports suggest that Iranian officials have referenced tech infrastructure as a potential target in the event of escalation with the US and its allies. The biggest project drawing attention is Stargate, which is a large data center initiative in the United Arab Emirates that's backed by major tech players, including OpenAI. It's designed to provide vast amounts of the computing power needed to train and run advanced AI systems.
This is important because it shows how dependent AI is on massive infrastructure, which needs huge amounts of energy and stable geopolitical conditions to operate. For everyday users, it's a reminder that the tools we rely on rest on that same infrastructure. If it becomes too expensive, politically contested or environmentally damaging, that could mean much higher costs, less access and slower progress.
More AI news you might've missed
- We put Meta’s Muse Spark AI model to the test and it may not need to outperform other chatbots. If it makes AI feel like a natural part of scrolling, messaging, and sharing, that could be far more powerful.
- Investigative journalist Ronan Farrow digs into Sam Altman and OpenAI in a deeply reported New Yorker profile, drawing on interviews and internal documents.
- In more news of AI’s role in the Iran conflict, the BBC spoke to the creator behind the viral Lego-style AI videos and it’s a fascinating, unsettling look into the future of propaganda.
- The Artemis II mission gave us some of the most breathtaking photos of Earth from space we’ve ever seen. Here’s how we turned one into an iPhone wallpaper with a little help from AI.
- OpenAI is reportedly pausing a multi-billion pound UK data centre project, citing high energy costs and regulatory concerns. Is this sensible caution, or does it risk leaving the UK behind?
- Anthropic’s latest research shows AI can hide intent and even ‘cheat’ without saying so. That’s based on findings from its Claude Mythos Preview, a powerful new model from the company.
Were you paying attention? Take our AI news quiz
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course, you can also follow TechRadar on YouTube and TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.