Kim Kardashian blames ChatGPT for low law class test scores – even as OpenAI dismisses a rumored ban of legal and medical advice from ChatGPT
- Kim Kardashian admitted to failing tests in law school after asking ChatGPT for help
- Recent viral rumors claimed ChatGPT stopped offering legal and medical advice
- But AI users can mistake confidence for actual expertise and should be more cautious
Kim Kardashian, possibly the world’s most famous law student, has just put generative AI on blast. During a lie-detector interview video for Vanity Fair, she copped to using ChatGPT to help her study and answer test questions, but added that the chatbot’s advice has been so inaccurate that trusting it has caused her to outright fail some tests.
Her story comes out at a particularly apt moment. Over the last week or so, rumors spread online like wildfire that ChatGPT has stopped offering legal and medical advice altogether. Users claimed ChatGPT was refusing to answer certain questions around legal and health issues, pointing to a line tucked into OpenAI’s updated terms of service as the culprit. The clause states, “Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."
That phrasing set off a wave of speculation that OpenAI had made it impossible to use ChatGPT for two of its evidently popular functions. Like most internet speculation built on an imprecise reading, it turned out to be untrue. OpenAI Health AI lead Karan Singhal explained on X that this wasn't a new part of the terms of service, and nothing was changing.
You can still discuss legal or medical topics. The sentence was inserted a while back as legal cover, making clear that ChatGPT is not pretending to be a licensed professional. It likely drew attention simply because it seemed new to anyone who hadn't read ChatGPT's terms of service before the recent update.
"Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information." https://t.co/fCCCwXGrJv (November 3, 2025)
But the whiplash caused by this rumor, paired with Kardashian’s very public story of AI disappointment, highlights how people don't always grasp where AI tools are useful and when they can become liabilities. ChatGPT can be great at explaining concepts and summarizing information. Treating it like an authoritative source of legal or medical advice is not a good idea, however.
Kardashian’s approach to ChatGPT is hardly unique. The idea of taking a picture of a test question and asking ChatGPT for the answer is logical enough. Automatically expecting an answer good enough to stake her test score on was more than a little naive, no matter how confident the answer seemed.
Sometimes ChatGPT’s biggest flaw isn’t its knowledge gaps, but its self-assured tone. It almost always uses language implying a deep understanding of a topic, even when it’s inventing things entirely. It's hallucinations with a side of arrogance.
Kardashian isn’t a novice tech user. She runs multi-million-dollar companies, and she’s been studying law for years. But even with all that experience, she still ended up in the same trap many less famous ChatGPT users have faced. It's one thing to ask ChatGPT for a summary of HIPAA. It’s another to draft a will, submit it to court, and find out later it’s full of made-up legalese.
AI advice awareness
OpenAI's terms remind users that ChatGPT is not a professional in any field, let alone law or medicine. The wording suggests OpenAI is well aware that people rely on it for such advice regardless, and is uncomfortable with that fact.
ChatGPT can still help decipher a lease agreement or simplify medical terms, but it cannot replace licensed professionals. It can’t vouch for the legality or accuracy of its interpretations, and it certainly won’t be held accountable if you end up misdiagnosing a friend or submitting a lawsuit based on fake case law.
It's heartening to know that wealth doesn't make you immune to being fooled by AI. We should all double-check ChatGPT's answers, regardless of the topic. It's embarrassing to show up to a closed restaurant the AI chatbot insisted would still be open, but blind faith in its answers on matters of law or life and death is far more reckless.
AI chatbots can be very supportive ahead of an exam or when managing a chronic medical condition, but they’re only as useful as the judgment you bring to the interaction.

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.