First, AI flooded the internet with slop; now it's ruining work, too – here's how to use AI and still be a stellar employee

If there's one thing we can depend on AI for, it's to prove time and time again that you can't simply replace human effort with technology. A new Harvard Business Review and Stanford Social Media Lab study found that "workslop" is overrunning businesses and, in the process, ruining work and reputations.
If workslop sounds familiar, that's because it's a cousin to AI slop. The latter is all over the internet and characterized by bad art, poor writing, six-fingered videos, and auto-tuned-sounding music.
Workslop, according to HBR, is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."
Because we're living on AI time, with everything in technology (and life) seeming to move at three times its normal pace thanks to generative AI, we suddenly have large language model (LLM)-driven AI in every corner of our lives.
Generative platforms like Gemini, Copilot, Claude, and ChatGPT live on our phones, and while Google search still far outstrips ChatGPT as a tool for basic search results, more and more people are turning to ChatGPT when they want deeper, richer, and theoretically more useful answers.
That trend continues in the workplace, where, seemingly overnight, tools like Gemini and Copilot are embedded in productivity apps like Gmail and Microsoft Word.
They're capable of generating:
- Summaries
- Reports
- Presentations
- Research
- Code
- Graphics
And it's clear from this report that there has been a quick and broad embrace of these tools for these and many other office tasks. In fact, workers might be squeezing a little too tight.
In the study, 40% of respondents reported receiving workslop, and they're none too happy about it. They report being confused and even offended.
Even worse, workslop is changing how they view coworkers.
The problem with workslop is that while it appears to be complete, high-quality work, it often isn't. AI can still produce errors and hallucinations. OpenAI's GPT-5 model takes aim at the hallucination issue, making ChatGPT less likely to fill in the blanks with guesswork when it doesn't know something, but it and other AIs are still far from perfect.
The work is also often weirdly cookie-cutter; these are still programs (highly complex ones) that lean on a handful of go-to terms like "delve," "pivotal," "realm," and "underscore."
It's not clear whether the workers using AI to build reports and projects realize this, but their coworkers and managers appear to, and let's just say those workers' next performance evaluations may not praise them for "originality."
A bad look
According to the report, peers perceive coworkers who deliver AI-generated work product as less capable, reliable, and trustworthy. They also see them as less creative and intelligent.
Now, that seems a bit unfair. After all, it does take some effort to create a prompt or series of prompts that will result in a finished project.
Still, the reaction to this workslop indicates that people are not necessarily curating the work. Instead of refining the AI's output through a series of prompts, they might be plugging in one prompt, glancing at the result, and deciding, "That's good enough."
The cycle of unhappiness continues when peers report this workslop to their managers. It's a bad look all around, especially if the workslop makes it out of the company and into a client's hands.
Our AI coworker
What's been lost in this rush to use generative AI as a workplace tool is that it was never intended to replace us or, more specifically, our brains. The best work comes from our creative spark and deep knowledge of context, two things AI decidedly lacks.
When I asked ChatGPT, "Do you think it's a good idea for me to ask you to do work for me and then for me to present it to my boss?" it did a decent job of putting the issue in perspective.
Mostly, ChatGPT discussed how it can help in research and outlining the first version of a project, being a time saver to cut down on repetitive tasks, and helping me generate fresh ideas.
It warned me, however, about
- Originality & Attribution
- Accuracy
- Ethics and Expectations
It was almost as if ChatGPT had already read the HBR study. Even it knows workslop is bad.
How do we avoid workslop?
HBR has some ideas, and I think the fix is pretty simple: remind everyone that AI is not the answer to every problem.
Ensure that everyone knows when it's best to use AI and understands what should happen to the AI output, i.e., editing, fact-checking, shaping, or rewriting.
Start viewing AI as a very smart assistant, not as another, smarter version of yourself.
Insist on more in-person meetings and direct collaboration. Re-embrace the beauty of a brainstorm.
Workslop, like AI slop before it, will surely get worse before it gets better, and there's a real chance we may soon no longer be able to tell original human work from AI-generated projects. I hope that day never comes. We can figure this out. Even ChatGPT knows the answer:
"Think of me as your co-writer or research assistant, not a ghostwriter. Take what I give you, refine it, make sure it’s in your voice, and add your personal expertise. That way, you’re delivering something polished but still authentically yours."
A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor in Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.
Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC.