Could You Over-Rely on AI?
AI slop is everywhere on social media. But the version quietly spreading through your inbox — polished on the surface, hollow underneath — might be costing your company millions.
There's a good chance that most of the emails sitting in your inbox right now have some element of AI in them. A subject line refined by a copilot, a paragraph smoothed out by ChatGPT, a summary generated by an assistant. And honestly, that in itself isn't much of a problem. If AI helps someone communicate more clearly or saves them twenty minutes on a draft, that's fine.
But there's a subset of those emails — and the reports, the analyses, the deliverables attached to them — that weren't just assisted by AI. They were produced almost entirely by it. Copy-pasted with minimal review, sent off with a confidence that the tool didn't earn. And that's where things start to break down.
We've all heard about AI slop on social media: the bizarre AI-generated images, the soulless listicles, the uncanny content clogging up your feeds. There's plenty of discourse around that. But one area that doesn't get nearly as much attention is the version of this problem showing up at work. Researchers at Stanford and BetterUp have given it a name: workslop.
Workslop — work that looks polished on the surface but lacks accuracy, originality, or critical thought — creates a specific kind of problem. The burden doesn't fall on the person who produced it. It falls on the person who receives it.
The little disclaimer at the bottom of every chatbot ("AI can make mistakes") is not just legal cover: these tools really are prone to hallucinations and errors. And here's what makes it tricky. A colleague whose work you've reviewed for months develops a recognisable pattern of mistakes; you learn their blind spots and know where to double-check. AI errors, by contrast, are unpredictable. They can show up anywhere: a wrong number buried in a table, a confident-sounding claim with no basis, a subtle logical leap that doesn't hold up. You don't know where to look because the failure mode changes every time.
This means managers and reviewers on the receiving end are spending significantly more time than before trying to figure out what's actually reliable in the work that lands on their desk. The productivity that AI was supposed to create for the person producing the work gets quietly consumed — and then some — by the person responsible for checking it.
Dive a little deeper and the root challenge becomes clearer. Much of Gen Z is entering the workforce with AI already at their side; they've had access to these tools since university, or earlier. Their initial output looks stronger for it, but the same shortcut can quietly undermine their ability to develop real expertise.
This is similar to what happened during the Industrial Revolution. As humans shifted from producing goods by hand to operating the machines that produced them, it remained essential for the human in the loop to understand the underlying process. Not because they needed to do it by hand every time, but because understanding the process was the only way to ensure the product was being made correctly. You can't quality-check what you don't understand.
It's also the same logic behind how we teach children mathematics. You'd absolutely encourage a student to use a calculator to speed up their work — but not before they've learned to calculate without one. The tool is most powerful in the hands of someone who already understands what it's doing. Without that foundation, the tool doesn't augment expertise. It replaces the chance to build it.
The mechanics of skill-building back this up. Competence has traditionally developed through repetition: writing dozens of briefs, researching topics from scratch, fixing your own mistakes, absorbing corrections. Those cycles build intuition. When AI handles the production, junior employees end up interacting with outputs rather than problems. The experience compresses, but so does the depth of learning.
The financial impact of all this is not trivial. Recent research from Stanford and BetterUp estimates that for an organisation of around 10,000 employees, workslop costs upwards of $9 million a year in lost productivity — driven largely by the nearly two hours of rework each instance demands from the person on the receiving end. That's not a rounding error. And it runs directly counter to the narrative that AI adoption automatically raises productivity.
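It's worth seeing how quickly those numbers compound. The sketch below is a back-of-envelope reconstruction, not the study's methodology: the headcount, the roughly two hours of rework per instance, and the ~$9 million total come from the research above, while the loaded hourly cost and the frequency of workslop instances are illustrative assumptions chosen to show how easily the figure is reached.

```python
# Back-of-envelope sketch of the workslop cost figure cited above.
# The 10,000 headcount and ~2 hours of rework per instance come from the
# Stanford/BetterUp research; the hourly cost and instance frequency below
# are illustrative assumptions, not figures from the study.

employees = 10_000
rework_hours_per_instance = 2.0       # "nearly two hours" of rework per instance
instances_per_employee_per_year = 10  # assumed: roughly one every five weeks
loaded_hourly_cost_usd = 45.0         # assumed fully loaded cost of an hour

annual_cost = (
    employees
    * instances_per_employee_per_year
    * rework_hours_per_instance
    * loaded_hourly_cost_usd
)

print(f"Estimated annual cost: ${annual_cost:,.0f}")
# -> Estimated annual cost: $9,000,000
```

Even halving any one of those assumptions still leaves a multi-million-dollar line item. Small per-instance costs scale brutally across an organisation, and they cut directly against the assumption that AI adoption automatically pays for itself.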
In fact, a study from the National Bureau of Economic Research surveying over 6,000 executives across the US, UK, Germany, and Australia found that the vast majority see little measurable impact from AI on their operations so far. The pattern is starting to echo what economists call Solow's paradox — the same thing that happened with computers in the 1980s, where the technology was clearly transformative but the productivity gains simply didn't show up in the numbers for years.
So what can firms do? Start by addressing two of the most common drivers of the problem.
The first is workload. Studies consistently show that overburdened employees are the ones most likely to over-rely on AI. And here's the irony: many firms, upon seeing the productivity potential of AI, respond by assuming employees can now take on more work. This pushes people to lean on AI even harder, often without adequate review, which generates more workslop — the exact opposite of the intended effect. UC Berkeley researchers studying a 200-person tech firm found precisely this pattern: AI increased both the volume and variety of work employees took on, but the resulting cognitive overload led to more multitasking, weaker decision-making, and ultimately lower quality output.
The second is how AI adoption is mandated from the top. The most common — and least effective — approach is a broad directive from leadership to "just integrate AI into everything." Stanford's Jeff Hancock describes these as indiscriminate AI mandates, and the research suggests they're a primary driver of workslop. When leadership encourages AI use without frameworks for how to use it, employees default to the path of least resistance: paste the prompt, copy the output, send it off.
The alternative isn't to discourage AI. It's to be structured about it. Effective mandates provide clear frameworks that balance three things: productivity improvement, quality standards, and employee development. They define where AI adds genuine value and where human judgment remains essential. They create accountability for AI-assisted outputs rather than treating them as automatically trustworthy. And critically, they ensure that, especially for newer employees, AI augments the learning process rather than bypassing it entirely.
The goal was never to do more work faster. It was to do better work. And until firms reckon with the difference, we'll keep drowning in slop — just the kind that comes with a company email signature.
Written by Sameer
samspoke.com · Singapore