Parmy Olson: The AI panic ignores something important -- the evidence
Last week, a post written by tech entrepreneur and investor Matt Shumer went viral on social media. Titled "Something Big Is Happening," it was a rundown of all the ways artificial intelligence would, in short order, decimate professional jobs. Tools like Claude Code and Claude Cowork from Anthropic PBC would displace the work of lawyers and wealth managers, he wrote. To get ready, we all needed to practice using AI for an hour a day to upskill ourselves and keep ahead of the tsunami.
The post ripped through the Internet and has been seen more than 80 million times on X. In the words of the young and very online, people are shook. Shumer’s post has struck a nerve in the middle of huge selloffs in shares of finance and software companies whose products seem ripe for replacement.
That market meltdown is one reason the public may be particularly vulnerable to dramatic storytelling about AI right now. Another is that many are tinkering with the latest tools, spinning up a website in hours with Claude Code or using its newer cousin Cowork to answer LinkedIn messages. Collective awe at the agents’ remarkable capabilities has triggered another ChatGPT moment — and soul-searching about “what it all means” for our livelihoods.
But the viral reaction to Shumer’s post also helps explain the market turmoil: AI is trading on vibes and anecdotes.
Of the 4,783 words in "Something Big Is Happening," none point to quantifiable data or concrete evidence suggesting AI tools will put millions of white-collar professionals out of work any time soon. It is more testimony than evidence, with anecdotes about Shumer leaving his laptop and coming back to find finished code or a friend's law firm replacing junior lawyers.
Some critics say the author has made exaggerated claims about tech in the past, but that is beside the point. A single compelling story about AI has created ripples of worry just when the market has become so narrative-driven that it’s giving investors whiplash. One minute AI is overhyped and the next we’re on the verge of the singularity.
Remember in mid-November 2025 when the Dow fell nearly 500 points? Or the following month, when shares in Oracle Corp. and CoreWeave Inc. dropped? In both cases the market was rattled by concerns that an AI bubble was on the verge of bursting.
Then earlier this month shares took a beating again, this time after Anthropic released 11 plugins for Claude Cowork, including one that carried out legal tasks. Now investors were worried that AI threatened the equities in which they’d long parked themselves.
And yet through all these narrative swings, the underlying data hasn’t changed that much. National productivity statistics are up slightly, but generally within their historic range. The Yale Budget Lab has found no discernible disruption to the broader labor market since ChatGPT's launch. And a randomized controlled trial conducted by the research group Model Evaluation and Threat Research (METR), from which Shumer himself cherry-picks, found last year that experienced software developers took 19% longer to complete tasks when they used AI tools.
It’s worth retaining a healthy dose of skepticism about the speed of this transformation, and remembering that those who spread the most viral claims about it will likely benefit the most. Anthropic Chief Executive Officer Dario Amodei grabbed headlines when he predicted AI would wipe out half of all entry-level white-collar jobs in the next one to five years, while Microsoft’s AI head Mustafa Suleyman took things further last week, saying that “most if not all” professional tasks would be automated within 18 months.
Questionable decisions abound for those who only listen to the rhetoric. A Harvard Business Review survey of more than 1,000 executives found that many had made layoffs in anticipation of what AI would be able to do. Only 2% said they’d cut jobs because of actual AI implementation. Swedish fintech firm Klarna Group Plc had to rehire humans last year after its move to replace 700 customer service staff with AI led to a decline in quality.
We’ve seen this pattern before. When stories got ahead of reality in the late 1990s, we got the dot-com crash of the early 2000s. The Internet turned out to be as transformative as people claimed, but it took longer than expected to play out.
A slow and deliberate approach to the nuanced impact of AI is needed today, as well as some humility over the fact that none of us — not even the AI labs — have any idea what is around the corner. OpenAI’s leaders didn’t expect ChatGPT to spark a market boom and Anthropic was shocked at the impact of its latest products, staff there tell me.
Two things can be true at the same time: AI’s impact can be both overhyped and real. But striking that balance means prioritizing evidence over testimony, and tracking things like productivity statistics, hiring rates and rigorous studies such as those carried out by Berkeley-based METR.
Artificial intelligence is a genuinely useful technology, but its impact will be uneven, gradual and impossible to predict. That’s the boring truth, however unlikely it is to go viral.
____
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”
©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.