
An AI Company CEO Says Your AI Skepticism Is Actually Grief

Louis Rosenberg wants you to know that calling AI output "slop" means you're in denial about machines taking over

Oliver Senti, Senior AI Editor
January 2, 2026 · 5 min read
[Image: Person working late at a computer with multiple AI interfaces open, looking skeptical]

Louis Rosenberg, the CEO of an AI company called Unanimous AI, published a piece in Big Think last month arguing that growing skepticism about artificial intelligence isn't rational criticism. It's the first stage of grief. Society, he claims, is collectively entering denial about losing "cognitive supremacy" to machines.

The timing is convenient.

The argument, roughly

Rosenberg's Big Think op-ed lands at a moment when "slop" has become Merriam-Webster's word of the year, when mentions of "AI slop" have increased ninefold compared to 2024, and when even Pinterest had to add features letting users hide AI-generated images from their feeds. The backlash is real. Rosenberg's response: you're all just scared.

His evidence for AI's continued advancement? GPT-5 and Gemini 2.5 Pro competed in the 2025 ICPC World Finals, the prestigious collegiate programming contest. GPT-5 achieved a perfect 12-for-12 score. Gemini solved 10 of 12 problems, including one no human team cracked. And Google just dropped Gemini 3 in November with, depending on who you ask, impressive benchmark results.

So. Rapid AI advancement continues. But skeptics keep calling everything "slop." Therefore, Rosenberg concludes, the skepticism must mean "society is collectively entering the first stage of grief."

There's a logic problem here.

The missing middle

What Rosenberg doesn't engage with: the possibility that AI can be improving while also producing enormous quantities of garbage that degrades people's experience of the internet. Both things can be true. AI-generated articles now make up more than half of all English-language content on the web, according to one SEO firm. Negative sentiment around "AI slop" hit 54% in October.

People aren't in denial about AI getting better at programming contests. They're annoyed about cat soap operas flooding YouTube, fake recipes with AI-generated photos, and Amazon product descriptions that trail off with phrases like "Unfortunately I do not have enough information to summarize further."

The ICPC results are legitimately impressive. I'm not going to pretend otherwise. But the existence of capable AI systems doesn't invalidate frustration with how the technology is actually being deployed.

The conflict of interest thing

Rosenberg is introduced in the Big Think piece as "a computer scientist and CEO of Unanimous AI." His company sells AI products. This doesn't automatically make his arguments wrong, but it does explain why his framing treats all AI criticism as psychological dysfunction rather than, say, market feedback.

The Hacker News comments on his article were unimpressed. "They have conveniently omitted he's also CEO of 'UNANIMOUS AI,'" one user noted. Another pointed out the "defeatist/inevitabilism" of the piece: "We've heard it before with other things. Are we supposed to just accept climate impacts?"

The grief metaphor is doing a lot of work here, and not all of it is honest work.

What the "slop" people are actually saying

The AI slop critique isn't really about whether large language models can solve algorithmic problems under time pressure. It's about the flood of low-effort content that's making certain corners of the internet worse.

"The problem with AI slop is that it just cheapens what AI can do," analyst Anshel Sag told Gizmodo. "People want sharper images. People want easier photo editing. They don't want AI slop."

This is a distinction Rosenberg collapses. His argument treats "GPT-5 won a coding competition" and "Meta keeps pushing AI features nobody asked for" as part of the same phenomenon, which they're not.

The dotcom bubble comparison nobody's making

Here's what I find strange. "It can be simultaneously true that AI is a world changing technology, with drastic consequences for how we live our lives and it is a massive financial bubble, built on extremely overconfident bets," as one Hacker News commenter put it. "This exact scenario happened with the dotcom bubble."

The internet was obviously transformative. Pets.com still went bankrupt. These facts coexist.

The "AI denialism" framing asks you to accept that any skepticism about current valuations, deployment strategies, or content quality is really just emotional avoidance. That's a neat rhetorical trick if you're selling AI products.

The Gemini 3 thing

To give Rosenberg some credit: AI does continue to improve at a "stunning pace." Google released Gemini 3 on November 18, 2025, and the benchmarks are strong. The model apparently scores well on reasoning tests, multimodal understanding, all the usual.

But benchmark performance and user experience are different metrics. People aren't calling things "slop" because they don't understand that AI systems can now solve graduate-level math problems. They're calling things slop because their Instagram feeds are full of AI-generated images they didn't ask for.

The real question

Is there something to the grief theory? Maybe. Cognitive supremacy is a weird thing to mourn, but humans do resist paradigm shifts. The idea that machines might genuinely outperform us at more and more cognitive tasks is uncomfortable.

But here's the thing: grief and legitimate criticism aren't mutually exclusive. You can be psychologically resistant to a change AND have valid complaints about how that change is being implemented.

Rosenberg's op-ed only acknowledges the first possibility. Convenient, for an AI CEO.

What happens next

The backlash isn't going away. Some sites, including Pinterest and YouTube, have introduced features allowing users to limit AI-generated content. Merriam-Webster picked "slop" for a reason.

Meanwhile, the AI labs will keep publishing impressive benchmark results. Both trajectories continue. The question is whether tech companies can find deployments that people actually want, or whether they'll keep insisting that user frustration is really just a psychological phase we need to work through.

OpenAI said after ICPC they probably won't participate in more competitions. "The next frontier is more exciting." Translation: we're done proving points. Now comes the hard part.

Tags: artificial intelligence, AI skepticism, tech criticism, GPT-5, AI bubble
Oliver Senti
Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.


