AI Hype and the Search for Meaning
How do we build meaning in a world of algorithms?
Last week, a post went viral on X titled Something Big Is Happening. It’s one of those very long articles that X is desperately trying to make happen, but you can get a sense of what it’s about from the first few paragraphs:
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.
I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.
The post compares the current moment in AI development to the weeks before COVID really blew up - a time when smart people could see the world was about to change, even while most people were unaware. It might be the first X ‘article’ ever to escape containment and actually spread beyond X, and it has inspired yet another round of AI discourse.
There’s a pattern these debates normally follow. You have your AI boosters who think everything is going to change. There are dozens of rapturous essays floating around about Clawdbot/Moltbot/OpenClaw1 and how AI agents are going to revolutionize absolutely everything. If you don’t have an army of agents running tasks for you right now, what are you even doing? You’re not gonna make it. Closely intertwined with the utopian evangelists are the doomers who believe the same future is coming, but that this is a huge problem that leads to mass unemployment at best and literal Terminator scenarios at worst. There are some folks warning about the destructive risks coming from AI, and it’s not just random people - it includes Mrinank Sharma, the head of safety at Anthropic, who resigned last week in a very public open letter.
You also have AI skeptics who think the entire thing is fake, a house of cards about to collapse. It’s a time bomb, proclaims Ed Zitron. They don’t even have a business model, claims Ross Barkan. They are not ‘smart’ in any sense of the word, says Tyler Austin Harper.
I’m normally hesitant to jump into giving hot takes on the future of AI. It’s not my area of expertise, everyone involved is speculating, and I’m not sure that the thousandth version of AI Gods Are Inevitable, And That Is Great/Awful or AI is a Giant Scam, You Utter Fools is particularly useful to anyone. But what I am interested in, and what I like to think I’m good at, is examining the social dynamics behind discourses like this. And what I notice about every side of the AI debate is how deeply committed everyone involved is to posting their way through it.
In truth, Something Big Is Happening is not the first article of its kind. Dozens of articles like this have been written before - usually on less public sites, like the EA Forums - and imitators have already popped up in its aftermath.
But the trajectory of these kinds of posts has changed recently. They’ve become more frequent and more evangelical in nature, perhaps rationally so as AI keeps advancing. But in addition to the frequency and tone of the posts, this debate is far more public than it’s ever been. It’s not enough to believe in a particular theory of AI; you have to post about it publicly. You have to write an article. You have to write a long letter explaining why you joined a company, or why you left.
There are a lot of reasons why people might be posting more frequently. The original author of the post is the CEO of a small AI company, and may just be a grifter - he’s got a history of hyping slop, and has been accused of fraud and of cheating benchmarks before. There have been increasingly fervent rounds of discourse about OpenClaw, MoltBook, and this particular hype article, and it’s likely some of that is opportunistic bandwagoners trying to make a buck. Less cynically, it’s clearly the case that many of the people working inside AI have very strong feelings about it. Mrinank Sharma is an AI safety researcher; it’s natural and predictable that if he thinks AI is heading in a dangerous direction, he’d want to express himself.
But Mrinank isn’t alone. OpenAI Is Making the Mistakes Facebook Made. I Quit, wrote OpenAI researcher Zoë Hitzig in the New York Times. A former DeepMind employee is speaking out in Time Magazine. xAI cofounder Jimmy Ba left the company last week with an announcement on X, as did Tony Wu and almost a dozen other prominent xAI figures. Every single one had a public post explaining why they left, how they felt, and their general feelings about the coming AI revolution. And all of these examples are from the last ten days! That’s a genuinely crazy amount of public posting in a short time, but if you go back further, you can find many, many more instances.
You can’t just believe things about AI quietly. You have to announce what you believe. You need to vague-post about how ‘Something is Happening’. You need an article, a manifesto, a capital-T Theory of the Case. You need a Substack, or god forbid, a podcast. It’s easy to overlook this, or say that things have always been this way. But posting is the most powerful force in the universe, and we forget that at our peril.
What’s happening here, underneath the surface, is that we’ve lost the ability to contextualize meaning independent of social media.
A few years ago I wrote a post about how technology profanes the sacred, where I said:
People crave sacredness and ritual… They find meaning in these social connections and cultural practices. It’s a way to link yourself to something important - whether it be a wider community, a holy text or a God, other people that you love, a beautiful and breathtaking physical space, or a cultural tradition that goes back generations. And social media by its nature degrades that kind of practice.
After all, imagine a sacred ritual you’re undertaking with other people. It could be a wedding, a religious service, etc. What’s the most embarrassing, thoughtless thing you could do during the middle of it? Have your smartphone loudly ring and buzz, repeatedly. Or use your phone in the middle of the ritual, paying little attention to those around you.
We instinctively recognize, even if we don’t always put it into words, that some places are not meant to be profaned with technology. I cannot imagine how mortified I would be if, while sitting in the shrine of St. Edward the Confessor in Westminster Abbey with other worshippers, my phone rang. Typically, we’re able to shame people into behaving correctly in instances like that. But for the private rituals that can give us meaning - a morning walk in a forest, quiet coffee with a loved one, a nighttime prayer - it’s all too easy for technology to intrude.
For as long as humans have existed, we’ve created meaning out of ritual. Sometimes these rituals were religious in nature. Sometimes they were built around community or family. Sometimes they were long-standing cultural practices. But they existed to ground our lives with purpose and significance.
What we have today is an algorithmic system where people increasingly try to construct meaning out of social media. Why does every view on AI need to be publicly declaimed in a viral post, an op-ed, or a public statement? Because we no longer know how to find meaning if we don’t post about what we believe. If we don’t have quantifiable metrics from social media, or if we can’t literally count how viral we are, how do we know if our worldview is valid or not? If your letter about why you left your company doesn’t get at least 5,000 likes, is it even worth leaving?
There’s a thin line between actually believing in something and giving the public performance of believing it. And while only the lord above can judge any specific individual, I feel pretty confident that at least some of the people above are performing their opinions for the approval of the crowd.
Technology has become a filter for meaning, and nothing seems to have meaning if it’s not algorithmically approved of. And technology is terrible at this. Our current systems of social media are built to maximize engagement, to feed you a constant drip-drip-drip of content that doesn’t enlighten and doesn’t really even entertain as much as it distracts. There is no way to construct purpose out of an infinite scroll that aims to continually micro-dose you on dopamine hits so that you don’t close the app and harm KPIs on a dashboard somewhere. This is not where meaning comes from, but it’s where we’re trying and failing to find it. And the people working at frontier AI labs are just as vulnerable to that dynamic as the rest of us.
I said near the beginning of this post that I was hesitant to give hot takes about the future of AI, but against my better judgment I’m going to try. Let’s go back to the original post we were talking about - the one that compares today to February 2020. And let’s do the author the courtesy of taking that analogy seriously.
We all know what happened in 2020. COVID was a minor story until, in a very short amount of time, it completely changed the world. That’s the thing with exponential curves - they look like a flat line right up until the moment they go vertical. A number that doubles every few days seems negligible for weeks, and then suddenly it’s everywhere. COVID altered so many things about human society on a fundamental level. It changed how we worked, how we ate, how we connected to people. It constrained us in physical spaces. It caused booms in some industries while nearly destroying others. It shook governments, businesses, families and every level of society… for a while.
Then we adapted. We figured out workarounds, we developed vaccines, and we got on with life. A few years later, society is plodding along just fine. That’s how you should think about AI.
AI is going to change a lot of things. It might revolutionize a lot of different parts of society, harming some people and boosting others. It might radically reshape some of our institutions. But you know what will happen? Humans will adjust those institutions, we’ll tear some things down and build newer, different things, and we’ll keep on going. Just like we did with COVID.
Here’s what I don’t worry about. I don’t worry that AI is going to kill billions of people. I don’t worry that it’s going to lead to mass unemployment or a permanent underclass. I also don’t think it will lead to a near-term utopia, or that we’re going to experience the Singularity.2 Will things change? Sure. And just like COVID, that transition will cause disruption. It won’t be pain-free, and maybe we’ll have some scars from the rapid change, the stretch marks of birthing a new world. But it won’t be an end; it’ll be a transition. It won’t change everything immediately; it will take many years. Society will, for the most part, be fine.
Here’s what I do worry about. We live in a world where we’ve increasingly replaced backyard barbecues with scrolling TikTok in an isolated bedroom. More and more people, despite immense material wealth, are lonely and frustrated and bored, digitally bowling alone through lives they’re not sure have any deeper meaning. I’d like to reverse that trend, but I worry that AI will accelerate it.
It’s bad enough that we now use likes and follows to construct meaning. But what does meaning look like in a world dominated by AI? What happens to human connections when you can have an AI companion rather than real friends? What happens when posting is even more effortless and frictionless, when the velocity of AI-generated content is overwhelming? What happens to intellectual pursuits when you can simply have an AI think for you? We started this essay discussing the article Something Big Is Happening, which positions itself as a form of warning. And I think it is a warning, but not in the way the author intended. The article itself is pretty clearly written by AI, leaving me to wonder - is the author actually capable of making the argument without the assistance of AI? Or is he merely picking up an idea he’s heard secondhand and getting an AI to create a worldview for him? Has he turned himself into the stochastic parrot, dully repeating his chatbot’s output, unable to think independent thoughts or analyze the world without an AI to guide him?
The same things matter to people that have always mattered. Human relationships. Community. Family. The pain and joy of struggling for a worthy cause. The determination to do hard things, to create beautiful art, to contribute to something larger than oneself. These are still what bring meaning to life, and I worry that they’re becoming harder to reach. If you’re worried about anything with AI, worry about this - that if we’re not careful, technology will disrupt the things that make us human in the first place.
I’ve never seen a product iterate through so many names so quickly.
If you want a specific sense of what I do/don’t think is likely, I think Freddie deBoer’s bet is a good place to start.

