In the world of bees, the waggle dance is an elegant system of communication. A worker bee finds nectar and returns to the hive to dance—literally—drawing figure-eights in the dark, humid air of the colony. The angle and duration of her dance encode direction and distance. But here’s the thing: she doesn’t send tokens. She doesn’t generate statistically likely bee-speak. She connects. She invests in the colony’s shared understanding.
Contrast this with today’s large language models (LLMs), the glistening crown jewels of Artificial Intelligence. They consume the written world, trillions of tokens of it, and spit out completions that often feel eerily correct. But that correctness is a performance. The waggle dance was survival. We must remember the difference.
We’re standing at the cliff’s edge of Artificial General Intelligence (AGI): a landscape littered with optimism, VC money, and tokenized dreams. And yet, the closer we inch toward that shimmering horizon, the more the terrain feels… flat.
Why? Because we’ve trained our machines to think fast, but not to think slow. We’ve optimized for completion, but not contemplation. And in doing so, we’ve overlooked a fundamental truth: not all journeys to insight are linear—and many of the most meaningful ones never were.
The Tyranny of Tokens
At the heart of modern LLMs is a beautifully simple idea: break down language into pieces (tokens), train a model to guess what comes next, and repeat ad infinitum. This is like learning to understand Shakespeare by finishing his sentences with autocomplete.
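To see the mechanism in its barest form, here is a toy version of “guess what comes next”: a first-order Markov model that tallies which word follows which, then autocompletes greedily. This is a deliberate simplification, not how any production LLM is built; real models learn these statistics with a neural network over subword tokens, but the training objective, predict the next piece, is the same.

```python
# A toy "guess what comes next" model: a first-order Markov chain that
# tallies which word follows which, then autocompletes greedily.
# (Deliberately simplified; a real LLM learns these statistics with a
# neural network over subword tokens, but the objective is identical.)
from collections import Counter, defaultdict

corpus = "to be or not to be that is the question".split()

# "Training": count the successors of each word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(prompt: str, steps: int = 5) -> str:
    """Repeatedly append the most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        options = successors.get(words[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("to"))  # -> "to be or not to be"
```

At vastly greater scale and subtlety, this is still the game: completion, not contemplation.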
To be fair, it works, spectacularly well, for certain things. Drafting emails. Writing code. Summarizing articles. It’s System 1 on steroids: the fast, intuitive thinking Kahneman wrote about. But AGI is not a parlor trick. It is, by definition, general. And general intelligence means navigating ambiguity, inventing new tools of thought, and, most importantly, connecting context across many dimensions at once, not merely chaining tokens along a line.
We can teach a model to finish Hamlet’s soliloquy. But we still struggle to teach it why Hamlet paused.
The Non-Linearity of Thought
Let’s talk about how humans think.
Imagine you’re walking through a forest. Not a park with signs, but a true, tangled wood. One moment you’re following a trail of mushrooms. The next, you hear a stream and veer off. You backtrack. You sit. You wonder why you came in the first place. Eventually, you emerge—not at the planned exit, but somewhere better. Insight, as it turns out, was not on the map.
This is how real discovery often happens: non-linear, relational, recursive. We think in loops, not lines. We rely on memory, emotion, and social feedback loops. Our thoughts are not predictive tokens—they are living dialogues between past experience, present awareness, and future aspiration.
LLMs, by design, miss this. Their architecture (transformers, attention heads, positional encodings) forces a form of thought that is straitjacketed into sequence: each token is predicted strictly from the ones that came before, never from a loop back or a leap ahead. Clever? Undoubtedly. Creative? Occasionally. Conscious? Not even close.
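A minimal sketch makes that straitjacket visible. In a decoder-style transformer, the causal mask is what enforces the one-way street: every position may attend only to itself and earlier positions, never ahead. The NumPy below is illustrative only, with no learned weights, a single head, and random toy embeddings; real models add learned projections, many heads, and positional encodings.

```python
# Why transformer "thought" runs on a one-way street: the causal mask.
# Each position may attend only to itself and earlier positions; the
# future is blanked out before the softmax.
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention with a causal mask (no learned weights)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # pairwise similarity of positions
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores[future] = -np.inf                       # forbid looking ahead
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the past only
    return weights @ x                             # each token: a blend of what came before

tokens = np.random.randn(5, 8)  # 5 tokens, 8-dimensional embeddings
print(causal_self_attention(tokens).shape)  # -> (5, 8)
```

That triangle of minus-infinities is the sequence constraint in its purest form: no backtracking, no sitting under a tree, no veering off toward the stream.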
The Illusion of Intelligence
There’s a certain theatrical genius to modern AI. It mimics expertise so well that we often forget it doesn’t understand. It composes an email like your boss, explains a concept like your teacher, and jokes like your favorite late-night host. But this is ventriloquism, not voice.
The truth is, we’ve reached the uncanny valley of cognition. The models are fast enough to dazzle, but brittle enough to break in moments that require slow thought—moral reasoning, deep empathy, conceptual synthesis. And as we scale models with more parameters, we find we’re scaling the performance, not the presence.
People-to-People: The Last Frontier
Here’s the twist: while AI is sprinting ahead in speed, it’s falling behind in something deeply human—relationship.
If you look at history’s greatest insights, they rarely emerged from isolated geniuses. They came from communities. The Enlightenment didn’t happen in one mind; it brewed in salons, in letters, in arguments over wine. Einstein’s breakthroughs weren’t solitary eureka moments; they were nurtured in correspondence with friends and mentors.
Even in the workplace, the most transformative ideas come not from PowerPoints, but from corridor conversations. From the long lunches. From the patient space where doubt can live and curiosity can stretch.
And that’s the thing: AI, as we build it, doesn’t know how to invest in those spaces. It doesn’t do “corridor conversations.” It does bullet points. It completes. But it doesn’t connect.
Thinking Fast is Cheap. Thinking Slow is Sacred.
The current model of AGI feels like building a cathedral with a nail gun. Impressive speed, but no soul.
To truly advance AGI, we must confront the cost of slowness—and pay it. Invest in architectures that reflect the human mind’s love for detours. Build systems that not only mimic human conversation but engage in human communion. Support tools that make people-to-people thinking not obsolete, but essential.
Because in the end, we’re not just building machines that think. We’re building the ecosystem in which we all learn, work, and grow. And if we get that wrong, it won’t matter how fast our models think—they’ll still be thinking alone.
The Real Intelligence Is Collective
The future won’t be won by machines that out-think us, but by communities that out-connect them. By groups of Worker1s—compassionate, high-performing humans—who elevate not only themselves, but everyone around them.
The edge of AGI isn’t technical. It’s relational. It’s not about getting machines to guess the next word—it’s about getting people to build the next world.
And for that, we’ll need more than fast models.
We’ll need each other.