When Charles Dickens opened A Tale of Two Cities with “It was the best of times, it was the worst of times,” he might as well have been talking about today’s world of work. The optimism of AI-driven productivity sits uneasily beside the dread of job displacement. Economists, futurists, and armchair philosophers on social media swing between utopian and dystopian visions: AI will either free us from drudgery or plunge us into mass unemployment.
But the truth, as always, is less dramatic and more interesting. And sometimes, it comes not from speculation, but from looking at what people are actually doing.
A recent study from Anthropic, Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations, did something refreshingly concrete. Instead of asking what AI might do to work, it analyzed over four million real conversations between humans and Claude, Anthropic’s AI assistant. It mapped those conversations onto the U.S. Department of Labor’s O*NET database—the most comprehensive catalog of occupational tasks we have.
Link: https://arxiv.org/pdf/2503.04761
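To make the mapping step concrete, here is a deliberately simplified sketch of what "matching a conversation to an O*NET task" could look like. The task statements and keyword sets below are invented for illustration, and the real study used a model-based classifier rather than keyword overlap.

```python
import re

# Hypothetical stand-in for the study's classification step: a few
# O*NET-style task statements paired with invented keyword sets.
# (The actual paper used a model-based classifier, not keyword matching.)
TASKS = {
    "Develop instructional lesson plans": {"lesson", "plan", "curriculum", "teach"},
    "Debug and modify computer programs": {"debug", "code", "error", "function"},
    "Write technical documentation": {"document", "write", "manual", "draft"},
}

def match_task(conversation: str) -> str:
    """Return the task statement whose keywords best overlap the text."""
    words = set(re.findall(r"[a-z]+", conversation.lower()))
    return max(TASKS, key=lambda task: len(words & TASKS[task]))

print(match_task("Help me debug this function, the code throws an error"))
# → Debug and modify computer programs
```

The point of the sketch is the shape of the pipeline, not the matcher: once each conversation lands on a task statement, counting conversations per task yields the usage map described below.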
The result is not a vision of the end of work. It’s a map of its quiet transformation.
Work as Bundles of Tasks: Lessons from Beavers and Bureaucrats
To understand this transformation, we need to rethink what we mean by “work.” Too often, we imagine jobs as monolithic entities: teacher, doctor, lawyer, engineer. But jobs are not indivisible. They are bundles of tasks—some cognitive, some physical, some social.
Take the role of a teacher. One task is developing lesson plans. Another is delivering lectures. A third is grading essays. A fourth is comforting a child who failed a test. AI might be quite good at the first two. It might be able to assist with the third. But the fourth remains distinctly human.
The Anthropic study confirms this reality: AI adoption spreads not across “jobs” but across tasks. Some bundles are more vulnerable than others. Some remain stubbornly human.
Nature offers us an apt metaphor. The beaver, one of evolution’s finest civil engineers, doesn’t “have a job” in the way we think of it. It performs a set of tasks—gnawing trees, dragging logs, stacking branches—that together create something larger: a dam that reshapes the ecosystem. If you removed one task, the whole structure would falter.
Humans too are bundles of tasks. And AI is beginning to unbundle them, twig by twig.
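The bundle view is easy to express as a small data model. The exposure labels below are invented for illustration, not drawn from the study's data:

```python
from dataclasses import dataclass

# Illustrative only: the exposure labels assigned to each task are
# assumptions made for this sketch, not the study's classifications.
@dataclass
class Task:
    name: str
    exposure: str  # "automatable", "augmentable", or "human"

@dataclass
class Job:
    title: str
    tasks: list

    def mix(self) -> dict:
        """Share of the job's task bundle in each exposure category."""
        counts = {}
        for t in self.tasks:
            counts[t.exposure] = counts.get(t.exposure, 0) + 1
        return {k: v / len(self.tasks) for k, v in counts.items()}

teacher = Job("Teacher", [
    Task("Develop lesson plans", "automatable"),
    Task("Deliver lectures", "augmentable"),
    Task("Grade essays", "augmentable"),
    Task("Comfort a struggling student", "human"),
])
print(teacher.mix())
# → {'automatable': 0.25, 'augmentable': 0.5, 'human': 0.25}
```

Seen this way, "AI's impact on teachers" is not one number but a mix, and the mix shifts as individual twigs are pulled from the bundle.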
Where AI is Already Making Itself at Home
Anthropic’s analysis revealed some striking patterns:
- Software development dominates. Coding, debugging, and software design are the most common uses. AI has become a digital pair programmer, helping with both quick fixes and complex builds.
- Writing is a close second. From technical documentation to marketing copy to educational materials, AI is rapidly becoming a co-author.
- Analytical tasks show strong uptake. Data science, research summaries, and problem-solving are frequent AI collaborations.
- Physical work resists automation. Construction workers, surgeons, and farmers barely appear in the dataset. AI is not yet pouring concrete, wielding scalpels, or herding cattle.
In other words: AI has burrowed into the cognitive middle of the economy—tasks where language, logic, and structured reasoning dominate. Where hands, regulatory barriers, and high-stakes complexity prevail, AI remains peripheral.
This reflects a historical truth about technology. The steam engine did not replace every kind of labor—it replaced some tasks and made others more valuable. The spreadsheet did not kill accounting—it redefined it. AI is now following that same arc.
Augmentation vs. Automation: A Tale of Two Futures
One of the most intriguing findings is the split between automation and augmentation. The study found that 43% of interactions followed automation patterns, in which the user delegated the task almost entirely to the AI, while 57% followed augmentation patterns, in which human and AI collaborated through iteration, feedback, and learning.
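A crude way to picture the split: treat a single-turn delegation as automation and multi-turn back-and-forth as augmentation. This heuristic and the sample conversations are invented for illustration; the study used a richer taxonomy of collaboration patterns.

```python
from collections import Counter

# Hypothetical heuristic: one user turn = automation (full delegation),
# multiple user turns = augmentation (iteration and feedback).
# The conversations below are invented examples, not study data.
def classify(user_turns: list) -> str:
    return "augmentation" if len(user_turns) > 1 else "automation"

conversations = [
    ["Write my essay on the causes of the French Revolution."],     # delegation
    ["Here's my draft.", "Sharpen the argument.", "Add evidence."], # iteration
    ["Summarize this report."],                                     # delegation
]
shares = Counter(classify(c) for c in conversations)
print(shares)  # Counter({'automation': 2, 'augmentation': 1})
```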
This is more than a technical distinction—it’s a philosophical one.
Automation is when AI replaces human action. A student asks: “Write my essay on the causes of the French Revolution.” Claude obliges. The task is done.
Augmentation is when AI enhances human action. A student says: “Here’s my draft essay. Can you sharpen the argument, add evidence, and suggest counterpoints?” The result is co-created.
Automation risks eroding skills and hollowing out meaning. Augmentation strengthens skills and expands capacity. It transforms AI into a partner, not a substitute.
This echoes the thinking of economist David Autor, who argued that automation rarely erases whole professions but reshapes them by automating some tasks while augmenting others. What Anthropic has shown is that this process is already happening—not in theory, but in practice.
And here lies the crossroads of our era: Do we build a future of automation, where humans step aside? Or do we build a future of augmentation, where humans step up?
The Goldilocks Zone of AI Adoption
Another revelation: AI use is not evenly distributed across wages. It peaks in mid-to-high wage occupations requiring a bachelor’s degree or equivalent preparation.
At the low end of the wage spectrum—waiters, janitors, agricultural laborers—AI is scarce. These jobs involve hands-on, physical work that AI, for now, cannot do.
At the very high end—surgeons, judges, senior executives—AI is also scarce. Here the barriers are different: complexity, regulation, ethics, and the high stakes of error.
But in the middle—the engineers, analysts, educators, managers—AI thrives. The work is structured, cognitive, and often repetitive enough for AI to add value without existential risk.
In other words, AI’s Goldilocks zone is not too low, not too high, but just right: the knowledge economy’s middle class.
This has profound implications for inequality. If mid-tier professionals accelerate with AI while others lag behind, the gap between task-augmentable and task-resistant work may grow. History tells us such gaps reshape societies. The Industrial Revolution created winners and losers not because machines replaced everyone, but because they elevated some tasks while leaving others untouched.
Tasks, Not Titles: A New Mental Model
Perhaps the most important takeaway is conceptual: the future of work will not be defined by job titles, but by tasks.
When we say “AI will replace teachers,” we misunderstand the problem. What the study shows is that AI may replace some tasks of teaching (lesson planning, generating quizzes), augment others (explaining concepts, providing personalized tutoring), and leave others untouched (mentorship, emotional support, conflict resolution).
The same applies to lawyers, doctors, managers, and writers. Jobs will not vanish overnight. They will evolve, as their internal mix of tasks shifts.
This is why Worker1—the vision of a compassionate, empathetic, high-performing professional—is so critical. In a world where tasks are unbundled, what remains most valuable are the qualities machines cannot replicate: empathy, adaptability, creativity, community-building.
The workers who thrive will not be those who cling to a fixed job description but those who continuously rebundle tasks, integrating AI into their flow while doubling down on their humanity.
The Worker1 Imperative
So what do we do with these insights?
First, we must resist the temptation to frame AI as a story of replacement. History shows us that automation is never so clean. AI is not replacing “jobs.” It is reconfiguring them. That reconfiguration can be empowering if managed with foresight.
Second, we must design for augmentation, not automation. Policies, tools, and cultures should encourage humans to remain in the loop—to learn, adapt, and grow with AI, not vanish behind it.
Third, we must measure at the task level, not the job level. Averages conceal the truth. The future will not unfold as “lawyers replaced, teachers augmented, doctors untouched.” It will unfold as “some tasks automated, others enhanced, many unchanged.”
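To see why averages conceal the truth, consider a toy example with invented AI-usage shares for one job's tasks:

```python
from statistics import mean

# Illustrative numbers (invented for this sketch): share of each task
# currently performed with AI assistance within a single occupation.
lawyer_tasks = {
    "Draft contracts": 0.60,
    "Summarize case law": 0.70,
    "Appear in court": 0.00,
    "Counsel clients": 0.05,
}

job_level = mean(lawyer_tasks.values())  # one number: 0.3375
print(f"Job-level average: {job_level:.2f}")  # looks like modest exposure
for task, share in sorted(lawyer_tasks.items(), key=lambda kv: -kv[1]):
    print(f"  {task}: {share:.0%}")           # reveals a split reality
```

The single job-level figure suggests moderate exposure across the board; the task-level view shows two tasks transformed and two barely touched, which is the pattern the study actually observed.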
Finally, we must prepare communities, not just individuals. Strong workers create strong communities, and strong communities nurture resilient workers. If AI accelerates productivity but widens inequality, we will have failed. If it strengthens both Worker1 and the ecosystem around them, we will have succeeded.
Conclusion: Building the Dam Ahead
The beaver does not stop building dams because storms may come. It keeps at its tasks, twig by twig, shaping the flow of rivers with patience and purpose.
So too with us. AI is not a flood washing away the world of work. It is a tool we must decide how to use. We can automate meaning away, or we can augment human potential. We can hollow out communities, or we can build stronger ones.
Four million conversations with AI show us that the choice is still open. Humans are experimenting—sometimes delegating, sometimes collaborating, sometimes learning. The patterns are not yet fixed.
The river is shifting. The dam we build will determine whether we channel that flow into stronger ecosystems—or watch it erode the banks of our humanity.
The challenge, and the opportunity, is to build like the beaver: task by task, with care, with vision, and always with the community in mind.