The 2025 Human Development Report from the UNDP, titled “A Matter of Choice: People and Possibilities in the Age of AI,” makes an urgent and timely appeal: that the rise of artificial intelligence must not leave people behind. Its human-centric framing is refreshing, reminding us that AI should be designed for people, not just profits. But when viewed from the ground level—the side of the worker—the picture is more complicated.

The report is a valuable compass. Yet compasses don’t steer the ship. And the ship, right now, is drifting.

✅ Five Things the UNDP Got Right

1. Human Agency as the Anchor

What They Said: The report reframes AI not as an autonomous disruptor but as a tool shaped by human choices.

Why It Matters: Too often, AI is treated like weather—inevitable, untouchable. By restoring the idea that humans can and must choose how AI is designed, deployed, and distributed, the report pushes back against the disempowering fatalism of “tech will do what it does.”

Example: A teacher choosing to use ChatGPT to help students personalize writing feedback is very different from a school district replacing that teacher with a chatbot.

2. Focus on Augmentation Over Automation

What They Said: The report encourages complementarity—humans and AI working together, not in competition.

Why It Matters: This shifts the conversation from “Will AI take my job?” to “How can AI help me do my job better?”—a subtle but critical difference.

Example: In radiology, AI now helps specialists identify anomalies in X-rays faster, but the final judgment still comes from a human radiologist. That balance is productive and reassuring.

3. Nuanced Life-Stage Perspective

What They Said: It segments the impact of AI across life stages—children, adolescents, adults, and the elderly.

Why It Matters: Technology doesn’t affect everyone equally. Younger people might be more adaptable to AI, but also more mentally vulnerable due to hyperconnected environments. Older adults face exclusion from AI-integrated systems due to lower digital literacy.

Example: An older person struggling to navigate AI-driven banking systems faces frustration that isn’t technological—it’s design-based exclusion.

4. Highlighting the Global Digital Divide

What They Said: The report illustrates that AI is deepening disparities between high HDI (Human Development Index) countries and low HDI ones.

Why It Matters: While much of the AI narrative is Silicon Valley–centric, the report rightly stresses that many countries lack the infrastructure, talent pipelines, or data sovereignty to benefit.

Example: A rural teacher in Uganda can’t train students in AI because there’s no internet, let alone access to the tools or curriculum.

5. The Call for “Complementarity Economies”

What They Said: The report calls for economies that rewire incentives around collaboration, not replacement.

Why It Matters: Today’s market incentives reward automation, not augmentation. Encouraging innovation that boosts worker agency is vital for inclusive progress.

Example: A logistics company that builds AI tools to help warehouse workers optimize shelving gets different outcomes than one that simply replaces them with robots.

❌ Five Things the UNDP Missed or Underplayed

1. The Rise of Algorithmic Bosses

What They Missed: The report underestimates the extent to which AI isn’t just replacing work—it’s also managing it.

Why It Matters: Workers today are increasingly controlled by algorithmic systems that schedule their hours, evaluate performance, and even terminate contracts—with no human oversight or recourse.

Example: A gig driver in Jakarta is penalized by an app for taking a route slowed by a protest. No manager. No context. Just code.

2. The Reality of “So-So AI” Proliferation

What They Missed: The report mentions “so-so AI”—tech that replaces labor without increasing productivity—but doesn’t show how common it is becoming.

Why It Matters: These low-value automations are creeping into call centers, HR departments, and customer service, degrading job quality rather than enabling workers.

Example: Chatbots that frustrate customers and force human agents to clean up the mess—but now with tighter quotas and less control.

3. Weak Frameworks for Worker Rights in AI Systems

What They Missed: The report doesn’t offer concrete policy proposals for how workers can challenge unfair AI decisions.

Why It Matters: Without algorithmic transparency, workers can’t contest outcomes or understand how their data is being used against them.

Example: A loan applicant is denied due to an AI risk score they can’t see, based on features they can’t change. No appeal. No clarity.

4. Gender and Cultural Blind Spots in AI Design

What They Missed: The report touches on bias but doesn’t dig into how AI systems reflect the blind spots of the environments where they’re built.

Why It Matters: AI trained on Western datasets often misinterprets cultural nuances or fails to support non-Western use cases.

Example: Voice assistants that understand American English accents but fail with regional Indian or African dialects, excluding millions from full functionality.

5. No Ownership Model Shift or Platform Power Challenge

What They Missed: The report doesn’t challenge the concentration of AI ownership in a few private firms.

Why It Matters: Without decentralizing AI infrastructure—through open models, public data commons, or worker-owned platforms—most people will be mere users, not beneficiaries.

Example: A nation may rely entirely on foreign APIs for public services like healthcare or education, but cannot audit, improve, or adapt the models because the IP is locked away.

The Way Forward: From Language to Leverage

The report’s strength is its moral clarity. Its weakness is its strategic ambiguity. To make AI work for the worker, we need:

  • Algorithmic accountability laws that mandate explainability, appeal processes, and worker input.
  • Worker-centered tech procurement in public services—choosing tools that augment rather than control.
  • Skills programs focused on soft skills—ethics, communication, critical thinking—not just coding.
  • Global development frameworks that fund open, local, inclusive AI infrastructure.

Final Thought

The UNDP is right: AI is not destiny. But destiny favors the prepared. If we want a future of work where humans lead with dignity, not dependency, we need more than vision. We need strategy. Not just choice—but voice.