The Paradox of Powerful Tools
In the tech industry, we often fall into the trap of believing that if a tool is powerful enough, people will naturally use it. We are currently pouring billions into Artificial Intelligence, assuming that sheer computational “intelligence” will solve the future of work.
But from where I sit at the intersection of human behavior and technology, I see a different reality: AI doesn’t fail because the math is wrong; it fails because the human integration is broken. The “Future of Work” isn’t a technical spec. It’s a complex negotiation between human psychology and machine logic. If we want AI to succeed, we have to stop treating it as a software upgrade and start treating it as a new type of social collaborator.
Beyond the “Black Box”: The Need for Mental Models
The biggest hurdle to AI adoption isn’t a lack of features; it’s a lack of legibility. When a human colleague gives you a suggestion, you intuitively understand their perspective, their biases, and their expertise. You have a “mental model” of how they think. AI, however, often operates as a “black box.” When the output is unexpected, users don’t just get confused—they lose trust.
In my experience, the most successful AI tools aren’t necessarily the “smartest”; they are the most predictable. Trust isn’t built on perfection; it’s built on understanding why a system did what it did. If a user can’t predict how a tool will behave, they will eventually revert to manual processes where they feel in control.
The Friction Paradox: Eliminating Hurdles, Preserving Pauses
We often talk about “friction” in UX—the small hurdles that slow a user down. With AI, friction is a double-edged sword.
On one hand, there is the Friction Penalty. Many AI services position themselves commercially on the promise of saving customers X hours per week at the office. In practice, though, workers are quick to discard AI tools that add time or effort to their existing workload. They notice the cost, for instance, when a tool forces them to constantly step outside their established routines and systems. If the tax for using a product is too high, workers will abandon it.
However, as we integrate AI deeper into our workflows, we must also recognize the value of Intentional Friction.
If a tool is too seamless, it encourages “autopilot” behavior. When the barrier to generating content or code is zero, the human tendency is to stop reviewing and start blindly accepting. This is where AI adoption becomes dangerous. To ensure long-term, meaningful impact, we actually need “cognitive speed bumps”—moments where the system intentionally slows the user down to review, edit, and integrate the output.
The goal isn’t just speed; it’s deliberate integration. We don’t want to sacrifice the long-term quality of our work for the short-term dopamine hit of a “one-click” solution. Research helps us find the balance: removing the “bad” friction that hinders productivity, while designing the “good” friction that keeps the human mind engaged.
The Agency Paradox: Partners, Not Replacements
There is a persistent narrative that AI is here to replace human labor. Social science tells us something more nuanced: people don’t fear “automation” as much as they fear the loss of agency.
Years ago, while I was studying AI-generated summaries, a recurring theme emerged. Users didn’t want the AI to “finish” the task; they wanted it to provide a “scaffold.” They wanted a partner that handled the “heavy lifting” of data organization while leaving the “high-value” synthesis to them.
When we design AI to replace the human entirely, we create a passive, disengaged workforce. When we design AI to augment the human, we create a more powerful professional. The goal of research is to find that “sweet spot” where the AI does the chores, but the human keeps the steering wheel.
From Technical Shift to Cultural Evolution
Introducing AI into a company is less like installing a new server and more like introducing a new hire. It changes the culture. It shifts power dynamics. It redefines what “expertise” looks like.
Without a research-led approach, organizations risk “Cargo Cult AI”—implementing the technology because it’s trendy, without understanding the social fabric of their own teams. We need to ask:
- Whose workload is actually being reduced?
- Where does the AI create new, invisible labor?
- How does this tool change the way teammates trust one another?
The Path Forward
The future of work will not be defined by who has the most powerful LLM. It will be defined by who understands the human element best.
User research is the bridge between the “what” of technology and the “how” of human behavior. By applying the rigors of social science to AI development, we can move past the hype and build tools that don’t just work—but actually matter to the people using them.
The future isn’t about smarter systems; it’s about more intentional ones.