AI doesn’t just add work; it changes work in ways that are now empirically undeniable. The Harvard Business Review article “Artificial intelligence does not reduce work, it intensifies it” confirms what I called the “AI tax” almost a year ago: AI increases the volume, speed, and ambiguity of work unless organizations intentionally design against this outcome.
When the research catches up with the field
In the AI Tax post, I argued that AI does not arrive simply as productivity gains; it arrives as six categories of new work: juggling and expanding tools, auditing, data readiness, relevance and integrity, the burden of failed projects, and constant learning and relearning. These categories emerged from conversations with teams already using AI in practice, with users switching between tools, reconciling deliverables, and cleaning data instead of doing the “higher value” work they were promised.
A Harvard Business Review article by Aruna Ranganathan and Xingqi Maggie Ye offers a rare longitudinal look at this reality, tracking nearly 200 employees at a US technology company over eight months to see how generative AI was changing their work. Their conclusion was straightforward: AI tools did not reduce work; they “continuously intensified it.” Employees worked at a faster pace, took on a wider range of tasks, and extended their work into more hours of the day, often without any manager asking them to.
Simply put, the study reads like an ethnography of the AI tax categories in action.
Three ways AI intensifies work
The HBR research identifies three main patterns of intensification that emerge once AI tools move from demonstration to everyday use.
- Task expansion
Once AI is available, people don’t just do the same work faster; they start doing more types of work. Product managers and researchers start writing and reviewing code; employees take on tasks that previously required new hires; people take back work that was outsourced, postponed, or simply avoided. On one level, this looks empowering. A closer look reveals engineers who find themselves mentoring colleagues on AI-generated code, reviewing a stream of partial pull requests, and fixing low-quality output that arrives in their queue dressed up as finished work.
- Blurred boundaries between work and non-work
AI makes it easy to “just try something” at the margins of the day: a quick prompt over lunch, one more iteration before heading to a meeting, a late-night idea tested in bed on the phone. These micro-sessions don’t feel like extra work, but over time they erode rest and recovery, creating a constant sense of cognitive engagement. Workers in the study reported that once prompting became their default activity during downtime, breaks no longer felt refreshing.
- Increased multitasking and cognitive load
Employees run multiple AI agents and threads in parallel, let the AI generate alternate versions as they type, and monitor the output while trying to focus on something else. Having a partner that never tires encourages constant context switching: prompting, checking, re-prompting, reconciling. The result is an ambient sense of always being behind, even as visible productivity increases.
If you read my AI Tax post, these themes will feel very familiar, because they are the lived experience behind the categories.
How the AI tax explains the intensification
In the AI Tax post, I described six ways in which AI creates more work than it saves when deployed without design. The new HBR findings map cleanly onto this framework.
- Juggling tools: multitasking, switching, and sprawl
The study’s third pattern, increased multitasking, is the human experience of juggling AI tools, agents, and interaction metaphors. In my post, I wrote about toolchain sprawl: one AI for scheduling, another in email, and a third hidden in a CRM, each with a different interface, set of capabilities, and quirks. The result is a workday that feels like a perpetual reconciliation exercise, with attention fragmented across dozens of small tasks.
- Auditing: review work and the hallucination problem
Task expansion seems efficient until you remember that every AI-generated draft, whether a document, a snippet of code, or a marketing campaign, requires review. The HBR study documents engineers who began spending significant time reviewing AI-generated work produced by colleagues outside their specialty, often through informal exchanges and favors on Slack. This is the “shadow work” the AI tax names: real work with no line item in the project plan, absorbed by people who are already at capacity.
- Data readiness: uncovering the hidden work
AI makes data problems visible. When employees eagerly expand their scope of work, writing analyses, reports, or prototypes they’ve never attempted before, they quickly run into data that is scattered, mislabeled, or outdated. This collision forces them into ad hoc data wrangling: reconciling formats, hunting for reliable sources, and learning just enough about the organization’s data landscape to be dangerous.
- Relevance and integrity: governance lagging adoption
As AI produces content faster, questions about style, bias, confidentiality, and regulatory risk become everyday concerns rather than edge cases. The HBR article alludes to this only obliquely, but the connection to the AI tax categories is direct: when governance lags behind adoption, every step forward requires a detour to verify compliance and appropriateness. This friction doesn’t show up in vendor demos, but employees feel it immediately.
- Failed projects and abandonment cycles
The study depicts enthusiastic early experimentation: people “just trying things” with AI. I warned in my post that this pattern often develops into a cycle of pilots that never connect to a real workflow, bots that stall short of their promise, and technical debt that someone has to clean up. When every failed experiment leaves behind abandoned prompts, partial automations, and skeptical users, the AI tax compounds over time.
- Learning and relearning: AI as a moving target
Finally, both the HBR article and my AI Tax post converge on the learning burden. Every model update, interface change, and new feature, not to mention the arrival of entirely new tools, forces people back into training mode. Add social FOMO (“Have you tried the latest model?”) and you get a culture where employees are expected to keep pace with a constantly shifting AI landscape while maintaining their existing responsibilities.
The point is not that AI cannot create value. It’s that value and complexity scale together, and the complexity arrives first.
The mirage of free time
When AI works, when it actually speeds up a task or streamlines a workflow, a different question arises: What happens to the time that is freed up? In my article on AI tax, I said that this is not a technical question but a leadership and policy challenge. Without intentional design, free time is reabsorbed into:
- More tasks, often vaguely defined as “strategic work” or “innovation.”
- Informal expectations that people will take on additional responsibilities because “tools make it faster now.”
- Subtle pressure to maintain or increase output rather than using the time to recover, learn, or collaborate.
The Harvard Business Review study makes this dynamic clear. Employees used AI to shorten tasks, then filled the margins with new work: helping colleagues, trying out additional prompts, or expanding their responsibilities into areas that were previously outside the scope of work. They felt more productive, but no less busy. Over time, the initial excitement gave way to exhaustion and cognitive fatigue.
And this is the crux of the AI tax argument: if organizations do not explicitly decide how to handle the time AI saves, the default will always be intensification, not liberation, and in many cases replacement rather than augmentation.
Designing against intensification
The HBR authors suggest that organizations need “clear AI practices” to prevent intensification from becoming the default: norms about when to use AI, when not to use it, and how to manage AI-based work sustainably. The AI tax framework aligns with that call and provides concrete starting points.
Here are several design steps leaders can take, drawn from the research and the AI tax framework:
- Unify the AI stack
Reduce toolchain sprawl by choosing a small number of platforms and building around them. Consolidation lowers switching costs, simplifies administration, and makes it easier to design training that sticks rather than chasing every new feature.
- Make review work visible and accountable
Stop treating review as invisible heroics. Assign audit responsibilities, track the time they take, and factor that time into project plans and ROI claims. This is not only fair; it generates the data needed to see where AI is truly helping and where it is merely redeploying labor.
- Invest in data readiness before scale
Many of the frustrations the study surfaces, such as partial results, confusing output, and reliance on improvised fixes, stem from weak data, unclear standards, or missing context. Cleaning, tagging, and aligning data is unglamorous, but it is essential if AI is to produce outputs that reduce work rather than create more cleanup work.
- Run time-boxed pilots with real endings
Organizations should treat AI experiments as experiments, with clear timelines and decision gates, not as permanent, half-approved features. At the end of the pilot, either commit and invest, or close it and document what was learned so the same mistakes aren’t repeated later. I have also argued regularly that AI requires knowledge management, but accelerating AI adoption often outpaces its implementation.
- Protect human time as an asset
Perhaps most important: decide in advance how freed-up time will be meaningfully reclaimed. Some portion should be set aside explicitly for rest, reflection, mentoring, and exploration, rather than silently harvested as productivity gains. If AI is to become a colleague, it must create the conditions for better human judgment, not just greater output.
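To make the audit-time point above concrete, here is a minimal back-of-the-envelope sketch, with purely hypothetical numbers (not figures from the study or my post), of how factoring review time into the accounting changes a naive “time saved” claim:

```python
# Hypothetical check: does an AI-assisted task actually save time once
# review ("audit") overhead is counted? All numbers are illustrative
# assumptions, not data from the HBR study or the AI Tax post.

def net_minutes_saved(manual_min, ai_draft_min, review_min):
    """Time saved per task after subtracting the review overhead."""
    return manual_min - (ai_draft_min + review_min)

# Naive claim: drafting drops from 120 minutes to 20 -> "100 minutes saved".
naive_savings = 120 - 20

# Honest accounting: a colleague spends 45 minutes reviewing the AI draft.
honest_savings = net_minutes_saved(manual_min=120, ai_draft_min=20,
                                   review_min=45)

print(naive_savings)   # 100
print(honest_savings)  # 55
```

The gap between the two numbers is exactly the shadow work the AI tax describes: real, recurring, and invisible until someone tracks it.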
From AI tax to AI practice
The convergence of the HBR research and the AI tax is encouraging because it suggests we are moving from a speculative phase of AI to a more empirical, design-oriented one. We now have a growing body of evidence that AI, left to its own devices, does not reduce work; it reduces friction and invites more work.
The task for leaders is to treat these realities as design constraints rather than nuisances. The AI tax identifies where the costs accumulate; the HBR article shows how those costs play out in a real organization over time. In between lies the opportunity to build “AI practices” that respect human boundaries, protect time, and ensure that intensity is a choice, not an accident.