AI and workload creep: how automation quietly expands work and fuels burnout

AI was sold as a shortcut: automate the boring parts, reclaim your day, focus on “what really matters.” A new study highlighted in Harvard Business Review reveals something closer to the opposite. Instead of shrinking workloads, AI is quietly stretching them—intensifying work, eroding role boundaries, and fueling a new, harder-to-name form of burnout.

Researchers from UC Berkeley and Yale spent eight months embedded inside a roughly 200-person tech company where employees were encouraged—but not forced—to experiment with AI tools. This wasn’t a lab simulation or a small survey; they watched real teams use AI in real time. What they found was a pattern they call “workload creep”: an almost invisible expansion of tasks, expectations, and responsibilities that steadily piles up until people are exhausted.

When “productivity” becomes more work

The first and most immediate shift the researchers observed was task expansion. Once AI tools were introduced, the division of labor inside the company started to blur.

– Product managers who once focused mostly on roadmaps, stakeholder alignment, and specifications suddenly began writing and debugging code.
– Researchers, whose jobs centered on user studies and insights, started taking on engineering tasks they would previously have handed off.
– Employees in non-technical roles experimented with drafting technical documentation, customer communications, or even basic data analysis.

AI made this expansion feel natural—even empowering. If a language model can sketch out a code snippet or summarize a dataset, why not “just do it yourself” instead of waiting on the engineering team or a data specialist?

On the surface, it looked like productivity magic: one person doing what used to require three. But the underlying workload didn’t disappear. It simply shifted onto fewer shoulders.

The illusion of time saved

At the heart of the problem is a common assumption: if you can complete each task faster with AI, you will end up with more free time. In practice, the opposite occurs.

Employees in the study initially believed that AI would carve out space in their calendars—fewer late nights, less scrambling before deadlines. Instead, as they became more productive, expectations rose in lockstep:

– Managers quietly raised the bar on what “good” output looked like.
– Project scopes expanded because “AI can help with that.”
– Colleagues started to assume quick turnaround on requests that used to take days.

Time savings created a vacuum, and the organization rushed to fill it with more tasks. What looked like efficiency gains translated into an expanding backlog of work—faster cycles, more deliverables, higher standards, and less room to say no.

From specialization to “do everything”

Historically, organizations relied on specialization: engineers engineered, researchers researched, product managers coordinated. AI disrupted that structure not by replacing roles, but by blurring them.

The study documented a clear erosion of role boundaries:

– People took on tasks outside their expertise because AI made it seem low-effort.
– Work that once required cross-functional collaboration moved to a single desk.
– The informal message shifted from “that’s not your job” to “you can probably handle that with AI.”

This kind of flexibility can feel exciting at first. Employees can broaden their skill sets and have more impact. But over time, it also means they carry more responsibility, juggle more contexts, and become accountable for more parts of the workflow. Cognitive load spikes, even when individual tasks feel easier.

Workload creep: the slow burn

“Workload creep” describes a subtle, almost invisible dynamic:

1. AI makes a category of tasks faster or more accessible.
2. Individuals start doing more of those tasks themselves.
3. Teams normalize that higher level of output.
4. The organization quietly resets its expectations around speed and volume.
5. Any time saved is absorbed by new projects, additional features, more iterations, or extra monitoring.

Because no single change feels dramatic, employees may not recognize what’s happening until they hit a wall. No one is explicitly overworking them; rather, the baseline of “normal” is gradually redefined.

The researchers found this creep particularly pronounced in knowledge and tech roles, where work is already flexible, loosely scoped, and easy to expand. If “just one more version,” “one more experiment,” or “one more outreach campaign” can be spun up quickly with AI, people are encouraged—explicitly or implicitly—to do exactly that.

A new flavor of burnout

Traditional burnout is often associated with visible overload: long hours, impossible deadlines, chronic stress. AI-driven burnout is more insidious. It emerges from:

– Constant context switching between AI-assisted micro-tasks.
– The expectation to produce “AI-level” output all the time.
– The pressure to keep learning and adopting new tools just to maintain baseline performance.
– The feeling that there is always more you could be doing, because the tools make more possible.

Many workers find themselves mentally drained even if they aren’t always clocking more hours. The effort shifts from manual labor to decision fatigue, quality-checking AI output, and juggling an ever-widening portfolio of responsibilities.

There’s also a psychological twist: employees may feel guilty for being exhausted because, on paper, AI is “helping.” If the tools are supposed to make everything easier, why does the job feel harder?

Perpetual beta mode: always learning, never done

Another stressor the research highlights is the ongoing need to stay up to date. AI tools change fast. New models appear, features update, workflows evolve. Mastering one system is no longer enough; there’s an expectation to be in permanent adaptation mode.

This creates a second, invisible job layered on top of the first: the job of continuously optimizing how you work.

– Testing prompts and workflows.
– Comparing tools.
– Re-training yourself every few months as interfaces and capabilities shift.
– Teaching colleagues “best practices” you’ve just figured out yourself.

That learning curve rarely appears in project plans or performance metrics, but it still costs energy and attention. Workers are spending time not just on the work, but on reinventing how the work is done—over and over again.

Managerial blind spots and shifting baselines

From a management perspective, AI can distort how effort is perceived. When outputs arrive more quickly, it’s easy to assume the work is easier, full stop. But the study suggests something more complex: AI redistributes effort rather than eliminating it.

Leaders see:

– Faster drafts, prototypes, or analyses.
– More ideas on the table.
– Teams apparently “handling more with the same headcount.”

What they don’t see as clearly:

– The extra cognitive load of overseeing more parallel workstreams.
– The time spent validating, correcting, or redoing AI-generated content.
– The emotional toll of feeling constantly “on” in a system that can, in theory, run 24/7.

Without a conscious effort to recalibrate expectations, organizations risk normalizing AI-accelerated output as the new baseline. Once that happens, there is little room left to slow down—even when people are clearly burning out.

The cultural trap: “if you’re not using AI, you’re falling behind”

The study also reveals a cultural pressure that doesn’t show up in formal job descriptions. As AI adoption spreads, workers start to fear that not using these tools aggressively will mark them as inefficient, outdated, or replaceable.

That anxiety can drive:

– Overuse of AI, even when it’s not the best tool for the task.
– Reluctance to push back on rising workloads, since the official story is that “AI makes it doable.”
– Quiet competition between colleagues to be seen as the most “AI-savvy” or productive.

Instead of AI serving as a supportive tool, this culture turns it into a benchmark. The question is no longer “How much can a human reasonably do in a day?” but “How much can a human plus AI do, and why aren’t you hitting that level yet?”

How individuals can push back

While structural change is essential, individual workers do have some levers they can pull to blunt AI-driven workload creep:

– Reassert role boundaries. Be explicit about what is and is not your responsibility, even if AI technically enables you to do more. “I can help prototype this with AI, but engineering still needs to own the final implementation” is a legitimate stance.
– Make trade-offs visible. When asked to take on more because AI “should make it easy,” spell out what will be deprioritized. “If I add this, feature X will slip by a week” forces a realistic conversation.
– Timebox AI work. It’s easy to endlessly refine prompts, drafts, and iterations. Set hard limits: two AI-assisted versions, then decide and move on.
– Track the hidden work. Keep a log of time spent validating AI outputs, learning new tools, and troubleshooting. This can be invaluable in performance reviews and workload discussions.
– Protect focus. Group AI-heavy tasks together instead of sprinkling them through your day, reducing context switching and cognitive overload.

These tactics won’t fix systemic issues by themselves, but they can help workers regain some control in environments where AI has quietly inflated expectations.

What organizations must do differently

The study is also a warning to companies: deploying AI without revisiting how work is designed is a recipe for burnout, not breakthrough.

To harness AI without crushing people, organizations need to:

– Redefine success metrics. Instead of rewarding sheer output volume, emphasize quality, sustainability, and impact. If metrics only count how much gets shipped, AI will inevitably be used to push quantity over everything else.
– Set explicit workload ceilings. If AI accelerates certain tasks, don’t reflexively expand project scopes to fill the gap. Sometimes the gain should translate into genuine slack: learning time, recovery time, or deeper focus.
– Clarify roles in the AI era. Document which tasks should shift across roles and which should not, even if AI makes them technically doable. Protect expertise and avoid silently collapsing three jobs into one.
– Account for oversight work. Building and checking AI-assisted workflows should be recognized as real work, not “free” or invisible labor.
– Invest in responsible rollout. Training should include not just “how to use the tools,” but also “how to use them without burning out”: boundaries, realistic expectations, and escalation paths when load becomes unsustainable.

Designing AI for human limits, not just machine speed

The core lesson from the UC Berkeley and Yale research is not that AI is inherently harmful. It’s that technology designed to optimize speed and volume collides badly with workplaces that already struggle to set healthy limits.

If AI is introduced into a culture that equates more with better, it will amplify that tendency. If it enters a system that respects human constraints and values sustainable performance, it can genuinely reduce drudgery and free up time.

That means the real question isn’t “Can AI make us faster?” It’s “What will we do with the time AI gives us—and will we allow any of it to remain unfilled?”

Until organizations can answer that honestly, workers are likely to keep experiencing exactly what this study documents: not liberation from work, but a new, quieter strain of burnout—one where the job doesn’t just demand more hours, but more of the self, stretched ever thinner by tools that promised to help.