Across industries, leaders are rushing to add AI to everyday work. Developers now push code in hours, writers shape first drafts before meetings, and analysts spin up models by lunch. It all looks like a wave of productivity on the surface.

However, Amdahl’s Law offers a quiet warning underneath that wave. In any system, the maximum overall speedup is limited by the share of work you cannot accelerate. If you only accelerate one slice of the work, the rest of the workflow sets a hard ceiling on how fast the whole thing can go.

Right now, AI is mostly speeding up individual work: drafting, coding, and analysis. Yet on most teams, those activities make up only a fraction of the lifecycle. The bulk of the time is still spent on human collaboration and process: reviews, decisions, sign-offs, risk checks, and alignment across teams. That means there is a built-in cap on how much faster AI alone can make the system if we focus only on individual efficiency.

We are already witnessing a sizeable gap, by some measures as large as 43%, between ‘felt’ speed and actual output, where individual efficiency gains are being swallowed by the friction of team-level handoffs.

Team-level velocity charts stay flat, cycle times refuse to budge, and people feel more stretched instead of relieved. That is the AI efficiency paradox. AI can make an individual much faster, but unless we redesign the way teams work, the gains leak out through handoffs, reviews, and decisions long before they reach customers.

Engineering teams were early adopters of AI, so we can see this paradox most clearly in the software development lifecycle. While the same pattern shows up across modern organisations, this article will focus on examples from software development and software systems.

Amdahl’s Law for teams

Use Amdahl’s Law as a tool for deciding where to improve, not just as a reminder that one slow step can cap the whole system. Let the parts that rarely speed up guide your design of the rest of the workflow, so that human attention is reserved for moments that require judgment and coordination.

If only about 20% of your lifecycle is individual work, and AI speeds up primarily that portion, then there is a hard ceiling on how much the whole system can improve. Even if individual work went from slow to effectively instant, overall throughput would improve by a factor of at most about 1.25, because the remaining 80% would continue to move at current speed.

You can see this by plugging in some simple numbers:

Individual share of lifecycle    AI speedup on individual work    Max speedup of whole system
20%                              2x faster                        1.11x
20%                              10x faster                       1.22x
20%                              "Instant"                        1.25x
50%                              2x faster                        1.33x
50%                              10x faster                       1.82x
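These numbers follow directly from Amdahl’s formula: overall speedup = 1 / ((1 − p) + p / s), where p is the share of work being accelerated and s is the speedup factor on that share. A minimal sketch in Python that reproduces the table:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a share p of the work is accelerated by factor s.

    p: fraction of total work that AI accelerates (e.g. 0.2)
    s: speedup factor on that fraction (use float('inf') for "instant")
    """
    serial = 1.0 - p          # the share that still moves at today's speed
    accelerated = p / s       # note: p / float('inf') == 0.0, i.e. "instant"
    return 1.0 / (serial + accelerated)

for p, s in [(0.2, 2), (0.2, 10), (0.2, float("inf")), (0.5, 2), (0.5, 10)]:
    print(f"p={p:.0%}, s={s}x -> {amdahl_speedup(p, s):.2f}x overall")
```

Plugging in your own team’s split makes the ceiling explicit before you invest in any tooling.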

A few things stand out:

  • When only 20% of work is individual, even “instant” speed caps you at ~1.25x overall.
  • As the share of individual work grows (for example, to 50%), those same AI gains create much bigger system-level improvements.
  • In many real teams, collaboration and validation dominate the lifecycle, so individual speed can only help so much until you redesign how people work together.

Human collaboration touchpoints, not individual speed, hold most of the headroom.

The hidden math of software development work

When multiple people work together, only a small slice of the work is truly individual. AI thrives in that slice because the inputs are well-formed and the outcomes are easy to verify. You can request a first draft, a quick refactor, or a data check and get something useful in minutes.

Typical software work is split roughly between individual work, human collaboration, and process.

  • Individual work is the time spent heads down writing code, drafting a spec, or creating a model.
  • Human collaboration and process is everything else: decision making, validation, alignment, approvals, feedback, and waiting in queues.

Across many studies and real projects, only a modest share of the software lifecycle is true hands-on work at the keyboard. The ratio shifts by context. A small, autonomous startup might spend roughly half its time in individual work, while a heavily regulated enterprise might spend closer to a tenth. Most teams land somewhere in between, and the exact ratio matters less than the pattern it reveals.

The point is not to debate the exact percentage; what matters is the shape of the work. On most teams, the time spent aligning, deciding, reviewing, and coordinating outweighs the time spent producing. This relationship is what drives the paradox.

A day in the life of an AI-accelerated engineer

Imagine Maya, a senior engineer who has leaned into AI across her day. She pairs her judgment and context with the tools, which help her move from a blank page to a solid draft and from a rough approach to running code far more quickly.

Her day unfolds something like this:

  • 9:00 AM
    Maya picks up a feature from the sprint board. With her AI partner, she explores the problem space, drafts a technical approach, and lands on a direction in about 20 minutes. Before AI, this might have taken half a day of research and iteration.
  • 9:30 AM
    With an AI coding assistant, she scaffolds the implementation, writes tests, and has a working pull request ready by 11:00. Pre-AI, this might have taken two days of work. She feels highly productive.
  • 11:00 AM
    She submits the pull request for review. Her reviewer, Tom, already has a stack of items waiting from yesterday, two of them also from Maya after a very productive afternoon. Tom is in sprint planning for the rest of the morning.
  • 11:15 AM
    Maya starts the next feature. By 1:00 PM, she has another pull request ready. That is three open reviews in a single day.
  • 2:00 PM
    Tom finally finishes meetings and starts reviews. Each review is expensive. He must reconstruct the context that Maya and her AI built up quickly. He hops between code paths and problem domains. Cognitive load piles up.
  • 4:00 PM
    Meanwhile, a feature from yesterday is waiting for product sign-off. The product manager has been in stakeholder meetings all day and has a long queue of items to review.

By the end of the day, Maya has completed code for three features, yet none of them has shipped. Her individual stats (lines of code, number of pull requests, time to initial implementation, etc.) look fantastic, but the system-level metrics remain stagnant.

By the end of the week, Maya has several open pull requests. Tom is overwhelmed and starts to skim. The product manager batch-approves a subset of items with only a quick glance. Quality erodes quietly. The board shows the same throughput as last quarter.

Maya’s teammates push back on her pace and ask to be involved earlier in the discovery and creation process; bringing them in sooner would lower the cost of reviews and sign-offs later. Maya is faster, but the team’s delivery system is under a kind of denial-of-service attack.

This is the AI efficiency paradox in action.

When speedups stall at the collaboration layer

The AI efficiency paradox has a simple structure: AI often speeds up work in areas that were never the real bottleneck, and that new pace spills into the parts of the system that still run at human speed.

Common bottlenecks in modern software work include:

  • Code reviews
  • Design reviews
  • Cross-team alignment
  • Risk and security checks
  • Product and legal approvals

When AI accelerates individual work without changing these collaborative structures, work in progress piles up at each gate: queues grow, and context switching explodes. Humans at those gates experience cognitive overload and fatigue, which can actually reduce their throughput.

You do not just fail to gain speed. You can end up slower.

In systems thinking, this pattern is familiar. Speeding up a part that is not the constraint does not improve the system. It simply pushes more work to the true bottleneck, just faster.
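A standard queueing identity, Little’s Law (average work-in-progress = arrival rate × average time in system), makes the pile-up concrete. The sketch below uses hypothetical, illustrative numbers, not measured data:

```python
def avg_wip(arrival_rate_per_day, avg_days_in_system):
    """Little's Law: average work-in-progress = arrival rate x time in system."""
    return arrival_rate_per_day * avg_days_in_system

# Before AI (illustrative): 3 PRs/day arrive at review, each spends ~1 day
# from open to merge.
print(avg_wip(3, 1.0))   # ~3 PRs in flight

# After AI doubles authoring speed but review capacity is unchanged, PRs wait
# longer; if time-in-system stretches to 2 days:
print(avg_wip(6, 2.0))   # ~12 PRs in flight: a 4x pile-up from a 2x speedup
```

The point of the identity is that feeding a fixed-capacity gate faster does not just fail to help; it multiplies the work sitting in queues.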

For many software teams, individual work was never the slowest part of the system. The persistent limits come from how teams’ workflows were designed – collaboration, validation, and decision steps – not from the people moving through them.

When AI makes individual work faster without changing those system steps, the underlying design produces a few predictable side effects:

  • Queues grow as workflow gates remain unchanged (system design effect)
  • Context switching increases due to batching and handoff patterns, not individual choices
  • Decision-making slows as cognitive load rises at bottlenecks (a workflow capacity issue)
  • Review quality drops when queues exceed design capacity – fatigue is a symptom, not the cause
  • Defects surface later because the process pushes risk detection downstream (a system property)

People adapt to the system they are in. Under pressure, the workflow nudges us to move faster by skimming reviews, bundling decisions, and leaning on heuristics to keep work moving. These are rational responses to process and capacity constraints – not personal shortcomings.

That stretch masks the real constraint for a while. The system does not collapse, but it slowly degrades instead. Quality slips, cognitive load climbs, and burnout risk creeps upward. None of this is obvious on a sprint report until the accumulation is hard to ignore.

The bottleneck does not announce itself. It forms quietly.

Designing your system for AI, not just faster keyboards

This paradox is not a reason to avoid AI, but instead a reason to build a deep understanding of your end-to-end workflows and bottlenecks.

If 70–90% of your lifecycle is human collaboration and process, then that is where the biggest gains live. When you reduce friction at those touchpoints, something powerful happens. You do not just gain the collaboration improvements. You also unlock the full multiplier effect of AI on individual work.

You can think about the path forward in a few concrete moves.

First, understand your own split.

Every team should roughly know how much time goes into individual work versus collaboration. Where do people lose hours? What percentage of a typical work item is spent waiting for a review, a decision, or an approval?

Second, name your bottlenecks.

Once you see your own numbers, you can identify the real constraints. It might be code review queues. It might be product decisions. It might be cross-team alignment or security sign-off.

Third, measure those constraints directly.

Treat collaboration and validation capacity as real system resources, not invisible background noise. Track queues, turnaround times, context switches, and rework.
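One simple, direct measure is flow efficiency: the share of an item’s cycle time spent actively working versus waiting. A sketch, assuming you can export per-item state timestamps from your tracker (the state labels and timestamps here are hypothetical):

```python
from datetime import datetime

def flow_efficiency(events):
    """Share of total cycle time spent actively working rather than waiting.

    events: list of (state, start, end) tuples; state labels are hypothetical.
    """
    active = sum((end - start).total_seconds()
                 for state, start, end in events if state == "working")
    total = sum((end - start).total_seconds() for _, start, end in events)
    return active / total if total else 0.0

d = lambda day, hour: datetime(2024, 1, day, hour)
item = [
    ("working",          d(1, 9),  d(1, 13)),  # 4h coding
    ("waiting_review",   d(1, 13), d(2, 13)),  # 24h in the review queue
    ("working",          d(2, 13), d(2, 15)),  # 2h addressing feedback
    ("waiting_approval", d(2, 15), d(3, 15)),  # 24h waiting for sign-off
]
print(f"{flow_efficiency(item):.0%} of cycle time was active work")
```

In this illustrative item, 6 of 54 hours (about 11%) is active work; the rest is queue time, which is exactly the headroom the article argues AI alone cannot touch.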

Fourth, experiment with collaboration design.

This can include changes such as:

  • Fewer, clearer decision makers for common work types
  • Tighter scopes and smaller batch sizes for changes
  • Earlier alignment so that reviews focus on quality, not strategy
  • Stronger shared standards encoded into automation rather than tribal knowledge
  • AI tools that support reviewers through summarisation, risk surfacing, and context recall

Fifth, make validation pipelines trustworthy enough to run at AI speed.

One of the most expensive parts of software work is not coding itself, but validation. Think about everything that must happen before code is trustworthy enough to reach customers:

  • Automated tests at multiple levels
  • Static analysis and security checks
  • Integration and performance checks
  • Manual exploratory testing where needed
  • Environment and deployment confidence

For many teams, this validation layer consumes far more human attention than coding. Even if you fix basic human-to-human collaboration, you can still be constrained by this trust pipeline.

AI that generates code faster simply feeds more changes into a validation system that was already fragile. If you don’t trust your tests, staging environments, or automation, humans will remain in the loop as the last line of defense. That is slow and tiring.

To truly unlock AI-accelerated production, teams must become very intentional about how they build and trust their pipelines. That usually includes efforts such as:

  • Clear risk tiers for different types of changes
  • Pipelines that match the risk profile of each tier
  • Strong investment in reliable automated tests
  • Pre-production environments that reflect real conditions closely
  • Guardrails around any AI that participates in reviews or approvals

Until a team reaches the point where a healthy pipeline really means that green is go, humans will continue to act as manual brakes, and the paradox persists.
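One way to make risk tiers operational is to encode them as data the pipeline reads, rather than as tribal knowledge. A hypothetical sketch, where the tier names and gate labels are illustrative assumptions, not a standard:

```python
# Hypothetical mapping from change risk tier to required pipeline gates.
RISK_TIERS = {
    "low":    {"checks": ["unit_tests", "lint"],
               "human_review": False},
    "medium": {"checks": ["unit_tests", "lint", "integration_tests"],
               "human_review": True},
    "high":   {"checks": ["unit_tests", "lint", "integration_tests",
                          "security_scan", "staging_smoke_test"],
               "human_review": True},
}

def required_gates(tier):
    """Return the gates a change must pass before 'green is go'."""
    profile = RISK_TIERS[tier]
    gates = list(profile["checks"])
    if profile["human_review"]:
        gates.append("human_review")
    return gates

print(required_gates("low"))
```

The design choice is that low-risk changes flow on automation alone, while human attention is reserved for tiers where judgment genuinely adds safety.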

Finally, apply AI to the bottlenecks as well.

AI does not only belong at the point of creation. With clear accountability and sound governance, it can also lighten the load on reviews, decisions, and approvals by giving people better context, sharper summaries, and faster risk surfacing.

Used this way, AI becomes a real accelerant. It reduces collaboration overhead and improves decision quality, while individual work is already moving faster. Together, those effects raise the throughput of the whole value stream.

Rethinking what “productive with AI” really means

Early AI narratives have focused on individual speed: how fast an engineer can write a feature, how quickly someone can draft a document, and so on. Those gains are real, but partial.

A more honest definition of productivity in the age of AI must answer a different question:

How quickly can a team move high-quality work from idea to impact without burning people out?

For that question, the core lesson of the AI efficiency paradox is simple:

  • Individual speed is necessary, but not sufficient
  • Human collaboration and validation are now the dominant constraints
  • AI helps most when we redesign the system around those constraints, not when we ignore them

Teams that do this work first will not only avoid the paradox but also benefit from it. They will capture the full upside of AI while others are still wondering why faster keyboards did not make their organisations move any faster.

Turning the paradox into practice

If the AI efficiency paradox is real, then the question for leaders is no longer whether to adopt AI, but how to redesign the system around it.

That redesign work is hard to do in slides and scattered tools. It requires a shared view of how work moves from idea to impact, where collaboration slows it down, and how AI is changing each step.

This is where a connected system of work matters.

When planning, tracking, code, and incidents live in one place, you get a clearer picture of:

  • Where work is really waiting, and why
  • How decisions and reviews flow across teams
  • Which changes are safe to move faster, and which need more scrutiny
  • Where AI is helping today, and where it is quietly feeding the bottleneck

With that picture, you can start designing collaboration on purpose rather than by habit: clearer ownership, leaner review paths, smaller batch sizes, and automation that codifies your best practices rather than relying on tribal knowledge.

At Atlassian, this is the role our products play together: not just as individual tools, but as a system that helps teams see, manage, and continuously improve the way work actually flows. It is one way to turn AI-driven individual speed into real, sustainable team velocity.

If you are exploring how to redesign your own system of work for the age of AI, the most important step is simply to start measuring and experimenting. Tools can help, but the leverage comes from how intentionally you use them. For further information, watch our recent webinar, How to scale AI value across your org.