96% of companies haven’t seen meaningful business value from AI. And yours might be one of them.
IT leaders are spending big anyway – and wondering why projects still slip and service metrics refuse to improve.
According to Atlassian’s AI Collaboration Index, 96% of companies haven’t yet seen meaningful business value from AI, despite widespread investment. AI has made it easier to get work done, but not easier to work together.
If that sounds uncomfortably familiar, it may be because you’re bumping into one (or more) of these four warning signs.
These four warning signs aren’t isolated problems; they’re stages. Most IT organizations experiencing one are already sliding toward the next. The further along you are, the wider the gap between your AI investment and your actual ROI.
1. Faster people, slower teams
On paper, AI should be a game-changer for IT: faster incident triage, better root-cause analysis, cleaner documentation, and more efficient change management. In practice, though, many IT leaders are seeing something different:
- Individual contributors generate more tickets, more docs, more “work artifacts”
- Local tasks get done faster
- But end-to-end project timelines and service metrics barely move
This is the AI efficiency paradox for IT: AI accelerates individual tasks inside a fragmented system. Without a connected way of working, those gains don’t add up to meaningful organizational impact.
If your AI rollout is focused purely on “personal productivity” use cases – code suggestions here, summarization there – without changing how work flows across teams, you’re likely optimizing the wrong thing.
What this may look like in your team:
- AI drafts a spotless PIR, but the incident commander spends 40 minutes chasing context across 4 tools before anyone reads it.
- Agents closed twice as many low‑level tickets, yet the major incident MTTR didn’t budge because handoffs and ownership remained unclear.
If this is where you are, you might think the fix is simply scaling what’s working. But that’s exactly what triggers the next warning sign.
Solution: Arm AI with context on goals
Start with measurable outcomes – mean time to resolution (MTTR), service level agreement (SLA) adherence, customer satisfaction (CSAT) – and give them to AI tools as context and grounding. By feeding your organization’s goals into a central context graph, each AI action and response will be more accurate and predictive of what comes next.
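As a minimal sketch of what "feeding goals into AI as context" can mean in practice, the snippet below serializes organizational targets into a grounding prompt prepended to every AI request. All names here (`Goal`, `build_system_prompt`, the example targets) are illustrative assumptions, not a specific product API.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    metric: str   # e.g., "MTTR"
    target: str   # e.g., "< 4 hours for Sev-2 incidents"
    why: str      # the business outcome this metric protects

# Hypothetical targets; replace with your organization's real goals.
ORG_GOALS = [
    Goal("MTTR", "< 4 hours for Sev-2 incidents", "limit customer-facing downtime"),
    Goal("SLA adherence", ">= 99% of tickets within SLA", "keep contractual commitments"),
    Goal("CSAT", ">= 4.5 / 5", "maintain trust in the service desk"),
]

def build_system_prompt(goals):
    """Serialize goals into grounding context prepended to every AI request."""
    lines = ["You are assisting an IT team. Optimize every suggestion toward these goals:"]
    for g in goals:
        lines.append(f"- {g.metric}: target {g.target} (why: {g.why})")
    return "\n".join(lines)

prompt = build_system_prompt(ORG_GOALS)
```

The point is not the prompt format; it's that goals live in one structured place and every AI interaction is grounded in them, rather than each tool guessing at priorities.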
2. The bottleneck just shifted
Maybe your teams are getting work done faster, but throughput stalls at the review, approval, or cross-team coordination gates – revealing that the real constraint has shifted from execution to governance and decision-making.
AI generates more RFCs, change requests, PIRs, and service requests – but when review and approval workflows are still manual and fragmented, you’ve just shifted the bottleneck. Change advisory boards drown in AI-generated records, security and risk teams are overwhelmed by a flood of requests, and leadership inboxes fill with AI-polished business cases that still lack real alignment.
What this may look like in your team:
- Change requests double overnight, but CAB still meets weekly.
- Risk and impact aren’t visible in one place, so approvals queue up for days while launches slip and leaders assume AI will “speed it up next time.”
At this point, many leaders double down on AI to clear the new bottleneck. But without a system of intentionally designed workflows underneath, that acceleration creates an even bigger problem.
Solution: Plan work in anticipation of AI bottlenecks
Define a single connected view across services, software, and infrastructure. Identify where human reviews of AI-generated work (code, PIRs, knowledge base content) will be needed to account for new capacity constraints.
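One way to plan for these review bottlenecks is to make the review gates explicit rather than routing everything through a single weekly board. The sketch below is a hypothetical routing function, not a real workflow-engine API; the artifact types and rules are assumptions you would replace with your own governance policy.

```python
def review_gate(artifact_type: str, risk: str) -> str:
    """Route an AI-generated work item to the right human review path (illustrative rules)."""
    if risk == "high":
        return "CAB review"          # full change-advisory-board review, synchronous
    if artifact_type in ("code", "change_request"):
        return "peer review"         # async human review, no board needed
    if artifact_type in ("kb_article", "pir"):
        return "owner sign-off"      # content owner approves asynchronously
    return "auto-approve"            # low-risk artifacts flow straight through
```

Encoding gates like this lets you measure queue depth per path, so you can see where AI-generated volume is piling up before it becomes the next bottleneck.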
3. AI is amplifying your mess
When work is already fragmented across tools and teams, adding AI can actually amplify the fragmentation. It increases the volume of content and recommendations moving through disconnected systems, making it harder for people to find the right context and act with confidence.
In other words, AI without context amplifies noise.
What this may look like in your team:
- The virtual agent suggests three “relevant” articles, but all are outdated.
- Engineers debate which runbook to trust while the incident clock keeps ticking. AI didn’t fail; your source of truth did.
If you operate without clarity on how work flows through your organization, which systems contain the current source of truth, and how goals, policies, and standards are captured and kept up to date, AI becomes just another layer of clutter on top of already noisy workflows.
If you’ve reached this stage, you’re likely starting to realize something uncomfortable…
Solution: Arm your AI system of work with organizational context
Invest in a context graph that unifies your organization’s knowledge across wikis, file shares, and enterprise chat. Use it as the single source to keep runbooks, PIRs, and standards current, and to link them to KB articles and assets.
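To make the idea concrete, here is a toy sketch of a context graph: nodes for runbooks and KB articles, edges linking them to related assets, and a staleness check so AI retrieval only surfaces current sources of truth. The node names, schema, and the 90-day freshness threshold are all assumptions for illustration.

```python
from datetime import date, timedelta

# Tiny in-memory graph; a real one would live in a knowledge platform.
graph = {
    "runbook:db-failover": {
        "updated": date(2025, 1, 10),
        "links": ["kb:postgres-ha", "asset:db-cluster-01"],
    },
    "kb:postgres-ha": {"updated": date(2023, 6, 2), "links": []},
}

def stale_nodes(graph, today, max_age_days=90):
    """Return ids of nodes older than the freshness threshold."""
    cutoff = today - timedelta(days=max_age_days)
    return [nid for nid, node in graph.items() if node["updated"] < cutoff]

# Anything flagged here should be excluded from AI retrieval until refreshed.
flagged = stale_nodes(graph, date(2025, 2, 1))
```

The design choice that matters is the linkage: because the runbook points at its KB article, a stale article can be traced back to every workflow that depends on it.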
4. Your teams work the same way they did before AI
Here’s the uncomfortable truth: the first three signs weren’t really about AI alone – they were about your system. Your org chart. Your approval chains. Your scattered runbooks and tribal knowledge. AI didn’t break anything; it just made the cracks impossible to ignore. Now it’s faithfully reproducing every broken handoff, every unclear ownership line, and every siloed decision at machine speed.
The harsh reality is that AI can’t mend pre‑existing fragmentation in IT organizations without context. When service, software, and infrastructure teams operate on disconnected workflows and data models, AI simply mirrors those breaks and often magnifies them in incident, change, and project work. To change the outcome, leaders must first redesign ownership, governance, and cross‑tool integration so AI has a coherent system to reinforce – not a fractured one to accelerate.
What this may look like in your team:
- You turned on AI across tools, but goals still live in decks and approvals in inboxes.
- People follow the same playbooks as last year – only faster – so outcomes don’t change.
Solution: Redesign how teams work with AI
Consider introducing AI as the owner of longer-running tasks in your workflow as its output quality improves. Start by defining the outcomes your team is trying to achieve, including the goal, the output-quality bar, and success measures. Then iterate and test to see where AI can take on work while humans review and make judgment calls.
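A minimal sketch of that handoff pattern: AI owns the task end to end, and a human judgment call is triggered only when the output falls below an explicit quality bar. The function names, scorer, and 0.8 threshold are illustrative assumptions, not a prescribed implementation.

```python
def process_task(task, ai_draft_fn, quality_score_fn, threshold=0.8):
    """AI owns the task; a human reviews only when quality falls below the bar."""
    draft = ai_draft_fn(task)              # AI produces the work product
    score = quality_score_fn(draft)        # automated or sampled quality check
    if score >= threshold:
        return {"result": draft, "owner": "ai", "needs_human_review": False}
    # Below the bar: the draft survives, but ownership shifts to a human.
    return {"result": draft, "owner": "human", "needs_human_review": True}
```

Making the threshold explicit is the point: as measured quality rises over time, you can widen AI ownership deliberately instead of by default.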
Why AI alone can’t fix these problems
If you’ve recognized your team in these stages, you’ve just traced the trajectory yourself. The pattern is clear: AI can’t fix organizational design and workflow fragmentation. Stack more AI on a broken system, and you just move the constraint. To unlock real ROI, fix how work happens first.
This is where establishing a connected System of Work becomes essential.
The Atlassian System of Work is a data-backed philosophy that helps organizations connect tech and business teams to accelerate progress and maximize impact. It rests on four principles.
- Align work to goals
- Plan and track work together
- Harness collective knowledge
- Make AI part of the team
When IT leaders apply these principles, AI and collaboration tools stop being isolated point solutions and start becoming part of a coordinated, outcome-driven system.
Turning warning signs into a roadmap
Applied together, these principles turn those warning signs into a plan of action.
The teams that are realizing real AI ROI didn’t start by buying better tools – they started by redesigning how work flows across teams. Want to see what sets them apart? The AI Collaboration Index breaks down what the top 4% of organizations do differently – and how to close the gap between AI investment and actual impact.

