The learnings in this blog post are based on the webinar, “Engineering at AI Scale: How Connected Workflows Unlock Velocity, Visibility, and Governance.” You can check out this webinar and others on demand.
In an era where organizations need to move faster than ever, AI offers a tantalizing promise of productivity gains and transformative business outcomes. Yet despite the hype, many teams still aren’t seeing meaningful results from their AI investments.
In a survey of more than 3,500 developers and engineering managers, respondents reported spending only 16% of their time writing code. The remaining 84% goes into everything around the code: clarifying requirements, documenting decisions, reviewing changes, searching for information, and sitting in meetings.
Teams don’t usually slip because a single phase of the software development lifecycle (SDLC) is slow; rather, they slip because the flow is broken and context is lost between stages.
The challenge is turning AI investments into a connected, measurable system of work.
From AI features to an AI operating model
Many engineering organizations are treating AI primarily as a coding accelerator. They are piloting code assistants and experimenting with model providers, but doing so within a fragmented toolchain.
When we share the 16% coding statistic with customers, they often pause, then recognize that most of their developers’ time is spent in the “outer loop”: getting context, clarifying decisions, aligning with stakeholders, and navigating complex systems. If AI is applied only in the code editor, it can never compound across the end-to-end workflow.
We’ve outlined three challenges leaders need to address to unlock compounding value from AI:

- Challenge: Disconnected data across teams and tools
Why it matters: AI is only as good as the data it can access. A unified data model that connects work, goals, and knowledge across the business gives AI the context it needs to be useful. This is the role of Atlassian’s Teamwork Graph, a data intelligence layer that connects data across Atlassian and third-party tools and defines the relationships between knowledge, workflows, teams, and projects.
- Challenge: AI siloed outside everyday workflows
Why it matters: Teams execute work across Jira, Confluence, Loom, Rovo, and third-party tools. AI has to live in those workflows, not in a separate destination. It should remove friction at every step, from planning to building to shipping to learning.
- Challenge: Disjointed tech and business workflows
Why it matters: Work should translate and flow across the organization. A product requirements document should become the seed for Jira issues, test plans, launch briefs, and support runbooks. AI can help aggregate and translate context for different stakeholders, but only if the underlying workflows are connected.
For highly analytical, outcome-focused tech leaders, these challenges are not abstract. They show up as slower delivery, more firefighting, and difficulty proving ROI. The question is not whether AI is important, but whether it can be implemented in a way that is secure, measurable, and aligned with how your teams already work.
Did you know?
73% of tech leaders list AI expansion as their top priority, but 77% struggle with integration.
AI is most effective when it is not just another tool, but an intentional leadership strategy for how teams work. To show how these ideas work in real life, let’s look at how Atlassian’s engineering teams unlock AI’s value in their day‑to‑day work.
How Atlassian’s engineering teams embed AI across the SDLC
Atlassian’s Confluence engineering team already uses AI to achieve strong results across its SDLC.
The team had roughly four months to go from idea to a fully shipped cross-surface AI experience across pages, LiveDocs, whiteboards, and databases. This meant they needed to move at “AI speed” while still preserving high quality and a clean architecture.
What made this possible was automating workflows across the entire SDLC with Rovo and Teamwork Collection. Rovo’s capabilities in Teamwork Collection are grounded in Atlassian’s unified platform and the Teamwork Graph, so it delivers context-aware AI experiences that respect permissions and reflect how teams actually work. This includes both the “left of code” context (requirements, designs, product decisions) and the “right of code” context (incidents, support tickets, operational runbooks).
Did you know?
McKinsey finds that leaders need to rewire their data and operating models to gain connected visibility across the business – not just run isolated AI pilots.
Design that stays connected, not siloed
The Confluence engineering team still used Figma for core UX design, but they did not treat it as a separate island. Using Rovo connectors, they pulled Figma content into their workflow.
From Confluence or Rovo Chat, engineers, designers, and PMs could ask questions like “Show me the latest entry point designs” or “What changed in the last iteration?” Rovo summarized changes across Figma files and related Confluence docs, turning static design screens into a queryable, shared context. This minimized context chasing and made design decisions visible and reusable across the team.
Also, thanks to Loom, design reviews were no longer gated on finding a meeting slot. Loom enabled asynchronous walkthroughs of Figma designs, making consensus faster and more inclusive.
From messy brainstorming to structured execution
In the early discovery phase, the team used Confluence Whiteboards to capture the problem space, user journeys, edge cases, technical constraints, and go‑to‑market inputs. The whiteboards were intentionally messy, as any good brainstorm should be.
The difference came after the brainstorm. The team clustered and prioritized directly on the whiteboard, then converted that structure into a Confluence page and Jira issues. Rovo summarized the whiteboard into a first draft of the spec and backlog.
Instead of manually rewriting notes and losing nuance, they moved from exploration to execution faster without sacrificing intent.
Reusing architectural and customer knowledge
Using Rovo, the team analyzed past customer feedback, internal dogfooding notes, and earlier AI architecture decisions. Because all of this was connected through the Teamwork Graph, they could ask questions like:
- What worked last time and what did not?
- Where did we struggle?
- Where did users struggle?
Rovo synthesized these sources into themes and trade-offs that directly influenced the project’s direction, including decisions that went into Atlassian’s final product design, such as adopting a single creation agent and a streaming flow.
Coding with a context-aware assistant
During implementation, engineers used Rovo Dev – an AI‑powered development agent available in the terminal, in the IDE, and across Atlassian tools – to navigate the codebase, jump to the right services and components, and quickly understand existing patterns. Rather than behaving like a generic coding assistant, Rovo Dev is grounded in the Teamwork Graph, making it an architecture-aware assistant connected to Jira issues, Confluence docs, Loom videos, and third-party data.
Developers could ask how existing AI entry points were wired, request summaries of relevant services, and generate code and test drafts aligned with established patterns. This helped them move quickly while keeping the design consistent and maintainable.
Closing the loop with research and customer feedback
After launch, research teams returned to customers to collect qualitative feedback on the feature: what worked, what felt confusing, and where expectations did not align with reality.
Historically, this is where context breaks. Research lives in one place, while engineering and product live in another. With Teamwork Collection, research recordings and notes flow into the same system of work.
Researchers now use Loom to capture customer sessions and share them broadly, and feedback feels more real when engineers can watch short clips of users speaking in their own words. Rovo then synthesizes recurring themes at scale and links them back to the original specs and Jira issues, and Rovo Dev helps engineers incorporate those learnings in subsequent iterations.
This is the definition of interconnected AI: it does not support just one stage of the lifecycle; it connects the entire loop from idea to implementation to learning, and back again.
Improving collaboration between tech and business teams
Connected workflows help engineering and business teams work cross-functionally and contain the ripple effects when one side hits a snag.
With Rovo and Teamwork Collection, product and marketing teams can move from a product requirements document (PRD) to launch‑ready campaigns without losing context. While product teams capture the “what” and “why” of a new capability in Confluence PRDs, marketing teams use Rovo on those same PRDs to generate first‑draft campaign briefs, messaging frameworks, and launch checklists that stay grounded in the original product intent.
Because this work is connected through Teamwork Graph, the same PRD can also power richer Jira issue descriptions for engineers and clearer enablement content for sales and customer‑facing teams. Instead of every team recreating context from scratch, Rovo helps translate a single source of truth into tailored artifacts for each audience, closing gaps between product, engineering, and go‑to‑market and accelerating time to impact.
Measurable impact for engineering teams
Tech leaders are notably skeptical of vague promises, and for good reason. They need evidence that AI will improve delivery, not just add complexity.
For Atlassian, using Loom, Rovo, Jira, and Confluence together as part of Teamwork Collection has delivered measurable impact:
- An estimated 500,000 meeting hours have been returned in the first two years of Loom usage, based on video views and lengths
- 100+ hours saved per engineer per year using Rovo, driven by reductions in time spent searching for context and performing repetitive tasks
- A 45% improvement in pull request cycle time with Rovo Dev (a Teamwork Collection add-on) for code navigation, pattern reuse, and test generation
- 50% faster release rollouts, enabled by AI embedded across the end‑to‑end journey rather than a single stage
For leaders who care about user-friendliness, consolidation, and business outcomes, these are the kinds of proof points that matter. Teamwork Collection provides a common language across teams and unlocks more value from every tool.
Did you know?
71% of CEOs rank AI as a top investment priority, and ~69% plan to allocate 10–20% of budgets to AI, with most expecting ROI in 1–3 years.
A pragmatic playbook for AI adoption at scale
AI delivers real value when it’s deliberately embedded where you work, not fragmented across standalone chat boxes. The following playbook outlines concrete steps leaders can take to move from scattered AI experiments to a secure, scalable way of working.
1. Start with focused pilots, but design for scale
Waiting for the AI landscape to “stabilize” is unrealistic. At the same time, jumping into every new tool without a plan creates noise and risk.
In practice, tech leaders should try:
- Starting with a small number of high‑impact workflows where AI can reduce friction end‑to‑end, not just at one touchpoint
- Piloting with motivated teams who are willing to learn, experiment, and share
- Measuring both quantitative impact (time saved, cycle time improvements, fewer meetings) and qualitative feedback
From there, leaders can codify what works as standard workflows, templates, and skills so it becomes institutional knowledge rather than a one‑off experiment.
2. Use a hub‑and‑spoke model to manage learning
To avoid uneven adoption and fragmented experiments, try building:
- Spoke teams that run pilots and push the boundaries of what is possible, using the latest models and techniques
- A hub team focusing on evaluating those pilots, standardizing successful patterns, aligning with security and compliance, and scaling them across the org
This allows you to harness grassroots innovation without losing governance or consistency. Over time, AI‑enabled ways of working move from experimental to default.
3. Bring IT, security, and compliance in early
From a buyer criteria perspective, security and compliance are non‑negotiable. Tech leaders need confidence that AI will protect sensitive data, respect permissions, and be auditable.
Make sure your risk and compliance partners ask these three questions of any AI platform:
- What data can AI access, and does that respect existing permissions?
- How is that data used, processed, and stored?
- Can we audit and prove what the system is doing over time?
Working closely with these stakeholders from the beginning, rather than treating them as a gate at the end, can preserve momentum while maintaining trust.
Teamwork Collection and Rovo are built with these concerns in mind. Because they sit inside your existing Atlassian Cloud environment and leverage Teamwork Graph, they inherit the same permission model and enterprise‑grade security and compliance controls you already rely on.
Moving from fragmented tools to a unified, AI-native system
A unified, AI‑native system only matters if it helps real teams ship better work with less friction.
That’s where Teamwork Collection comes in: it gives you a single, connected environment to align stakeholders, orchestrate delivery, and operationalize AI across the SDLC, without asking your teams to change where they already plan, build, and communicate.
Teamwork Collection gives you:
- A unified, AI‑native collaboration layer across Jira, Confluence, Loom, and Rovo
- A connected data foundation through Teamwork Graph that ties goals, work, teams, and knowledge together
- Embedded AI across the SDLC, from early discovery and design to implementation, research, and operations
- The governance, security, and auditability required by enterprise IT and compliance teams
The organizations that will win in this next wave are those that treat AI as a leadership strategy, not a point solution. They will use connected workflows to unlock developer focus, shorten delivery cycles, and strengthen alignment between technical and business teams.
Teamwork Collection as a connected system of work
At Atlassian, our portfolio of Cloud apps is a connected system of work rather than an assortment of point solutions.
Teamwork Collection sits at the center of that system as the shared foundation for collaboration across both technical and business teams. It brings together Jira, Confluence, and Loom – powered by Rovo – to unify goals, work, and knowledge on one platform.
This is what enables Teamwork Collection to serve as an operating model for engineering at AI scale, not just a product bundle – helping teams apply the right context, visibility, and governance throughout the SDLC.

