Adam Grant on how AI reshapes work: why agility beats ability
Practical lessons for leaders redesigning teamwork in an AI-first era
The lessons in this blog post come from Adam Grant’s session at Atlassian’s Teamwork in an AI era event. Don’t miss your chance to see Grant’s full session, available only until June 30th.
When you bring AI into teams, the hard part isn’t the technology. It’s the people.
That was the through-line from Adam Grant during our event, Teamwork in an AI era. Grant is an organizational psychologist, bestselling author of six books, and Wharton’s top‑rated professor for seven years running. His work focuses on how people lead, work, and live in fast-changing environments – exactly the conditions AI is creating inside today’s teams.
Grant dug into how leaders can rethink team structures, culture, and their own habits so AI actually makes teamwork better, not just faster.
The new currency of success
For most of modern management, careers have been built on ability: demonstrating mastery, proving expertise, and being “the person who knows the answer.”
According to Grant, that era is over.
“We live in a world now where the currency of success is not ability, but agility. It’s the people who are fastest to adapt to change, who are quickest to learn and unlearn, that are ultimately rising and succeeding – and, in the process, building job security.”
– Adam Grant
In other words, the teams that win won’t be the ones who already know how to use AI perfectly. They’ll be the ones who constantly run small experiments, such as trying multiple prompts instead of just one, comparing outputs across teammates, and learning what AI is good at, what humans are better at, and where collaboration works best.
And because the tools are evolving so quickly, those experiments can’t be a one‑off pilot. Grant warns: if you keep using last year’s prompts in this year’s world, you’re already behind.
Treat AI as a thought partner, not an oracle
Treating AI as a thought partner invites teams to use it the way they’d use a sharp colleague: someone who helps you brainstorm options, challenge assumptions, and refine drafts, while still requiring human judgment to decide what’s true, useful, and on‑strategy. Approached this way, AI becomes a catalyst for faster learning and better decisions, not a shortcut to blindly trusted answers.
We already know how to do this with people:
- We don’t trust every piece of information a colleague gives us.
- We recognize that confidence doesn’t always equal competence.
- We test ideas instead of accepting them blindly.
The same mindset should apply to AI. Leaders, in particular, need to wear a “scientist hat” by forming hypotheses about where AI can help, then testing and iterating rather than chasing fixed best practices in a moving landscape.
Working in this AI era, that means:
- Running lots of small, low‑risk experiments
- Comparing human vs AI vs human+AI workflows
- Expecting some “false hunches” and learning from them
The goal isn’t to be right once. It’s to keep learning as the environment changes.
Why selling only the upside of AI backfires
Many leaders reach for the carrot: AI will remove toil, help write release notes, and draft those emails you hate.
Grant’s view: that’s only half the story, and often not the most motivating half. According to Grant, leaders must underscore that AI fluency is essential for modern work.
The key, though, is not to weaponize that fear with endless threats. If you overuse the stick, people will do the bare minimum to avoid getting fired. They’ll get narrower, more cautious, and cling more tightly to their comfort zones, the exact opposite of the agility this moment demands.
“What people are looking for is not positivity. It’s clarity. You don’t have to be upbeat and reassuring in an uncertain world. You just have to paint a vivid picture of your vision and strategy.”
– Adam Grant
Instead of leaning solely on carrots and sticks, Grant urges leaders to anchor in values:
- Make learning agility, coachability, and adaptability explicit success criteria
- Reward experimentation, even when the specific attempt “fails”
- Tell stories that reinforce those values in action
Which leads to his next point.
Culture is shaped by the stories you tell
One of Grant’s favorite tools for changing behavior isn’t a policy; it’s a story.
Stories about what people do inside a company shape how newcomers understand what’s really valued. This is backed by the research of Sean Martin, an award-winning professor at the University of Virginia Darden School of Business.
What’s not always explicit is that adopting AI, in and of itself, isn’t the value. Rather, it’s the underlying company values, like having a growth mindset, upskilling to better serve customers, learning and being curious, experimenting and evolving, playing as a team, and being the change you seek. This naturally leads to adopting AI as a tool for progress.
Two types of stories matter most:
- Stories of senior leaders violating values. For example, powerful leaders who opt out of AI entirely, implicitly signaling, “this isn’t my job.” These are culture‑killers.
- Stories of junior people upholding values. For example, a new hire runs a small experiment with AI, finds a surprising use case, and helps their team adopt it. These are culture‑builders.
By shining a light on “bright spots” like these, leaders encourage teams and individual contributors to experiment with AI thoughtfully as an expression of shared values – and reinforce that using AI well is a byproduct of living those values, not just a compliance checkbox.
Use AI to challenge your assumptions and expose hidden gaps in your strategy
It’s not enough for leaders to tell their teams to experiment with AI. They need to embody it in their own daily work.
At a dinner Grant attended with AI pioneers from Anthropic and DeepMind Technologies, the conversation turned to how AI could become a powerful sparring partner, not just a tool for convenience. His conclusion: use AI to build your “challenge network.”
A challenge network is a group of people who question your assumptions, spot holes in your strategy, and tell you the uncomfortable truths others might sugarcoat.
“As leaders, we should be using AI tools to do something that the humans around us are often reluctant to do, which is to question us and challenge us.”
– Adam Grant
In most organizations, employees are afraid to call out gaps or vulnerabilities in their manager’s plans; however, AI doesn’t have that social fear if you prompt it correctly.
Grant recommends leaders regularly ask AI:
- “What’s a blind spot I might have as a leader?”
- “What’s a hole in this strategy?”
- “What systematic mistakes might I be making when I communicate with others?”
Then triangulate that with human feedback (like 360 reviews) to see where themes overlap.
Crucially, he argues leaders should be open about this:
“Not being open about your shortcomings is failing to model the very behavior you want to see in others.”
– Adam Grant
In Grant’s research, when managers openly share areas they’re working on and what they’re doing to improve, their teams don’t see them as less competent. Instead:
- Psychological safety increases
- People are more willing to speak up about problems
- Teams bring more suggestions and solutions forward
Your team already sees your flaws; you might as well get credit for the self‑awareness and humility to work on them.
A practical way to model openness: share with your team what AI surfaced about your strategy or leadership style, and invite discussion.
Leaders don’t have to read everything if they fix the system
One common refrain Grant hears from senior leaders: “Everything is changing, and I can’t keep up.”
With AI news, product launches, and new tools dropping daily, it’s unrealistic to expect a CEO or exec to track it all personally. The answer, he says, isn’t heroic individual effort; it’s better information flow by design.
He points to W. L. Gore (makers of Gore‑Tex) as a model. Gore reimagined the rigid corporate ladder as a more flexible, interconnected lattice.
- Anyone with an idea can take it to any leader, not just their direct chain.
- All it takes is one “yes” to keep an idea alive.
- That gives senior leaders exposure to perspectives and information they’d never see otherwise.
The lesson isn’t that every company should copy Gore’s org chart. It’s that leaders need structured ways to hear from people outside their usual circles, especially in a fast‑moving domain like AI.
Weak ties – the people you don’t work with every day – often carry the most novel information. Systems that encourage those connections help leaders stay current without burning out.
Discover Atlassian’s System of Work
Leaders don’t just need new tools in an AI era – they need a new, intentional way of working that hard‑wires experimentation, alignment, and learning into how teams operate. Atlassian’s System of Work turns ideas like “agility as the new currency,” challenge networks, and focused innovation tournaments into practical, repeatable patterns any team can adopt.
Redesign your workflows, rituals, and decision paths so AI actually improves teamwork rather than just speeding it up.
Avoiding AI chaos
Of course, opening the floodgates to ideas can create a different problem: duplication, chaos, and sub‑scale efforts.
Many large orgs feel this tension: when new AI capabilities land, dozens of teams can suddenly race to build similar prototypes, fragmenting effort and overwhelming the colleagues those prototypes are shared with.
Grant’s solution is to pair autonomy with focused innovation tournaments, a mechanism he studied at Dow Chemical:
- Start with a clear brief (for example, “ideas that save energy and reduce waste,” under specified budget and payback constraints).
- Invite ideas from across the company, with peers and experts rating them.
- Combine similar ideas and have teams expand their best concepts.
- Invest in a small number of high‑potential ideas each cycle.
Over a decade, Dow funded around 575 ideas this way, saving roughly $110M per year, much of it from people outside traditional R&D roles.
How Atlassian fosters innovation across teams
Every quarter, Atlassian runs ShipIt, a company-wide 48-hour innovation event where every employee, regardless of team, is encouraged to participate. Individuals or teams can build and demo anything from customer-facing product features to internal tools and workflow improvements that help Atlassians work better.
For AI, Grant suggests borrowing this pattern and adding one twist from Google X: kill signals.
When teams propose a new AI idea, they should also define up front:
- What early signs would tell us this won’t work?
- What metrics or thresholds would make us shut this down in a week or a month?
At Google X, teams are rewarded for pulling the plug early on doomed ideas, not punished for it. That allows them to fail faster, free up resources, and redirect talent to the next experiment.
In AI‑heavy product work, that might look like:
- Adding “fast‑fail criteria” alongside success metrics in PRDs
- Normalizing short cycles where projects are intentionally paused or killed
- Celebrating the judgment to stop, not just the perseverance to continue
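The fast‑fail pattern above can be made concrete in code. Here’s a minimal sketch – the metric names and thresholds are hypothetical illustrations, not from Grant’s talk – of how a team might encode kill signals as explicit, testable checks rather than vague intentions:

```python
# Hypothetical sketch: encoding "kill signals" as explicit, testable thresholds.
# Metric names and thresholds are illustrative, not from the source.

KILL_SIGNALS = {
    "weekly_active_users": lambda v: v < 50,   # adoption never materialized
    "error_rate": lambda v: v > 0.25,          # output quality unacceptable
    "cost_per_task_usd": lambda v: v > 2.00,   # AI costs exceed the value created
}

def should_kill(metrics: dict) -> list[str]:
    """Return the names of any kill signals the current metrics trip."""
    return [name for name, tripped in KILL_SIGNALS.items()
            if name in metrics and tripped(metrics[name])]

# Example check after a one-week pilot
week_one = {"weekly_active_users": 12, "error_rate": 0.08, "cost_per_task_usd": 0.40}
print(should_kill(week_one))  # ['weekly_active_users'] -> pull the plug early
```

Writing the thresholds down before the pilot starts is the point: the decision to stop becomes a pre‑agreed check, not a morale debate a month in.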
Redefining what ‘good work’ looks like in an AI era
AI makes it easy to produce more: more docs, more drafts, more prototypes.
That’s precisely why Grant thinks one of the most important habits to unlearn is equating volume and speed with quality.
Historically, churning out lots of work could be a signal of mastery: the more shots you took, the more likely you were to hit something valuable. Today, with AI, it’s just as easy to produce “work slop”, a term Grant borrows from research colleagues at BetterUp.
“Work slop” is:
- Long, low‑value documents
- Poorly filtered AI output
- Artifacts others then have to sift, clean, and interpret
In a world where anyone can generate pages of content in minutes, the differentiator is no longer collecting facts; it’s connecting dots.
The real value now comes from people who can:
- Turn a huge volume of inputs into a tight, insightful synthesis
- Explain something in one minute that used to take an hour
- Use AI to get breadth, then apply human judgment to find meaning
For leaders, that means recalibrating what you reward: not raw output, but thoughtful, concise contributions that move the team forward.
Beware of becoming a “cultural museum”
When Grant thinks about what leaders will regret in hindsight, he goes back to a conversation with Larry Page, who once told him his biggest fear was that Google would become a “cultural museum”:
“We’re going to freeze the artifacts and practices of the past. We’re going to put them in a glass case and admire them. No. We need to smash the glass case. We should not have reverence for the way we’ve always done things. We should continue evolving the way we do things as the world evolves. And that will make us faster at adapting to change. It’ll also make us quicker to initiate change.”
– Adam Grant
In an AI era, that danger is everywhere: rituals, best practices, and success metrics built for a world that no longer exists.
How to “smash the glass case”:
- Question which “best practices” were built for a different decade (or a different org size, or a different technology stack).
- Treat them as hypotheses to test, not sacred truths.
- Look for “better practices” that pair AI with intentionally designed systems of work: clearer alignment, better decision paths, and more room for experimentation.
The organizations that thrive won’t be the ones that bolt AI onto old ways of working. They’ll be the ones willing to rethink how work happens: roles, norms, workflows, and measures of success, all with humans firmly in the loop.
Watch the full Adam Grant session
If you’re a leader navigating AI’s impact on your team, Adam Grant’s session is packed with practical, human‑centered guidance: from how to design experiments and kill signals, to how to use AI to build your own challenge network. Start small, measure what matters, and let real outcomes, not hype, determine what you scale.