I talk with a lot of marketers both on LinkedIn and at conferences, and I hear similar stories about AI adoption.
Their team blocks every Friday afternoon for “AI experimentation.” With sheer discipline, they carved out the calendar, got leadership buy-in, and even set up a shared Slack channel for discoveries.
After a month, they had 12 prompt libraries, three tool-comparison spreadsheets, and zero changed workflows.
The problem wasn’t effort or enthusiasm. It was that they were solving hypothetical problems, not real ones. They’d sit down with a blank Claude window and ask, “What could we use this for?” instead of opening the half-finished brief sitting in their inbox and asking, “Can this help me right now?”
They’re not alone. McKinsey reports that while 79% of organizations are experimenting with generative AI, fewer than 10% have scaled it into actual workflows. MIT Sloan calls these companies “Experimenters” (the largest cohort), who test AI without a roadmap or integration plan and end up with no measurable productivity gains.
If that sounds familiar, keep reading. There’s a better way.
Treat AI adoption like teamwork, not a side project
If we think about how we interact with our human teammates, we don’t block out a single chunk of time each week to practice collaborating. We confirm roles, responsibilities, required skills, and ways of working when we kick off a new project. We turn to our teammates when we run into problems we don’t have the skills to solve.
And yet, many teams treat AI adoption as a separate process, a sprint, or a massive, siloed project. Instead, we should approach AI adoption the way we collaborate with our human teammates: as an ongoing process that evolves as our needs change and we discover new problems.
Why batch exploration fails for marketers
The nature of marketing work makes it a poor fit for open-ended AI exploration. On any given day, we’re juggling campaign briefs, launches, stakeholder feedback, and performance data. We’re constantly context-switching, often working with subjective customer or leadership feedback, shifting market conditions, and incomplete data to back our recommendations.
When you sit down to “explore AI” without a live problem in front of you, you’re missing everything that makes AI actually useful:
- The real brief. Not just a hypothetical one, but a real-world one with the weird constraints and the impossible deadline.
- The real data. Your actual campaign performance numbers, not a sample dataset.
- The real urgency. The pressure that forces you to honestly evaluate whether the AI output is good enough to ship.
One marketer at Atlassian put it perfectly: “I know that capability exists… I just haven’t had a chance to really think about how I can bring agents into my workflow yet.” That’s not skepticism. That’s someone drowning in context who doesn’t have the bandwidth to manufacture artificial problems.
And here’s the thing: finding problems and solving them is a core marketing skill. Marketers don’t need a special sprint to be innovative. We need to point that innovative spirit at the friction we’re already experiencing.
Harvard Business Review argues that as AI models become commoditized, the true differentiator is an organization’s “Context Layer”: the real-world workflows, signals, and trade-offs that occur in the moment. HBR also notes that AI adoption stalls when organizations pursue “bold, sweeping changes” rather than identifying specific, high-impact use cases to solve real business problems.
The irony is clear: the batch exploration sprint is the bold, sweeping change that goes nowhere.
The “in the moment” alternative
So what works instead? Stop setting aside dedicated AI time. Start building a 10–15% buffer into every project timeline. Then, once you find the minimum viable solution, set aside time to optimize and scale it.
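To make the buffer concrete: on a project scoped at ten working days, 10–15% is roughly one day to a day and a half of slack. That’s enough room to try an AI approach on a stuck task, judge the output, and still hit the original deadline if it flops.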
Here’s the shift: instead of asking “When should I experiment with AI?”, ask “Where am I stuck right now?”
When the brief is a mess, the draft needs four variants, the approval chain is unclear, the quarterly report is eating your entire day – or any other friction point – that’s the moment to try an AI approach.
This works better for four reasons:
- You have full context. You’re not manufacturing a scenario. You’re knee-deep in the real problem with all the constraints, history, and nuance loaded in your head (and hopefully, in your systems and tools).
- You can evaluate immediately. Did that output actually help? You’ll know in seconds because you know what “good” looks like for this specific task.
- The risk is built into the timeline. If AI fails, you still finish the project on time because you budgeted the buffer. If it works, you just got time back.
- The learning sticks. You don’t need to take notes for a future scenario. You just solved a real problem, and you’ll remember exactly how it felt.
This isn’t just intuition. Blueline Simulations reports higher engagement and retention when learning is immediate and tied to a live task. And Stanford University found the largest productivity gains, 76% to 176%, came when AI was applied to concrete “digital chores” and specific tasks, not vague exploration.
The pattern is consistent: AI is most useful when it meets a real need in real time.
How connected workflows beat isolated tests
There’s a deeper reason why in-the-moment AI use outperforms sandbox experimentation: context.
When you open a blank AI chat window and type a generic prompt, the tool knows nothing about your campaign, your audience segments, your brand voice, your team’s capacity, or what you shipped last quarter. You have to manually reconstruct all of that context through prompting, and most people don’t bother because it’s exhausting.
But when AI is wired into the systems where your work already lives – like project plans, documents, past briefs, meeting transcripts, and performance data – it starts with context rather than starting from zero.
That’s the difference between asking AI, “Write me a campaign brief”, and asking it, “Look at the brief for Project X, compare it to our last three successful launches, and tell me what’s missing.” The second prompt produces something useful. The first produces something generic.
One Atlassian marketing leader captured the frustration of the alternative: “Vendors love to sell us shiny AI add-ons, but if the basics aren’t wired (what’s the work, who owns it, what did it deliver) then it’s just another disconnected surface my team has to babysit.”
This is why connected systems matter so much for AI. The AI doesn’t need more capabilities. It needs more context. And you only have all the context when you’re in the middle of the work, not when you’re sitting in a sandbox trying to imagine what the work might look like.
MIT Sloan Management Review reinforces this, recommending that teams apply agentic AI to real knowledge work, such as meeting preparation, competitive intelligence, and stakeholder updates, rather than treating it as a standalone coding or content toy.
Show, don’t tell: building AI advocacy from the inside out
There’s another lesson buried in the “stop experimenting” message that’s worth pulling to the surface, because it changes how you build buy-in for AI across a marketing org.
At Atlassian, we’ve learned that telling marketers AI will transform their work doesn’t move the needle. What moves the needle is showing them with their own workflows, their own data, their own pain points.
Our internal research shows that AI reaches product-market fit within a marketing org only when two conditions are met: the right experience and the right champion. Without both, even a strong vision stalls at the adoption stage.
The “right champion” is critical, and it’s almost always Marketing Ops. It’s not that they’re more technical or smarter; it’s that they have end-to-end visibility across systems, are on the hook for wiring integrations and infrastructure so tools can talk to each other, and clean and maintain the data.
Because they sit at those connection points, they see where AI actually fits and what it needs to plug into. That responsibility gives them the credibility to bring the rest of the team along. They don’t sell AI as a concept; they prove it by fixing problems the team already dreads.
This maps directly to the “in the moment” philosophy. The best AI advocacy doesn’t come from a presentation deck or an all-hands demo. It comes from a Marketing Ops lead who says, “I used AI to cut the quarterly report from six hours to forty-five minutes, here’s exactly how I did it, and here’s the output.” That’s a micro case study, not a pitch. And it’s far more persuasive.
The show-versus-tell principle applies at every level:
- For individual contributors: Don’t evangelize AI. Just use it when you’re stuck and share what happened, good or bad, with your team. Authenticity beats enthusiasm.
- For Marketing Ops: Identify the two or three workflows that eat the most time across the team. Solve one of them with AI. Document the before and after. That becomes your internal proof point.
- For marketing leaders: Resist the urge to mandate AI adoption or launch a formal program. Instead, create the conditions for organic discovery by building buffer into timelines, celebrating real examples, and making it safe to try and fail.
Keep in mind
The organizations that scale AI successfully aren’t the ones with the most training sessions or the biggest tool budgets. They’re the ones where someone on the team solved a real problem, shared the story, and made it easy for others to do the same.
Three real marketing problems worth trying AI on (when they happen)
Theory is helpful, but specifics are better.
Here are three scenarios where the “in the moment” approach pays off, not as hypothetical use cases but as situations you’ll recognize from your own week.
The brief that arrives half-baked
You know the one. It’s missing the target audience. The success metrics are vague. The timeline is “ASAP.” Instead of scheduling a 30-minute discovery call that won’t happen for three days, feed your intake form and two or three past campaign briefs into AI. Ask it to fill the gaps and flag what’s still missing. You’ll get 80% of the way to a complete brief in 10 minutes, and you’ll have better questions for the call when it happens.
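As a sketch (the specifics here, like “last three campaigns,” are placeholders for your own materials), the prompt might look like: “Here’s our intake form and the briefs from our last three campaigns. Draft the missing sections of this new brief, including target audience and success metrics, and list the questions you still can’t answer from what I’ve given you.” The flagged questions become your agenda for the discovery call.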
The quarterly report that swallows your day
You’re staring at five open dashboard tabs, a spreadsheet of channel performance, and a half-written narrative that needs to sound strategic but is mostly just restating numbers. When AI is connected to your actual data sources, ask it to draft the narrative section. You’ll spend your time editing and adding insight instead of copy-pasting metrics and writing transitions.
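The ask might look something like this, assuming your tool can already see those data sources: “Using this quarter’s channel performance data, draft the narrative section of our quarterly report. Call out the three biggest changes versus last quarter and flag any numbers that look anomalous.” The draft won’t be strategic on its own, but it gives you something to react to instead of a blank page.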
The approval chain that stalls
It’s been four days. You don’t know if the holdup is your VP, legal, or the partner team. Instead of sending another “just checking in” Slack message, ask AI to surface what’s stuck, identify who’s blocking, and draft the escalation note. Now you’ve gone from passive waiting to active unblocking in two minutes.
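One possible shape for that ask, assuming the AI is connected to your project tracker: “Show me every open approval on Project X, who it’s waiting on, and how long each has been idle. Then draft a short, polite escalation note to the longest-standing blocker.” Project X is a stand-in; the shape of the request is what matters.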
Notice the pattern in all three
You’re stuck → you try AI → you evaluate immediately → you iterate or discard.
No exploration sprint needed. No prompt library required.
What to do with what you learn from implementing AI
Every time you try AI in the moment, whether it works or not, you’ve generated a micro case study. Don’t let it evaporate.
Keep a lightweight log. It doesn’t need to be formal. Just capture four things:
- What was the problem? (The brief arrived incomplete.)
- What did you try? (Fed it past briefs and asked it to fill gaps.)
- What worked and what didn’t? (The structure was great; the tone was off.)
- Would you do it again? (Yes, but I’d add brand voice examples next time.)
Share those notes with your team, not as “AI best practices” but as “here’s what I tried when I was stuck.” That framing matters. It’s not a mandate or a training initiative. It’s one colleague helping another.

Over time, patterns emerge naturally. You’ll notice that AI is consistently helpful for first drafts but terrible at audience segmentation. Or that it saves hours on data synthesis but needs heavy editing on strategic recommendations. That’s when you standardize: when the patterns are grounded in real experience, not hypothetical use cases.
This is the same instinct marketers already have for campaign optimization. You test, you measure, you iterate, you scale what works. To help drive AI adoption, treat it the same way: commit to continuous, long-term optimization.
Guardrails and good judgment
Use AI where it helps, but keep humans in the loop. Any tools or outputs should include human input and oversight and align with your company’s policies on security, compliance, and privacy.
The best AI use case is the one you haven’t invented yet
Stop trying to predict your AI use cases in advance. The best ones will emerge from problems you haven’t hit yet, in contexts you can’t simulate.
Instead, do three things:
- Build a 10–15% buffer into your timelines. That’s enough room to try something when friction appears.
- Reach for AI when you’re actually stuck. Not when you have free time, but when you have a problem.
- Share what happens. With your team, in your own words, without the pressure of a formal readout.
The marketers who get genuinely good at AI won’t be the ones who “explored” the most tools or attended the most workshops. They’ll be the ones who solved the most real problems, one stuck moment at a time.
Want to learn more about how other leaders adopt AI? Check out our recent Teamwork in an AI era event, available on demand now.


