Rovo brings meaning‑aware search, chat, and agents to Jira, Confluence, and your connected tools, so teams can quickly find the right artifacts and move forward with confidence.

Instead of forcing you to remember exact titles or ticket IDs, Rovo focuses on what you mean. That's the difference between wading through pages of loosely related results and landing directly on the doc, Jira ticket, and decision trail that actually answer your question.

We’re always on the hunt for ways to boost how well Rovo understands the language of real work—your acronyms, project codenames, incident patterns, and runbooks—because traditional search consistently lets teams down.

Why traditional search breaks down at work

Investments in smarter, more contextual search matter so much to us because the old way of searching at work is fundamentally broken. Classic enterprise search is built on keyword matching and simple filters. That model starts to fail in a few common ways:

  • Language mismatch: The words you type (“how to find my pay stub”) rarely match the words in the doc title (“Global Workday Payslips”).
  • Scattered context: The “answer” isn’t in one place; it’s spread across Jira tickets, Confluence pages, comments, and Slack threads. Keyword search treats each item as isolated.
  • Structure beats text: In tools like Jira, the most important signals are often structured fields (status, assignee, components, links to epics), not just the description text.
  • Evolving vocabulary: Teams invent shorthand, rename projects, or pivot strategy. Static relevance rules and exact matches can’t adapt quickly.

The result: more time hunting for information, more duplicate work, and more decisions made without full context. That’s where Rovo comes in.

Semantic search in Rovo: finding answers, not just matching words

Most search boxes still behave like they’re stuck in 2005: type a few keywords, get a list of documents that happen to contain those exact words. Helpful… until you’re trying to answer a real question like, “how did we prioritize features for the mobile launch?”

Rovo’s semantic search is built to understand what you mean, not just what you type.

  • Understanding intent, not just tokens. When you search “why did we delay the Q3 launch?”, Rovo looks for issues, pages, and discussions that explain causes and decisions—even if none of them say “delay” or “Q3 launch” verbatim.
  • Grounding in work objects. Rovo doesn’t operate on a generic web corpus. It’s tuned on work artifacts: Jira issues, epics, Confluence pages, runbooks, post-incident reviews, and more. It learns patterns like ownership, dependencies, and decision trails.
  • Hybrid retrieval, not embeddings alone. Rovo combines traditional signals (fields, recency, project, engagement) with neural semantic understanding. That hybrid approach helps ensure the “right” Jira issue ranks above a thematically similar—but irrelevant—ticket in another project.
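The hybrid idea above can be sketched in a few lines: blend a lexical score (simple term overlap here, standing in for BM25) with a semantic score (cosine similarity over embeddings). The toy vectors and weighting below are purely illustrative, not Rovo's actual signals or models.

```python
import math

def lexical_score(query_terms, doc_terms):
    """Fraction of query terms found in the document (a toy stand-in for BM25)."""
    if not query_terms:
        return 0.0
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(query_terms, query_vec, doc_terms, doc_vec, alpha=0.3):
    """Weighted blend of lexical and semantic relevance."""
    return (alpha * lexical_score(query_terms, doc_terms)
            + (1 - alpha) * cosine(query_vec, doc_vec))

# Toy example: the semantically similar doc wins despite zero term overlap.
query = (["pay", "stub"], [0.9, 0.1, 0.3])
payslips_doc = (["global", "workday", "payslips"], [0.85, 0.15, 0.35])  # same meaning, different words
unrelated_doc = (["pay", "stub", "printer", "jam"], [0.1, 0.9, 0.2])    # shares words, wrong topic

s_payslips = hybrid_score(query[0], query[1], *payslips_doc)
s_unrelated = hybrid_score(query[0], query[1], *unrelated_doc)
```

With the semantic weight dominating, the “Global Workday Payslips” doc outranks the keyword-overlapping but off-topic ticket; tuning `alpha` is one knob a hybrid ranker exposes.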

How our embedding models evolved

As we’ve scaled Rovo, we’ve iterated on the embedding models that power semantic understanding:

  • MiniLM → our lightweight baseline for early prototypes: fast and cheap, it let us validate chunking, indexing, and hybrid ranking.
  • BGE-large → better recall on “work-shaped” language (e.g., Jira issues), longer passages, and paraphrases/acronyms.
  • EmbeddingGemma-300m → current default for quality/latency/cost balance at scale: strong cross-domain retrieval, low-latency query encoding, efficient large-scale indexing.

We validate each upgrade with offline metrics (recall@k, MRR) and online experiments (click and long-click rates, session success).
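Those offline metrics are straightforward to compute. A minimal sketch over toy ranked lists (not real Rovo data):

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant items that appear in the top-k results."""
    if not relevant_ids:
        return 0.0
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

def mrr(eval_set):
    """Mean reciprocal rank: average of 1/rank of the first relevant hit per query."""
    total = 0.0
    for ranked_ids, relevant_ids in eval_set:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            if doc_id in relevant_ids:
                total += 1.0 / rank
                break
    return total / len(eval_set)

# Two toy queries: first relevant hit at rank 1 and rank 2 -> MRR = (1 + 0.5) / 2
eval_set = [
    (["d3", "d1", "d7"], {"d3"}),
    (["d5", "d2", "d9"], {"d2"}),
]
```

A candidate embedding model has to beat the incumbent on metrics like these before it graduates to an online experiment.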

The same query can mean very different things depending on where you are and what you’re doing. Rovo tailors results based on what you’re actually trying to do:

  • In Jira, prioritize issues and epics that unblock execution
  • In Confluence, surface specs, decisions, and runbooks
  • Across tools, connect related work (e.g., incidents ↔ fixes ↔ follow‑ups)

Semantic search in Rovo is not magic text similarity; it’s domain‑aware retrieval optimized for getting real work done.

Out in the wild, Rovo doesn’t get clean, perfectly labeled data. It has to make sense of half‑filled tickets, noisy comments, and teams that don’t always stick to the template. Here’s what that looks like in practice:

Finding the right Jira issue with vague input

A developer types “that flaky payments test from last week”. Rovo can connect:

  • The concept of “flaky test” → recent bugs labeled “flaky”, related CI failures
  • “payments” → specific services, components, or repos
  • “last week” → time-aware filtering over issue updates and failures
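One way to picture that decomposition: split the vague query into concept, component, and time facets, then apply each as a structured filter. The issue records and field names below are invented for illustration, not Rovo's internal schema.

```python
from datetime import date, timedelta

# Hypothetical issue records; field names are illustrative.
issues = [
    {"key": "PAY-101", "labels": ["flaky"], "component": "payments", "updated": date(2024, 5, 8)},
    {"key": "PAY-77",  "labels": ["flaky"], "component": "payments", "updated": date(2024, 3, 1)},
    {"key": "CHK-12",  "labels": ["flaky"], "component": "checkout", "updated": date(2024, 5, 9)},
]

def find_issues(issues, label, component, since):
    """Apply each facet of the decomposed query as a structured filter."""
    return [
        i["key"] for i in issues
        if label in i["labels"]           # concept: "flaky test"
        and i["component"] == component   # entity: "payments"
        and i["updated"] >= since         # time: "last week"
    ]

today = date(2024, 5, 10)
matches = find_issues(issues, "flaky", "payments", since=today - timedelta(days=7))
# Only PAY-101 satisfies all three facets.
```

The real system infers these facets from free text rather than taking them as arguments, but the end state is the same: vague language becomes precise, structure-aware filtering.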

Reconstructing a decision trail

A PM searches “why did we switch auth providers?” Rovo can pull together:

  • The original RFC in Confluence
  • The Jira epic that tracked the migration
  • Post‑incident reviews or performance analyses that influenced the decision

Surfacing operational know‑how, not just documents

An on‑call engineer searches “mitigate high CPU on search cluster”. Rovo retrieves:

  • Runbooks tagged for that service
  • Similar past incidents and what worked
  • Linked Jira issues where a permanent fix is in progress

Behind the scenes, Rovo is ranking not just by text match, but by:

  • How closely the content semantically answers the query
  • The roles and projects involved
  • Historical engagement and outcome signals (what people actually clicked or used)
  • The relationships between issues, pages, and services
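A schematic of that blended ranking: each candidate gets a weighted sum of its signals, and results are sorted by the blend. The feature names, values, and weights below are invented for illustration; production rankers learn such weights rather than hand-coding them.

```python
# Hypothetical candidates with pre-computed signals normalized to [0, 1].
candidates = [
    {"id": "runbook-cpu",  "semantic": 0.92, "engagement": 0.80, "graph": 0.90},
    {"id": "old-incident", "semantic": 0.95, "engagement": 0.20, "graph": 0.10},
    {"id": "random-page",  "semantic": 0.40, "engagement": 0.90, "graph": 0.05},
]

# Illustrative weights: semantic answer quality, historical engagement,
# and graph relationships (issue <-> page <-> service links).
WEIGHTS = {"semantic": 0.5, "engagement": 0.2, "graph": 0.3}

def score(candidate):
    """Linear blend of relevance signals."""
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

ranked = sorted(candidates, key=score, reverse=True)
```

Note the effect of blending: the runbook wins over a page with a slightly higher semantic score but no engagement or graph support, which is exactly the behavior the on-call scenario needs.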

That’s what turns semantic search into “I can actually find what I need to move forward.”

Where we’re headed next

Rovo has more than five million monthly active users, and we’re just getting started. Here are some of the areas we’re investing in across search and beyond:

Deeper understanding of work context

Rovo will get better at modeling how work connects: initiatives → epics → issues → incident timelines → postmortems, so it can answer questions at the level you care about (strategy, execution, or operations).

More proactive, assistive behavior

Search is becoming more conversational and more proactive. Instead of only returning documents, Rovo will help summarize, compare options, and highlight risks directly in your workflow, always grounded in your actual data.

Tenant‑specific embedding fine‑tuning

We are deepening per‑tenant adaptation of our embedding models—using anonymized, privacy‑preserving signals to align Rovo’s semantic space with each organization’s language, acronyms, and workflows—so search and RAG feel increasingly bespoke rather than “one‑size‑fits‑all.”

Scaling quality and performance with Atlassian powered by NVIDIA

As we push the boundaries of semantic search and AI, we’re collaborating with NVIDIA to take advantage of their accelerated computing and AI software stack. For example, fine‑tuning the Llama‑Nemotron‑Embed‑1B‑V2 model on a public Jira-like dataset using NVIDIA’s fine-tuning recipe for Nemotron RAG embedding models delivered a 26–40% uplift in retrieval quality (measured by Recall@60 and NDCG@1). The recipe took less than a day end to end, using NVIDIA NeMo Data Designer for synthetic data generation and cleaning, and NVIDIA NeMo Automodel for fine-tuning. For Rovo, that uplift translates into more accurate, trustworthy results.

Agentic AI as the next frontier

Semantic search is becoming the backbone for how work gets discovered, understood, and acted on across tools. As more AI agents join the team, they depend on that same trusted semantic layer and shared context to be genuinely useful and plug into real work.

To prepare our customers for what’s next, we’re continually evolving our AI‑powered System of Work. We’re excited to take the next step with NVIDIA NemoClaw, an open source stack that makes it simpler and safer to run OpenClaw always-on assistants with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell runtime, a secure environment for running next-generation agents, and open source models like NVIDIA Nemotron.