EU AI Act
The EU AI Act establishes a comprehensive framework for regulating the development, deployment, and use of artificial intelligence across the European Union. The Act takes a risk-based approach to introducing compliance obligations, with requirements coming into force on a rolling basis through 2027.
Atlassian’s proactive approach to EU AI Act compliance
At Atlassian, we are dedicated to responsible AI development. We’ve published our Responsible Technology Principles, which serve as our internal framework to ensure the thoughtful development and use of AI technology. Additionally, in September 2024, we joined the EU AI Pact, a program launched by the European Commission to encourage companies to take steps towards pre-compliance with the EU AI Act.
Since joining the EU AI Pact and in anticipation of the EU AI Act, we’ve published a number of resources, documentation, and detailed FAQs related to AI compliance on the Atlassian Trust Center. This includes transparency pages designed to help our customers better understand our AI offerings by detailing the underlying models that power them, clarifying capabilities and limitations, explaining data use and processing, and illustrating their impact on user experience. We are also committed to maintaining transparency with our users when they interact with our AI offerings, including clearly indicating in the user interfaces for Rovo and Atlassian Intelligence when users are engaging with AI-powered products. Additionally, our AI offerings are subject to the same robust data protections and GDPR privacy commitments as the rest of our portfolio.
We also support our customers in deploying our AI offerings through knowledge sharing. We’ve published our Responsible Technology Review Template and No-BS Guide to Responsible AI Governance to help teams around the world translate these principles into practice. And we’re committed to improving the AI literacy of our customers through resources like Atlassian University and Atlassian Community, designed to build the deep understanding of our AI offerings needed to maximize their potential.
Shared responsibility in compliance
Compliance with the EU AI Act, like many regulatory frameworks, involves shared responsibility across the AI value chain, including foundational model and LLM developers, AI systems providers, and AI deployers. We collaborate with our LLM model providers to ensure that the models we use comply with legal requirements. Furthermore, when Atlassian utilizes third-party hosted LLM models, we ensure that model providers handle customer personal data in accordance with the same standards set for all Atlassian subprocessors in our Data Processing Addendum.
We have also updated our Acceptable Use Policy to include guidelines and recommendations that help ensure customer use of our AI offerings does not lead to harmful outcomes or violations of AI regulations. We encourage all customers to review the policy thoroughly and ensure their users understand the restrictions. If a customer’s use of our AI offerings violates our Acceptable Use Policy, their request may not be processed by our AI system.
To learn more about Atlassian’s AI offerings and our commitment to responsible AI development, please visit our Trust Center and review our Responsible Tech Principles. We are committed to partnering closely with our customers on AI compliance.
Our team is here to help
Have more questions about our compliance program?
Do you have cloud certifications? Can you complete my security & risk questionnaire? Where can I download more information?
Trust & security community
Join the Trust & Security group on the Atlassian Community to hear directly from our Security team and share information, tips, and best practices for using Atlassian products in a secure and reliable way.
Atlassian support
Reach out to one of our highly-trained support engineers to get answers to your questions.