AUSTRAC Artificial Intelligence Transparency Statement
On this page
- Introduction
- How we use AI
- Usage patterns and domains
- Data privacy and security
- AI safety and governance
- Compliance with AI in Government Policy
- Contact information
Introduction
AUSTRAC performs a dual role as Australia’s anti-money laundering and counter-terrorism financing (AML/CTF) regulator and financial intelligence unit. This dual role helps to build resilience in the financial system and enables AUSTRAC to use financial intelligence and regulation to disrupt money laundering, terrorism financing and other serious crime.
As Australia’s AML/CTF regulator, we regulate more than 17,000 businesses that provide financial, gambling, bullion, remittance and digital currency exchange services. We ensure regulated businesses comply with their obligations to have systems and controls in place to manage their risks and protect them and the community from criminal abuse.
As a financial intelligence unit, we collect and analyse financial reports and information to generate financial intelligence which contributes to law enforcement and national security investigations. Our specialist analysts generate targeted, actionable intelligence and work closely with industry, government and law enforcement partners to deliver tangible investigative and operational outcomes.
New and emerging technologies are changing the way services are delivered. Criminals and terrorists are constantly becoming more sophisticated, developing new ways to exploit vulnerabilities in the Australian financial system. To meet this challenge, we will continue to evolve how we work with industry and our partners and adopt technologies such as Artificial Intelligence (AI) to support our specialist regulatory and intelligence capabilities.
AUSTRAC aims to be transparent about the way we use AI in our agency and how we intend to approach adoption in the future. Where we have deployed AI, we comply with whole-of-government guidelines to ensure we meet the highest standards of security, privacy, accountability and regulatory compliance. Our current approach, and our intended future use of AI, is to leverage new techniques that advance outcomes while ensuring humans remain a key part of the decision-making process. This statement will be reviewed annually and updated when:
- we make a significant change to our approach to AI, or
- new factors impact the accuracy of this statement.
AUSTRAC’s intelligence functions are part of the national intelligence community, as defined under section 4 of the Office of National Intelligence Act 2018. The Policy for Responsible Use of AI in Government specifically exempts AUSTRAC’s intelligence functions from compliance with the requirements of the policy, including this transparency statement. We may voluntarily adopt elements of the policy with respect to our intelligence functions, where we are able to do so without compromising national security capabilities or interests.
How we use AI
AUSTRAC defers to the Digital Transformation Agency’s definition of an Artificial Intelligence (AI) system as:
A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
AUSTRAC makes limited use of public generative AI tools to support workplace productivity. This is in alignment with the Digital Transformation Agency’s Staff guidance on public generative AI tools.
AUSTRAC applies mandatory protective security controls to ensure that no sensitive or classified information is entered into public generative AI systems, in accordance with the Protective Security Policy Framework Advisory on OFFICIAL Information Use with Generative Artificial Intelligence.
AUSTRAC has not yet deployed AI which directly interacts with the public or is involved in decision making and administrative action without human intervention.
AUSTRAC is currently trialling the use of enterprise generative AI systems to responsibly explore the benefits and risks of this emerging technology. This includes internal tools to improve workplace productivity and tools to support service delivery. As we continue to expand our use of AI, AUSTRAC will ensure we utilise the technology with clear human oversight, monitoring and decision making.
AUSTRAC leverages AI internally to enhance the effectiveness of our compliance and intelligence capabilities. AUSTRAC also utilises AI-enabled analytics to help detect indicators of financial crime and support analysts to generate insights for partners.
Over the next year, AUSTRAC will implement the AML/CTF Reform priorities and deliver on our data and digital transformation priorities. AUSTRAC remains committed to upholding the highest standards of security, accountability, and integrity while responsibly exploring AI. We will leverage these technologies to enhance our regulatory, intelligence and corporate operations in line with public expectations and with an eye to innovation and technological advancement.
Usage patterns and domains
The Policy for Responsible Use of AI in Government requires AUSTRAC to state the usage pattern(s) and domain(s) associated with our use of AI. For more information, refer to the Digital Transformation Agency (DTA) classification system for AI use.
AUSTRAC’s current AI usage patterns are:
- Workplace Productivity: Used to improve process efficiencies, such as supporting non-sensitive research, providing basic secretariat support and facilitating communications.
- Analytics for Insights: Used to identify, produce and understand indicators of financial crime.
AUSTRAC’s current AI domains are:
- Corporate and Enabling: Supports corporate functions to improve operational efficiency and productivity.
- Service Delivery: Provides tailored and responsive support by assisting the staff who deliver AUSTRAC’s services.
- Law Enforcement, Intelligence and Security: Supports law enforcement and intelligence agencies through AI-enabled analysis of data that aids intelligence gathering.
AUSTRAC is currently developing and piloting uses of AI which will impact or expand the following AI usage patterns and domains:
- Workplace Productivity: Support the development of intelligence products to improve operational efficiency.
- Compliance and Fraud Detection: Identify patterns or anomalies in data to detect indicators of fraudulent activities and ensure compliance with laws and regulations.
- Policy and Legal: Support generating summaries and drafts to assist legal reviews of documents and preparation of internal documents.
Data privacy and security
Protecting the privacy and security of sensitive and classified information, and the data of individuals, is of paramount importance to us. We ensure that data is handled in compliance with applicable Australian legislation and regulations, including the Privacy Act 1988 (Cth), the Protective Security Policy Framework and other relevant data protection laws. Personal information is collected, used and disclosed only where necessary, in line with our privacy policies and the Australian Privacy Principles.
In accordance with internal guidelines and policies, staff who use publicly available generative AI tools for research purposes will not include or reveal any classified, personal or otherwise sensitive information. All AUSTRAC activities that use, or intend to use, AI will be subject to appropriate privacy assurance to ensure they comply with all relevant Australian legislation and policies relating to information and data.
AI safety and governance
AUSTRAC’s AI Accountable Official is the General Manager, Data (Chief Data and Analytics Officer). AUSTRAC is committed to implementing AI systems which align with evolving legislation, ethical standards, and public expectations. As we deploy AI into our regulatory, intelligence and corporate operational capabilities, we will follow whole-of-government guidelines to ensure any use of AI is guided by the following key principles:
- Enable: AUSTRAC strategically adopts AI to support its mission of protecting Australia’s financial system. We ensure that AI enhances decision-making, supports our analytical and regulatory capability and aligns with our purpose and public expectations.
- Engage: AUSTRAC uses AI in ethical, transparent and fair ways that protect individuals, uphold public confidence and reinforce democratic values. We prioritise risk mitigation, explainability and fairness in all AI use.
- Evolve: AUSTRAC embraces learning and innovation by continuously improving our application of AI through adaptive policies, collaboration and evaluation.
To further support these efforts, AUSTRAC has established governance practices, policies and guidance to ensure the ethical, transparent and secure implementation of AI. This includes:
- AUSTRAC’s AI Policy, AI Governance Framework and AI guidance, supported by internal governance committees and whole-of-government guidance, which ensure staff use AI safely and responsibly.
- Mandatory training for all staff that builds capability in the appropriate use of AI systems and the handling of security classified information.
- Internal data and information governance forums that monitor AI performance, accountability, security and risk.
- Adherence to records management standards to ensure appropriate documentation, retention and traceability of AI-related activities.
As the use of AI within AUSTRAC expands, consideration will be given to additional governance processes and practices which will ensure the appropriate, ethical and safe use of AI.
Compliance with AI in Government Policy
Under the Policy for Responsible Use of AI in Government (the AI in Government Policy) and the standards for transparency statements, we are required to report our compliance with the requirements of the policy.
At the time of publishing, this statement is compliant with version 1.1 of the AI in Government Policy. Version 2.0 of the policy introduces new requirements from 15 December 2025, which AUSTRAC is committed to implementing.
The following table outlines the requirements of version 1.1 of the AI in Government Policy and the status of our compliance with those requirements:
| Requirement | Status |
|---|---|
| Accountable Official | Compliant |
| AI Transparency Statement | Compliant |
Contact information
We will regularly review and update our AI policies and practices as part of our ongoing commitment to the responsible use of AI. This includes staying informed about new developments in AI technology, ethics and regulatory requirements. We will strive to improve the transparency, fairness and effectiveness of our use of AI systems through continuous learning and adaptation.
This statement was last updated on 12 December 2025. It will be reviewed annually and updated when we make a significant change to our approach to AI, or when new factors impact the accuracy of this statement.
If you have questions, concerns or would like more information about how AUSTRAC uses AI, contact us.