Australia’s approach to regulating artificial intelligence is currently principles-based and risk-based. In its interim report on digital technology regulation, the Australian Productivity Commission recommended that AI-specific legislation be a “last resort”, taking the view that existing legal and regulatory frameworks are generally sufficient to manage AI risks, with tailored interventions only where clear regulatory gaps exist. Consistent with this approach, regulators are beginning to apply existing frameworks in practice. As recently as last month, the eSafety Commissioner issued enforceable notices to four AI chatbot providers under the Online Safety Act 2021 (Cth), requiring them to demonstrate compliance with the Basic Online Safety Expectations.
The European Union has taken a very different approach. In August 2024, the EU’s Artificial Intelligence Act (AI Act) - the world’s first comprehensive AI regulation - entered into force. Framed as a horizontal piece of legislation applying across all industries and sectors, the AI Act introduces a risk-based regulatory framework for the development, deployment and use of AI systems. Importantly, its reach is not confined to Europe: Australian businesses may be caught by its provisions if their AI systems are offered to EU users or if their outputs are used within the EU.[1]
This regulatory divergence highlights why Australian businesses operating internationally need to be aware of the EU framework, which can directly affect market access, contractual obligations, risk management and global competitiveness.
Though European in origin, the AI Act is designed to have a global footprint. It applies not only to providers and developers of AI systems within the EU, but also wherever AI systems are placed on the EU market or their outputs are used within the EU.
In practice, obligations will extend to any Australian business that:
- places an AI system on the EU market or puts it into service in the EU, regardless of where the business is established; or
- provides or deploys an AI system whose outputs are used within the EU.
This means that even Australian-based businesses with remote or passive exposure to EU users - such as through analytics, AI-enabled services or embedded tools - may be subject to the AI Act. The intention is to prevent “regulatory arbitrage” and to ensure that businesses worldwide comply with Europe’s AI standards if they want access to Europe’s 450-million-strong single market.
The AI Act is therefore not a distant piece of foreign legislation. Its extraterritorial scope means compliance obligations may arise if AI products or services are marketed to EU users, or if their outputs are used within the EU. Australian organisations operating globally - whether in finance, technology, health, education or professional services - should be mapping their exposure to the EU market and assessing whether their AI systems fall within the AI Act’s regulated categories, discussed further below.
At its core, the AI Act is not about banning artificial intelligence - it is about regulating its development and use to ensure trust, transparency and accountability. As the explanatory memorandum to its proposal makes clear, the regulation aims to balance the social and environmental benefits of AI against the risk of negative societal consequences. To achieve this, the AI Act adopts a risk-based approach, recognising that not all AI carries the same potential for harm. AI systems that shape lives, influence fundamental rights or impact safety are subject to the highest levels of scrutiny, while lower-risk applications face lighter-touch transparency obligations.
The AI Act establishes four broad categories of risk, ranging from AI systems that are prohibited, to those subject to strict regulatory obligations, through to applications that remain largely unregulated.
AI systems considered a clear threat to people’s safety, livelihoods or rights are prohibited outright, except in very limited circumstances. These include AI systems which:
- deploy subliminal, manipulative or deceptive techniques to materially distort a person’s behaviour;
- exploit vulnerabilities related to age, disability or socio-economic circumstances;
- engage in social scoring that leads to detrimental or unfavourable treatment;
- predict the risk of a person committing a criminal offence based solely on profiling or personality traits;
- create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
- infer emotions in workplaces or educational institutions (except for medical or safety reasons);
- use biometric categorisation to deduce sensitive attributes such as race, political opinions or sexual orientation; or
- carry out ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).[2]
Unlike the ‘unacceptable risk’ category, which identifies specific AI systems and practices that are outright prohibited, the high-risk classification under the AI Act is largely industry-focused.
Rather than listing individual systems, the AI Act identifies use cases in Annex III that carry higher potential for harm due to the context in which they operate.[3] These include AI systems used in:
- biometrics, including remote biometric identification and emotion recognition;
- critical infrastructure;
- education and vocational training;
- employment, workers’ management and access to self-employment;
- access to essential private and public services and benefits;
- law enforcement;
- migration, asylum and border control management; and
- the administration of justice and democratic processes.
The underlying rationale is that AI systems in these sectors inherently carry a greater risk of impacting fundamental rights, safety or societal fairness. In addition, an AI system will be considered high risk if it is used as a safety component of a product, or is itself a product, covered by EU legislation listed in Annex I (for example, laws covering machinery, the safety of toys and personal watercraft).[4]
Importantly, any AI system listed under Annex III is automatically considered high risk if it profiles individuals - meaning it engages in the automated processing of personal data to evaluate aspects of a person’s life such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location, or movements.
There are limited exceptions to this category, for example, if an AI system is only intended to:
- perform a narrow procedural task;
- improve the result of a previously completed human activity;
- detect decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing the completed human assessment absent proper human review; or
- perform a preparatory task to an assessment relevant to one of the Annex III use cases.
Providers who consider that an AI system falling within Annex III is nevertheless not high risk must document their assessment before placing the system on the market or putting it into service. This ensures accountability and provides a defensible record should regulators question the classification.
Providers of high-risk AI systems, whether EU-based or from third countries such as Australia, must:
- establish and maintain a risk management system across the AI system’s lifecycle;[5]
- apply appropriate data governance practices to training, validation and testing data sets;[6]
- prepare and keep up to date technical documentation;[7]
- ensure the system automatically logs events to support record-keeping;[8]
- provide deployers with clear instructions and transparent information about the system;[9]
- design the system to allow effective human oversight;[10]
- ensure appropriate levels of accuracy, robustness and cybersecurity;[11] and
- put in place a quality management system.[12]
Deployers - natural or legal persons who use an AI system under their authority in a professional capacity, as distinct from affected end-users - also face obligations, though to a lesser extent. This includes third-country deployers where the outputs of their AI systems are used in the EU.
Where an AI system poses only a limited risk of harm to individuals’ rights or safety, it is subject to lighter obligations. The primary obligation for providers and deployers of these AI systems is to ensure transparency by informing users that they are interacting with an AI system.[13]
Examples of limited-risk AI systems include chatbots and systems that generate synthetic content such as ‘deepfakes’.

Systems posing minimal risk are not regulated by the AI Act, and include AI used in, for example, video games or spam filters.
General-purpose AI (GPAI) models - foundational AI models, often trained on large data sets and adaptable across many tasks, such as the models underlying ChatGPT - are regulated separately from the risk categories identified above. All GPAI providers must document model development and testing, inform downstream users of the model’s capabilities and limits, ensure compliance with EU copyright law, and publish a summary of the training data used.[14]
Free and open-licence GPAI models need only comply with the copyright policy and training data summary requirements, unless the model is deemed to pose systemic risks, in which case additional obligations apply.[15] A voluntary Code of Practice issued on 10 July 2025 aims to support GPAI providers with compliance, covering transparency, copyright, and safety and security. Key signatories to date include OpenAI, Google and Microsoft.
The AI Act entered into force on 1 August 2024, and its implementation is staggered as follows:
- 2 February 2025: the prohibitions on unacceptable-risk AI systems and the AI literacy obligations apply;
- 2 August 2025: the obligations for GPAI providers and the governance and penalties framework apply;
- 2 August 2026: most remaining provisions apply, including the obligations for high-risk AI systems listed in Annex III; and
- 2 August 2027: the obligations for high-risk AI systems that are safety components of products regulated under Annex I apply.
This article was written by Ariel Bastian, Senior Associate, and Anna Kosterich, Solicitor, Corporate Commercial.
[1] AI Act, article 2(1).
[2] AI Act, article 5(1).
[3] AI Act, article 6(2).
[4] AI Act, article 6(1).
[5] AI Act, article 9.
[6] AI Act, article 10.
[7] AI Act, article 11.
[8] AI Act, article 12.
[9] AI Act, article 13.
[10] AI Act, article 14.
[11] AI Act, article 15.
[12] AI Act, article 17.
[13] AI Act, article 50.
[14] AI Act, article 53(1).
[15] AI Act, article 53(2).