Australia Privacy Act Reform: AI, Automated Decisions, and the New "Fair and Reasonable" Standard
Australia's Privacy and Other Legislation Amendment Act 2024 (Royal Assent December 2024) implements the first tranche of comprehensive Privacy Act reform. AI-relevant provisions: entities using automated decision-making that significantly affects individuals must disclose this in their privacy policy and provide meaningful information about the processing on request; civil penalties for serious or repeated breaches run to the greatest of AU$50M, three times the benefit obtained, or 30% of adjusted turnover; extraterritorial reach clarified to cover overseas AI providers carrying on business in Australia, regardless of where data is held; and a Children's Online Privacy Code (COPC) in development, with OAIC consultation underway since April 2025. Phase 2 proposals (pending legislation) include a "fair and reasonable" processing standard that would impose a proportionality analysis on AI training and inference, comparable to the GDPR's fairness and proportionality concepts, and a direct right of action for individuals. The 13 Australian Privacy Principles already apply to AI: APP 3 governs collection of training data, APP 6 restricts secondary use for AI training, APP 7 requires AI targeting systems to honor marketing opt-outs, APP 11 requires AI infrastructure security, and APPs 12/13 support individual access and correction of AI-derived records.
The Privacy and Other Legislation Amendment Act 2024: What Changed for AI
The 2024 Amendment (passed by Parliament in late November 2024; Royal Assent 10 December 2024) implements the first tranche of Australia's Privacy Act review. Key changes affecting AI systems: (1) Automated decision-making transparency: new APP 1 provisions require entities using automated decision-making that significantly affects individuals to disclose this specifically in privacy policies and provide meaningful information on request; (2) Enhanced penalties: serious or repeated Privacy Act breaches carry fines up to the highest of AU$50M, 3× the value of any benefit obtained, or 30% of adjusted turnover during the breach period (a worked sketch of the cap calculation follows below); (3) Extraterritorial clarification: overseas entities processing personal information of Australians in the course of carrying on business in Australia are explicitly bound by all APPs regardless of where data is held; (4) Children's Online Privacy Code (COPC): the OAIC must develop and register a code for online services likely to be accessed by children (consultation underway since April 2025), covering age-appropriate design, limits on targeted advertising to children, and enhanced consent requirements.
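To make the penalty tiering concrete, here is a minimal Python sketch of the "whichever is highest" cap calculation. The function name and the assumption that benefit and turnover are known single figures are illustrative only; the statutory computation of "adjusted turnover" is more involved.

```python
# Hypothetical sketch of the maximum civil penalty cap for serious or
# repeated interferences with privacy: the greatest of AU$50M, 3x the
# benefit obtained, or 30% of adjusted turnover for the breach period.
# Names and simplified inputs are illustrative, not statutory terms.

def max_civil_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Return the maximum penalty exposure in AUD under the enhanced framework."""
    statutory_cap = 50_000_000.0
    return max(
        statutory_cap,              # fixed AU$50M tier
        3.0 * benefit_obtained,     # 3x the value of any benefit obtained
        0.30 * adjusted_turnover,   # 30% of adjusted turnover in the breach period
    )

# Example: AU$20M benefit at AU$400M adjusted turnover -> AU$120M exposure,
# because the 30%-of-turnover limb dominates the other two.
print(max_civil_penalty(20_000_000, 400_000_000))  # 120000000.0
```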
Australian Privacy Principles Applied to AI Training and Inference
All 13 APPs already apply to AI systems processing personal information of Australians. APP 1 (transparency): entities must disclose AI use in privacy policies; the 2024 amendments strengthen this with specific automated decision-making disclosure requirements. APP 3 (collection): AI training data collection must be reasonably necessary for stated purposes, and sensitive information requires consent; using scraped Australian personal data to train AI without consent risks an APP 3 breach. APP 6 (secondary use): collecting data for one purpose and using it to train AI for a different purpose is a secondary use requiring consent, a reasonably expected related purpose, or a legally recognized exception. APP 7 (direct marketing opt-out): AI targeting models must support per-individual opt-out at the inference level, not merely at campaign scheduling; a sketch of such a gate follows this paragraph. APP 11 (security): AI model infrastructure and training datasets must be secured against unauthorized access, model extraction, and data poisoning. APPs 12/13 (access and correction): individuals may access the personal information an entity holds about them, including AI inference records that reference them, and request correction of both inaccurate source data and the records derived from it.
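The APP 7 point is architectural: the opt-out has to bind at scoring time, not just in the campaign scheduler. The sketch below shows one way to structure that, assuming a hypothetical in-memory OptOutRegistry and a caller-supplied scoring function; a production system would persist opt-outs in a consent store and enforce the gate in every inference path.

```python
# Minimal sketch of an APP 7-style opt-out gate enforced at inference time
# rather than at campaign scheduling. OptOutRegistry, score_for_marketing,
# and the identifiers are hypothetical names, not terms from the Act.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class OptOutRegistry:
    """Per-individual direct-marketing opt-outs."""
    _opted_out: set = field(default_factory=set)

    def record_opt_out(self, individual_id: str) -> None:
        self._opted_out.add(individual_id)

    def permits_targeting(self, individual_id: str) -> bool:
        return individual_id not in self._opted_out

def score_for_marketing(
    individual_id: str,
    registry: OptOutRegistry,
    model_score_fn: Callable[[str], float],
) -> Optional[float]:
    # The gate runs before the model is invoked, so an opted-out individual
    # is never scored for targeting, whatever the campaign configuration says.
    if not registry.permits_targeting(individual_id):
        return None
    return model_score_fn(individual_id)

registry = OptOutRegistry()
registry.record_opt_out("cust-42")
print(score_for_marketing("cust-42", registry, lambda _id: 0.87))  # None
print(score_for_marketing("cust-7", registry, lambda _id: 0.87))   # 0.87
```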
The Proposed "Fair and Reasonable" Standard and Its Impact on AI
The most consequential proposed Phase 2 reform for AI is the "fair and reasonable" processing standard, modeled on the GDPR's fairness and proportionality concepts but adapted for Australian law. Under this proposal, personal information processing must be fair and reasonable in all the circumstances, assessed against factors including: whether individuals would reasonably expect their information to be processed in that way; the nature and sensitivity of the information; the potential for harm to individuals; whether consent or an opt-out was provided; and the proportionality of the entity's interests against the individual's privacy interests. For AI teams, the critical risk is secondary use of personal data for AI training. A customer who provides financial data for a loan application would not reasonably expect it to be used to train a behavioral scoring model for unrelated purposes; under the proposed standard, that training use may fail the fairness test even if it technically fits within current APP 6 exceptions. Privacy-by-design for AI training pipelines, limiting training data to purposes individuals would reasonably expect, is therefore the prudent forward-looking compliance practice; a minimal purpose-gate sketch follows.
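What a "reasonably expected purpose" limit can look like inside a pipeline is shown below. This is an assumption-laden illustration, not a statement of what the Act requires: the Record fields, the purpose strings, and the eligibility rule are all hypothetical design choices.

```python
# Illustrative purpose gate for an AI training pipeline: a record enters a
# training set only if the training purpose matches a purpose disclosed at
# collection, or one the individual explicitly consented to. Field names
# and the purpose taxonomy are hypothetical, not drawn from the Act.

from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    individual_id: str
    collected_for: frozenset   # purposes disclosed at collection
    consented_to: frozenset    # secondary purposes with explicit consent

def eligible_for_training(record: Record, training_purpose: str) -> bool:
    # Primary purpose, or a consented secondary use; anything else is
    # excluded before the data ever reaches the training job.
    return (training_purpose in record.collected_for
            or training_purpose in record.consented_to)

records = [
    Record("cust-1", frozenset({"loan_assessment"}), frozenset()),
    Record("cust-2", frozenset({"loan_assessment"}),
           frozenset({"behavioral_scoring"})),
]
train_set = [r for r in records
             if eligible_for_training(r, "behavioral_scoring")]
print([r.individual_id for r in train_set])  # ['cust-2']
```

The design choice that makes this gate possible is recording purpose metadata alongside each record at collection time; retrofitting purpose labels onto an existing corpus is far harder.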
OAIC Enforcement and AI Regulatory Focus
The OAIC has identified AI as a priority enforcement area. The Privacy Commissioner published updated AI guidance in 2024, and the OAIC participates in international AI enforcement coordination through the Global Privacy Assembly. OAIC enforcement powers relevant to AI include compliance assessments (mandatory audits of an organization's privacy practices, which can bring AI systems within scope), civil penalty orders, injunctions to stop processing, and determinations requiring remediation. Under the enhanced penalty framework, serious or repeated breaches carry up to the highest of AU$50M, 3× the benefit obtained, or 30% of adjusted turnover. Key precedent: the OAIC's Optus investigation (2023) marked the first use of mandatory OAIC audit powers, signaling that large-scale inadequate security under APP 11 attracts maximum penalty consideration. AI infrastructure holding personal data at scale creates equivalent exposure.
Phase 2 Proposals: Direct Right of Action and Stronger Automated Decision Rights
Phase 2 legislation (still pending) is expected to introduce: a direct right of action allowing individuals to bring privacy proceedings in court without going through the OAIC, creating class action exposure for systematic AI privacy violations; potentially a right to human review of automated decisions (comparable to GDPR Article 22); the "fair and reasonable" processing standard; and additional enforcement mechanisms. The direct right of action is the most commercially significant Phase 2 proposal for AI teams: it transforms privacy compliance from a regulatory risk (OAIC investigation) into a class litigation risk (plaintiff firms competing to file representative actions). AI systems that systematically violate the privacy rights of large numbers of Australians create the same class action exposure that BIPA has created in Illinois and that CCPA data breach actions have created in California.