
Proposals for AI regulation in Australia
Both the Australian Government and ASIC have provided much-needed guidance on AI safety and governance, offering valuable insight into the future landscape of AI regulation in Australia.
On 5 September 2024, the Government released two significant documents: the “Voluntary AI Safety Standard” guidance and a consultation paper on proposed mandatory AI guardrails for high-risk applications (Consultation Paper). These initiatives mark a significant step towards addressing the complex challenge of guiding AI use in Australia.
A need for regulatory action
The rapid realisation of the possibilities of generative AI and other advanced AI systems has raised alarm bells among industry leaders, prompting urgent calls for regulatory action. While the expected rapid adoption and leveraging of AI is exciting, it presents significant challenges for governments globally. It is imperative that the development and deployment of AI takes place safely and reliably, limiting inappropriate discrimination and protecting individual privacy. Additionally, the power of AI raises concerns about its impact on competition, national security, and the environment, given the computational resources required to power advanced AI. It is crucial that any proposed regulations are not overbearing and do not impede the safe and considered development of AI to the detriment of businesses, innovation and national interest.
In an initial attempt to strike a balance, key international regulatory developments have emerged, notably the European Union’s Artificial Intelligence Act 2024 and the United Kingdom’s proposed Artificial Intelligence (Regulation) Bill 2024.
The Australian Government’s response to date
Australia’s initial foray into AI regulation began in June 2023 with a public consultation on AI risks. On 17 January 2024, the Government released an interim plan prioritising testing, transparency, accountability, and international collaboration for safe AI development, as outlined in the March 2024 edition of Regulatory Radar.
As a next step, through its consultation paper on mandatory guardrails for high-risk use cases and the Voluntary AI Safety Standards, the Government is exploring mechanisms that align with the international shift from industry self-regulation towards a formal AI regulatory framework.
While Australia’s approach to high-risk AI regulation remains to be determined, we can almost certainly expect comprehensive oversight of AI systems in safety-critical sectors, such as healthcare, autonomous vehicles, and law enforcement. Drawing from these international precedents, Australian organisations will likely need to implement robust documentation, risk assessments, and human oversight measures, particularly for systems impacting individual rights and public safety. There is also the potential for a formal certification scheme to emerge, requiring third-party assessments and compliance audits.
What’s on the radar?
Voluntary AI Safety Standards
The Voluntary AI Safety Standards help organisations ensure that the development and use of AI systems are safe, secure, and trustworthy. The standards recommend that organisations comply with the following 10 voluntary ‘guardrails’:
- Establish an accountability process including governance and compliance strategies
- Implement a risk management process to identify and mitigate risks
- Implement data governance measures to manage data quality, provenance and protect AI systems
- Test and monitor AI models and systems for performance
- Enable human oversight throughout an AI life cycle
- Inform users about AI decisions, interactions and content to build trust
- Allow users to challenge the AI use or outcomes
- Maintain transparency with other organisations across the AI supply chain for risk management purposes
- Keep records to allow third-party assessment of compliance with the guardrails
- Engage stakeholders and evaluate their needs and circumstances, focusing on safety, diversity, inclusion and fairness.
These ‘guardrails’ serve as useful initial guidance on voluntary steps that businesses could take at an organisational level to create necessary processes and at a system level for each AI use case.
Proposed mandatory guardrails for high-risk AI
The Government’s recent Consultation Paper seeks input on the proposed definition of high-risk AI, proposed mandatory guardrails, and regulatory enforcement mechanisms.
Proposed definition of high-risk AI
The Consultation Paper proposes two broad categories of “high-risk” AI use cases, as proposed to be defined, to which the mandatory guardrails would apply:
- “High-risk” arising from the proposed uses of an AI system (i.e. “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”) or uses of a general-purpose AI model (GPAI) that are known or foreseeable (i.e. risk has been determined with reference to the context in which that AI system will be used or the foreseeable applications for the AI system or GPAI model).
- “High-risk” relating to “advanced, highly capable GPAI models” where all possible applications and risks cannot be foreseen (i.e. the risk lies in the potential for these models to be used – or misused – for a wide range of purposes with emergent risks). A GPAI model is “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems.”
Mandatory guardrails
Under the Consultation Paper, the proposed mandatory guardrails replicate the 10 voluntary guardrails in the Voluntary AI Safety Standards, with one exception: the 10th guardrail, rather than focusing on stakeholder engagement, instead requires organisations to:
10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.
It is intended that these mandatory guardrails would work in a complementary manner to Australia’s existing legal frameworks and that the guardrails should be adaptable to remain fit for purpose, in light of the continued evolution of AI technology.
Regulatory options
There are three options for implementing the above mandatory guardrails that are set out in the Consultation Paper:
- Domain-Specific Approach: Adapting existing regulatory frameworks to include the proposed guardrails.
- Framework Approach: Introducing new framework legislation with amendments to existing laws.
- Whole of Economy Approach: Enacting a new cross-economy “Australia AI Act”.
Feedback on the Consultation Paper will inform which of these three options the Australian Government adopts to implement the mandatory guardrails.
ASIC Report on AI Governance for AFS Licensees
In late October 2024, ASIC released Report 798 on AI Governance for Australian Financial Services (AFS) and credit licensees (REP 798) following its review of AI use and governance arrangements by 23 AFS licensees. ASIC anticipates a substantial increase in AI adoption by AFS licensees within the next 12 months and expects AFS licensees to implement AI in a safe and prudent manner, adhering to appropriate risk frameworks and practices, noting that such licensees are subject to obligations under existing financial services legislation that are technology neutral.
ASIC’s REP 798 highlights:
- AI implementation can both amplify existing risks and introduce new risks for licensees
- 61% of AFS licensees plan to increase AI use between October 2024 and October 2025
- 30% of AI use cases by AFS licensees involved third-party AI models, a factor that needs to be considered when developing appropriate AI regulation and standards
- Most licensees have taken a measured approach, utilising AI primarily to enhance human decision-making rather than for autonomous operations
- ASIC identified concerns that AI deployment by some AFS licensees was out of step with their governance and risk management frameworks, with AI governance either lagging behind, or being updated only alongside, licensees’ increasing use of AI.
How should the insurance industry respond?
While AI regulation is in its infancy, there is a clear global drive to regulate its use and development to harness AI’s potential responsibly. Australian organisations engaging in, or looking to engage in AI development, should consider adopting the voluntary standards and participating in future government consultation processes. Early engagement will facilitate compliance with any future AI regulations once they are developed.
AFS licensees must ensure their AI deployment complies with existing conduct and consumer protection obligations, as ASIC has signalled that existing, technology-neutral financial services legislation applies to AI deployment. This includes implementing robust vendor management practices for third-party AI models, such as obtaining independent validation, establishing comprehensive service-level agreements, and developing clear protocols for model updates and changes.
In Australia, the Voluntary AI Safety Standards currently serve as non-binding guidelines, providing an insight into the future direction of Australia’s move towards structured AI regulation. Organisations should evaluate whether their AI applications may fall within the scope of “high-risk use cases” and consider adopting these voluntary standards to ease the transition to future regulations and mitigate legal risks.
This publication constitutes a summary of the information of the subject matter covered. This information is not intended to be nor should it be relied upon as legal or any other type of professional advice. For further information in relation to this subject matter please contact the author.




