
Regulatory Update - OAIC’s guidance on privacy and the use of commercially available AI products
On 21 October 2024, the OAIC published guidance on privacy and the use of commercially available AI products (AI Guidance). The AI Guidance provides additional commentary on complying with obligations under the Privacy Act and the Australian Privacy Principles (APPs) when adopting or implementing AI platforms.
The principles in the AI Guidance are substantiated and supported by the OAIC's enforcement actions involving:
- the use of facial recognition technology by Bunnings and Kmart, intended to help address unlawful activity, which the OAIC found to be in breach of the Privacy Act because Bunnings and Kmart failed to have proper regard to their obligations, particularly the consent and transparency requirements. The OAIC also referenced the impact of the technology on privacy rights, as well as on our collective values as a society.
- the indiscriminate collection by Clearview AI of images of individuals’ faces from publicly available sources across the internet (including social media), which were stored in a database on its servers and then used to assist law enforcement to identify individuals from other data.
As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available AI platforms, due to the significant and complex privacy risks involved.
The OAIC has also set out guidance on privacy considerations relevant to the use of AI platforms. We set out below the relevant guidance and considerations provided by the OAIC for complying with privacy obligations before adopting or implementing AI platforms.
Conduct a Privacy Impact Assessment
Organisations considering the use of AI products should take a ‘privacy by design’ approach, which includes conducting a Privacy Impact Assessment (PIA). This can assist entities to:
- describe how personal information flows in a project
- analyse the possible impacts on individuals’ privacy
- identify and recommend options for avoiding, minimising, or mitigating negative privacy impacts
- build privacy considerations into the design of a project, and
- achieve the project’s goal while minimising the negative and enhancing the positive privacy impacts. [1]
If personal information is utilised on the AI platform
Identify appropriate intended uses, including whether personal information use is necessary and the best solution in the circumstances
Businesses should conduct due diligence to ensure the product is suitable to its intended uses. This includes:
- whether the product has been tested for such uses
- whether the intended uses are likely to constitute high privacy risk activities
- how human oversight can be embedded into processes
- the potential privacy and security risks, and
- who will have access to personal information input or generated by the entity when using the product.
Throughout the AI product lifecycle, the product must be regularly reviewed to ensure it remains fit for purpose, its use is appropriate, and it complies with privacy obligations.
APP 6 (use or disclosure of personal information for the primary purpose of collection or a related secondary purpose)
Organisations should identify the intended uses of personal information within AI platforms and assess whether these align with the primary or secondary purposes for which the data was collected.
Where an organisation cannot clearly establish that such a secondary use was within an individual’s reasonable expectations, it should seek consent for that use and/or offer individuals a meaningful and informed ability to opt out.
APP 3 (reasonably necessary collection of personal information)
Organisations must ensure that:
- any personal information generated by an AI product is reasonably necessary for the organisation’s functions or activities
- AI is used to infer or generate personal information only by lawful and fair means
- it has identified whether it is unreasonable or impracticable to collect the personal information directly from the individual
- if so, consent is obtained to use AI to generate or infer sensitive information about a person, and
- personal information created by AI is destroyed or de-identified if it is not permitted to be collected.
APP 10 (accuracy of personal information)
Organisations must take reasonable steps to ensure that:
- the personal information it collects is accurate, up-to-date and complete, and
- the personal information it uses and discloses is accurate, up-to-date, complete and relevant, having regard to the purpose of the use or disclosure.
APP 1 (transparency with handling personal information)
Entities must manage personal information openly and transparently and implement reasonable practices to ensure compliance with the APPs. This includes, among other things, updating privacy policies and notifications to clearly disclose how AI technologies use personal information.
Throughout the lifecycle of the AI product, your organisation should have in place processes for ensuring that the product continues to be reliable and appropriate for its intended uses.
APP 5 (notification of the collection of personal information)
Entities must take reasonable steps to either notify the individual of certain matters or ensure the individual is aware of those matters at or before collection time, or if impracticable, as soon as possible after. [2]
APP 11 (security of personal information)
Organisations must take reasonable steps to protect personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure. It is also necessary to ensure that any third-party platforms comply with APP 11, through contract reviews and governance controls. As AI evolves, ongoing review of compliance with emerging laws and regulations is essential.
APP 1.7 (automated decision making (ADM))
The recent privacy reforms introduced additional transparency requirements for ADM systems. Organisations using ADM systems that significantly affect individuals will be required to update their privacy policies to disclose:
- the personal information used
- decisions made solely by ADM systems, and
- decisions informed by ADM operations.
This publication constitutes a summary of the information of the subject matter covered. This information is not intended to be nor should it be relied upon as legal or any other type of professional advice. For further information in relation to this subject matter please contact the author.




