Australia’s national policy for ethical use of AI starts to take shape

By Dylan Bushell-Embling

The national framework for AI assurance in government, released in June, is the latest in a series of initiatives by both federal and state governments aimed at developing a consistent approach to AI assurance.

Australian federal and state governments are in the process of developing legislation to ensure the ethical use of artificial intelligence by government agencies. One aim is to ensure that government procurement decisions are guided by factors including AI ethics principles, the transparency of data, and proof of performance testing. Image: Canva

Following global trends, the Australian government published a report in June on the development of a national framework for AI assurance in government, to be followed by the country's federal, state and territory governments.


The framework sets a foundation for a nationally consistent approach to AI assurance, which refers to the process of testing AI systems to ensure they are trustworthy and conform to ethical principles.


This move is one of several steps being taken in the country to ensure the safe, responsible and ethical use of AI in government, balancing the technology's transformative potential against its possible harms.

Digital NSW taking the lead


New South Wales became the first jurisdiction in Australia to develop a whole-of-government AI ethics policy in 2020, and was among the first in the world to develop a framework guiding the application of AI in government in 2022.


The state's Digital NSW recently announced that this framework will be integrated into its broader Digital Assurance Framework, ensuring that the assurance framework applies to any relevant agency project with a budget exceeding A$5 million (S$4.36 million). The updated framework guides agencies in complying with the mandate to use the AI Ethics Policy and the AI Assessment Framework.


In July, the NSW Legislative Council published a report detailing the use of AI in the state. The report included 10 recommendations for the state government to consider, including the establishment of an NSW Chief AI Officer, who would seek to maximise the responsible use of AI by government departments and offices.


The report also recommends the establishment of a Joint Standing Committee on Technology and Innovation to provide continuous oversight of AI and other emerging technologies, and the development of a regulatory gap analysis to determine where new laws may be needed.


The state government’s response to the report is due in late October.

Risk-based approach


The Australian government’s national framework stipulates that governments should take a risk-based approach to assessing the use of AI on a case-by-case basis. It recommends that for high-risk settings, governments should consider oversight mechanisms such as external or internal review bodies.


Procurement decisions should be guided by factors including AI ethics principles, the transparency of data, and proof of performance testing. For content produced using generative AI (GenAI), the framework asserts that decisions should focus on human oversight and accountability.


The framework also addresses the implementation of Australia's AI Ethics Principles in government. Among other things, these principles stipulate that AI systems should benefit individuals, society and the environment; uphold individuals' privacy rights; not result in unfair discrimination; and allow people to challenge the use or outcomes of AI systems.


Australia’s AI governance frameworks were put to the test earlier this year when the Department of Industry, Science and Resources collaborated with Singapore’s Infocomm Media Development Authority (IMDA) to test both countries’ AI ethics principles.


The collaboration, which took place under the Digital Economy Agreement between the two nations, examined how their respective governance frameworks apply to the National Australia Bank's machine learning-based Financial Difficulty Campaign.


The key takeaway from the exercise was that the two countries' AI governance frameworks are aligned and compatible: no specific obstacles were identified that would prevent an Australian company from adhering to both Australia's AI Ethics Principles and Singapore's Model AI Governance Framework.

Guidelines for public servants


In July last year, the federal Digital Transformation Agency (DTA) published interim guidance on the use of public generative AI tools by the Australian Public Service (APS). The guidance stipulates that APS staff should adhere to the AI Ethics Principles when using these tools.


In September 2023, the federal government convened an AI in Government Taskforce to provide recommendations on the use of AI by the APS. Jointly led by the DTA and the Department of Industry, Science and Resources, the taskforce updated the interim guidance to introduce two 'golden rules' for APS staff to follow.


These rules stipulate that APS staff should be able to explain, justify and take ownership of advice and decisions arising from their use of the technology. APS staff should also assume that any information entered into public GenAI tools could become public, and should avoid entering any classified or sensitive information.


Meanwhile, the Commonwealth Ombudsman has published best-practice guidance on the use of AI tools for automated decision making. The guidelines include principles for assessing the suitability of automated systems, complying with administrative law and privacy requirements, establishing governance of automated systems projects, developing quality assurance processes to maintain accuracy in decisions, and ensuring the transparency and accountability of AI systems.


The responsible use of automated decision making is a sensitive subject in Australia due to the Robodebt scandal of 2020. The previous government was forced to apologise in Parliament after an automated system had been unlawfully used to send thousands of incorrectly calculated debt assessment notices to welfare recipients over the preceding four years.

Safe and responsible AI


In January, the federal government published its interim response to last year’s consultation on the safe and responsible use of AI in society. The consultation concluded that Australia’s current regulatory framework does not sufficiently address risks presented by AI, and existing laws do not adequately prevent AI-facilitated harms before they occur.


The government has stated it will take a risk-based approach to supporting the safe use of AI in society, limiting any mandatory regulatory requirements to high-risk AI applications while seeking to allow low-risk applications to develop unimpeded.


The government has committed to conducting consultations before introducing mandatory guardrails for organisations developing and deploying AI systems in high-risk settings.


The government will also ask the National AI Centre to develop an AI Safety Standard for industry, which will seek to ensure that AI systems being deployed are safe and secure. Another avenue being explored is the voluntary labelling and watermarking of AI-generated material in high-risk settings.


(The author is a freelance journalist based in Australia)