On Friday, the robodebt royal commission will hand down its findings. What is expected is a scathing report chronicling the failure of the executive arm of the Australian government to fulfil its most basic responsibilities to its citizens. Robodebt ranks among the worst mistakes of the Commonwealth public service since Federation in 1901. Its lessons are profound and numerous, extending to failures of bureaucratic culture, transparency and reporting, and basic accountability in leadership.
Centrally, the malign act of robodebt was one of automated persecution, authorised at scale — a judgment handed down upon the vulnerable by a literal engine of the state, without direct recourse or right of reply. A radical violation of every principle of procedural justice, it demonstrated to the Australian people a profound disrespect and devaluing of human life and a disregard for the primary obligation of a duty of care.
Yet while these violations are without doubt all fundamental indictments, they are only the surface of a much deeper failure. At robodebt’s heart is a total misalignment between state administration and the basic values of the democratic project, made possible by human misjudgment of the risk that technological advances pose to our systems of governance.
The lessons of robodebt are not local but global, and they reach far into the future. The central concern is the danger of the unfettered use of technologies such as artificial intelligence in consequential decision-making, driven by a human hunger for cost-efficiency and a corresponding hubris that presumes gain can come without cost. It is a lesson in the catastrophic harm of non-human administration of human life, and in the totalising and uncompromising consequences of authorising a machine agent to manage people completely free of moral oversight.
Robodebt was an automated compliance program used by Centrelink that matched income data held by the Australian Taxation Office against the earnings reported by welfare recipients, raising debts against the discrepancies it found. It was meant to be cost-effective and efficient, deployed with the goal of optimising for the detection of non-compliance.
The field of AI has accelerated rapidly in the past decade, with advances not only in narrow-use programs (such as the AI systems employed by Centrelink) but more recently in the release of large language models (LLMs) such as ChatGPT. The fundamental issue with AI is that while we have made exponential leaps in the power of such systems to optimise for a wide array of tasks, we have made far fewer gains in the fields of safety and alignment.
We have built machines that can optimise complex utility functions vastly faster than our own brains. We have not, however, worked out how to optimise for a task while still encoding human values. At the most rudimentary level, alignment is the attempt to ensure that AI systems work for humans and support human goals, no matter how powerful the technology becomes.
Robodebt is an example of what can go wrong when automated systems are misaligned. Absent human supervision, the algorithm made a series of decisions and took actions that had disastrous consequences for countless human lives, as seen in the case of Jarrad Madgwick, 22, whose tragic suicide in 2019 occurred as a direct response to receiving a debt notice from Centrelink.
Such systems will continue to be used in government, including in Home Affairs and in the detection of visa fraud. Around the world, governments are leaning into this space, applying the technology to everything from calculating hospital readmission rates to waste management, finding new applications and touting the benefits of efficiency in service delivery and reductions to budget bottom lines.
Australia has just taught the world the cost of careless application. The areas in which governments should be investing are AI alignment, regulation, safety and risk-based approaches. The European Union has started this work and has favoured a risk-based approach, differentiating uses of AI according to the level of risk they pose, from unacceptable (such as the manipulation of human behaviour) down to low. The EU framework is nuanced but cautious, and most critically, its focus is on preventing human harm and preserving democratic values.
Australia’s regulatory approach has been sluggish. The Department of Industry, Science and Resources released the discussion paper “Safe and Responsible AI in Australia” in June this year. While identifying some major themes and proposing a one-page, watered-down, risk-based approach at the end, the paper offers little to move the dial on AI safety. The proposed Australian risk-based approach reads more like a voluntary code of conduct than a regulatory framework. It underscores a difference in approach: while the EU has prioritised the protection of its citizens, Australia has been reluctant to “stifle innovation” and has left us exposed in the process.
To unleash a powerful technology such as robodebt without regard for its implications for human life is beyond irresponsible. It is a callous and calamitous failure of public administration, and the responsibility ultimately lies at the top. The action by the current government to initiate a royal commission was not only appropriate but essential.
In the end, robodebt was a decision made to deploy a powerful deterrent against the most vulnerable citizens in our society. It was a decision of the previous government and must be fully owned by those who knew and did not act. Complex technologies are a part of our future, but using them without human oversight, without proper regulation and in the absence of due thought about safety and alignment is not only irresponsible, it is criminal.
For anyone seeking help, Lifeline is on 13 11 14 and Beyond Blue is on 1300 22 4636. In an emergency, call 000.