ATO ‘hypervigilant’ with oversight of AI activities, Hirschhorn assures practitioners
The Tax Office is becoming increasingly data-driven, but has implemented safeguards and procedures to prevent “high-impact errors” from occurring, the deputy commissioner has said.
Jeremy Hirschhorn outlined the importance of human oversight when implementing AI systems at a UNSW conference in April, likening AI to a bionic arm: a tool, not a replacement.
“AI may be a helper. It can move things around, it can link, synthesise and analyse information, and it can do some things much faster and more consistently than we as humans can,” Hirschhorn said.
“But AI cannot determine what constitutes fairness and reasonableness, having considered unique taxpayer circumstances with compassion and empathy.”
The sentiment is timely for the Tax Office, which has recently come under fire following a pattern of incorrect data matching.
In one example, the tax authority sent letters warning taxpayers that they had failed to include rental income in their tax returns. The problem was that many recipients did not own the properties listed, prompting complaints from disgruntled accountants and taxpayers.
Hirschhorn emphasised the importance of tax authorities such as the ATO having procedural and cultural safeguards against ‘high-impact actions’ made in error. Data itself could not reveal the full negative impacts that errors could have on taxpayers.
“This focus on potential errors is very hard. It forces you to understand the other person’s world (and how your actions may affect it),” he said.
“Thinking about errors requires a discipline as classic measures such as complaint levels or error rates do not get to the heart of whether your errors are impactful or not.”
He highlighted the tax authority’s role as a data steward, saying that the sharing of data with other agencies, including within government, must be in strict accordance with the law.
“Perhaps more importantly, and a lesson from Robodebt, is that the tax administrator must continue to act as a steward of that data even after it has been legally shared,” Hirschhorn said.
The Robodebt scheme exemplified the human impact of authorities’ errors and the consequences of data misuse. Under the scheme introduced in 2015, data and algorithms were misused to incorrectly claw back money from welfare recipients, resulting in $746 million being wrongfully recovered from vulnerable Australians.
While the money was eventually reimbursed, the emotional toll and financial hardship were far-reaching, the Royal Commission into the Robodebt Scheme, established in 2022, found.
Ideally, Hirschhorn said, errors would be identified before they reached the taxpayer. As AI’s role and utility grew, he said, authorities should avoid ‘data hubris’ and resist placing too much faith in automated systems, which were capable of inaccuracies.
“Noting that most people are fundamentally honest, a high 'hit rate' should be viewed with great caution. It is more likely to be a sign of 'data hubris' than widespread non-compliance, and should be treated as such until proven otherwise,” he said.
He also highlighted the UK Post Office scandal, in which over 900 sub-postmasters were wrongly prosecuted for theft after faulty software indicated money was missing from their Post Office branch accounts.
“The UK Post Office scandal is a prime example of an institution having excessive trust in the computer systems and insufficient trust in ordinary people,” Hirschhorn said.
While AI had notable utility in some cases, such as auto-fill services that saved taxpayers valuable time and effort, Hirschhorn cautioned that it was important for organisations to retain strong human oversight across all of their processes, especially where errors would pose real human costs.
“If you do not know why your organisation is doing things ('the computer said so'), you are breaching your responsibility to be accountable to both the individual taxpayer and the broader system.”