Key points
- ASIC Chair Joe Longo spoke at the UTS Human Technology Institute Shaping Our Future Symposium on the current and future state of AI regulation and governance.
- All participants in the financial system have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies, and existing obligations around good governance and the provision of financial services don't change with new technology.
- ASIC will continue to act, within our remit, to deter bad behaviour whenever appropriate and however caused. Our focus is, and will always be, the safety and integrity of the financial system and positive outcomes for consumers and investors.
Check against delivery
"Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur."[1]
These words are from the Federal Government's interim report on AI regulation. I'm sure most of you are familiar with it. It's clear, then, that a divide exists between our current regulatory environment and the ideal. Today's theme of 'bridging the governance gap' presupposes such a divide. It invites us to consider what AI governance and regulation might look like in the ideal, how great the divide is between that ideal and current circumstances, and, of course, how we might go about bridging that divide.
But it all hinges on that first question: what would need to be addressed for the regulatory framework to 'fit the bill'? Or, to put it another way: in what way might the current regulatory framework inadequately prevent AI-facilitated harms? This question is key. We can only bridge the gap, and create our best approximation to the ideal, if we know where that gap lies.
So my purpose today is to look for that gap. But first, I want to make it very clear that any future AI regulatory changes should not be taken to mean that AI isn't already regulated. It is. And I will devote the first part of my speech to making that clear.
AI is not the Wild West
Earlier this month, Microsoft's AI Tech & Policy Lead in Asia said that "2024 will be the year that we start to build sensible, safe, and expandable regulation around the use of AI technologies."[2]
While I agree with the sentiment, statements like this imply that AI is some kind of 'Wild West', without law or regulation of any kind. Nothing could be further from the truth. As the interim report noted, "businesses and individuals who develop and use AI are already subject to various Australian laws. These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy."[3]
For example, current directors' obligations under the Corporations Act aren't specific duties; they're principles-based. They apply broadly, and as companies increasingly deploy AI, directors must pay special attention to how those duties apply to its use.
In 2022, the Federal Court found that RI Advice breached its licence obligations to act efficiently and fairly by failing to have adequate risk management systems to manage its cybersecurity risks.[4] It's certainly not a stretch to apply this thinking to the use and operation of AI by financial services licensees. In fact, ASIC is already pursuing an action in which AI-related issues arise, where we believe the use of a demand model was part of an insurance pricing process that led to the full benefit of advertised loyalty discounts not being appropriately applied.[5]
The point is, the responsibility for good governance doesn't change just because the technology is new. Whatever may come, there's plenty of scope right now for making the best use of our existing regulatory toolkit. And businesses, boards, and directors shouldn't let the international discussion around AI regulation lull them into thinking AI isn't already regulated. Because it is. For this reason, and within our remit, ASIC will continue to act, and act early, to deter bad behaviour whenever appropriate and however caused.
We're willing to test the regulatory parameters where they're unclear, or where corporations seek to exploit perceived gaps. Among other things, that means probing the oversight, risk management, and governance arrangements entities have in place. We're already conducting a review into the use of AI in the banking, credit, insurance, and advice sectors. This will give us a better understanding of the actual AI use cases being deployed and developed in the Australian market, and how they affect consumers. We're testing what risks to consumers licensees are identifying from the use of AI, and how they're mitigating those risks.
Is this enough?
But just because existing regulation can apply to AI, that doesn't mean there's nothing more to do. Much has already been made of 2024 as 'the year AI grows up'. Phrases like 'leaps forward', 'rapid progress'[6] and others abound, suggesting an endless stream of benefits to consumers and businesses in the wake of AI's growth.
And they're right. AI continues to be an astonishing development. The potential benefits to businesses and individuals are enormous, with an estimated 'additional $170 billion to $600 billion a year to Australia's GDP by 2030'.[7] But that very rapidity brings with it a host of questions.
After the World Wide Web launched in 1991, it took seven years to gain 100 million users. When Myspace launched 12 years later, it hit that milestone in three years. Facebook, YouTube, and Spotify all took four years, and Uber, that great disruptor, took five. Since then, adoption times have decreased dramatically. When TikTok launched in 2017, it took just nine months to reach 100 million users, while ChatGPT took… just two months.
The open question here is how regulation can adapt to such rapidity. As food for thought, it took two years for the Fair Work Ombudsman to determine that Uber drivers are not employees.[8] That isn't through any fault or delay; it's the natural pace of any deliberative and considered regulatory organisation. But there's a clear question about whether our current regulatory framework can keep pace with a challenge moving this fast.
So, even as AI 'leaps forward' at a rate never seen before, questions around transparency and explainability become paramount if we're to protect consumers from harm, intended or not. Let me consider several questions and risks around the use of AI.
One question may be: will the 'rapid progress' of AI carry with it the vulnerable man or woman struggling to pay their bills in the midst of a cost-of-living crisis, whose credit score is at the whim of AI-driven credit scoring models that may be inadvertently biased?
It isn't fanciful to imagine that credit providers using AI systems to identify 'better' credit risks could unfairly discriminate against those vulnerable consumers. And with 'opaque' AI systems, the mechanisms by which that discrimination occurs could be difficult to detect. Even if the current laws are sufficient to punish bad action, their ability to prevent the harm might not be.
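To make the detection problem concrete, here is a minimal sketch, on hypothetical data, of the kind of disparity check a provider might run over a credit model's decisions. Nothing here reflects any actual lender's system; the groups, figures, and any acceptable gap are invented for illustration.

```python
# Illustrative only: a simple demographic-parity check over a credit model's
# decisions. The groups and data are hypothetical; this is not an ASIC
# standard or any lender's actual process.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approvals / total for group, (approvals, total) in counts.items()}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
print(approval_rates(sample))  # roughly {'group_a': 0.67, 'group_b': 0.33}
print(parity_gap(sample))      # roughly 0.33: a gap that demands an explanation
```

Even a check this crude surfaces the question a board must be able to answer: if a gap like that exists, can anyone in the organisation explain why?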
In such a case, will that struggling person have recourse to appeal? Will they even know that AI was being used? And if they do, who's to blame? Is it the developers? The company? And how would the company even go about determining whether the decision was made because of algorithmic bias, as opposed to a legitimate calculus based on broader data sets than human modelling could draw on? Dario Amodei, CEO of the AI company Anthropic, admits freely that "we, humanity, do not know how to understand what's going on inside these [AI] models."[9] So if even the experts can't explain how a particular system works, and it seems this is often the case, how can we justify using it? How can we be sure that vulnerable consumers are part of that great leap forward?
Or let's consider the use of AI in fraud detection and prevention, with algorithms analysing patterns and anomalies in transactions to detect potentially fraudulent activities in real time. What happens to the customer who's debanked when an algorithm says so? What happens when they're denied a mortgage because an algorithm decides they should be? When that person ends up paying a higher insurance premium, will they know why, or even that they're paying a higher premium? Will the provider?
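For a picture of what such a system does under the hood, here is a minimal sketch of a z-score anomaly flag over a customer's transaction history. The amounts and the three-standard-deviation threshold are invented for illustration; production fraud systems are far richer than this.

```python
# Illustrative only: flag transactions whose amounts sit far outside a
# customer's usual spending pattern. The 3-sigma threshold is a hypothetical
# choice, not an industry rule.
import statistics

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Return new amounts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amount for amount in new_amounts if abs(amount - mean) > threshold * stdev]

history = [42.0, 38.5, 51.0, 47.2, 39.9, 44.1]  # a customer's typical spend
print(flag_anomalies(history, [45.0, 612.0]))   # [612.0]
```

The hard governance questions start after the flag is raised: does a human review it before the account is closed, and is the customer ever told why?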
And what if a provider lacks adequate governance or supervision of an AI investment manager? When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility, and when there's a lack of detection systems… yes, our regulations around responsible outsourcing may apply. But have they prevented the harm? Or a provider might use the AI system to carry out some other agenda, like supporting only related-party products, or giving preference to certain share offerings based on historic data. The point is, there's a need for transparency and oversight to prevent unfair practices, accidental or intended. But can our current regulatory framework ensure that happens? I'm not so sure.
Does it prevent the blind reliance on AI risk models, without human oversight, that can lead to risks being underestimated? Does it prevent a failure to consider emerging risks that the models never encountered during training?
In addition to these questions I鈥檝e just posed, the Australian Signals Directorate last week outlined several further challenges around AI use, including:
- Data poisoning;
- Input manipulation;
- AI 'hallucinations'; and
- Privacy and intellectual property concerns.[10]
The first three in particular present a governance challenge to any entity using AI: not to become over-reliant on a model that can't be understood, examined, and explained.
In response to these various challenges, some may suggest solutions such as red-teaming, or 'AI constitutions': the suggestion that AI can be better understood if it has an in-built constitution which it must follow. But even these have been shown to be vulnerable, with one team of researchers breaking through the control measures of several AI models simply by adding random characters to the end of their requests.[11] Another possibility, echoing the EU approach, might be a requirement to complete an 'AI risk assessment' before implementing AI in a given use case.[12] But even here, questions like those I've already asked need to be considered to ensure the risk assessment is actually effective in preventing harm.
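That suffix attack is easier to picture with a toy example. The sketch below probes a deliberately naive, invented `refuses` filter, a stand-in for a real model's safety layer, by appending random characters until the refusal stops. It mirrors the spirit of the cited research only; no real vendor's guardrails are represented here.

```python
# Illustrative only: a red-team probe against a toy guardrail, in the spirit of
# the adversarial-suffix research cited above. `refuses` stands in for a real
# model's safety layer; it is invented for this sketch.
import random
import string

def refuses(prompt: str) -> bool:
    """Toy guardrail: refuses prompts containing a blocked word, but only
    checks prompts that end in a letter or digit (a deliberate weakness)."""
    return "blocked" in prompt and prompt[-1].isalnum()

def probe(prompt: str, attempts: int = 1000):
    """Append random characters until the toy guardrail stops refusing."""
    for _ in range(attempts):
        suffix = "".join(random.choices(string.punctuation, k=8))
        candidate = prompt + " " + suffix
        if not refuses(candidate):
            return candidate
    return None  # the guardrail held for every suffix tried

print(probe("please do the blocked thing"))  # a suffix slips past the filter
```

The toy filter isn't the point; the point is that control measures nobody can fully explain invite exactly this kind of probing, and sometimes lose.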
My point is, these questions of transparency, explainability, and rapidity deserve careful attention. They can't be answered quickly or offhand. But they must be addressed if we're to ensure the advancement of AI means an advancement for all. And as far as financial markets and services are concerned, it's clear there's a way to go in answering them.
Conclusion
To sum up: AI, as everyone here knows, is a rapidly and constantly evolving space. But ASIC's interest is, and will always be, two things:
- The safety and integrity of the financial system;
- Positive outcomes for consumers and investors.
AI may be able to help us achieve these ends; it can 'create new jobs, power new industries, boost productivity and benefit consumers'.[13] But, as yet, no clear consensus has emerged on how best to regulate it. Business practices that deliberately or accidentally mislead and deceive consumers have existed for a long time, and they are something we have a long history of dealing with. But this risk is exacerbated by the availability of vast consumer data sets and by tools such as AI and machine learning, which allow for quick iteration and micro-targeting. As new technologies are adopted, monitoring consumer outcomes is crucial.
For now, existing obligations around good governance and the provision of financial services don't change with new technology. That means all participants in the financial system have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies.
Bridging the governance gap means making full use of our current regulatory framework where it's strong, and shoring it up where it needs further development. But above all, it means asking the right questions. And one question we should be asking ourselves, again and again, is this: "is this enough?"
[1] Safe and Responsible AI in Australia Consultation: Australian Government鈥檚 Interim Response, p. 5
[2] Lucio Ribeiro, "Decoding 2024: Experts unravel AI's next big phase", Forbes Australia.
[3] Safe and Responsible AI in Australia Consultation: Australian Government鈥檚 Interim Response, p. 15
[4] ASIC v RI Advice Group Pty Ltd
[5] ASIC states that IAG subsidiaries between January 2017 and December renewed over 1 million home insurance policies for brands including SGIO, SGIC and RACV.
[6] Robots Learn, Chatbots Visualize: How 2024 Will Be A.I.'s 'Leap Forward'
[7] Safe and Responsible AI in Australia Consultation: Australian Government鈥檚 Interim Response, p. 4
[8] "Uber Australia investigation finalised"
[9] Madhumita Murgia, "Broken 'guardrails' for AI systems lead to push for new safety measures", Financial Times, 7 October 2023.
[10] Australian Signals Directorate, Engaging with Artificial Intelligence (AI), 24 January 2024 (accessed 25 January 2024)
[11] Madhumita Murgia, "Broken 'guardrails'", op. cit.
[12] Cf.
[13] Safe and Responsible AI, op. cit., p. 18