ASIC seeks AI laws
ASIC wants specific laws to manage AI-related risks.
Australian Securities and Investments Commission (ASIC) chair Joe Longo, in a recent keynote address, highlighted the challenges AI poses to the financial landscape and underscored the urgency of legal reform tailored to the technology.
Longo emphasised the gap between the current regulatory environment and a framework capable of effectively overseeing AI.
While ASIC has been using existing laws to hold companies accountable for AI-related issues, Longo admitted: “We’re willing to test the regulatory parameters where they’re unclear or where corporations seek to exploit perceived gaps”.
AI's impact on the financial sector is multifaceted.
On one hand, existing laws, such as those covering privacy, online safety, and anti-discrimination, apply broadly to AI.
On the other, the specific challenges posed by AI, such as opaque decision-making processes and the potential for inadvertent bias, are not adequately addressed by these general principles.
Referring to the current framework, Longo asked: “Does it prevent blind reliance on AI risk models without human oversight that can lead to underestimating risks?”
The use of AI in areas like credit scoring and investment management raises significant issues around transparency, fairness, and accountability.
For instance, AI systems might inadvertently discriminate against vulnerable consumers in credit scoring, or manipulate market dynamics in investment management.
These are scenarios where current regulations might punish misconduct after the fact but do little to prevent the harm in the first place.
In response to these challenges, Longo mentioned several potential solutions, including the concept of “AI constitutions” coded into decision-making models and the enforcement of AI risk assessments.
However, he also acknowledged the vulnerability of these measures, citing an instance where researchers bypassed AI model controls by simply adding random characters to their requests.
The international context also plays a role.
The European Union's General Data Protection Regulation (GDPR), for instance, imposes stricter controls on automated decision-making than current Australian law does.
ASIC says this disparity underscores the need for Australia to consider aligning its regulatory approach with global standards.
The regulator says it recognises AI's vast potential to boost productivity and aid consumers, but maintains that a regulatory framework ensuring the safe, ethical, and responsible use of AI in the financial sector is paramount.