Good Governance Key To Adopting AI While Managing Risks

Image: Daniel Trinder, FMA Executive Director of Strategy and Design

At the FMA, we want to speak with firms about how we can help facilitate the responsible adoption of AI. We want to work with firms to ensure they have the appropriate oversight in place to mitigate the associated risks and provide quality service to their customers.

I had the privilege of speaking at the 9th Annual EU-Asia Pacific Forum on Financial Regulation as part of the regional committee meetings of the International Organization of Securities Commissions (IOSCO) in Da Nang, Vietnam last week.

More efficient model training and declining compute costs, driven by innovations such as DeepSeek, will speed up AI adoption across financial markets. AI has huge potential to transform the financial services sector, but it also has the potential to significantly exacerbate existing risks and introduce new ones.

From the FMA's perspective, there are three categories of risk for industry and regulators to consider: market manipulation, systemic risk, and consumer protection and ethical concerns.

Advanced generative AI techniques, especially those based on reinforcement and deep learning, bring about new forms of market manipulation that existing regulatory frameworks may not adequately address.

From a systemic risk perspective, AI has the potential to increase herding behaviour and amplify market correlations, making markets more susceptible to shocks and dislocations. Because AI relies on data quality and model accuracy, poorly calibrated models built on biased data could lead to flawed predictions. Cybersecurity threats also pose a risk: as the adoption of AI technologies increases across financial markets, those markets become more vulnerable to cyberattacks.

There is also potential for AI tools to be used for fraudulent activities, including the creation of deepfake content that misleads investors or distorts market information, posing a significant threat to the integrity of financial markets.

Clarifying our expectations will enable greater adoption of AI and other emerging technologies in a way that minimises risks. This would help ensure good governance, which is paramount for safe AI adoption. Governance is not a panacea, but without good governance arrangements that keep pace with the application of AI, the risks of AI adoption increase.

Because boards and senior management of financial institutions are accountable for their firms' activities, this requires a clear allocation of roles and responsibilities across the entire AI life cycle, which may include specifying the role of human intervention to minimise harmful outcomes from AI adoption.
