IOSCO – Published guidance for market intermediaries and asset managers using Artificial Intelligence and Machine Learning

September 7, 2021 – IOSCO published guidance to help its members regulate and supervise the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers.

Benefits & Risks: AI and ML help market intermediaries, asset managers, and investors by increasing the efficiency of existing processes, reducing the cost of investment services, and freeing up resources. However, the use of AI and ML may also create or amplify risks, potentially undermining financial market efficiency and harming consumers and other market participants. IOSCO also noted that market intermediaries’ and asset managers’ use of AI and ML is growing as their understanding of the technology develops and evolves.

Report on Use of AI and ML: IOSCO’s report described how market intermediaries and asset managers currently use AI and ML to reduce costs and increase efficiency. It noted that the rise of electronic trading platforms and the increasing availability of data have led firms to progressively deploy AI and ML in their trading and advisory activities, as well as in their risk management and compliance functions. Consequently, IOSCO reported that regulators are focusing on the use and control of AI and ML in financial markets to mitigate the potential risks and prevent consumer harm.

Guidance: The report described six measures that seek to ensure market intermediaries and asset managers have: appropriate governance, controls, and oversight frameworks over the development, testing, use, and performance monitoring of AI and ML; staff with the knowledge, skills, and experience to implement, oversee, and challenge the outcomes of AI and ML; robust, consistent, and clearly defined development and testing processes that enable firms to identify potential issues prior to full deployment of AI and ML; and appropriate transparency and disclosures to investors, regulators, and other stakeholders. The report also included two annexes describing how regulators are addressing the challenges created by AI and ML and the guidance issued by supranational bodies in this area.

Conduct Standard Measures:
Measure 1: Regulators should consider requiring firms to have designated senior staff in charge of the oversight of AI and ML development, testing, deployment, and monitoring, including a documented internal governance framework with clear lines of accountability.

Measure 2: Regulators should require firms to adequately test and monitor their algorithms to validate the results of AI and ML techniques on a continuous basis.

Measure 3: Regulators should require firms to have the adequate skills, expertise, and experience to develop, test, deploy, monitor, and oversee the AI and ML controls used by the firm.

Measure 4: Regulators should require firms to understand their reliance upon, and manage their relationships with, third-party providers, including through performance monitoring and oversight.

Measure 5: Regulators should consider what level of disclosure of the use of AI and ML is required of firms.

Measure 6: Regulators should consider requiring firms to have appropriate controls in place to ensure that the data relied upon for AI and ML performance are of sufficient quality to prevent biases and sufficiently broad for a well-founded application of AI and ML.

Going Forward: IOSCO urged its members to consider these measures carefully in the context of their legal and regulatory frameworks. The use of AI and ML will likely increase as the technology advances, with the regulatory framework evolving in tandem to address the associated emerging risks. To keep the information up to date, IOSCO may review the report, including its definitions and guidance.

For more information, visit https://www.iosco.org.

Discover More Regulatory Insights

Visit the AxiomSL resource center for recent Regulatory Changes affecting financial institutions, the InsideView Blog, and Thought Leadership.


