The European Securities and Markets Authority (ESMA) has issued guidance on the use of Artificial Intelligence in retail investment services to ensure firms comply with the Markets in Financial Instruments Directive II (MiFID II) and prioritize clients’ best interests. According to the statement, although AI adoption varies across firms and Member States, AI’s potential impact on firm behavior and investor protection is substantial.
AI is being utilized in various capacities, such as enhancing customer service through chatbots, providing personalized investment advice, managing portfolios, forecasting market trends, and identifying opportunities. Additionally, AI aids in compliance by summarizing regulations, comparing policies, detecting non-compliance, and drafting reports. For risk management, AI evaluates and monitors investment risks, detects fraud, and automates tasks like data entry and report generation, thereby freeing employees to focus on more complex tasks.
However, several key challenges accompany the use of AI, including over-reliance, lack of transparency, data privacy concerns, and algorithmic bias. AI systems can obscure decision-making processes, affect service quality, raise privacy issues, and produce incorrect outputs, leading to potentially misleading advice. Biases in training data can result in inaccurate predictions, which are often difficult to identify and rectify.
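To make the bias problem concrete, the sketch below shows one simple kind of check a firm might run: comparing an AI tool’s recommendation rates across client groups. This is a hypothetical illustration, not a method prescribed by ESMA; the group labels, sample data, and disparity threshold are all assumptions.

```python
# Illustrative sketch (not from the ESMA statement): a minimal check for one
# form of algorithmic bias -- whether an AI tool's recommendations differ
# markedly across client groups. All names and thresholds are hypothetical.
from collections import defaultdict

def recommendation_rates(records):
    """Rate of positive recommendations per client group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap in recommendation rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit sample: (client group, did the model recommend a product?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = recommendation_rates(sample)
print(rates)
print(max_disparity(rates) > 0.25)  # True for this sample: gap is 1/3
```

A check like this only surfaces a symptom; as the text notes, tracing a disparity back to its cause in the training data is the genuinely difficult part.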
The guidelines set out the following expectations for firms:
- Act in clients’ best interests at all times, regardless of the tools used.
- Clearly inform clients about AI’s role in investment decision-making, presenting information fairly, clearly, and in a way that is not misleading.
- Disclose transparently when AI tools, such as chatbots, are used in client interactions.
- Ensure the management body understands and oversees AI applications, keeping them aligned with the firm’s strategy, risk tolerance, and compliance framework.
- Implement effective risk management frameworks for AI, including regular testing and monitoring to identify and mitigate potential risks and biases.
- Ensure data used by AI systems is relevant, sufficient, and representative, with rigorous oversight of data sourcing and continuous analysis.
- Develop risk management processes tailored to AI, identifying and managing risks such as algorithmic bias and data security vulnerabilities.
- Adhere to MiFID II requirements when outsourcing critical functions, particularly AI used in investment advice and portfolio management, exercising due diligence and control over third-party AI solutions.
- Provide adequate staff training on AI, covering operational aspects, potential risks, ethical considerations, and regulatory implications.
- Maintain stringent adherence to conduct-of-business requirements, ensuring the suitability of the services and financial instruments provided to each client.
- Implement rigorous quality assurance for AI tools, including thorough testing and periodic stress tests.
- Adhere strictly to data protection regulations to safeguard sensitive client information.
- Keep comprehensive records of AI utilization and related client complaints to ensure compliance with MiFID II requirements.
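The record-keeping expectation in the list above can be pictured with a minimal sketch: an append-only log of when and how AI tools touched a client interaction, including whether a human reviewed the output. This is a hypothetical illustration of the idea, not a format required by MiFID II or ESMA; every field and identifier here is an assumption.

```python
# Illustrative sketch (hypothetical, not prescribed by ESMA or MiFID II):
# a minimal append-only record of AI involvement in client interactions,
# of the kind a firm might keep to evidence its AI use and oversight.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    client_id: str
    tool: str             # e.g. "chatbot", "portfolio-screener" (hypothetical)
    purpose: str          # what the AI output was used for
    human_reviewed: bool  # whether a human validated the output
    timestamp: str        # UTC, ISO 8601

def log_ai_usage(store, client_id, tool, purpose, human_reviewed):
    """Append one immutable usage record to the store and return it."""
    record = AIUsageRecord(
        client_id=client_id,
        tool=tool,
        purpose=purpose,
        human_reviewed=human_reviewed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    store.append(asdict(record))  # append-only: entries are never mutated
    return record

audit_log = []
log_ai_usage(audit_log, "C-1042", "chatbot", "answered fee query", True)
print(json.dumps(audit_log[0], indent=2))
```

In practice such records would live in durable, access-controlled storage rather than an in-memory list; the point of the sketch is only that each AI-assisted interaction leaves a reviewable trace.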
ESMA’s Public Statement on the Use of Artificial Intelligence in the Provision of Retail Investment Services is available on ESMA’s website.