Over the past year or so, we have seen a lot of excitement about automation and digitalisation of business processes and user experience in the corporate services, trust and fund administration markets. 

There has been growing recognition that while increased automation, digitalisation and the underlying use of Artificial Intelligence (AI) bring efficiency, cost savings and an improved customer experience, they also bring new risks and responsibilities.

There has long been a debate about whether AI should be regulated. The Silicon Valley view has always been that the law should leave emerging technology alone; governments and human rights campaigners, however, have recognised the risks to national security and human rights.

It will not come as a surprise that the EU is leading the way in this arena as well. In April 2021, the European Commission became the first to propose a comprehensive regulatory framework for AI, determining its scope in a way that will affect not just AI providers and users in the EU, but will also have an extra-territorial impact similar to GDPR.

What is the Regulation proposing? 

In summary, the AI Regulation proposes to introduce a comprehensive regulatory framework for Artificial Intelligence in the EU. The aim is to provide the legal certainty necessary to facilitate innovation and investment in AI, while also safeguarding fundamental rights and ensuring that AI applications are used safely. The main provisions of the AI Regulation are the introduction of:

1. Binding rules for AI systems that apply to providers, users, importers, and distributors of AI systems in the EU, irrespective of where they are based.

2. A list of certain prohibited AI systems.

3. Extensive compliance obligations for high-risk AI systems.

4. Fines of up to EUR 30 million or up to 6% of total worldwide annual turnover, whichever is higher (an illustrative calculation follows below).
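To show how the "whichever is higher" cap works in practice, here is a minimal sketch in Python; the turnover figures are invented purely for the example and have no connection to any real firm.

```python
# Illustrative only: the maximum fine under the proposal is EUR 30 million
# or 6% of total worldwide annual turnover, whichever is higher.
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * annual_worldwide_turnover_eur)

print(max_fine_eur(200_000_000))    # 30,000,000 - the fixed cap applies
print(max_fine_eur(1_000_000_000))  # 60,000,000 - 6% of turnover is higher
```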

The Commission proposes a risk-based approach based on the level of risk presented by the AI system, with different levels of risk attracting corresponding compliance requirements. The risk categories include (i) unacceptable risk (these AI systems are prohibited); (ii) high-risk; (iii) limited risk; and (iv) minimal risk.

Scope of the AI Regulation

In a further parallel with GDPR, the Regulation is very broad in scope in two respects: its definition of what constitutes an AI system, and whom the rules apply to.

In terms of definitions, the AI Regulation intentionally defines 'AI systems' very broadly, to make the legal framework as technology-neutral and future-proof as possible, taking into account the fast technological and market developments related to AI. 

Title I of the AI Regulation defines an AI system as:

"Software that is developed with one or more of the techniques and approaches listed in Annex 1 and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

Annex 1 complements the Title I definition by listing as in scope the following:

  • Machine-learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimisation methods.
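
To give a sense of how broad this definition is, the sketch below (a hypothetical example, not drawn from the Regulation) shows a trivially simple supervised machine-learning model. Software of this kind would already satisfy the Title I definition, since it uses a technique listed in Annex 1 to generate predictions for a human-defined objective.

```python
# Hypothetical illustration: even a minimal supervised classifier uses a
# "machine-learning approach" (Annex 1) and generates predictions for a
# human-defined objective, so it would meet the Title I definition.
from sklearn.linear_model import LogisticRegression

# Invented training data: two numeric features per case, label 1 = flag for review
X_train = [[0.2, 0.1], [0.9, 0.8], [0.4, 0.3], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# The prediction "influences the environment it interacts with" once it is
# used to drive a business decision about the new case.
print(model.predict([[0.85, 0.7]]))
```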

Because the vast majority of technology used by the corporate services, trust and fund administration industry is fairly simple, as it stands it is not likely to fall within the scope of the AI Regulation.

However, this kind of broad definition of AI will likely bring into scope a number of AI services used by the rest of the financial services industry, e.g. increasingly advanced pattern-analysis tools for fraud prevention.

One technology that may fall within scope, and is increasingly used by corporate service providers (CSPs) and trust administrators, is that supporting anti-money laundering (AML) due diligence and transaction monitoring. These tools enable more efficient pattern recognition and can learn from large amounts of data, making them well suited to firms that need to process large volumes of complex data for their AML compliance.
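
As a purely illustrative sketch of what such pattern recognition can look like in practice (the data, features and threshold below are invented, not taken from any particular product), an unsupervised model can flag transactions that look anomalous against a customer's payment history:

```python
# Hypothetical example: flag unusually large transactions for AML review
# using an unsupervised anomaly-detection model. All figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated payment history: mostly routine amounts plus a few outliers
amounts = np.concatenate([rng.normal(500, 50, 200), [5_000, 7_500, 12_000]])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(amounts.reshape(-1, 1))  # -1 = anomalous

print("Flagged for review:", amounts[labels == -1])
```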

In terms of who will fall into the scope, the AI Regulation will apply to: 

  • providers that place AI systems on the market or put AI systems into service, regardless of whether those providers are established in the EU or in a third country;
  • users of AI systems in the EU; and
  • providers and users of AI systems that are located in a third country where the output produced by the system is used in the EU.

Therefore, the AI Regulation will apply to actors both inside and outside the EU as long as the AI system is placed on the market in the EU or its use affects people located in the EU.

What is prohibited and what is permitted?

The AI Regulation lists a number of uses of AI systems which will be strictly prohibited because they are considered to pose an unacceptable risk, contravening EU values and violating fundamental rights. Examples include AI systems that deploy subliminal techniques, or exploit the vulnerabilities of specific persons or groups, to materially distort their behaviour in a manner that causes physical or psychological harm, and the use of systems by public authorities (or on their behalf) to evaluate the trustworthiness of individuals based on their social behaviour – such as the social scoring introduced by the Chinese government.

Last but not least, the Regulation prohibits AI systems used for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless strictly necessary for narrowly defined purposes such as a targeted search for victims of crime or the prevention of a terrorist attack.

The remaining AI systems, categorised as high risk, limited risk or minimal risk, will need to abide by certain rules and guidance.

While there is no definition of the term 'high-risk AI', Articles 6 and 7 set out the criteria used to determine whether a system is high risk. These include AI systems intended to be used as a safety component of products (or which are themselves a product), and stand-alone systems whose use may have an impact on the fundamental rights of natural persons. Examples include 'real-time' and 'post' remote biometric identification systems, and systems used in education and vocational training, employment, and credit scoring. The list will be kept updated and is likely to be expanded further.

The Regulation imposes a number of general requirements on high-risk AI systems: transparency (designed to ensure users are able to interpret the system's output and use it appropriately); human oversight (so that the system can be effectively overseen by natural persons, with the aim of minimising risks to health, safety and fundamental rights); the establishment of a risk management system; appropriate data governance for the training, validation and testing of AI systems; technical documentation demonstrating compliance with the AI Regulation; and appropriate security.

While the above is mostly relevant to the providers of AI systems, the Regulation also imposes obligations on the users of AI systems. These include: using the systems in accordance with the provider's instructions and implementing all technical and organisational measures stipulated by the provider to address the risks of using the high-risk AI system; ensuring all input data is relevant to the intended purpose; monitoring the operation of the system and notifying the provider of serious incidents and malfunctioning; and maintaining the logs automatically generated by the high-risk AI system, where those logs are within the user's control.

Other (non-high-risk) AI systems are not subject to any specific requirements; however, the Commission has stated that providers of non-high-risk AI systems should be encouraged to develop codes of conduct intended to foster voluntary application of the mandatory requirements applicable to high-risk AI systems.

For certain 'limited risk' AI systems, transparency requirements are imposed – for example, where the AI system is intended to interact with a natural person (e.g. chatbots), it must be designed so that the user is made aware they are interacting with an AI system rather than a human.

All other 'minimal risk' AI systems are not subject to any additional legal obligations. The majority of AI systems used in the EU fall into this category, and the providers may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.

What does that mean for the industry?

Considering the broad scope of the Regulation, its publication may well have caused some concern in the AI sector, as well as among the early adopters of AI in the wealth management industry.

However, given the way the Regulation has been designed, the impact looks to be – at least for the time being – manageable. The main areas where AI systems are currently used are AML customer due diligence, transaction monitoring and robo-advice (investments).

The brunt of the new obligations will be borne by the providers of such AI systems; however, the users of those systems will also need to make sure that they implement extra controls and oversight over how the AI systems are used by their employees.

For the rest of the AI systems used by the industry, it currently looks like only the voluntary transparency measures and codes of conduct will apply. However, things may change in the future as the regulations are refined and new risks are recognised.

As the examples show, you may consider AI as something that isn't going to touch the industry anytime soon, but as with most technological revolutions, the incremental steps are small yet over time will transform a market. At TrustQuay our mission is to help corporate services, trust and fund administrators to automate and digitalise their business, and you can be sure that Artificial Intelligence is firmly on our radar.

About the Author

Nina Mileksic