Article
Is your insurance company ready for the EU AI Act?
The insurance industry is not exempt from the EU AI Act. Life and health insurers in particular face extensive requirements. This should be seen as an opportunity to anchor artificial intelligence even more firmly in the organization.
The CxO Strategic Priorities Study 2024 by Horváth revealed, among other things, that the majority of insurance companies have already gained experience with artificial intelligence (AI) beyond pricing and risk assessment; many are currently in the testing and implementation phase. At the same time, the European Union has launched the AI Act, a piece of legislation intended to regulate AI that will therefore also affect insurance companies. In this article, we first discuss the new regulation and then use examples from the insurance industry to illustrate the regulatory requirements. Finally, we provide insurance companies with initial recommendations on establishing AI governance.
The new AI Act in brief
Depending on the respective risk categorization of the AI systems, the AI Act sets specific requirements for test procedures, risk and quality management, transparency obligations, and ongoing monitoring, among other things. The AI systems are divided into four risk categories, which are shown in the following diagram.
Implications for insurance companies
The risk-dependent governance requirements are outlined below from the perspective of an insurance company. We focus on permissible AI systems and exemplary use cases.
For classification as a high-risk AI system, Annex III of the AI Act, which contains a specific list of high-risk AI systems, must be consulted. This is particularly relevant for life and health insurers, as risk assessment and pricing in these lines of business have been explicitly classified as high-risk. Such systems will therefore be subject to extensive governance requirements in the future, including company-wide processes and standards for development and quality assurance, ongoing monitoring of AI systems, and management of their inherent risks. It is also crucial to clearly define the associated responsibilities and to prepare AI users for their own role in AI-supported processes through training and a (voluntary) code of conduct.
AI systems with transparency requirements primarily include chatbots, for example in sales support or claims management. When deploying chatbots under their own brand, insurance companies are required to inform the insurance customer that they are interacting with an AI system.
In addition, insurance companies operate various AI systems that use machine learning, logic- and knowledge-based approaches, or statistical methods. These include, for example, pricing in property insurance. For these AI systems, the requirements are voluntary and limited to establishing codes of conduct. Such a code may also include the optional application of the governance requirements intended for high-risk AI systems.
Our recommendations
We generally recommend that insurance companies keep a close eye on these regulatory developments. The AI Act already makes it clear that further delegated regulations and implementing acts are to be expected, including additions to the list of high-risk AI systems as well as to the techniques and approaches that are classified as artificial intelligence and therefore fall within the scope of the AI Act. In order to implement the new requirements in good time, we recommend first evaluating the AI use cases in use or planned in order to determine the need for action. The next step is to design and implement a legally compliant operating model for AI applications.
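The use-case evaluation described above can be sketched as a simple inventory-screening helper. The rules below are a deliberately simplified illustration of the three tiers discussed in this article (Annex III high-risk, transparency obligations, voluntary codes of conduct); all names and attributes are hypothetical, and such a screening is no substitute for a legal assessment:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH_RISK = "high-risk (extensive governance requirements)"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk (voluntary code of conduct)"

@dataclass
class UseCase:
    name: str
    line_of_business: str           # e.g. "life", "health", "property"
    customer_facing_chatbot: bool   # interacts directly with customers
    risk_assessment_or_pricing: bool

def classify(uc: UseCase) -> RiskTier:
    # Annex III explicitly lists risk assessment and pricing in life and
    # health insurance as high-risk (simplified heuristic, not legal advice).
    if uc.risk_assessment_or_pricing and uc.line_of_business in ("life", "health"):
        return RiskTier.HIGH_RISK
    # Chatbots trigger the obligation to disclose the AI interaction.
    if uc.customer_facing_chatbot:
        return RiskTier.TRANSPARENCY
    # Everything else (e.g. property pricing) falls under voluntary codes.
    return RiskTier.MINIMAL

inventory = [
    UseCase("health tariff calculation", "health", False, True),
    UseCase("claims chatbot", "property", True, False),
    UseCase("property pricing", "property", False, True),
]
for uc in inventory:
    print(f"{uc.name}: {classify(uc).name}")
```

A real screening would of course capture far more attributes (data categories, affected persons, human oversight), but even a minimal inventory like this makes the need for action per use case visible at a glance.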
Even once the AI Act is implemented, the question of an efficient operating model for AI applications and of leveraging synergies throughout the company remains open. We therefore recommend that insurance companies develop their own AI governance, in addition to the prior identification and implementation of use cases and beyond the regulatory requirements. This enables them to centrally manage the implementation of their own AI strategy as well as regulatory requirements, technology, training needs, and much more. Proper implementation will increase stakeholder confidence in the performance, safety, reliability, and fairness of the AI systems used.
Dr. Mägebier, A.