DNB's AI Guidance: balancing innovation with prudence

Article
NL Law

The Dutch Central Bank (De Nederlandsche Bank, DNB) has shared new key considerations on the use of artificial intelligence in the insurance sector. The March 2025 sector update titled "AI at insurers: opportunities and risks" (AI bij verzekeraars: kansen en risico's) outlines DNB's supervisory expectations based on findings from a 2024 industry survey. The publication builds on DNB's "SAFEST" principles framework, which was first introduced in a 2019 exploratory study conducted jointly with the Netherlands Authority for the Financial Markets.

AI adoption in Dutch insurance

DNB's industry assessment revealed that of 36 insurers surveyed, 15 are already applying AI in their business processes. Common applications include analysing unstructured data for risk assessment, developing personalized product recommendations, implementing fraud detection mechanisms, and automating claims processing. Most insurers primarily view AI as a tool for improving operational efficiency and enhancing customer experience.

A notable finding is that few insurers have established dedicated AI roadmaps or articulated long-term strategic visions. When questioned about obstacles to AI development, insurers frequently cited insufficient internal expertise and fundamental shortcomings in data infrastructure and data quality as primary impediments to progress.

A balanced risk perspective

Most insurers identified non-financial risks as their primary concern, including potential reputational damage and business continuity issues, which were generally perceived as more significant than direct financial risks. Many firms expressed apprehension that opaque AI-driven decisions could erode customer trust or contravene ethical norms.

DNB acknowledges these concerns but emphasizes that financial (and other prudential) risks remain its primary focus. DNB's review specifically identified certain AI applications, particularly those in asset allocation or reserve optimization domains, that could pose material prudential risks if models were to behave unexpectedly. This underscores DNB's central message: while AI offers compelling benefits, insurers must proactively manage all associated risks before they threaten consumer interests or the insurer's financial soundness.

Charting the regulatory path for AI in insurance

DNB clearly establishes that insurers remain fully subject to all existing legal and regulatory requirements when deploying AI solutions. Financial institutions are explicitly expected to use AI responsibly, with all current compliance obligations applying in full to AI-driven activities. These include data protection regulations, consumer protection laws, anti-discrimination provisions, and Solvency II governance requirements.

Moreover, the European Union's AI Act, which entered into force in 2024 and began its phased implementation in early 2025, introduces additional specific requirements for high-risk AI systems. DNB emphasizes that insurers must fully comply with these standards, which encompass rigorous risk assessments and human oversight of certain algorithmic systems. Even AI systems not formally classified as "high-risk" demand proper controls and oversight.

DNB is also aligning its approach with forthcoming sector-specific guidance from the European Insurance and Occupational Pensions Authority (EIOPA), expected later in 2025. This signals continued close regulatory attention to AI in insurance, with oversight mechanisms likely to evolve as the broader regulatory landscape develops.

SAFEST by design  

Until EIOPA's sector-specific guidance becomes available, the cornerstone of DNB's approach remains the six SAFEST principles, which define the regulator's conception of "responsible AI" in the financial sector:

  1. Soundness: AI applications must demonstrate technological robustness, reliability, and accuracy, and operate within the boundaries of applicable rules and regulations. Insurers should conduct thorough testing and validation of AI models to prevent errors and ensure models receive high-quality data inputs. From a prudential perspective, DNB regards soundness as paramount – if multiple firms rely on similar flawed AI tools, systemic risks could potentially emerge.
  2. Accountability: Insurers must maintain clear human accountability for all AI-generated decisions and outcomes. DNB expects organizations to designate appropriate oversight mechanisms and assign ultimate responsibility for AI-driven processes within management structures. This necessitates robust governance structures surrounding AI deployment to maintain algorithmic control within defined risk appetite parameters.
  3. Fairness: AI implementation should not undermine fair treatment of customers or introduce biases into decision-making processes. Insurers are expected to define fairness within their specific context and demonstrate that their AI models adhere to these criteria. This may involve conducting comprehensive bias audits and using diverse datasets for model training.
  4. Ethics: Beyond legal compliance, DNB highlights the importance of ethical considerations in AI utilization. Ethical AI practices include respecting customer privacy, considering societal impacts of personalized pricing strategies, and maintaining appropriate levels of solidarity in insurance risk pooling. The Dutch Association of Insurers' "Ethisch kader Datagedreven Toepassingen" (Ethical Framework for Data-Driven Applications) represents a valuable industry initiative that DNB views favorably.
  5. Skills: DNB expects insurers to invest in developing AI knowledge throughout their organizations. From board directors to frontline staff, personnel should understand the fundamentals of AI model operation, including limitations and potential risks. This may necessitate hiring specialized data scientists and training senior management to ask appropriate questions regarding AI initiatives.
  6. Transparency: Insurers should prioritize transparency regarding their AI utilization – both in explaining AI-generated decisions and clearly communicating where AI is being applied. While this doesn't require disclosing proprietary algorithms, it does necessitate maintaining documentation that can be communicated comprehensibly to regulators and, where relevant, to customers.

Collectively, these SAFEST principles constitute a comprehensive framework for AI governance in the insurance sector. DNB's message is clear: insurers should integrate these principles into their policies and systems proactively to ensure AI innovations remain within safe and acceptable parameters.

Actionable implementation for insurers

In light of DNB's guidance, insurers operating in the Dutch market should implement concrete measures to meet supervisory expectations. Of primary importance is the development of a clear AI strategy and governance structure. This may involve establishing dedicated AI committees that include compliance and risk management representatives, adopting internal policies on AI utilization, and documenting which AI applications are currently in use or planned.

Creating a detailed inventory of current AI systems will be crucial for determining which applications are subject to the AI Act's requirements. Early compliance planning is strongly advisable, particularly for high-risk AI systems that may need to be registered or conform to specific standards.

Insurers should furthermore embed the SAFEST principles into their operations through concrete practices. This includes conducting rigorous model validation, assigning clear responsibility for AI outcomes, reviewing models for potentially biased results, instituting ethics reviews before deploying new AI solutions, investing in training programs to strengthen in-house AI expertise, and maintaining thorough documentation practices.

Third-party AI risk management represents another critical area. DNB explicitly highlights the necessity for insurers to control risks arising from AI systems provided by external vendors. This includes conducting thorough due diligence and incorporating contractual provisions addressing data quality, performance metrics, audit rights, and regulatory compliance.

Finally, insurers should maintain active engagement with regulators and industry initiatives as AI oversight continues to evolve. DNB has indicated it will conduct more in-depth, risk-based examinations of certain insurers' AI implementations during the second half of 2025, suggesting that regulatory expectations may continue to develop.

Embracing AI with responsibility

DNB's 2025 key considerations, together with the EU AI Act and the anticipated EIOPA guidance, clearly signal that the era of "light-touch" experimentation with AI in insurance is drawing to a close. Moving forward, Dutch insurers will be expected to apply the same level of rigor to AI model governance as to any other significant risk or compliance matter.

By strengthening AI oversight proactively – through comprehensive strategies, detailed policies, and organizational cultures emphasizing ethical AI use – insurers can not only satisfy regulatory requirements but also position themselves advantageously in an increasingly AI-driven marketplace. As DNB aptly observes, sound and ethical AI practices will be essential for maintaining public confidence and operational stability in the insurance sector amidst accelerating technological transformation. Those insurers who align proactively with the new guidance will be optimally positioned to harness AI's benefits "safely" – fulfilling the very promise embedded in the regulatory acronym that DNB has established as its supervisory standard.