NAIC Adopts Revised Model Bulletin on AI

On December 4, the NAIC adopted a revised version of the Model Bulletin on Use of Artificial Intelligence (AI) Systems by Insurers (attachment 15A at the link), after the Innovation, Cybersecurity, and Technology (H) Committee discussed and adopted the bulletin the preceding Friday. While the bulletin reflects the NAIC’s acknowledgment of the benefits of innovation for insurers and insureds, it focuses on protecting consumers from perceived risks. Under the bulletin, insurers that use AI systems, whether their own or those of third-party vendors, remain obligated to comply with applicable legal and regulatory standards, including unfair trade practices and unfair claims settlement laws, which require, at a minimum, that decisions made by insurers are not inaccurate, arbitrary, capricious or unfairly discriminatory. To enable insurers to meet those standards even when using AI systems, the bulletin sets forth an expectation that insurers will “develop, implement, and maintain a written program (an ‘AIS Program’)” and provides corresponding guidelines.

The adopted bulletin includes a number of changes from the discussion draft first made public in July. Most notably, the adopted bulletin no longer references or defines “big data” or includes it in the definition of “AI Systems.” While it removes the formal definition of “bias,” it adds a number of AI-specific definitions, including “Generative Artificial Intelligence,” “Model Drift” and “Predictive Model,” and expands its definition of AI.

Both industry and consumer representatives had expressed concern to the committee that the bulletin’s continued reference to “bias” was unconnected to existing statutes and regulations. The committee considered changing the reference to “unfair discrimination” or “statistical bias” but ultimately concluded that neither phrase captured the intended meaning and opted to keep the reference as is. The committee also acknowledged the challenges insurers face in working with third-party vendors and will establish a task force to investigate the issues.

As with the original draft, the revised bulletin sets forth adopting states’ expectations regarding how insurers will govern the development and use of AI, and advises insurers of the types of information and documentation that regulators may request during an investigation or examination. Specifically, the bulletin sets forth expectations for the development of a written AIS program for the responsible use of AI systems, designed to mitigate the risk of “Adverse Consumer Outcomes” (a defined term in the bulletin). The bulletin provides a framework for how the AIS program should address governance, risk management and internal controls, internal audit functions, and third-party AI systems and data, during all phases of implementing an AI system and in all aspects of its use across the insurance life cycle.

To prioritize transparency, fairness and accountability, principles that are trending globally in the design and implementation of AI systems, insurers are expected to adopt a governance framework for the oversight of the AI systems they use. Regulators may ask insurers for information and documentation relating to these AIS programs, including their implementation of and compliance with the programs. The bulletin also points to third-party AI systems and data as potential targets of regulator inquiry, including documentation concerning their validation, testing and auditing.

While insurers with concerns regarding the original draft’s inclusion of big data in AI systems may breathe a sigh of relief, significant concerns remain regarding how adopting states will implement expectations regarding third-party vendors, including the bulletin’s contemplation that insurers “[r]equire the third party to cooperate with the insurer with regard to regulatory inquiries and investigations related to the Insurer’s use of the third-party’s product or services.” Regulatory treatment of insurers’ use of third-party data and AI systems is of particular concern for smaller insurers, which lack the resources to build their own AI systems and more often rely on external sources of data. The bulletin leaves significant leeway for some states to place onerous burdens on insurers with respect to third-party contracts and regulator access to information, but it also gives states flexibility to allow the use of third-party vendors with more general oversight.

Insurers should also be mindful of how the bulletin compares to regulations and requirements already issued by various states. Connecticut’s notice concerning the “Usage of Big Data and Avoidance of Discriminatory Practices” explicitly addresses big data and requires that “all data used to build models or algorithms will be provided to the CID upon request,” while Colorado’s regulation 10-1-1, implementing Colorado Revised Statute § 10-3-1104.9, focuses more on documented controls and systems than on regulator access to data. New York, the District of Columbia and California have also addressed insurer use of AI and big data to varying degrees, and various states have enacted, or are in the process of enacting, laws addressing the corporate use of AI that will apply to insurers and other companies alike.

The model bulletin is drafted so that regulators can issue it without the formal rulemaking that accompanies the implementation of regulations. Notwithstanding its detail, the bulletin notably allows insurers flexibility to adopt alternative means of demonstrating compliance with applicable laws in their use of AI systems. At the very least, the bulletin conveys the NAIC’s expectations of insurers. Any insurer that has not done so already should assess how its existing enterprise risk management program addresses the use of AI systems and consider how to design its AIS program. It would also behoove insurers to review their ongoing and proposed use of third-party vendors that provide AI systems and data, as well as their contracts with such vendors, in order to, among other things, assess the insurers’ vendor due diligence processes and continued oversight and audit rights.

Time will tell how quickly, and to what extent, the bulletin will be adopted, but any insurer that has not already done so would be well advised to start evaluating its use of AI now, as the regulation of insurer use of AI, in some form, is here to stay.
