Generative AI has already triggered several events that could point to increased claims for insurers.

The surge in use of generative AI could amplify insurers' existing exposures in lines such as media liability, professional liability and cyber.
The technology, which synthesizes novel content from vast databases of existing material in response to human prompts, poses some risks likely to be covered under existing policies. But the rapid pace of generative AI's development means the status quo will not hold for long, experts told S&P Global Market Intelligence.
A future where generative AI-related intellectual property, technology and even terrorism risks are covered under a specific policy is foreseeable, according to George Beattie, head of innovation at specialty underwriting agency and insurer CFC Underwriting Ltd. "I think these risks will become so dynamic that the ability of the market to digest them in existing covers will become more difficult," he said in an interview.
Generating risk
Generative AI burst onto the world stage a little over a year ago when research company OpenAI released its ChatGPT chatbot interface, and it has already triggered several events that could point to increased claims for insurers. There has been a string of generative AI-related copyright infringement suits, such as a complaint filed in September by a group of well-known authors against OpenAI, alleging that the company used their works to train its large language models.
The technology's ability to write computer code, automate processes and imitate humans could allow cybercriminals to mount an increasing number and variety of attacks, potentially inflating claims for cyber insurers.
So far, the industry has not imposed any generative AI-specific restrictions or exclusions to brace for these potential claims, but insurers are keeping a close eye on developments.
"We're definitely insuring the effects of AI now," Peter Hedberg, vice president of cyber underwriting at cyber underwriting agency Corvus Insurance Holdings Inc., said in an interview. "I don't know what our prescriptive policy response is going to be yet, but I anticipate the next 12 months are going to be very instructive for how we treat it going forward."
There are some efforts to adapt insurance policies designed for traditional AI to the generative AI world. Munich Re's Insure AI product suite has offered coverage since 2018 for financial losses stemming from underperformance of traditional AI models. Its two products — aiSure for developers of AI tools and aiSelf for users — would also cover financial losses arising from generative AI models not performing as expected, such as when they present incorrect information as fact, according to Michael Berger, head of Insure AI at Munich Re.
The company is working with its clients to expand the products' coverage to other areas, such as the copyright infringement risks associated with generative AI, as well as the risk of generative AI output discriminating against certain groups, Berger said in an interview. AI image generators, for example, have produced output displaying racial and cultural stereotypes, such as viral news firm BuzzFeed's now-deleted post featuring AI-generated images of what Barbie would look like in different countries.
Further change could come when claims start to emerge. CFC's current products can serve AI developers and users in the vast majority of cases, according to Beattie.
"[But] as actors, bad actors particularly, start to use these tools, we will have to adapt our position," Beattie said. He added: "The industry will adapt very quickly based on demand from new types of companies. The sector will respond even faster if losses feed through that we haven't seen before."
The industry faces a "transformation process" of determining whether it will cover generative AI risks in traditional policies as standard, through policy amendments with an additional premium, or under specialist stand-alone policies, Berger said. "There needs to be more understanding and more development and more clarity around those three buckets," he said, adding that he expected the industry to evolve toward that clarity.
Call for calm
The industry should exercise restraint when considering generative AI exclusions and stand-alone cover, according to Gregory Eskins, global cyber product leader at Marsh LLC.
"We need to take a very thoughtful and methodical approach to underwriting, pricing and developing new products," Eskins said in an interview. "We don't want to jump the gun on this because existing products can still be fit for purpose."
It is "absolutely appropriate" for cyber underwriters, for example, to start asking more questions about how clients are using and controlling generative AI and pricing for any increased risk, Eskins said. But he added: "The wholesale exclusion because a client might utilize this type of technology, we think, would be an inappropriate way to approach this."
Novel risks not covered by existing policies are more likely to emerge, Eskins said, when generative AI starts to converge with other new technologies. "At that point there may absolutely be a requirement for new products within the insurance marketplace," he said.
Beattie predicted that risks different enough to warrant a separate industry subsector would emerge in five to 10 years when AI is capable of adaptive, human-level tasks rather than linear ones.
In the meantime, insurers have some protection against generative AI-fueled claims increases. If generative AI fuels an increase in cyber claims, insurers can demand that policyholders install better software security and ransomware detection, according to Hedberg. "I don't know if I'm going to address it specifically via policy language, but underwriting is definitely going to start taking it into consideration," he said.