Because the potential AI market is so large, and the financial threat of losses so huge, this promises to be a very profitable business for insurers, who will limit their own liability while charging large fees for the coverage they offer. Corporate executives and boards will gladly pay those fees to cap at least some of their financial exposure. JL
Belle Lin reports in the Wall Street Journal:
The many ways a generative artificial intelligence project can go off the rails pose an opportunity for insurance companies, even as those grim scenarios keep business technology executives up at night.
Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection against models that fail.
Corporate technology leaders say such policies could help them address risk-management concerns from board members, chief executives and legal departments.
“You will find more and more people starting to ask, ‘Who takes the risk? How do you fund it? And can you take care of some of the risk for us?’” said Niranjan Ramsunder, chief technology officer and head of data services at digital technology and information-technology services firm UST.
Although it’s still early days, analysts say there is appetite for AI insurance, and major carriers could offer specialized coverage for financial losses stemming from AI and generative AI—a technology still in its early stage of adoption across businesses. Existing liability or cybersecurity policies could also soon be amended for generative AI, though there isn’t yet a clear-cut example of generative AI causing data leakage, for instance, resulting in damages to a business.
Business risks associated with generative AI include everything from cybersecurity issues to the potential for copyright infringement, inaccurate or biased outputs, misinformation and the leaking of proprietary company data.
“I would bet that over fifty percent of large enterprises would buy some of these insurance policies if they come out, and they make sense,” said Avivah Litan, a Gartner analyst who focuses on AI trust, risk and security.
Munich Re, which offers an insurance policy for companies selling AI services, launched its coverage in 2018, said Michael Berger, head of the German reinsurer’s Insure AI product. It also insures enterprises developing their own AI models by covering financial losses if their homegrown models make a mistake that a human wouldn’t have, for instance.
Armilla Assurance, a Toronto-based startup launched this year, offers what it calls a product warranty, backed by reinsurers including Swiss Re and Chaucer, that AI models will work the way their sellers promise.
Recognizing concerns that businesses may have with embedding generative AI into operations, vendors including IBM, Adobe and Microsoft are offering other ways of managing its risks.
IBM last week said its standard contractual intellectual property protections will apply to the generative AI models it has developed. Adobe in June said that businesses can purchase IP indemnification from the software company for generative-AI-created content on its Firefly platform.
In September, Microsoft announced a commitment to defend and pay for lawsuits stemming from a customer’s use of its generative-AI-based Copilot tools. The company said customers must be using its built-in guardrails, which aim to filter out copyrighted content.
The potential for copyright infringement from tapping large language models is a major hurdle for businesses, analysts say, putting vendors in the hot seat to offer customers legal financial backing if needed. Microsoft, OpenAI and other vendors have been sued for violating internet users’ privacy rights and copyrights.
Thomas Dohmke, CEO of Microsoft-owned GitHub, said the new commitment has unlocked deals. Technical and product teams want to use GitHub’s Copilot generative AI coding assistant, and Microsoft’s financial commitment to defend its customers provides a way to help get their legal departments onboard, he said.
The recent development of cyber insurance offers some lessons for AI. Cyber insurers stepped up scrutiny of policyholders’ security arrangements during the pandemic, resulting in more expensive policies and coverage denials. Then, amid a rise in costly hacks, insurers increased premiums and pared back what their policies cover. AI coverage policies could follow a similar path, analysts say, as underwriting evolves and insurers start to pay out costly claims.
There are plenty of other challenges, too. Without historical data about an AI model’s use in business and how it performs, it is hard for insurers to assess risk. Generative AI models are also changing so quickly that risk-assessment methods will need to be dynamic as well.
So far, Armilla Assurance, Swiss Re and Munich Re are relying on their own AI expertise and proprietary assessment frameworks to price out risk.
Armilla Assurance evaluates the risk of a given AI model by looking at a combination of eight factors including training data, who built it, how it performs in testing and how the customer uses the model. That determines the risk—and insurability—of the customer and its use of an AI model, said Karthik Ramakrishnan, the startup’s co-founder and CEO. So far, it is starting to test some generative AI models in addition to other forms of AI, he said.
If the model fails, Armilla Assurance reimburses the customer for up to the amount that they paid for licensing fees to the AI vendor, it said. The startup collects a premium consisting of a percentage of those licensing fees, which varies depending on the risk and complexity of the model.
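As a rough sketch of the warranty structure described above, and not Armilla Assurance’s actual terms, the illustration below caps the payout at the licensing fees the customer paid and computes the premium as a risk-dependent percentage of those fees. Every function name, rate and number here is an assumption made purely for illustration.

```python
# Hypothetical sketch of a licensing-fee-backed AI warranty.
# All rates, scores and names are illustrative assumptions, not real policy terms.

def max_payout(licensing_fees: float) -> float:
    """Reimbursement is capped at what the customer paid the AI vendor in licensing fees."""
    return licensing_fees

def annual_premium(licensing_fees: float, risk_score: float) -> float:
    """Premium as a percentage of licensing fees, scaled by an assumed risk score in [0, 1].

    The 2%-10% band below is a made-up range purely for illustration.
    """
    base_rate, max_rate = 0.02, 0.10
    rate = base_rate + (max_rate - base_rate) * risk_score
    return licensing_fees * rate

if __name__ == "__main__":
    fees = 250_000.0  # assumed annual licensing fees paid to the AI vendor
    risk = 0.4        # assumed composite score from a multi-factor model review
    print(f"Max payout if the model fails: ${max_payout(fees):,.0f}")
    print(f"Premium at risk score {risk}: ${annual_premium(fees, risk):,.0f}")
```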
Jerry Gupta, Swiss Re’s senior vice president of property and casualty research and development, said its partnership with Armilla Assurance, which focuses on model accuracy, is the first of its AI-related insurance products, and future offerings could be designed to address more complex issues like bias, copyright and data privacy. “As we learn, as we get more data, then we’ll figure out what the next steps are,” he said.
Munich Re prices the risks of AI models using an in-house team of research scientists, Berger said. “The pricing task is to find a reliable statistical estimator for the uncertainty of the respective AI model on new and unseen data.”
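Berger does not detail Munich Re’s methodology, but one common way to bound a model’s error rate on new and unseen data is a confidence interval computed from held-out test results. The sketch below uses a Wilson score upper bound with made-up numbers, solely to illustrate what such a statistical estimator can look like; it is not Munich Re’s pricing model.

```python
import math

def wilson_upper_bound(errors: int, n: int, z: float = 1.96) -> float:
    """Upper end of the Wilson score interval for the true error rate,
    given `errors` mistakes observed on `n` held-out examples."""
    if n == 0:
        raise ValueError("need at least one held-out example")
    p_hat = errors / n
    denom = 1 + z * z / n
    center = p_hat + z * z / (2 * n)
    spread = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (center + spread) / denom

# Example: 12 errors observed on 1,000 unseen test cases (made-up numbers).
print(f"Estimated worst-case error rate: {wilson_upper_bound(12, 1000):.3%}")
```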
Assessing generative AI risk compared with other forms of AI requires a special set of considerations, including accounting for text-based prompts, which induce “more variability in performance” and intellectual property infringement risks, Berger said. There will also be a need for additional insurance solutions that cover AI risks like discrimination, he said.
Analysts, technology leaders and insurers say that insurance policies, or some kind of financial protection covering potential losses from the use of generative AI models, could become table stakes in the next few years as companies increasingly use AI in the course of daily business.
The opportunity for insurers could be huge over the next decade. Researchers from McKinsey estimate that generative AI could add trillions of dollars a year to global economic output, and that will lead to questions over how to manage its risks. Most insurers are thinking about how to capture that opportunity, said Ellen Carney, a Forrester analyst who covers insurance.
“This is going to be a given in insurance companies’ product set, even for small businesses, even for other insurance companies,” Carney said.
To be sure, business technology leaders are hesitant to rely solely on insurance as a means of managing AI risk. Part of the appeal of an insurance policy is that it offers a way of passing on that risk to someone else, UST’s Ramsunder said, but it is one of many risk-management strategies to rely on.
Just as security experts caution against using cyber insurance as a substitute for good cybersecurity practices, companies should build in guardrails to protect against data leakage, for instance, and use other security tools and technologies as a first line of defense.
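As one minimal, hypothetical example of such a guardrail, the sketch below screens outbound prompts for obviously sensitive strings before they reach an external generative AI service. The patterns and the send_to_model helper are illustrative assumptions, not any vendor’s actual tooling, and a real deployment would rely on dedicated data-loss-prevention controls.

```python
import re

# Illustrative guardrail: block prompts that appear to contain sensitive data
# before they are sent to an external generative AI service.

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key marker": re.compile(r"\b(api[_-]?key|secret)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def send_to_model(prompt: str) -> str:
    """Placeholder for the real LLM client call (hypothetical)."""
    return f"(model response to: {prompt})"

def guarded_send(prompt: str) -> str:
    """Send the prompt only if no sensitive patterns are detected."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked, possible data leakage: {', '.join(findings)}")
    return send_to_model(prompt)

if __name__ == "__main__":
    print(guarded_send("Summarize our quarterly roadmap in two sentences."))
```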
“Insurance is the last mile once a disaster hits,” said Tim Armandpour, chief technology officer of digital operations company PagerDuty. “Companies need systems in place to issue a response to a technological problem quickly.”