Artificial Intelligence in Cyber
Over the past year, all eyes have been on the impact of generative artificial intelligence (AI) on cyber insurance. Most immediately, the rise of generative AI will likely increase the frequency of cyber attacks. For example, there is major concern that phishing attacks will become far easier and more effective for determined hackers.
Generative AI can craft cunning messages without the grammatical flaws that characterize many current phishing attempts. Moreover, generative AI’s data mining capabilities will further amplify these attacks, as company-related information will become even easier to gather and exploit. Thus, expect more phishing attempts in the future. Generative AI’s ability to produce convincing deepfakes could also lead to a rise in social engineering attacks.
For instance, a realistic deepfake of a company’s CEO could be used to deceive an employee into initiating a fraudulent wire transfer. Notably, many of the safeguards that have been effective in the past may no longer be sufficient to counter AI-driven cyber attacks, underscoring the need for updated and robust security measures.
Hackers didn’t want to be left out of the AI-related hysteria, so they created a generative AI tool called WormGPT. It was a first attempt, and most reports suggest it has been a failure. However, hacker forums have carried advertisements seeking machine learning experts to develop better large language models for nefarious purposes. One constraint on hackers creating their own AI models is the need for high-powered computing and specialized Nvidia chips, which even companies like Google and Facebook are struggling to obtain. So it may be a while before these malicious models take off.
Ultimately, the most significant impact of generative AI may be a surge in cyber attacks. In light of this uncertain future, companies should reevaluate their current insurance limits to determine whether existing coverage will be sufficient in the face of a potential increase in claims.
AI Coverages?
Does AI pose any unique coverage-related risks under a cyber policy beyond an increase in claims? At present, most insurers do not appear to believe that there are any specific coverage issues caused by AI. In The Betterley Report’s “Cyber/Privacy Market Survey 2024,” the top cyber insurers were asked the following questions:
- Any specific coverages related to AI?
- Any definitions that relate to AI?
- Any exclusions related to AI?
- Any risk management services provided to insureds that are related to AI exposures?
Except for one insurer, all either answered these questions “No” or provided no response. Thus, the majority of insurers are simply keeping an eye on AI rather than directly modifying coverage under their policies.
An insurer in the survey, Coalition, took a significant step: It added affirmative AI language to its cyber policy. Specifically, Coalition included an AI security event in its definition of “Security Failure.” It also incorporated fraudulent instructions using deepfakes into its definition of funds transfer fraud.
Not to be outdone, another insurer, Districts Mutual Insurance, filed an endorsement titled Amend Definition of Fraudulent Instruction (Artificial Intelligence). That endorsement states:
The definition of Fraudulent Instruction is deleted in the entirety and replaced with the following:
Fraudulent Instruction means the transfer, payment or delivery of Money or Securities by an Insured as a result of fraudulent written, electronic, telegraphic, cable, teletype or telephone instructions provided by a third party, including any fraudulent instructions resulting from the use of deep fake technology, synthetic media, or any other technology enabled by the use of artificial intelligence, that are intended to mislead an Insured through the misrepresentation of a material fact which is relied upon in good faith by such Insured.
Fraudulent Instruction will not include loss arising out of:
- any actual or alleged use of credit, debit, charge, access, convenience, customer identification or other cards;
- any transfer involving a third party who is not a natural person Insured, but had authorized access to the Insured’s authentication mechanism;
- the processing of, or the failure to process, credit, check, debit, personal identification number debit, electronic benefit transfers or mobile payments for merchant accounts;
- accounting or arithmetical errors or omissions, or the failure, malfunction, inadequacy or illegitimacy of any product or service;
- any liability to any third party, or any indirect or consequential loss of any kind;
- any legal costs or legal expenses; or
- proving or establishing the existence of Fraudulent Instruction.
Source: Districts Mutual Insurance, Amend Definition of Fraudulent Instruction (Artificial Intelligence) (DMI-BR E16416 5-24).
It will be interesting to see whether other insurers add similar language to their cyber forms.