AI-Driven Cybercrime: Threats and Insurance Implications
Explore how AI-driven cybercrime is reshaping risk management and insurance. From deepfakes to generative phishing and adaptive malware, learn about emerging threats, real-world cases, and the insurance implications for underwriting, coverage, and claims.
1️⃣ Deepfakes, generative phishing, adaptive malware, and AI-aided credential attacks make scams far more convincing, scalable, and harder to detect.
2️⃣ From multimillion-dollar voice deepfake heists to AI-generated claims fraud, criminals are using AI now, not just hypothetically.
3️⃣ Some policies explicitly cover AI-driven fraud, while others remain ambiguous. Broad AI exclusions are impractical since proving AI involvement is nearly impossible.
4️⃣ Deepfake evidence and AI scams complicate verification, forcing insurers to invest in AI-powered fraud detection and investigative units.
5️⃣ Strong verification protocols, AI-enabled defenses, scenario planning, and coverage reviews are critical to mitigating AI-driven cybercrime exposure.
Just as organizations use AI to innovate, cybercriminals are leveraging AI to enhance the scale, speed, and sophistication of their attacks. From deepfake videos that can impersonate CEOs or loved ones to AI-generated phishing campaigns and malware that adapts on the fly, AI-driven cybercrime is making attacks harder to detect and prevent. AI enables threat actors to create highly personalized, convincing scams – even mimicking voices or videos to trick employees into wiring funds or divulging credentials. At the same time, AI has lowered the barrier to entry for less-skilled hackers, automating tasks that previously required substantial expertise. This report explores emerging AI-enabled threats, highlights recent incidents, and discusses how these trends are reshaping cyber risk models, insurance underwriting, claims, and regulation.
Emerging AI-Enabled Cyber Threats
AI is driving both new and familiar forms of cyberattacks. Below are some of the most significant AI-driven threats that have emerged, often amplifying traditional tactics with greater potency and realism:
Deepfakes and Synthetic Media Fraud
Deepfake technology uses AI to create hyper-realistic fake audio, video, or images that impersonate real people. Criminals are exploiting deepfakes to impersonate executives, officials, or family members in order to deceive targets. For example, threat actors can clone a CEO’s voice or face to authorize fraudulent fund transfers or to bypass biometric security checks. Such AI-generated audio/video is increasingly convincing – one survey found that in 2024 fully half of businesses worldwide experienced fraud involving AI-altered audio or video. Deepfakes have been used in schemes ranging from business email compromise (augmented with a phone call from a “deepfake CEO”) to fake kidnapping and extortion calls targeting individuals. The realism of deepfakes erodes trust in digital communications and makes verification challenging. Notably, only 12% of people feel confident they can recognize a deepfake, underscoring how easily this AI trickery can be weaponized.
Generative Phishing and Social Engineering at Scale
AI language models (like GPT) enable “generative phishing” – automated production of highly convincing phishing emails, texts, or chats. Rather than clumsy spam with poor grammar, attackers use AI to craft polished, personalized messages that mirror a victim’s writing style or knowledge base. Attackers can feed scraped social media and company information into large language models to tailor phishing lures to each target, making social engineering more believable than ever. This has led to an explosion in phishing volume: one analysis found that phishing email volumes have skyrocketed by 856% in recent years with the help of AI text generators. AI chatbots can even conduct interactive scams, impersonating customer support or a colleague in real-time chat. The FBI warns that generative AI drastically reduces the time and effort needed to craft deceitful content, allowing criminals to reach a wider audience with very convincing messages. In short, AI lets “phishing-as-a-service” operate at industrial scale.
AI-Powered Malware and Autonomous Attacks
Beyond social engineering, AI is also being used to enhance malware. “AI-powered malware” can refer to malicious code that adapts intelligently to evade detection or optimize its attack strategy. For instance, attackers can use AI to automatically scan a breached network, identify high-value data, and adjust malware behavior on the fly. Security researchers have demonstrated proof-of-concept malware that generates new code autonomously at runtime to defeat endpoint defenses (e.g. the BlackMamba polymorphic keylogger, which employed an AI model to continually morph its code). AI can also rapidly create new malware variants or mutate ransomware payloads to bypass signature-based antivirus controls. While fully autonomous “self-driving” malware is still mostly theoretical, the trend is toward malware that uses AI to reason and adapt, making it stealthier. Notably, cyber experts observe that so far these AI-enhanced malware techniques have not achieved capabilities beyond what skilled human hackers can do. However, the potential for AI to increase the speed and scale of attacks is real – especially as models improve. Even today, AI tools are lowering the skill barrier for malware creation by helping novice attackers write functional malicious code or find exploits.
AI-Augmented Credential Cracking and Exploitation
AI is also supercharging attacks such as password cracking, credential stuffing, and vulnerability discovery. Machine learning models can be trained on millions of known passwords to better predict likely password patterns, aiding in faster cracking of hashes or login credentials. Likewise, bots powered by AI can intelligently automate credential stuffing (trying stolen username/password combos on many sites) in ways that mimic human behavior to evade detection. According to Allianz, AI has “revolutionized traditional attack vectors like password cracking and DDoS attacks by making them more precise and effective,” with AI systems able to identify vulnerabilities and optimize attack timing for maximum impact. AI is also being used to defeat CAPTCHAs and other anti-bot challenges with computer vision models. In short, tasks that once slowed attackers (guessing passwords, finding weak spots) can now be accelerated with AI, increasing the success rate of attacks like credential abuse and network intrusion.
Attacks on AI Systems (Model Inversion and More)
Not only do criminals use AI as an attack tool – they also target the AI systems used by businesses. One emerging risk is model inversion attacks, where an adversary exploits an AI model to infer or extract sensitive data that was used in training. For example, a hacker might query a machine learning API (say, a facial recognition or medical AI service) and reconstruct private information about individuals in the training set. This is essentially an AI-specific privacy breach. Similarly, prompt injection attacks can manipulate AI chatbots or assistants by feeding malicious input that causes them to reveal confidential info or perform unintended actions. If a company deploys an AI chatbot connected to internal data, an attacker could craft prompts to make it spill proprietary data or customer PII – effectively turning the AI against its owner. We have already seen incidents of AI chatbots being tricked into outputting confidential records. These AI-specific threats mean that the AI tools businesses adopt can themselves become new attack surfaces. In essence, as organizations embrace AI for efficiency, they must also guard against exploits like data poisoning (feeding corrupt data to distort an AI’s decisions) or model theft (stealing a proprietary model to reuse it maliciously). Criminals are keenly interested in hacking AI systems themselves, especially if it grants access to large troves of sensitive information.
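To make the defensive side of this concrete, below is a minimal, hypothetical sketch of an output guard for an internal chatbot: before a reply leaves the system, it is scanned for sensitive-looking patterns so that a successful prompt injection cannot simply exfiltrate customer data in the response. The patterns and function names are illustrative assumptions, not a production control; real deployments layer this with retrieval-level access controls, policy engines, and monitoring.

```python
import re

# Patterns for data an internal chatbot should never echo back to a user.
# Illustrative only; real deployments combine pattern matching with
# access controls at the retrieval layer and human review.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def guard_response(model_output: str) -> str:
    """Redact sensitive-looking strings before the chatbot reply leaves the system."""
    cleaned = model_output
    for label, pattern in SENSITIVE_PATTERNS.items():
        cleaned = pattern.sub(f"[REDACTED {label.upper()}]", cleaned)
    return cleaned

if __name__ == "__main__":
    reply = "Sure! The customer's SSN is 123-45-6789 and the key is sk-abcdef1234567890XYZ."
    print(guard_response(reply))
    # -> "Sure! The customer's SSN is [REDACTED SSN] and the key is [REDACTED API_KEY]."
```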
Real-World Incidents Involving AI-Enabled Cybercrime
While some AI threats remain theoretical, many have already materialized in real incidents. A growing number of crimes illustrate how AI tools are being used in the wild:
Deepfake Voice Heists
One of the earliest high-profile cases was in 2019, when criminals cloned a corporate executive’s voice to trick a UK energy firm into transferring $243,000 to a fraudulent account. Since then, such voice impersonation scams have multiplied. In 2020, criminals similarly used AI voice cloning in an elaborate scheme that defrauded a bank in the United Arab Emirates of $35 million – fooling a bank manager over the phone into authorizing transfers for what he believed was a legitimate client transaction. More recently, police and the FBI have warned of “kidnapping” scams where a parent gets a call with their child’s exact voice crying for help – all generated by AI from a few seconds of online audio. These incidents show the devastating realism that AI voice synthesis has achieved, enabling scams that were impossible just a few years ago. Even when victims are wary, the emotional manipulation of hearing a loved one’s voice can override skepticism – making voice deepfakes a potent weapon for extortion and fraud.
AI-Generated Phishing and Fraud Campaigns
In 2023–2024, cybersecurity firms observed a sharp uptick in phishing campaigns and online fraud schemes turbocharged by generative AI. For example, Trend Micro reported an accelerating “arms race” of criminals creating custom AI chatbots and large language models for illicit use. On dark web forums and Telegram channels, offerings like “WormGPT” and “FraudGPT” emerged – essentially ChatGPT clones with no ethical safeguards, advertised to write malware or phishing emails on demand. Researchers found that even after an early version of WormGPT was supposedly shut down in 2023, it resurfaced on underground markets, evolving with voice-enabled features and spawning copycats (DarkBERT, DarkGemini, etc.). These criminal AI tools are sold with promises of “unfiltered” malicious capabilities and guaranteed anonymity. The result has been more frequent and convincing fraud schemes. In one case, scammers used AI chatbots on fake investment websites to engage victims in real-time and convince them to part with money. Other fraudsters use AI to mass-produce fake social media profiles (including profile pictures that are AI-generated faces) to conduct romance scams or “pig butchering” crypto scams at scale. Law enforcement agencies worldwide have noted that AI is enabling crime at an industrial scale, as seen in global busts of fraud rings that leveraged AI for widespread scams.
AI-Evading Malware and Attacks
There have been reports of malware in the wild employing AI techniques to thwart security. For instance, in 2023 Microsoft detailed a malware strain that used an AI model to decide when to activate or hide itself based on user behavior, making it harder to catch (an example of AI-powered evasion). And while not all claims are verified, multiple hacking groups have boasted about using ChatGPT to assist in writing ransomware code or finding zero-day exploits more quickly. In the ransomware arena, criminals are already using AI tools to analyze stolen data faster. The UK’s National Cyber Security Centre (NCSC) assessed that AI will “heighten the global ransomware threat,” enabling attackers to parse exfiltrated data more efficiently and identify the most valuable files to extort. There is also concern that AI could help select optimal ransom amounts or craft more coercive extortion messages tailored to victims’ profiles. So far, no completely autonomous “AI worm” has caused damage in the wild, but these trends point to increasingly smart malware operations. Even mid-tier cybercriminals can now leverage AI-based services (for example, to bypass voice authentication or to quickly repackage known malware with minor tweaks), which speeds up the attack cycle.
Insurance Fraud and Deepfake Claims
AI’s misuse isn’t limited to hacking and scams; it’s also creeping into fraudulent insurance claims. In the UK and elsewhere, insurers have seen an increase in deepfake or digitally altered evidence submitted for claims. For example, a claimant might use a deepfake editing app to concoct video footage of a staged car accident or alter receipts and documents – all to bolster a false claim. According to Swiss Re, insurers report a rising use of deepfakes in claims fraud, especially for low-value, high-volume claims that fraudsters try to slip through. There are documented cases of people using AI tools to manipulate photos of vehicle damage or even create bogus medical images to support health insurance claims. These incidents force insurers to invest in new fraud detection measures (like deepfake forensics) and demonstrate that AI-enabled crime touches every corner of the risk landscape – including the claims process itself.
From millions stolen via voice spoofing to fraudulent claims flooding insurers, the impact is already being felt. Both businesses and individuals have suffered losses, and the frequency of AI-enabled incidents is poised to grow. In fact, Lloyd’s of London analysts predict that the frequency and severity of AI-assisted cyber attacks will increase in the next 1–2 years before defenders catch up, potentially leading to a surge in claims and losses in the near term.
Implications for the Insurance Industry
AI-driven cyber threats are reshaping how insurers evaluate risk, structure coverage, and handle claims. The evolving threat landscape is forcing the industry to adapt on multiple fronts – from underwriting practices to policy language, claims handling, and even the regulatory environment.
Cyber Risk Modeling and Underwriting
Underwriters are beginning to factor AI-enabled cyber risks into their models and pricing. As generative AI lowers the cost of launching attacks, even smaller organizations face increased threat exposure. Insurers historically focused cyber coverage on larger enterprises or certain sectors, but now AI tools let attackers target many victims (including small businesses and individuals) with ease. This democratization of cyber risk means underwriters must revisit assumptions about frequency and severity. For instance, if convincing deepfake scams can hit anyone, the likelihood of social engineering losses goes up across the board. Industry observers indeed anticipate an uptick in the volume of successful attacks – and thus claims – due to AI, expanding insurers’ exposure.
Insurers are responding by collecting more information on an insured’s controls against AI-based threats. Underwriting questionnaires may ask businesses about procedures for verifying wire transfer requests (e.g. using code words or call-backs to prevent voice fraud) and about employee training to recognize deepfake or AI-generated scams. Some cyber insurers now explicitly require robust social engineering fraud controls, given the rise of AI-enabled phishing. There is also movement toward scenario analysis: insurers model worst-case AI-driven events (like a deepfake-enabled CEO fraud combined with network intrusion) to evaluate portfolio risk.
On the flip side, insurers themselves are deploying AI for underwriting advantages. With AI-powered analytics, insurers can better profile risks in real time and detect emerging threats. For example, AI may be used to scan an applicant’s external digital footprint for signs of vulnerability (like exposed credentials or risky use of AI bots) and adjust premiums accordingly. More forward-looking carriers are developing predictive models for AI-related risks, helping identify which insureds are most exposed to things like deepfake fraud. According to Allianz, “predictive modeling for emerging AI-related risks will enhance underwriting processes” by allowing more accurate pricing of these novel exposures. In short, underwriting is becoming a more dynamic discipline: insurers must continuously update risk criteria as AI threats evolve, while also harnessing AI to refine their own risk assessments.
Policy Coverage and Evolving Language
AI threats have exposed gaps and ambiguities in traditional insurance coverage, prompting the need for updated policy language. Many existing cyber insurance policies were written before deepfakes and generative AI were on the radar; as a result, coverage for such scenarios might be implicit or unclear. This is changing quickly. Insurers are now expanding cyber policies to explicitly cover AI-specific perils like deepfake extortion, AI-assisted fraud, or liability arising from the failure of an AI system. For example, a cyber policy might explicitly define a “social engineering attack” to include fraudulent instructions delivered via audio or video impersonation, ensuring that deepfake voice scams are covered losses. Similarly, some carriers offer coverage endorsements for “computer fraud including fraudulent impersonation” to address this new risk. Allianz Commercial notes that going forward, traditional cyber insurance may be broadened to cover “AI-powered social engineering” and deepfake incidents as standard.
At the same time, insurers are cautious about unbounded exposure to AI risks. There have been early signs of AI exclusions being drafted – a “knee-jerk reaction” to uncertainty around AI-related losses. For instance, an insurer worried about an avalanche of deepfake fraud claims might consider an exclusion for losses “caused by deepfake technology” or require policyholders to opt in for such coverage at additional premium. Thus far, blanket AI exclusions have not become common in the market (insurers realize excluding too broadly would gut the value of cyber coverage as AI use grows). Indeed, as one industry legal analysis points out, if AI tools became pervasive in attacks, excluding them would mean excluding a large swath of claims, which is not a commercially viable strategy. Moreover, enforcing an AI exclusion would be tricky – it would require proving that the attacker used AI, which, as experts note, is very difficult with current forensics. A criminal isn’t going to announce “this phishing email was AI-generated,” and metadata that betrays an AI’s involvement can easily be stripped away. Given these challenges, most insurers are instead taking a measured approach: clarifying coverage rather than excluding.
We are also seeing new insurance products being developed for AI liabilities. Beyond cybercrime, companies worry about the risks of deploying AI systems that might err. Insurers are exploring policies to cover liability from AI system failures or algorithmic errors (for example, if a software company’s AI causes client financial loss, or a self-driving AI causes accidents). These would be analogous to tech E&O or product liability, tailored to AI. Additionally, the concept of “AI insurance” is emerging – policies that might insure an AI model itself against tampering, adversarial attacks, or intellectual property theft. While still nascent, the insurance industry is clearly pivoting to address AI both as a peril and as something to be protected in its own right. Expect policy wordings to continue evolving rapidly: terms like “deepfake,” “synthetic media,” or “malicious AI code” may soon appear in definitions of covered perils or exclusions, where two years ago they were absent.
Claims Handling and Fraud Detection
Just as AI is aiding cybercriminals, it’s also becoming a tool for insurers in claims and fraud management. AI-enhanced claims handling can help insurers respond faster to cyber incidents. For example, an insurer might use AI algorithms to analyze incident details and estimate losses more quickly, or even automate elements of the claims adjudication process for common events. With cyber claims volume increasing, such efficiency is valuable. Allianz has highlighted that AI-driven automation can streamline claims processing, enabling faster and more accurate assessments. In practical terms, this could mean using AI to triage claims (flagging which ones might be severe breaches versus minor), or employing machine learning to analyze forensic reports and recommend appropriate payouts.
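To make the triage idea concrete, here is a minimal, hypothetical sketch: a toy classifier scores incoming cyber claims on a few made-up severity features so the worst ones are routed to senior adjusters first. The features, training data, and threshold are illustrative assumptions, not any insurer’s actual model.

```python
# Minimal claims-triage sketch: score incoming cyber claims so severe ones
# are routed to senior adjusters first. Features, data, and thresholds are
# hypothetical; a production model would be trained on historical claims.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical claims: [records_exposed (log10), business_interruption_hours,
# ransom_demanded_flag]; label 1 = severe loss.
X_train = np.array([
    [2.0,   4, 0],
    [5.5,  72, 1],
    [3.0,   8, 0],
    [6.0, 120, 1],
    [1.0,   2, 0],
    [4.8,  48, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def triage(claim_features):
    """Return a severity score in [0, 1] and a routing decision."""
    score = model.predict_proba([claim_features])[0, 1]
    route = "senior adjuster + breach counsel" if score > 0.5 else "standard queue"
    return score, route

score, route = triage([5.2, 96, 1])
print(f"severity score: {score:.2f} -> {route}")
```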
Fraud detection is an area where insurers are aggressively deploying AI – because they have to. The same technologies enabling deepfake claims fraud can be used to catch it. Insurers are now using AI-based image forensics to detect signs of manipulation in videos or photos submitted with claims. For instance, AI can help spot the subtle artifacts or anomalies in deepfake media (such as irregular facial blinks or digital artifacts) that human adjusters might miss. Industry analyses suggest AI-based systems have improved detection of fraudulent claims by more than 40%. This is crucial, as fraudulent claims not only cost money but also undermine trust.
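One of the simpler forensic checks behind such tooling is error level analysis (ELA): re-save a submitted JPEG at a known quality and look at how unevenly different regions recompress, since areas edited after the original save often stand out. The sketch below, using the Pillow library, illustrates the idea; the filename and flag threshold are assumptions, and ELA is a screening heuristic rather than proof of manipulation.

```python
# Error Level Analysis (ELA), a simple image-forensics check a fraud team
# might run on a submitted photo. Regions edited after the original JPEG was
# saved often recompress differently and stand out in the difference image.
from PIL import Image, ImageChops
import io

def ela_score(path: str, quality: int = 90) -> float:
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality and diff against the original.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    resaved = Image.open(buffer)

    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()           # per-channel (min, max) error levels
    return max(channel_max for _, channel_max in extrema)

if __name__ == "__main__":
    score = ela_score("claim_photo.jpg")  # hypothetical filename
    print("max error level:", score)
    if score > 60:                        # illustrative threshold only
        print("flag for manual forensic review")
```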
However, AI in claims handling is not without challenges. Adjusters must be trained to understand AI-generated evidence and not be fooled by it. In one reported case, an insurer initially approved a personal injury claim supported by photos that were later found to be AI-faked – the payout was reversed, but only after a costly investigation. To avoid such scenarios, insurers are investing in training and new protocols. Some have created special investigative units focused on AI-related fraud, bringing together data scientists and claims experts to handle suspicious cases involving deepfakes or AI-generated documents.
Claims teams are also grappling with coverage interpretation for AI incidents. Does a policy’s definition of “computer fraud” include an AI-generated voice tricking an employee? Most likely yes, but these are new scenarios to test policy language. As noted, if insurers were to exclude AI-caused losses, they would then face the burden of proving AI was involved to deny a claim – which is currently very hard. Thus, many insurers are choosing to cover these losses and treat them as just another form of cyber incident or fraud, rather than fight unwinnable attribution battles.
Overall, AI creates new headaches in verifying truth, but also provides powerful means to detect lies. For insurance carriers, staying ahead will require continual investment in AI-driven fraud defense and perhaps collaboration across the industry to share deepfake detection intelligence. The operational costs for insurers are likely to rise, as they must deploy more advanced tools and expert personnel to keep pace with increasingly sophisticated, AI-enabled fraud tactics.
Regulatory and Legal Developments
Regulators worldwide are waking up to AI’s role in cyber risks, and this has several implications for insurers and policyholders. In the financial sector, regulators like the New York DFS have issued guidance on managing AI-related cyber risks. DFS’s 2024 industry letter, for instance, urges banks and insurers to assess how AI could amplify threats and to incorporate those scenarios into their cybersecurity programs. This means that under regulations (like NY’s Part 500 cybersecurity rules), companies may need to demonstrate controls against AI-enabled fraud and AI-specific vulnerabilities. Insurers not only have to do this for their own operations; they are also indirectly affected if their insured clients fall short, because that elevates the risk of losses.
On the legal front, there’s a flurry of new laws targeting malicious uses of AI, especially deepfakes. Globally, policymakers are clamping down on AI-driven crime and mandating transparency. The European Union’s AI Act (which entered into force in 2024) explicitly restricts certain harmful AI practices – for example, AI systems that impersonate people must disclose it, and providers of generative AI models must include safeguards such as watermarking deepfake content. By mid-2025, the EU AI Act mandated transparency for AI-generated media and prohibited the most egregious cases of AI identity manipulation. This kind of regulation, while aimed at tech providers, should eventually help by making it harder for bad actors to obtain untraceable deepfake tools or by creating legal liabilities for those who misuse AI.
In the United States, after years of piecemeal state laws, the federal government passed its first law directly addressing malicious deepfakes in May 2025. The TAKE IT DOWN Act makes it a crime to create or share certain AI-generated sexual imagery or harmful impersonations without consent. It focuses on non-consensual explicit deepfakes and imposes obligations on online platforms to swiftly remove such content when reported. While this law is targeted (largely at protecting individuals from intimate-image abuse), it reflects a broader trend: more legislation is coming to tackle AI-facilitated fraud and impersonation. Several other bills are in the works – from the DEFIANCE Act (to give victims of sexual deepfakes a right to sue) to proposals banning deceptive AI in election campaigns and fake likenesses in commercials. For insurance, these laws could influence claims and coverage. For example, if an AI deepfake causes someone personal harm, there may soon be clearer legal pathways to recover damages from perpetrators or platforms – potentially reducing some costs that might otherwise fall to insurers. Conversely, companies could face fines or lawsuits if they don’t remove deepfake content (per new duties like the 48-hour removal rule in the TAKE IT DOWN Act). This introduces a new type of liability that some companies might look to insure against (e.g., a media company might want insurance for liabilities arising from unknowingly hosting deepfake content).
Other jurisdictions are also notable: China enacted strict rules requiring content creators to label AI-generated media and forbidding use of deepfakes for fraud or misinformation. These went into effect in 2023 and were updated in 2025, reflecting China’s aggressive stance on controlling AI misuse. European countries like France and Denmark are pushing innovative legal concepts – Denmark is treating a person’s likeness (face/voice) as personal property and legislating that unauthorized AI replication of it is illegal, with hefty fines for platforms that fail to remove such deepfakes. This effectively could give victims and insurers a clearer route to get harmful deepfake content taken down and perhaps seek restitution. At least 23 U.S. states have also passed varied deepfake laws as of late 2024, addressing issues from election interference to impersonation scams.
For insurers, the regulatory landscape means two things. First, compliance: insurers using AI in their own business (for underwriting or claims) must heed emerging AI regulations (e.g., transparency, fairness, data protection requirements), though that’s outside the scope of cybercrime per se. Second, regulations may mitigate some cyber risks over time (for instance, if deepfake detection and labeling become mandatory, it could reduce successful scams) – but in the interim, they also create new compliance burdens and legal exposure that need to be reflected in insurance coverage. Risk managers should track AI-related laws in all jurisdictions they operate in, as these laws will shape both the threat environment and the potential liabilities after an incident.
Impact on Personal vs. Commercial Lines
AI-driven cyber threats affect both individuals and businesses, but the implications differ across personal and commercial insurance lines:
Personal Lines
Consumers are increasingly victimized by AI-enabled scams like voice impersonations, fake videos, and AI-powered identity theft. Traditional homeowners or personal property insurance typically does not cover money voluntarily sent to a scammer (which is how many AI scams play out). This has led to a rise in offerings like personal cyber insurance or identity fraud expense coverage. Such policies can help reimburse victims for financial losses or expenses in recovering from identity theft. For example, if a family is duped by a deepfake voice call into sending $10,000 to criminals, a personal cyber policy might cover that loss (depending on the terms for fraud). Likewise, identity theft policies now have to consider coverage for scenarios like an imposter creating a deepfake video to take out a loan in the victim’s name. Insurers are starting to include "cybercrime" endorsements on homeowners policies, which can cover scams, though coverage limits are often low. The surge in AI scams targeting the public – nearly 29% of people in one country (New Zealand) were targeted by deepfake scams in just the past year – suggests a growing need for personal lines coverage to adapt. We may see insurers add explicit wording covering "loss arising from impersonation via electronic means" or similar, to address these scenarios.
On the claims side, personal lines insurers are also contending with deepfake misuse. Consider automobile insurance: a fraudster might submit AI-altered dashcam footage to “prove” an accident that never happened. Personal health insurers might receive deepfake doctors’ notes or manipulated MRI images. All this forces personal lines carriers to step up fraud detection just like commercial insurers.
Additionally, educating policyholders is key – some insurers now provide fraud awareness resources (for example, advising customers on how to spot AI voice scams and encouraging them to set up family code words as the FBI recommends). For personal insurance, the main implications are product innovation to cover new scams, increased claims scrutiny, and loss prevention efforts to help customers avoid becoming victims in the first place.
Commercial Lines
Businesses face multifaceted risks from AI-driven cybercrime – direct financial losses (e.g. theft via social engineering), privacy breaches, reputational damage from deepfake misinformation, and even potential liability if their own AI misbehaves. The insurance market has primarily responded through cyber insurance and crime insurance products. Most commercial cyber policies today will cover incidents like a deepfake fraud (typically under social engineering fraud or cybercrime extensions) or a breach caused by an AI exploitation. Cyber insurers have paid claims where, for instance, a deepfake voicemail led an employee to send money to a fraudster – treating it analogously to an email phishing-induced loss. They are also updating underwriting questions to gauge a company’s controls around these risks (e.g., does the company have verification steps for fund transfers, does it train staff on deepfake awareness?). We are also seeing brokers push insurers for clarity: Arthur J. Gallagher advises clients to ensure their cyber policies explicitly mention coverage for deepfake-related fraud, noting that not all policies are up to speed yet.
Beyond cyber policies, other commercial lines are touched. Directors & Officers (D&O) insurers worry about deepfake news or fake executive remarks moving stock prices and triggering shareholder suits. Professional liability insurers (E&O) are watching whether a deepfake or AI-fueled scam leads to a company’s failure to deliver a service (could clients allege negligence for not verifying something?). Even crime bonds and fidelity insurers (which traditionally cover employee theft) have been drawn into covering external fraud that uses impersonation. For example, a commercial crime policy’s "fraudulent instruction" insuring agreement might respond when an employee is tricked by a fake voice or chatbot into sending money. Insurers have had to modernize the definitions in these policies to ensure a synthetic AI voice qualifies as a “fraudulent instruction” source. Additionally, commercial insurers (especially cyber) are offering risk mitigation services: some provide clients with access to AI-based email filtering or deepfake detection tools as part of the policy services, aiming to prevent losses before they happen.
In terms of claims, commercial insurers have seen a spike in social engineering claims over the past two years, a portion of which is attributed to more convincing AI-generated scams. This has put upward pressure on cyber insurance premiums and retention levels. Insurers might respond by sub-limiting coverage for social engineering fraud unless certain controls are in place. For risk managers at companies, the message is that AI threats are now part of the underwriting calculus – demonstrating preparedness for these threats can directly impact insurability and pricing.
Takeaways for Risk Managers and Underwriters
AI-driven cybercrime is a fast-evolving challenge. To manage this risk, both insurers (underwriters/claims) and corporate risk managers can take proactive steps:
Stay Informed on AI Threat Trends
Continuous education is vital. Keep abreast of threat intelligence from credible sources (e.g., reports from cybersecurity firms, FBI/Interpol alerts) about how criminals are using AI. This helps in anticipating new attack vectors. Underwriters should update their risk questionnaires at least annually to reflect the latest modus operandi (for instance, if “jailbreak” AI bots or new deepfake techniques are on the rise, incorporate those into risk assessments).
Strengthen Verification and Controls
Traditional controls must be adapted to counter AI tricks. Establish rigorous verification protocols for any requests involving funds transfer or sensitive data changes – e.g., require out-of-band confirmation or secret passphrases that an imposter wouldn’t know. Train employees to treat even audio/video communications with skepticism if anything seems off, and to double-check via known channels. For risk managers, this may involve revising incident response plans to include deepfake scenarios (e.g. how to respond if a fake video of your CEO is circulating) and ensuring multi-factor authentication is in place so a cloned voice alone can’t bypass security. Underwriters, when evaluating clients, will look for these kinds of controls as indicators of resilience against AI-enhanced threats.
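As a minimal sketch of what such a protocol can look like in practice, the hypothetical Python snippet below gates any transfer above a set threshold on two independent checks: a call-back over a pre-registered channel and a shared passphrase. The names, threshold, and data structure are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of an out-of-band verification gate for high-value transfer
# requests. All names and thresholds are hypothetical; the point is that a
# request received over one channel (email, voice, video) is never acted on
# until it is confirmed over an independent, pre-registered channel.
from dataclasses import dataclass

OOB_THRESHOLD = 10_000  # dollars; illustrative

@dataclass
class TransferRequest:
    requester: str
    amount: float
    channel: str              # channel the request arrived on, e.g. "email", "voice"
    callback_confirmed: bool  # confirmed via a number from the employee directory,
                              # never one supplied in the request itself
    passphrase_ok: bool       # pre-agreed code word checked out

def approve(req: TransferRequest) -> bool:
    if req.amount < OOB_THRESHOLD:
        return True
    # Above the threshold, require confirmation on a second channel plus the
    # shared passphrase, no matter how convincing the original request was.
    return req.callback_confirmed and req.passphrase_ok

print(approve(TransferRequest("cfo@example.com", 250_000, "voice", False, False)))  # False
print(approve(TransferRequest("cfo@example.com", 250_000, "voice", True, True)))    # True
```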
Leverage AI for Defense
The security community isn’t letting criminals have all the AI advantages. Organizations should deploy AI-driven defenses – such as email filters using machine learning to spot AI-generated phishing, or identity verification systems that can detect deepfake artifacts on video calls. Many vendors now offer deepfake detection services; risk managers could consider these for high-risk processes (like verifying a large financial transaction request). Insurers themselves are using AI to flag suspicious claims; likewise, corporate fraud teams can use AI to monitor transactions and communications for anomalies that might indicate an AI-assisted breach. Embracing defensive AI can help level the playing field and is increasingly factored into insurers’ underwriting evaluations (companies with strong AI-enabled security may earn more favorable terms).
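For a flavor of the ML mail filtering mentioned above, here is a toy sketch: a TF-IDF plus logistic-regression classifier that scores inbound messages for phishing likelihood. The training examples are made up, and a real filter would also weigh sender reputation, URLs, and header signals, precisely because well-written AI-generated lures are hard to catch on text alone.

```python
# Toy phishing-score sketch: TF-IDF features + logistic regression.
# Training data is a handful of made-up examples for illustration only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Urgent: wire the pending invoice today, I am boarding a flight",
    "Your account is locked, verify your credentials at the link below",
    "Reminder: team standup moved to 10am tomorrow",
    "Q3 budget spreadsheet attached for your review",
    "Password expires in 24 hours, click here to keep access",
    "Lunch on Thursday to talk through the renewal?",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = phishing-like, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

incoming = "Please verify your credentials immediately to avoid account suspension"
print(f"phishing probability: {clf.predict_proba([incoming])[0, 1]:.2f}")
```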
Review and Update Insurance Coverage
Given the fluid threat landscape, policyholders should review their insurance policies with an eye on AI-related gaps. Cyber insurance, crime bonds, kidnap & ransom, and even general liability policies might need endorsements or clarifications to ensure coverage for AI-driven incidents. For example, does your cyber policy explicitly cover “social engineering fraud” or “imposter fraud” and is the sub-limit sufficient for your exposure? Brokers and underwriters should collaborate to make coverage more affirmative wherever possible, as this avoids disputes later. If insurers are introducing any new exclusions (for instance, some reinsurers have pondered AI exclusions), be aware of them and negotiate alternatives if possible. In essence, align expectations so that when an AI-related loss occurs, coverage responds as intended.
Engage in Scenario Planning and Stress Testing
Risk managers can perform tabletop exercises for AI-related incidents – e.g., a deepfake voice call leads to a wire transfer, or a malicious AI alters a critical database. Walk through how your team would detect, respond, and recover from such events. This can reveal preparedness gaps (perhaps staff are not trained to question a CEO’s voice) which you can then address via training or technical controls. Insurers, on their side, are increasingly stress-testing their portfolios against extreme AI scenarios (like a wave of deepfake scams hitting many insureds at once). Underwriters should consider worst-case accumulations (for example, many small businesses falling for the same AI phishing kit) and ensure pricing or capacity accounts for that. Both insurers and insureds benefit from this forward-looking approach, reducing surprises when incidents happen.
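A back-of-the-envelope version of such an accumulation test can be run with a few lines of Monte Carlo simulation, as in the hypothetical sketch below: a portfolio of small insureds is exposed to the same AI-enabled phishing campaign and the tail of the aggregate loss distribution is examined. Portfolio size, hit probability, and severity parameters are illustrative assumptions, not market data.

```python
# Back-of-the-envelope accumulation stress test: simulate a portfolio of small
# insureds hit by the same AI-enabled phishing campaign. All parameters are
# illustrative assumptions, not market data.
import numpy as np

rng = np.random.default_rng(42)

n_policies = 5_000            # small-business cyber policies in the portfolio
p_hit = 0.02                  # probability a given insured falls for the campaign
mean_log, sigma = 11.0, 1.0   # lognormal severity parameters (~$60k median loss)
n_sims = 10_000

aggregate_losses = np.empty(n_sims)
for i in range(n_sims):
    hits = rng.binomial(n_policies, p_hit)
    aggregate_losses[i] = rng.lognormal(mean_log, sigma, size=hits).sum()

print(f"mean aggregate loss:  ${aggregate_losses.mean():,.0f}")
print(f"99th percentile loss: ${np.percentile(aggregate_losses, 99):,.0f}")
```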
Monitor Regulatory Compliance
Keep an eye on emerging AI regulations and legal duties that affect cyber risk. For businesses, non-compliance (say, not following a new deepfake takedown law) could lead to fines or lawsuits, which may or may not be insured. Ensuring compliance (such as adhering to data protection in AI deployments, or content labeling rules) will not only reduce regulatory risk but also improve security posture. Insurers should ensure their products align with legal trends – for example, offering coverage for regulatory fines or defense costs related to AI incidents where allowable, or advising clients on how to meet new standards (some insurers produce risk bulletins on AI governance to guide policyholders). Treat regulators as partners in mitigating AI threats: their guidelines (like the DFS cybersecurity AI guidance) often highlight prudent practices that insurers also want to see from insureds.
The insurance industry has a history of adapting to new perils, and AI is proving to be the latest test. By understanding the nuances of AI-enabled attacks and proactively adjusting coverage and risk controls, insurers and risk managers can ensure that stronger cyber resilience and innovative insurance solutions advance hand in hand against the emerging generation of AI-empowered adversaries.
Thanks for reading.