
AI in cybersecurity — offensive and defensive applications in 2026

AI in cyberattacks vs. AI in cyberdefense: deepfakes, spearphishing, anomaly detection, threat hunting, SOAR. Case studies and predictions for 2026.

Artificial intelligence has turned cybersecurity into an arms race in which both sides — attackers and defenders — use the same technologies. In 2026, AI is no longer an experimental add-on but a fundamental component of both cyberattacks and cyberdefense. Deepfakes in video conferences, spearphishing generated by LLMs, autonomous agents scanning networks — these are not science fiction scenarios but realities that security teams face every day.

The AI arms race in cybersecurity — why it matters

The relationship between AI and cybersecurity is fundamentally asymmetric — and this asymmetry works in favor of attackers. Defenders must protect the entire attack surface, 24 hours a day, 365 days a year. Attackers need to find only one vulnerability, at one moment. AI amplifies this asymmetry in several ways.

On the offensive side, AI dramatically lowers the barrier to entry. Attack techniques that previously required years of experience and deep technical knowledge — such as creating convincing phishing campaigns, generating polymorphic malware, or discovering zero-day vulnerabilities — can now be partially automated by AI models available publicly or on the black market.

On the defensive side, AI offers solutions to problems that humans cannot handle alone: analyzing millions of security events daily, detecting subtle anomalies in user behavior, correlating seemingly unrelated alerts into a coherent picture of an attack. According to the IBM Cost of a Data Breach 2025 report, organizations with extensive AI-based security automation detect breaches on average 108 days faster and save over USD 1.7 million per incident compared to organizations without such automation.

The context of 2026 is particularly significant. Language models (GPT-4, Claude, Gemini) have reached a level where generated content is practically indistinguishable from human-written text. Generative audio and video models (deepfake) can create realistic materials in real time. Agentic AI systems can autonomously plan and execute complex sequences of actions. All of this means that both attacks and defense are entering a new era — the era of AI-native cybersecurity.

📚 Read the complete guide: AI Security — AI in cybersecurity: threats, defense, and the future

AI in offensive operations — deepfake and impersonation

Deepfake — meaning audio and video content generated or modified by AI — has become one of the most serious cybersecurity threats in 2026. While in 2023 deepfakes were primarily a technological curiosity and a political disinformation problem, in 2025-2026 they became a full-fledged tool for targeted attacks on organizations.

Deepfake audio — next-generation CEO fraud. BEC (Business Email Compromise) attacks have evolved into VAC (Voice AI Compromise). Attackers clone the voice of the CEO, CFO, or another board member from publicly available recordings (conference presentations, interviews, webinars) and call finance departments with instructions to authorize wire transfers. A widely reported early example was the 2019 attack on a UK energy firm, in which a cloned voice of the parent company's CEO persuaded an executive to transfer EUR 220,000.

Real-time video deepfake — videoconference attacks. This is the newest and most dangerous variant. The attacker joins a video conference impersonating a board member, business partner, or client. Generative models from 2025-2026 can transform the camera image in real time, overlaying another person’s appearance with natural facial expressions, lip movements, and reactions to the conversation. The case from Hong Kong in early 2024, where a deepfake CFO on a video conference led to a transfer of USD 25 million, was just the beginning.

Deepfake in recruitment social engineering. A new vector, observed since 2025. Attackers create fake profiles of candidates for positions with access to critical systems (administrators, developers), conduct job interviews using deepfake video, and after “being hired” gain access to the organization’s infrastructure from the inside.

Defense against deepfake requires a paradigm shift. Traditional identity verification methods (facial recognition, voice verification) are ineffective against advanced deepfakes. Organizations must implement multi-channel identity verification — for example, a codeword system, callback to a known number, or confirmation through a second communication channel. Training employees to recognize deepfake signals is also critical.

AI-powered social engineering and spearphishing at scale

Social engineering — meaning the manipulation of people to gain access to information or systems — has always been the most effective attack vector. AI has taken it to an entirely new level, eliminating traditional limitations: time, scale, and quality.

Traditional spearphishing required the attacker to manually research the target — analyzing LinkedIn profiles, publications, and social media activity — and then writing a convincing email. The process took hours per target. AI-powered spearphishing automates the entire pipeline. The language model analyzes the victim’s public profile (LinkedIn, Twitter, GitHub, publications), identifies topics of interest, communication style, and contacts, and then generates a personalized email that sounds like a message from a real colleague or business partner.

Scale. What previously required a team of social engineers working for weeks can now be done by a single operator with access to an LLM in hours. Personalization, which was previously a luxury reserved for APT attacks on high-value targets, is now available at mass scale.

Quality. AI-generated phishing emails do not contain the typical warning signs of traditional phishing: grammatical errors, unnatural style, or obvious manipulation attempts. Language models generate text that is grammatically correct, stylistically consistent with the target organization’s communication, and contextually embedded in current events.

Multi-channel approach. AI-powered social engineering is not limited to email. Attackers combine email spearphishing with deepfake audio (a follow-up call “confirming” the email), fake LinkedIn profiles (building relationships before the attack), and SMS/WhatsApp phishing with personalization.

Research conducted in 2025 by the University of Illinois found that spearphishing generated by GPT-4 has a click rate 135% higher than traditional human-crafted phishing. This is a fundamental change in the threat landscape.

Automated vulnerability discovery and exploitation

AI is also transforming the reconnaissance and exploitation phases of the attack cycle. AI-powered tools can automatically scan infrastructure, identify vulnerabilities, and — most dangerously — create exploit chains (vulnerability chaining) that combine multiple lower-severity vulnerabilities into a single destructive attack vector.

Automated vulnerability discovery. AI models trained on CVE (Common Vulnerabilities and Exposures) databases, public exploits, and source code can identify patterns of vulnerable code. Tools such as ML-enhanced fuzzers generate test inputs that discover vulnerabilities significantly faster than traditional methods. In a defensive context, the same tool serves for security testing — but in an attacker’s hands, it becomes a weapon.
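To make the fuzzing workflow concrete, below is a minimal coverage-guided harness using Google's open-source Atheris fuzzer for Python. This is a sketch of the classic technique that ML-enhanced fuzzers build on (they learn which mutations reach new code paths, but the harness structure is the same); the JSON parser is just a stand-in target.

    # Minimal coverage-guided fuzz harness with Atheris (pip install atheris).
    import sys
    import atheris

    with atheris.instrument_imports():
        import json  # stand-in target; instrument the library under test

    def test_one_input(data: bytes):
        try:
            json.loads(data)     # a crash or hang here is a finding
        except ValueError:
            pass                 # malformed input is expected, not a bug

    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()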

Vulnerability chaining. This is an area where AI has a particular advantage over humans. A security analyst sees a single low-risk vulnerability and may ignore it. AI can analyze thousands of such vulnerabilities and identify a combination that — connected together — provides full system access. It is like a puzzle — each piece is harmless by itself, but together they form a complete attack picture.
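Defenders can apply the same chaining logic proactively through attack-path analysis: model each low-severity finding as an edge in a graph and search for a path from an external entry point to a critical asset. A minimal sketch, with an entirely hypothetical environment:

    # Vulnerability chaining as graph reachability: no single edge is critical,
    # but a path from the internet to domain admin is. Data is illustrative.
    from collections import deque

    edges = {
        "internet": ["webapp (verbose errors leak hostnames)"],
        "webapp (verbose errors leak hostnames)": ["jumpbox (default credentials)"],
        "jumpbox (default credentials)": ["fileshare (weak ACL)"],
        "fileshare (weak ACL)": ["domain admin (cached token)"],
    }

    def find_path(src: str, dst: str):
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in edges.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    chain = find_path("internet", "domain admin (cached token)")
    print(" -> ".join(chain))  # the full chain, built from "harmless" pieces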

Evasion techniques. AI is also used to create malware that bypasses detection systems. Polymorphic malware generated by AI models changes its signature with each iteration, making signature-based detection difficult. Adversarial models test malware against EDR/XDR systems and automatically modify the code until it passes undetected.

Autonomous exploit development. The most advanced application — agentic AI systems that can independently discover a vulnerability, develop an exploit, test it in an isolated environment, and deliver a ready-to-use payload. In 2024, researchers at the University of Illinois Urbana-Champaign demonstrated that a GPT-4 agent with access to tools could exploit 87% of a benchmark set of one-day vulnerabilities.

AI in defensive operations — anomaly detection and UEBA

On the defensive side, AI brings transformative capabilities that address fundamental security problems — above all, the scale and complexity of data that humans cannot handle on their own.

Anomaly detection is the foundation of defensive AI application. Machine learning models build profiles of “normal” behavior — network traffic, user activity, login patterns, data transfers — and then identify deviations in real time. Unlike rule-based systems (signature-based), AI can detect threats that have never been seen before (zero-day, insider threats).

UEBA (User and Entity Behavior Analytics) — one of the most effective defensive AI applications. The system builds a behavioral profile of every user and device in the organization: typical working hours, applications accessed, volumes of data transferred, login locations. When behavior deviates from the profile — for example, an employee logs in at 3:00 AM from a new location and begins downloading large amounts of data — the system generates a high-priority alert.
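As an illustration of the UEBA idea, the sketch below trains an unsupervised anomaly detector on synthetic "normal" login telemetry and scores exactly the scenario described above. The feature choice, data, and model settings are assumptions for demonstration, not a production design.

    # Unsupervised behavioral anomaly detection with scikit-learn's
    # IsolationForest. Hypothetical features per login event:
    # [hour of day, MB transferred, failed attempts before success].
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    normal_logins = np.column_stack([
        rng.normal(10, 2, 1000),      # activity clusters around 10:00
        rng.exponential(50, 1000),    # modest data transfers
        rng.poisson(0.2, 1000),       # failed attempts are rare
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_logins)

    # 3 AM login, 4 GB download, 6 failed attempts first:
    suspicious = np.array([[3.0, 4000.0, 6.0]])
    print(model.predict(suspicious))        # [-1] means flagged as anomalous
    print(model.score_samples(suspicious))  # lower score = more anomalous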

Examples of detection that are practically impossible without AI: detecting lateral movement (traversal between systems) in a network of thousands of hosts, identifying data exfiltration masked as normal DNS traffic, correlating a distributed attack in which individual actions appear innocent but together form an attack pattern, and detecting insider threats — an employee who slowly, over weeks, accumulates data before leaving the company.
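One of those detections, exfiltration hidden in DNS, often reduces to a simple statistical signal: encoded payloads push the character entropy of query labels well above that of human-chosen hostnames. A minimal sketch with illustrative, uncalibrated thresholds:

    # Flagging possible DNS tunneling by the Shannon entropy of the leftmost
    # label. The bounds (3.5 bits, 15 chars) are illustrative assumptions.
    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        n = len(s)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    queries = [
        "www.example.com",
        "mail.example.com",
        "a9f3e1c77b2d4e0f8a6b.exfil.example.net",  # hex-encoded payload chunk
    ]
    for q in queries:
        label = q.split(".")[0]
        entropy = shannon_entropy(label)
        verdict = "SUSPICIOUS" if entropy > 3.5 and len(label) > 15 else "ok"
        print(f"{q:42s} entropy={entropy:.2f}  {verdict}")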

Challenges of defensive AI include false positive management (an overly sensitive model generates avalanches of false alerts, leading to alert fatigue), adversarial evasion (attackers who know the defensive model can deliberately modulate their behavior to fit within the “normal” profile), and explainability (deep learning models are black boxes, making it difficult for analysts to understand why an alert was generated).
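The adversarial evasion problem in particular suggests a simple safeguard: never let the adaptive baseline drift silently. The sketch below compares a weekly re-fit baseline against a reference frozen at deployment; all numbers and the 50% bound are illustrative assumptions.

    # Drift guard against slow baseline poisoning: each re-fit may look locally
    # normal, but cumulative drift from the frozen reference exposes the trend.
    reference_mb_per_day = 50.0                   # frozen after validation
    weekly_baseline = [52, 55, 61, 70, 83, 97]    # adaptive baseline, per week

    MAX_DRIFT = 0.5  # 50% divergence triggers review (illustrative bound)

    for week, value in enumerate(weekly_baseline, start=1):
        drift = abs(value - reference_mb_per_day) / reference_mb_per_day
        if drift > MAX_DRIFT:
            print(f"week {week}: baseline drifted {drift:.0%} from reference; "
                  f"freeze model updates and review before trusting the profile")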

AI-enhanced threat hunting and SOC operations

The Security Operations Center (SOC) is a natural beneficiary of AI — it is an environment where analysts are flooded with thousands of alerts daily, the majority of which are false positives. AI transforms SOC operations at several levels.

Automated alert triage. AI prioritizes alerts based on context — not just alert severity but its correlation with other events, the profile of the attacked system, incident history, and current threat intelligence. Instead of 5,000 alerts to review, a Tier 1 analyst receives 50 enriched, contextualized alerts with action recommendations. According to Ponemon Institute research, AI-powered triage reduces alert analysis time by 70%.
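A toy version of such context-aware scoring is sketched below. The weighting scheme is an assumption for illustration; real platforms learn or tune these factors.

    # Context-aware alert triage: raw severity is weighted by asset value,
    # correlation with other alerts, and threat-intelligence matches.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        name: str
        severity: int            # 1-10 from the detection tool
        asset_criticality: int   # 1-10, e.g. domain controller = 10
        correlated_alerts: int   # related alerts in the same window
        threat_intel_match: bool

    def triage_score(a: Alert) -> float:
        score = a.severity * a.asset_criticality
        score *= 1 + 0.2 * min(a.correlated_alerts, 10)  # correlation boost
        if a.threat_intel_match:
            score *= 1.5                                 # known-campaign boost
        return score

    alerts = [
        Alert("odd login on DC", 3, 10, 4, True),    # low severity, rich context
        Alert("AV hit on kiosk", 8, 2, 0, False),    # high severity, no context
    ]
    for a in sorted(alerts, key=triage_score, reverse=True):
        print(f"{triage_score(a):6.1f}  {a.name}")

Note how the low-severity alert on the critical host (score 81.0) outranks the high-severity alert on the kiosk (score 16.0): context, not raw severity, drives priority.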

AI-supported threat hunting. Traditional threat hunting is a laborious process — an analyst formulates hypotheses about potential threats and manually searches logs for evidence. AI supports this process in two ways: hypothesis generation (the model analyzes threat intelligence and proposes threat scenarios specific to the organization) and automated searching (the model searches petabytes of logs, correlating events from multiple sources, in timeframes inaccessible to humans).
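As a tiny example of the automated-search half, the sketch below encodes one hunt hypothesis — authentication failures followed shortly by a large outbound transfer from the same host — against a hypothetical log schema:

    # One automated hunt: correlate auth failures with a large outbound
    # transfer from the same host inside a 10-minute window. The log schema,
    # window, and size threshold are illustrative assumptions.
    from datetime import datetime, timedelta

    auth_failures = [("WS-0231", datetime(2026, 3, 1, 3, 2))]
    outbound_xfers = [("WS-0231", datetime(2026, 3, 1, 3, 9), 4200)]  # MB

    WINDOW = timedelta(minutes=10)
    MIN_MB = 1000

    for host_a, t_fail in auth_failures:
        for host_b, t_xfer, mb in outbound_xfers:
            if (host_a == host_b and mb >= MIN_MB
                    and timedelta(0) <= t_xfer - t_fail <= WINDOW):
                print(f"hunt hit: {host_a} auth failures at {t_fail:%H:%M}, "
                      f"then {mb} MB outbound at {t_xfer:%H:%M}")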

Investigative copilot. A new category of AI tools — security analyst assistants. The analyst describes suspicious activity in natural language, and the model automatically executes appropriate queries to the SIEM, correlates results, draws an attack timeline, and proposes next investigation steps. Microsoft Security Copilot, Google SecOps AI, and Splunk AI Assistant are examples of commercial implementations of this approach.

Predictive threat analysis. AI analyzes trends in threat intelligence, attack patterns on organizations in similar industries and regions, and internal data on vulnerabilities and exposure, then generates forecasts — which attacks are most likely in the coming weeks. This enables proactive strengthening of defenses instead of reactive response.

SOAR and automated incident response with AI

SOAR (Security Orchestration, Automation and Response) — platforms that automate incident response — gain a new dimension through integration with AI. Traditional SOAR relies on predefined playbooks: “if alert type X, execute sequence of steps Y.” AI adds decision intelligence to this.

Adaptive playbooks. Instead of static rules, AI dynamically adjusts the playbook based on incident context. For example: a standard playbook for phishing may include email isolation, user password reset, and attachment scanning. An AI-enhanced playbook analyzes the phishing content, correlates it with current threat actor campaigns, checks whether other people in the organization received similar messages, and adjusts the scope of response — from simple deletion to a full halt of email traffic from the attacker’s domain.
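A skeletal version of this logic is sketched below. The enrichment values and action names are hypothetical stand-ins; in a real SOAR platform each would be an API call.

    # Adaptive phishing playbook: response steps are selected from enrichment
    # context rather than executed as a fixed sequence.
    def enrich(email: dict) -> dict:
        # Stand-in for TI lookups, campaign correlation, sandbox detonation.
        return {"similar_in_org": 12, "known_actor": True, "attachment": "malicious"}

    def act(action: str, target: str) -> None:
        print(f"[SOAR] {action} -> {target}")      # real platform: API call

    def phishing_playbook(email: dict) -> None:
        ctx = enrich(email)
        act("quarantine_message", email["id"])     # baseline steps always run
        act("reset_password", email["recipient"])
        if ctx["similar_in_org"] > 5:              # campaign, not a one-off
            act("search_and_purge", f"subject:{email['subject']}")
        if ctx["known_actor"]:
            act("block_sender_domain", email["sender_domain"])
        if ctx["attachment"] == "malicious":
            act("isolate_host", email["recipient_host"])

    phishing_playbook({
        "id": "msg-1041", "recipient": "j.smith", "recipient_host": "WS-0231",
        "sender_domain": "invoice-portal.example", "subject": "Overdue invoice",
    })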

Automated containment. When an active attack is detected, AI can autonomously make isolation decisions — such as disconnecting an infected host from the network, blocking a suspicious account, or halting transactions. Calibration is key here — overly aggressive automation can cause operational disruptions (e.g., blocking the CEO’s account due to a false positive).
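One common way to express that calibration is a tiered decision rule: fully autonomous action only above a high confidence bound, and never on assets where a false positive hurts most. The thresholds and asset list below are illustrative assumptions.

    # Tiered containment: confidence and asset criticality gate how much
    # autonomy the responder gets.
    CRITICAL_ASSETS = {"ceo-laptop", "dc-01", "erp-db"}

    def containment_action(host: str, confidence: float) -> str:
        if host in CRITICAL_ASSETS:
            return "escalate_to_analyst"      # never auto-isolate these
        if confidence >= 0.95:
            return "auto_isolate"             # disconnect from network
        if confidence >= 0.70:
            return "restrict_and_notify"      # limit access, page on-call
        return "monitor"

    print(containment_action("ws-0231", 0.97))      # auto_isolate
    print(containment_action("ceo-laptop", 0.97))   # escalate_to_analyst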

Post-incident analysis. After an incident concludes, AI automatically generates a report — attack timeline, techniques used (mapping to MITRE ATT&CK), affected systems, actions taken, and preventive recommendations. This significantly accelerates the post-mortem process and ensures documentation consistency.

Mean Time to Respond (MTTR). Organizations using AI-enhanced SOAR report a 60-80% reduction in MTTR compared to manual response. In the context of NIS2, where organizations have 24 hours for an early warning, this automation can determine whether the regulatory deadline is met.

Case studies 2025-2026 — incidents involving AI

Analysis of real incidents from 2025-2026 illustrates the scale and evolution of AI threats in cybersecurity.

Deepfake attack on the financial sector (2025). An attack group used deepfake audio and video to conduct a coordinated attack on three European banks. A deepfake CFO of one of the banks authorized a series of transactions on a video conference totaling more than EUR 35 million. The attack was detected only after 72 hours, when AML (Anti-Money Laundering) systems identified unusual transaction patterns. The incident led to a revision of authorization procedures across the entire EU banking sector.

AI-powered phishing campaign targeting healthcare (2025). A phishing campaign using GPT-4 targeted over 200 hospitals in Western Europe. The attackers generated personalized emails for medical staff, masquerading as communication from EHR (Electronic Health Records) system providers. The success rate was 47% — more than three times higher than a typical phishing campaign. The attack led to ransomware infections that encrypted systems in 12 hospitals.

Adversarial ML attack on autonomous SOC (2026). The first documented case of an adversarial ML attack on a production defensive AI system. The attackers, after gaining initial network access, spent 3 weeks “training” the anomaly detection system, gradually introducing increasingly anomalous behaviors in a way that the system adapted as the “new normal.” Once the behavioral profile was sufficiently expanded, they conducted mass data exfiltration that generated no alerts.

Weaponized agentic AI (2026). The most recent case study — an autonomous AI agent created by an APT group that independently conducted reconnaissance, identified vulnerabilities, created exploits, and exfiltrated data. The agent operated for 6 weeks in the network of a large aerospace manufacturer before being detected. The incident signaled a new era — the era of autonomous cyberattacks, where the human operator merely initiates the operation and AI executes the rest.

How to prepare the organization — readiness for AI threats

Preparing an organization for AI-related threats requires a multi-layered approach encompassing technology, processes, and people. Below we present the key steps.

Step 1: Deploy AI-powered defensive tools. Organizations that rely solely on traditional tools (signature-based AV, rule-based SIEM) cannot detect AI-powered attacks. The minimum is: EDR/XDR with an ML component, UEBA, and AI-enhanced SIEM with behavioral correlation. This is not optional — it is a necessity in 2026.

Step 2: Training on recognizing AI threats. Employees must know that deepfakes exist and can be used against them. The training program should cover recognizing deepfake audio and video (artifacts, unnatural movements), identity verification through alternative channels, recognizing AI-generated phishing (subtle signals), and procedures in case of a suspected AI attack.

Step 3: Identity verification procedures. Implement a codeword system (codewords agreed offline and verified during unusual requests), callback procedures (verification by calling back a known number), and multi-channel confirmation (important decisions require confirmation through at least two independent channels).
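The multi-channel rule can be enforced mechanically, as in the sketch below. The channel names are hypothetical; the point is that no single channel, however convincing, is sufficient on its own.

    # Two-channel approval for high-risk requests (e.g. wire transfers):
    # proceed only when confirmations arrive over at least two independent,
    # pre-registered channels. Channel names are illustrative.
    TRUSTED_CHANNELS = {"callback_known_number", "signed_email", "in_person"}
    REQUIRED = 2

    def approved(confirmations: set) -> bool:
        return len(confirmations & TRUSTED_CHANNELS) >= REQUIRED

    print(approved({"video_call"}))                             # False: spoofable
    print(approved({"callback_known_number"}))                  # False: one channel
    print(approved({"callback_known_number", "signed_email"}))  # True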

Step 4: Red-teaming using AI. Regular security tests should include AI attack scenarios — deepfake, AI phishing, and adversarial ML. Only by testing against real AI threats can an organization verify the effectiveness of its defenses.

Step 5: Monitoring AI usage in the organization. Shadow AI — unauthorized use of AI tools by employees — is a real risk. The organization should have an acceptable use policy for AI, monitoring of data transfers to external AI services, and an approved list of permitted AI tools with defined usage rules.
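Monitoring for shadow AI can start with something as simple as scanning egress proxy logs for known AI service domains, as sketched below. The domain list and log format are assumptions to adapt to your own environment.

    # Flag proxy-log entries that send data to known external AI services.
    # Assumed log format: "<user> <destination-host> <bytes-sent>".
    import re

    AI_SERVICE_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}
    LOG_LINE = re.compile(r"(?P<user>\S+)\s+(?P<host>\S+)\s+(?P<bytes>\d+)")

    def shadow_ai_hits(lines):
        for line in lines:
            m = LOG_LINE.match(line)
            if m and m["host"] in AI_SERVICE_DOMAINS:
                yield m["user"], m["host"], int(m["bytes"])

    sample_log = [
        "alice api.openai.com 482113",
        "bob intranet.local 1200",
    ]
    for user, host, nbytes in shadow_ai_hits(sample_log):
        print(f"shadow-AI candidate: {user} -> {host} ({nbytes} bytes sent)")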

Step 6: Incident response for AI incidents. Existing incident response procedures should be extended to include AI-specific scenarios: deepfake attacks, prompt injection on the organization’s AI system, adversarial ML on defensive systems, and data leakage through AI tools.

How does nFlo support cybersecurity in the AI era?

The 2026 threat landscape requires a security partner that understands both classic attack vectors and the new generation of AI-based threats. nFlo integrates AI-powered defensive tools within its SOC service, providing organizations with 24/7 monitoring using advanced behavioral analytics, anomaly detection, and automated incident response.

Our approach encompasses continuous monitoring with AI-enhanced SIEM and UEBA, threat hunting supported by machine learning, automated incident response with adaptive playbooks, red-teaming including AI attack scenarios (deepfake, AI phishing, adversarial ML), training for organizations on recognizing AI threats, and advisory on building AI governance policies.

With over 200 clients, 500 completed projects, and a 98% retention rate, nFlo combines experience with innovation. Our response time of under 15 minutes means that in the event of an incident — even one involving AI — the organization has immediate support from experts who understand both traditional and AI-native threats.

Summary

  • AI is an arms race — both attackers and defenders use the same technologies. An organization without AI-powered defense is at a structural disadvantage.
  • Deepfake is a real business threat — videoconference attacks, audio CEO fraud, and fake candidates in recruitment. Traditional identity verification is insufficient.
  • AI-powered spearphishing — 135% higher success rate than traditional phishing. Personalization at mass scale eliminates typical warning signs.
  • Defensive AI transforms SOC — anomaly detection, UEBA, automated triage, and AI copilots reduce MTTR by 60-80% and address the alert fatigue problem.
  • Adversarial ML is a new vector — attackers can manipulate AI-based defensive systems by “training” them on malicious data.
  • Human in the loop remains critical — AI augments, it does not replace. Strategic decisions, business context, and atypical incidents still require human judgment.
  • Preparation requires a multi-layered approach — technology (AI tools), processes (verification procedures, incident response), and people (training, awareness).

Frequently Asked Questions

How do attackers use AI in cyberattacks?

Main offensive AI applications: deepfake audio/video for impersonation (CEO fraud), AI-powered spearphishing with personalization at mass scale, automated vulnerability discovery and exploitation, evasion techniques bypassing detection systems, and polymorphic malware generation.

How does AI help defend against cyberattacks?

Defensive AI applications: anomaly detection in network traffic and user behavior (UEBA), automated threat hunting, alert prioritization in SOC, automated incident response (SOAR), predictive threat analysis, and real-time malware analysis.

Will AI replace security analysts?

Not in the foreseeable future. AI augments analyst work — it automates Tier 1 triage, reduces alert fatigue, and accelerates analysis. However, strategic decisions, atypical incidents, and threat intelligence still require human judgment. The optimal model is “AI + human in the loop.”

What are the biggest AI threats in cybersecurity in 2026?

Top 2026 threats: real-time deepfake (videoconference attacks), AI-powered social engineering at mass scale, automated vulnerability chaining, adversarial ML attacking defensive systems, and weaponized agentic AI (autonomous attacking agents).

How can you prepare an organization for AI threats?

Key steps: deploying AI-powered defensive tools, training on recognizing deepfake and AI phishing, identity verification procedures (e.g., codewords), monitoring internal AI usage, and red-teaming using AI.
