Cybersecurity 2026–2029: Forecast by Netwrix Security Research Lab

As AI-driven capabilities expand, risk increases accordingly. Coordinated governance of identity and data therefore becomes mission-critical.

Netwrix, a recognized leader in identity and data security solutions, has published its security outlook. The report projects that the next wave of cybersecurity disruption will stem from adversaries scaling identity-based attacks to compromise data security, particularly as agentic AI becomes more prevalent.

The outlook, prepared by the Netwrix Security Research Lab, outlines the trends most likely to reshape cybersecurity between 2026 and 2029. The analysis is based on ongoing research into real-world identity attacks and documented data exposure paths identified by Netwrix researchers.

What’s Most Likely to Reshape Security in 2026

Identity Automation Increases the Interdependency Between Identity and Data Security

By 2026, identity security is expected to rely far more heavily on workflow orchestration and automation across provisioning, token validation, and privilege management. These automated workflows increasingly define who and what can access sensitive data; as a result, failures in identity automation translate directly into data exposure risk.
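To make the link between automation failure and data exposure concrete, here is a minimal sketch of an automated access check. All names, scopes, and the token model are invented for illustration; the point is that omitting either validation step in an automated pipeline would silently grant access.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical token model for illustration; subjects and scopes are invented.
@dataclass
class Token:
    subject: str
    scopes: set
    expires_at: datetime

def can_access(token: Token, required_scope: str, now: datetime) -> bool:
    # Both checks matter: an automation path that skips either one
    # (e.g., never rechecking expiry) becomes a direct data exposure path.
    if now >= token.expires_at:
        return False
    return required_scope in token.scopes

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
token = Token("svc-reporting", {"read:sales"}, now + timedelta(hours=1))
print(can_access(token, "read:sales", now))                       # True
print(can_access(token, "read:hr", now))                          # False
print(can_access(token, "read:sales", now + timedelta(hours=2)))  # False
```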

Adversaries are moving beyond targeting individual credentials. Their focus is shifting toward identity orchestration, federation trust relationships, and misconfigured automation. Because access to critical data repositories begins with identity, unified visibility across identity and data security is essential. This visibility enables detection of misconfigurations, reduction of blind spots, and faster response.

Agentic AI Expands Identity-Driven Data Access at Scale

As AI systems begin to perform tasks autonomously, they depend on identities to access, move, and act on data. It becomes critical to understand which identities AI agents use, what data they can access, and under whose authority they operate.

Without coordinated identity governance and data controls, agentic AI can rapidly magnify data exposure. The interdependence between identity security and data security intensifies as AI-driven automation operates continuously and at scale.
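One common governance pattern that keeps agent access bounded is delegation: an agent acting on behalf of a user gets only the intersection of its own grants and that user's grants. The sketch below illustrates this under invented identities and scopes; it is not a description of any specific product's mechanism.

```python
# Hypothetical sketch of delegated agent access: the agent's effective
# permissions are the intersection of its service-identity grants and the
# delegating user's grants, so it can never exceed the user's authority.
AGENT_GRANTS = {"report-bot": {"read:sales", "read:finance"}}
USER_GRANTS = {"alice": {"read:sales"}, "bob": {"read:finance", "write:finance"}}

def effective_access(agent: str, on_behalf_of: str) -> set:
    return AGENT_GRANTS.get(agent, set()) & USER_GRANTS.get(on_behalf_of, set())

print(sorted(effective_access("report-bot", "alice")))  # ['read:sales']
print(sorted(effective_access("report-bot", "bob")))    # ['read:finance']
```

This also answers the three questions above at a glance: which identity the agent uses, what data it can reach, and under whose authority it operates.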

Cyber Insurance Requires Demonstrable Identity and Data Controls

As identity automation and AI-driven access increase exposure risk, cyber insurers are changing how they evaluate and price risk. Instead of relying solely on periodic questionnaires, insurers are shifting toward continuous validation of identity and data security controls.

Insurers are expected to depend on telemetry that shows how identities access sensitive data in real time. Organizations that demonstrate consistent alignment between identity governance and data protection may obtain more favorable terms. Those lacking visibility are likely to face greater scrutiny.

What Is Unlikely to Reshape Security in 2026

The Primary AI Risk Is Acceleration, Not Full Autonomy

Although AI is already influencing cybersecurity in significant ways, fully autonomous and self-directed AI-driven cyberattacks are unlikely to become the dominant threat in 2026. Recent state-sponsored espionage campaigns show that selective and resource-intensive autonomy is possible under favorable conditions. However, such operations remain supervised by humans, are vulnerable to unreliable feedback, and depend on permissive identity and access environments.

Executing effective, fully autonomous attack campaigns in real enterprise settings remains complex, expensive, and unpredictable. High signal noise, environmental variability, hallucination-prone outputs, operational risk, absence of reliable feedback loops, and substantial infrastructure costs make fully autonomous attacks economically impractical in most scenarios over the next year.

Instead, adversaries will continue using AI to accelerate established techniques such as reconnaissance, impersonation, access abuse, and workflow execution. AI will augment rather than fully replace human decision-making. The more immediate challenge for defenders will be preserving resilience against AI-accelerated attacks by denying the conditions automation relies on, including broad access, clean feedback, and durable reward structures. Strong identity controls and comprehensive data visibility remain the most effective safeguards, even as automation advances.

What’s Next on the Horizon by 2027

AI-Driven Convergence of Systems and Data

AI-based optimization increasingly depends on agents that connect identity systems and data sources previously managed in isolation. These agents operate across multiple systems to execute defined workflows, including accessing applications, identities, and data on behalf of users or teams.

When access conditions or data sensitivity change at any stage of the workflow, governance frameworks must ensure that the AI agent’s permissions remain appropriate. In practice, this requires continuous validation of identity context, access privileges, and policy alignment across interconnected systems. Static and siloed controls are no longer sufficient.
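A minimal sketch of what "continuous validation" means in practice, assuming invented agent names and actions: authorization is re-evaluated at every workflow step rather than once at launch, so a mid-workflow policy change or data reclassification blocks the agent before its next access.

```python
# Hypothetical per-step policy check; all identifiers are invented.
ALLOWED = {("etl-agent", "read:crm"), ("etl-agent", "write:warehouse")}

def permitted(agent: str, action: str) -> bool:
    return (agent, action) in ALLOWED

def run_workflow(agent: str, steps: list) -> list:
    completed = []
    for action in steps:
        if not permitted(agent, action):  # continuous, per-step validation
            raise PermissionError(f"{agent} blocked at {action!r}")
        completed.append(action)
    return completed

print(run_workflow("etl-agent", ["read:crm", "write:warehouse"]))
ALLOWED.discard(("etl-agent", "write:warehouse"))  # data reclassified mid-stream
try:
    run_workflow("etl-agent", ["read:crm", "write:warehouse"])
except PermissionError as exc:
    print("blocked:", exc)
```

A static, launch-time check would have missed the reclassification entirely, which is why siloed, one-shot controls fall short here.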

Data Gains More Embedded Protection

Data is increasingly expected to carry encryption, provenance, and access policies as it moves across users, systems, and environments. Provenance provides context regarding data origin, usage history, and which identities or systems have interacted with it.

Although this approach can mitigate the impact of breaches, inconsistent implementation introduces risks of fragmentation and blind spots. Strong identity context, standardized metadata, and consistent enforcement of policies are necessary to make self-protecting data effective and manageable at scale.
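As a rough illustration of data that carries its own policy and provenance, the sketch below bundles a payload with an access list and a usage log. The envelope structure and all identities are assumptions for this example, not a standardized format.

```python
from dataclasses import dataclass, field

# Hypothetical "self-protecting" envelope: the payload travels with its own
# access policy and a provenance log of which identities opened it and when.
@dataclass
class DataEnvelope:
    payload: str
    allowed_identities: set
    provenance: list = field(default_factory=list)

    def open(self, identity: str, timestamp: str) -> str:
        if identity not in self.allowed_identities:
            raise PermissionError(identity)
        self.provenance.append((identity, timestamp))  # embedded usage history
        return self.payload

env = DataEnvelope("q3-forecast.csv", {"alice", "report-bot"})
env.open("alice", "2026-03-01")
env.open("report-bot", "2026-03-02")
print(env.provenance)  # [('alice', '2026-03-01'), ('report-bot', '2026-03-02')]
```

The fragmentation risk noted above shows up directly in such a scheme: if different systems record provenance in incompatible metadata formats, the embedded history loses its value.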

Looking Ahead to 2028 and 2029: A Key Risk

A Decline in Trust in AI Weakens Security

If economic pressures reduce investment in AI governance and oversight, trust in AI systems may erode, leaving organizations with unmanaged models, undocumented drift, and compliance gaps. Early mapping of identity relationships, data dependencies, and AI ownership will be essential to sustaining resilience.

AI Vendor Instability Becomes a Data and Continuity Risk

As reliance on emerging AI providers grows, determining where organizational data resides and who ultimately controls it becomes more difficult. When prompts, training data, models, and outputs are processed or stored outside the enterprise, retrieving, governing, or even locating that data may become challenging if a provider can no longer operate as expected. This risk will intensify as AI vendors are acquired, restructured, or exit the market.

These developments generate interconnected risks across compliance, security, and business continuity. Without clearly defined data ownership, strong identity controls, and structured exit strategies, organizations may discover that early AI pilots have gradually evolved into persistent data exposure and continuity challenges.
