Proofpoint: Half of Global Organizations Experienced AI Incidents

Proofpoint released its 2026 AI and Human Risk Landscape report, which explores the widening gap between how quickly organizations are operationalizing AI and how prepared they are to secure and investigate the risks that follow.

According to the report, 87 percent of organizations have deployed AI assistants beyond the pilot stage, while 76 percent are actively piloting or rolling out autonomous agents. Yet while organizations are investing in AI tools and controls, many cannot confirm those controls are effective: 52 percent are not fully confident their AI security controls would detect a compromised AI, and half of those with controls in place have already experienced a confirmed or suspected AI-related incident.

Further, most organizations report they are not fully prepared to investigate AI-related incidents that span multiple systems and channels; only one-third say they are.

Key findings include:

AI deployment has outpaced security readiness: AI adoption has moved into production faster than governance frameworks have matured.

While 87 percent of organizations have deployed assistants beyond pilot stage and three-quarters are advancing autonomous agents, more than half describe security as catching up, inconsistent or reactive. 42 percent report experiencing a suspicious or confirmed AI-related incident, indicating that exposure is already present in live environments.

Collaboration channels are the primary AI attack surface: AI is expanding the attack surface, enabling threats to spread at machine speed and impact connected workflows.

While email remains the most common threat vector at 63 percent, exposure now extends across third-party SaaS and cloud applications (47 percent), social and messaging platforms (41 percent), and AI assistants or agents (36 percent). Among organizations that experienced an AI-related incident, exposure increases across every channel, including 67 percent in email and 53 percent involving AI systems.

Confidence exceeds control effectiveness: While many organizations have security controls in place, they lack assurance that those controls work.

63 percent of organizations report having AI security coverage in place, yet 52 percent are not fully confident those controls would detect a compromised AI. And more than half of organizations with controls still reported an AI-related incident. Gaps persist in training (47 percent), visibility into AI or agent activity (42 percent) and governance alignment across teams (41 percent).

Investigation readiness lags behind incident reality: When AI-related incidents occur, many organizations struggle to investigate them effectively.

Only one-third of respondents say they are fully prepared to investigate an AI- or agent-related incident, and 41 percent report difficulty correlating threats across channels. As AI-related activity spans email, collaboration platforms and cloud systems, the ability to reconstruct events depends on visibility across connected environments, which many organizations do not yet have.

Tool sprawl is a structural barrier: Fragmentation across security stacks is compounding the challenge, limiting visibility and slowing response when incidents move across systems at machine speed.

94 percent of organizations say managing multiple security tools is at least moderately challenging, and more than half describe it as very or extremely difficult. Respondents cite operational cost pressures (45 percent), integration challenges (42 percent) and difficulty correlating threats (41 percent).

Security architecture becomes a strategic priority as AI scales: More than half of organizations are actively pursuing vendor and tool consolidation, and a majority believe a unified platform is more effective than point solutions.

Over the next 12 months, 61 percent plan to expand AI protections, 56 percent intend to extend collaboration channel coverage and 53 percent expect to move toward a unified platform approach.

Access the full report.