Discover Gartner®
Managing GenAI Risk Through
Security Behavior and Culture Programs
GenAI is changing how work gets done and expanding the human attack surface in the process. As employees use AI to create, decide, and communicate, traditional security awareness can no longer contain the risk. Discover why managing GenAI risk now demands a shift from awareness to secure behavior and culture, without slowing innovation.
Why GenAI Has Broken Traditional Cybersecurity Defenses
Accelerated GenAI adoption has exposed a critical gap in organizational defenses
Employees are using GenAI in ways that existing training, policies, and detection controls were never built to handle, while attackers weaponize AI to make social engineering faster, more convincing, and far harder to spot.
Ask yourself:
- Are your employees prepared to detect and respond to AI‑powered phishing and deepfakes?
- Do you have real visibility into (and control over) shadow AI across your workforce?
- Are your security programs shaping secure GenAI behaviors, or simply increasing awareness without reducing risk?
This Gartner® research helps cybersecurity leaders understand why unmanaged human interaction with GenAI has become a primary driver of cyber risk, and why evolving security behavior and culture programs is critical to closing this gap before attackers exploit it at scale.
OutThink's Takeaways:
Managing GenAI Risk Through Security Behavior and Culture
Gartner analyst guidance on how accelerated GenAI adoption is reshaping the human attack surface, and what cybersecurity leaders must change to reduce employee‑initiated risk. Explore the critical shifts required to protect AI investments while strengthening organizational resilience.
Accelerated GenAI adoption has introduced new, employee‑driven risk that traditional controls cannot contain. Unmanaged use of public GenAI tools, unsafe handling of sensitive data, and shadow AI on corporate devices are expanding the attack surface faster than detection and governance models can adapt. Cybersecurity leaders must now manage how employees interact with AI, not just the technology itself.
Threat actors are using GenAI to power deepfakes, highly personalized phishing, and adaptive social engineering attacks that bypass employee intuition and legacy training. As AI‑assisted malicious content continues to rise, employees can no longer rely on “spot the red flag” techniques. Security programs must evolve to build behavioral resilience against AI‑driven deception.
Employees frequently use personal or unapproved GenAI tools for work, often inputting sensitive or proprietary information without visibility or control. This “shadow AI” introduces significant privacy, IP, and regulatory risk while remaining largely invisible to security teams. Managing GenAI risk now requires clear behavioral guardrails, not just policy statements.
Traditional security awareness programs were not designed for continuous human‑AI interaction. Gartner emphasizes the need to strengthen security behavior and culture programs (SBCPs) to drive secure GenAI practices, reinforce vigilance against AI‑enabled attacks, and embed human oversight into daily AI use. This people‑centric shift is essential to reducing risk without slowing innovation.
Why We Believe This Report Matters for Security Leaders
Source: Gartner, “Cybersecurity Trend: GenAI Breaks Traditional Cybersecurity Awareness Tactics”, by Alex Michaels and Richard Addiscott, January 14, 2026.
GARTNER is a trademark of Gartner, Inc. and/or its affiliates.




