Shadow AI and Human‑Driven GenAI Risk: Why Organisations Need Human‑Centric AI Governance in 2026

Feb 28

Team OutThink

Generative AI has rapidly entered the workplace, but governance and security controls have not kept pace. Across industries, employees are adopting AI tools to improve productivity, often using personal accounts or unapproved platforms outside organisational oversight. This phenomenon, commonly known as “shadow AI,” mirrors earlier shadow IT trends but introduces significantly greater risk because conversational interfaces encourage direct sharing of sensitive information. Surveys show that a majority of enterprise employees using generative AI have entered confidential company data into public systems, including customer details, financial information, and proprietary project material.

The scale of this exposure is growing quickly. Corporate data submissions to AI tools increased dramatically in recent years, with sensitive content such as source code, research data, and customer information increasingly appearing in prompts and file uploads. At the same time, many employees access these tools through personal accounts, obscuring activity from IT and security teams and creating compliance blind spots.

Because AI outputs can be reused, shared, or integrated into business processes, a single risky interaction can propagate widely across documents, codebases, or decisions. Effective GenAI risk management therefore requires shifting from purely technical defences to human-centric governance that addresses how employees actually use AI in real workflows.

This article explores how everyday employee use of generative AI is creating new enterprise risks, from shadow AI adoption and sensitive data exposure to hidden policy violations. It explains why workers bypass approved tools, how routine workflows can become data-loss pathways, and what organisations can do to manage human risk in AI adoption.

How Generative AI Quietly Spread Across the Enterprise

Generative AI has entered organisations at a speed that governance structures were not designed to match. In many cases, adoption did not begin with formal IT rollouts but with employees and business units experimenting with publicly available tools to complete everyday tasks such as drafting documents, analysing data, or writing code. As a result, usage often became widespread before policies, monitoring systems, or risk assessments were introduced.

Enterprise telemetry illustrates how quickly this shift occurred. In a single year, GenAI usage increased by roughly 200%, while prompt activity grew by about 500%, with tools being used across multiple departments at once. These patterns indicate that generative AI moved rapidly from experimentation into routine operations.

Network data further shows the breadth of the ecosystem that employees are accessing. One report identified more than 6,500 distinct GenAI domains and over 3,000 AI applications observed in enterprise traffic, creating a highly fragmented landscape that is difficult to manage centrally.
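Security teams can begin to gauge this footprint from telemetry they already collect. The sketch below is a minimal, illustrative example: the domain list, log format, and field layout are hypothetical stand-ins, not any specific vendor's schema. It tallies requests and outbound bytes to known GenAI domains from proxy-style log lines:

```python
from collections import Counter

# Hypothetical: a curated set of known GenAI domains, e.g. sourced from a
# threat-intelligence feed. Real enterprise traffic spans thousands of domains.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "poe.com"}

def shadow_ai_summary(log_lines):
    """Summarise GenAI traffic from log lines of the (assumed) form
    '<timestamp> <user> <domain> <bytes_out>'."""
    hits = Counter()       # requests per GenAI domain
    bytes_out = Counter()  # outbound bytes per user to GenAI services
    for line in log_lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip malformed lines rather than failing
        _ts, user, domain, size = parts
        if domain in GENAI_DOMAINS:
            hits[domain] += 1
            bytes_out[user] += int(size)
    return hits, bytes_out

logs = [
    "2026-02-01T09:00 alice chat.openai.com 5120",
    "2026-02-01T09:05 bob claude.ai 2048",
    "2026-02-01T09:06 alice chat.openai.com 1024",
    "2026-02-01T09:10 carol intranet.example.com 300",
]
hits, bytes_out = shadow_ai_summary(logs)
```

Even a rough count like this gives governance teams a baseline to track over time, which matters more than any single snapshot.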

Gartner research on GenAI risk underscores how much of this activity occurs outside formal oversight. A survey conducted by Gartner of cybersecurity leaders found that 69% of organisations suspect or have evidence of employees using prohibited public GenAI tools. This rapid uptake of unsanctioned tools is identified as a major contributor to security and compliance risk.

Generative AI is no longer experimental; it is already embedded in everyday work across departments. But widespread use alone is not the primary risk. The real issue begins with what employees are actually putting into these systems.

How Routine AI Use Is Exposing Sensitive Data

For most employees, generative AI tools feel like just another productivity application, not a place where sensitive information could escape the organisation. As a result, confidential material is often entered into these systems during normal work activities. Surveys indicate how common this behaviour has become. One large study found that 57% of employees who use GenAI admitted to entering confidential company data into public tools, including customer records, internal documents, proprietary plans, and financial information.

The types of information shared show that exposure is not limited to a single category of data. Employees reported entering personal or employee details (31%), project or product information (29%), customer data (21%), and financial information (11%). This suggests that multiple forms of sensitive business information are being handled through external AI systems.

Risk increases further when users upload files rather than short prompts. Documents, spreadsheets, and source code can contain large volumes of confidential material in a single interaction. Analysis of enterprise activity found that more than 20% of uploaded files to generative AI tools included sensitive corporate data.
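One mitigation is a lightweight check on text before it reaches an external tool. The sketch below is illustrative only: the patterns and names are simplified examples, and production data-loss-prevention products use far richer detection than a few regular expressions.

```python
import re

# Illustrative patterns only; real DLP engines combine many more signals.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text):
    """Return the names of any sensitive-data patterns found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A prompt that looks routine but carries credentials and personal data:
prompt = "Please debug this: user=jane.doe@example.com key=sk_AbC123xyz7890QRst"
findings = flag_sensitive(prompt)
```

A check like this could sit in a browser extension or proxy and warn the user before submission, reinforcing policy at the moment of risk rather than after the fact.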

The same Gartner study notes that employees are using public GenAI tools in ways existing policies and controls were not designed to manage, including unsafe handling of sensitive information. The research identifies unmanaged human interaction with GenAI as a growing driver of organisational risk.

Much of this sensitive data exposure happens through tools organisations may not even know are being used. To understand the scale of the problem, it is important to look beyond approved platforms and examine the hidden layer of AI activity taking place across the enterprise.

Shadow AI: The Invisible Layer of Enterprise Activity

Not all enterprise AI use happens through approved platforms. A significant portion occurs outside official oversight, a phenomenon commonly referred to as “shadow AI.” This includes employees using public or unsanctioned tools for work tasks without informing IT or security teams. Research suggests this behaviour is both widespread and deliberate. One survey found that about 59% of employees use unapproved AI tools at work, and many acknowledge keeping this usage hidden from their employers.

Personal accounts play a major role in enabling this activity. Instead of logging in through enterprise-managed systems, employees often access AI services using private email accounts, which bypass corporate logging, monitoring, and contractual protections. Security reporting indicates that roughly 68% of workers use personal logins for work-related AI tasks, effectively placing these interactions outside organisational visibility.

On 29 January 2026, a high-profile incident demonstrated how this risk can materialise even at the highest levels of cybersecurity leadership. Madhu Gottumukkala, the acting director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), reportedly uploaded sensitive government contracting documents into a public version of ChatGPT. Although the files were not classified, they were marked “For Official Use Only,” indicating they were not intended for public disclosure. The uploads triggered automated security alerts within federal systems and prompted an internal review by the Department of Homeland Security to assess potential exposure.

Because shadow AI activity is hidden, organisations often discover it only after problems occur. Enterprise telemetry shows measurable consequences, with companies reporting an average of more than 200 GenAI-related policy violations per month, including incidents involving source code and regulated data exposure.

Gartner identifies unmanaged use of public generative AI tools as a growing blind spot for security teams, noting that employees are adopting these tools faster than organisations can govern them. This creates situations where sensitive activities occur beyond established controls. Shadow AI does not emerge in a vacuum. Employees turn to unofficial tools for specific reasons, often tied to how work is structured and what resources are available to them. Understanding those motivations is key to addressing the problem effectively.

Why Employees Bypass Approved Tools

When employees use unapproved AI tools, it is rarely an attempt to sidestep security controls. More often, it is simply the fastest way to get work done. Generative AI can dramatically reduce the time needed for writing, analysis, coding, and routine tasks, so people reach for whatever tool helps them move forward, especially when deadlines are tight or workloads are high. Surveys consistently show that productivity and efficiency are the main drivers behind AI use in the workplace.

Approved enterprise tools, however, do not always offer the same capabilities or ease of use as widely available consumer applications. If official solutions feel slower, more restricted, or poorly integrated into everyday workflows, employees naturally gravitate toward alternatives that allow them to complete tasks with less friction. Over time, this creates a gap between what organisational policy expects and what real work actually requires, a gap that becomes difficult to close through enforcement alone.

Limited training makes the situation worse. Only about 24% of employees report receiving mandatory guidance on safe AI use, leaving most people to rely on their own judgment about what is acceptable. Many are not even sure what the rules are: around 44% say they do not know whether their organisation has an AI policy at all, which makes consistent compliance unlikely.

As generative AI adoption has accelerated, organisations have struggled to provide clear, practical guidance at the same pace, leaving employees to make day-to-day decisions about tool use on their own, a situation that increases the likelihood of risky behaviour even when intentions are not malicious.

Even when the reasons are understandable, the technology itself introduces risks that traditional controls were never designed to handle. Generative AI changes how information is shared, processed, and stored, creating exposure pathways that did not previously exist.

Why GenAI Creates Unique Data Exposure Pathways

Generative AI creates a different kind of risk, mainly because of how people use it. Instead of filling out structured fields or uploading carefully selected data, employees interact with these tools by describing problems in their own words. That usually means sharing context, sometimes a lot of it, so the system can give a useful answer. This shift from structured input to open conversation has effectively widened the organisational attack surface, because everyday problem-solving can now involve sending sensitive details outside the company without anyone intending to do so.

These systems also reward more information with better results. To refine a draft, debug code, or analyse a situation, users tend to include background such as internal processes, client details, or proprietary ideas. Security agencies refer to this pattern as “conversational oversharing,” where the design of the interface itself encourages disclosure that would rarely happen in traditional software.

What happens to that information afterwards is another concern. Public AI providers may retain prompts and responses for purposes like debugging, safety checks, or improving their models, meaning the data can persist outside organisational control long after the interaction ends. Much of this transfer occurs through routine work activities such as pasting text to draft reports, analysing documents, or generating code, effectively turning normal productivity tasks into a channel for data exfiltration.

The risk does not come only from standalone AI tools. Many everyday applications now include embedded AI features that may send user inputs to external services without triggering traditional monitoring systems, further reducing visibility into how data is being handled.

These pathways are not merely theoretical vulnerabilities. When sensitive information flows through them, the impact can be felt across legal, operational, and financial dimensions of the organisation.

Organisational Consequences of Human-Driven AI Risk

The risks associated with generative AI are no longer theoretical; they are showing up as real business problems. A major concern is that these risks are driven less by the technology itself and more by how employees use it. Rapid adoption has introduced new ways for sensitive information to leave the organisation, influence decisions, or create compliance exposure, often through routine work activities rather than deliberate misconduct.

One immediate consequence is regulatory and legal risk. When employees submit personal data, customer information, or other regulated material to external AI systems, organisations may unknowingly breach privacy obligations or contractual requirements. Because these interactions can occur outside approved platforms, they may not be visible until after the exposure has already happened.

Intellectual property is another area of concern. Employees frequently use AI tools to debug code, refine product ideas, or analyse internal documents, which can lead to proprietary information being shared externally. Global intellectual property bodies caution that AI interactions can become an unintended channel for disclosure of protected material.

Operational impacts can be just as significant. Decisions influenced by inaccurate or fabricated AI outputs can affect reports, forecasts, or communications, introducing risks that are difficult to detect until consequences emerge. At the same time, organisations are reporting a steady rise in policy violations linked to AI use, with enterprise telemetry indicating hundreds of incidents per month as adoption expands.

If the consequences are significant, the next challenge is identifying where the risk is most likely to originate. In practice, exposure is closely tied to patterns of behaviour rather than specific tools alone.

Behavioural Indicators of High-Risk AI Use

Not everyone who uses generative AI creates the same level of risk. What matters far more is how people use it. Gartner’s cybersecurity research points out that exposure increases when employees interact with AI tools outside approved controls, making behaviour a stronger predictor of risk than the technology itself.

One common pattern is heavy reliance on public AI tools instead of company-approved systems. These platforms often sit completely outside corporate visibility, so anything entered into them may leave the organisation without logging, monitoring, or contractual protection. Security reports show employees are accessing thousands of different AI services from corporate networks, many of which security teams don’t actively manage.

Risk also rises when employees use personal accounts for work. Logging into AI tools with private email addresses bypasses enterprise safeguards entirely, meaning organisations cannot see what data is being shared or how it is used. Surveys suggest this is a common practice across workplaces.

Another concern is uploading or pasting sensitive information without checking what it contains. Documents and codebases can hold far more confidential material than users realise, and studies show a significant portion of files submitted to AI tools include sensitive data.

Finally, problems escalate when AI outputs are used directly in important decisions without verification. Gartner notes that unmanaged human use of GenAI can influence business processes in ways organisations cannot easily control.

Recognising risky behaviours is only useful if organisations can respond to them effectively. The focus, therefore, shifts from identifying the problem to reducing exposure in ways that still allow employees to benefit from AI.

Reducing Shadow AI and Human Risk

Trying to eliminate shadow AI entirely is rarely effective. Employees can access public tools from any browser or personal device, so outright bans often push usage further underground rather than stopping it. Gartner’s cybersecurity research notes that generative AI adoption is moving faster than traditional control mechanisms, which means organisations need approaches that manage use rather than assume it can be prevented.

A more practical starting point is to ensure approved tools genuinely support how employees complete their day-to-day work. If an enterprise AI solution helps someone draft emails, analyse spreadsheets, prepare reports, or write code as quickly and conveniently as public tools, they are far less likely to look elsewhere. When the official option fits naturally into individual workflows, behaviour shifts without the need for constant enforcement. Clear policies also matter, but they must be specific enough to guide real decisions: which tools are allowed, what data can be shared, and who is accountable for safe use. Vague statements about “using AI responsibly” do little to change behaviour.

Training is another key piece, especially when it reflects real work scenarios rather than abstract rules. Surveys show employees respond better to concrete examples of safe and unsafe use, and organisations that provide this kind of guidance see stronger compliance. Experts emphasise that governance and education must develop alongside adoption for AI to be used safely at scale.

Finally, visibility is essential. Organisations cannot manage risks they cannot see, so many are shifting from blocking tools to monitoring how data moves to and from AI services. This approach recognises that generative AI is already embedded in workflows and focuses on reducing harm rather than trying to eliminate use altogether.
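A monitor-first posture can be expressed as a simple policy decision. The sketch below is a hypothetical example of the logic, not a feature of any particular security product: sanctioned tools are allowed, known public GenAI services are logged and only blocked when sensitive data is detected, and everything else falls through to existing controls.

```python
# Hypothetical domain lists; a real deployment would source these from
# configuration or a threat-intelligence feed.
SANCTIONED = {"copilot.example-enterprise.com"}
KNOWN_GENAI = {"chat.openai.com", "claude.ai"}

def egress_decision(domain, contains_sensitive_data):
    """Decide how to handle an outbound AI request under a monitor-first
    policy: block only when sensitive data is headed to an unsanctioned tool."""
    if domain in SANCTIONED:
        return "allow"
    if domain in KNOWN_GENAI:
        # Visibility over prohibition: log routine use, block only exposure.
        return "block" if contains_sensitive_data else "allow_and_log"
    return "allow"  # non-AI traffic is handled by existing controls
```

The design choice here mirrors the argument in the text: the "allow_and_log" branch preserves productivity while still giving security teams the visibility that outright bans destroy.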

Tactical measures can reduce immediate risks, but long-term resilience requires a broader approach. Sustainable AI adoption depends on governance models that account for human behaviour, organisational culture, and evolving technology.

Toward a Human-Centric AI Governance Model

As generative AI becomes woven into everyday work, organisations are discovering that technical controls alone cannot address the risks. Governance increasingly needs to focus on how people actually use these tools: their habits, their decisions, and the context in which AI fits into daily tasks. The NIST AI Risk Management Framework reflects this shift, emphasising that organisations should foster a “culture of risk management” in which policies, technical safeguards, and operational practices reinforce one another rather than operate in isolation.

In practical terms, this means moving away from attempts to prohibit AI entirely and toward managing its safe use. Because employees can access public tools independently, blanket bans tend to be ineffective and may simply drive usage out of sight. Guidance from industry and advisory bodies stresses that controls must align with real workflows; if security measures make work significantly harder, they are likely to be bypassed.

Transparency also plays a crucial role. Encouraging employees to disclose when and how they use AI enables organisations to understand exposure and provide support, whereas secrecy prevents effective risk management. Integrating AI oversight into existing cybersecurity and compliance programs helps avoid fragmented governance structures that are difficult to maintain.

The broader pattern across organisations is that risks arise primarily from human interaction with AI systems rather than from the technology alone, making behaviour and culture central to effective governance. Viewed this way, employees are not only the source of potential risk but also the key to managing it, as they are the ones who ultimately decide how AI is used in practice.

That's a wrap!

Generative AI has transformed employees from passive system users into active participants in data processing, knowledge creation, and decision support. The dominant vulnerability is no longer the software itself but how people interact with it under real-world pressures. Organisations that address AI risk through human-centric governance, rather than purely technical controls, will be far better positioned to capture its benefits while avoiding invisible, large-scale exposure.

Sources

  1. https://www.probablypwned.com/article/netskope-shadow-ai-genai-data-violations-2026
  2. https://www.businesswire.com/news/home/20250804371445/en/Menlo-Securitys-2025-Report-Uncovers-68-Surge-in-Shadow-Generative-AI-Usage-in-the-Modern-Enterprise
  3. https://www.gartner.com/en/newsroom/press-releases/2025-11-19-gartner-identifies-critical-genai-blind-spots-that-cios-must-urgently-address0
  4. https://www.techmonitor.ai/ai-and-automation/57-enterprise-employees-input-confidential-data-ai-tools
  5. https://dataconomy.com/2025/02/28/57-percent-of-employees-expose-sensitive-data-to-genai
  6. https://www.axios.com/2025/07/31/workers-company-secrets-chatgpt
  7. https://outthink.io/gartner-cybersecurity-insights/
  8. https://cybernews.com/ai-news/ai-shadow-use-workplace-survey
  9. https://www.businesswire.com/news/home/20250226490609/en/TELUS-Digital-Survey-Reveals-Enterprise-Employees-Are-Entering-Sensitive-Data-Into-AI-Assistants-More-Than-You-Think
  10. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
  11. https://cloudsecurityalliance.org/artifacts/security-implications-of-chatgpt
  12. https://www.wipo.int/about-ip/en/artificial_intelligence
  13. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
  14. https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2023/ai-governance-framework.pdf
  15. https://www.linkedin.com/pulse/executive-guide-ai-governance-building-trust-from-data-himanshu-patni
  16. https://www.scworld.com/news/shadow-ai-on-the-rise-sensitive-data-input-by-workers-up-156
  17. https://www.telusdigital.com/about/newsroom/telus-digital-survey-reveals-enterprise-employees-use-of-shadow-ai
  18. https://www.ndtv.com/world-news/trumps-indian-origin-cyber-chief-madhu-gottumukkala-uploaded-critical-files-on-chatgpt-report-10903996/amp/1
Rory Attwood
11/03/2025

How Adaptive Security Awareness Training Drives Better Cybersecurity Outcomes: The Science

Read More about AI-Native Cybersecurity Human Risk Management
Quishing: When QR Codes Become Cyber Traps - Your Essential Guide to Protection
Roberto Ishmael Pennino
10/03/2025

Quishing: When QR Codes Become Cyber Traps - Your Essential Guide to Protection

Read More about AI-Native Cybersecurity Human Risk Management
Domain Spoofing: The Cyber Trick You Can’t Afford to Ignore
Roberto Ishmael Pennino
10/03/2025

Domain Spoofing: The Cyber Trick You Can’t Afford to Ignore

Read More about AI-Native Cybersecurity Human Risk Management
PIPEDA Compliance: Why PIPEDA Training is Important
Roberto Ishmael Pennino
21/02/2025

PIPEDA Compliance: Why PIPEDA Training is Important

Read More about AI-Native Cybersecurity Human Risk Management
CCPA Training: Building a Culture of Privacy and Compliance
Roberto Ishmael Pennino
10/02/2025

CCPA Training: Building a Culture of Privacy and Compliance

Read More about AI-Native Cybersecurity Human Risk Management
Data Privacy Week: How Convention 108 Paved the Way for Modern Privacy Laws
Roberto Ishmael Pennino
31/01/2025

Data Privacy Week: How Convention 108 Paved the Way for Modern Privacy Laws

Read More about AI-Native Cybersecurity Human Risk Management
TISAX Training: Strengthening Automotive Information Security and Compliance
Roberto Ishmael Pennino
27/01/2025

TISAX Training: Strengthening Automotive Information Security and Compliance

Read More about AI-Native Cybersecurity Human Risk Management
GDPR Training: Building a Culture of Compliance
Roberto Ishmael Pennino
20/01/2025

GDPR Training: Building a Culture of Compliance

Read More about AI-Native Cybersecurity Human Risk Management
What Is DORA? DORA Training for Compliance
Dr. Charlotte Jupp
20/01/2025

What Is DORA? DORA Training for Compliance

Read More about AI-Native Cybersecurity Human Risk Management
Risk Quantification for Cybersecurity Human Risk Management
Lev Lesokhin
13/12/2024

Risk Quantification for Cybersecurity Human Risk Management

Read More about AI-Native Cybersecurity Human Risk Management
Adaptive SAT: The Future Is Now
Roberto Ishmael Pennino
12/11/2024

Adaptive SAT: The Future Is Now

Read More about AI-Native Cybersecurity Human Risk Management
NIST Recommends New Guidelines for Password Security
Roberto Ishmael Pennino
11/11/2024

NIST Recommends New Guidelines for Password Security

Read More about AI-Native Cybersecurity Human Risk Management
Empowering Organizations with Adaptive Security Awareness Training
Roberto Ishmael Pennino
07/11/2024

Empowering Organizations with Adaptive Security Awareness Training

Read More about AI-Native Cybersecurity Human Risk Management
Why Humans Should Be the New Frontline in Cyber Defense
Roberto Ishmael Pennino
06/11/2024

Why Humans Should Be the New Frontline in Cyber Defense

Read More about AI-Native Cybersecurity Human Risk Management
Behavioral Analytics Are Changing Cybersecurity
Roberto Ishmael Pennino
04/11/2024

Behavioral Analytics Are Changing Cybersecurity

Read More about AI-Native Cybersecurity Human Risk Management
Cybersecurity Awareness Month 2024: Your Security Journey Doesn't End Here
Roberto Ishmael Pennino
01/11/2024

Cybersecurity Awareness Month 2024: Your Security Journey Doesn't End Here

Read More about AI-Native Cybersecurity Human Risk Management
Cybersecurity Awareness Training for Remote Workforces
Roberto Ishmael Pennino
25/10/2024

Cybersecurity Awareness Training for Remote Workforces

Read More about AI-Native Cybersecurity Human Risk Management
Would You Skip an Update if You Knew What It Could Cost You?
Roberto Ishmael Pennino
24/10/2024

Would You Skip an Update if You Knew What It Could Cost You?

Read More about AI-Native Cybersecurity Human Risk Management
Why Every Cyber Strategy Fails Without This Element
Roberto Ishmael Pennino
22/10/2024

Why Every Cyber Strategy Fails Without This Element

Read More about AI-Native Cybersecurity Human Risk Management
Your Password Isn't Enough: Why Your Digital Life Needs Multifactor Authentication Today
Roberto Ishmael Pennino
21/10/2024

Your Password Isn't Enough: Why Your Digital Life Needs Multifactor Authentication Today

Read More about AI-Native Cybersecurity Human Risk Management
Is Your Cybersecurity Working From Home Too?
Roberto Ishmael Pennino
18/10/2024

Is Your Cybersecurity Working From Home Too?

Read More about AI-Native Cybersecurity Human Risk Management
Human Risk Management Gets Adaptive
Lev Lesokhin
08/10/2024

Human Risk Management Gets Adaptive

Read More about AI-Native Cybersecurity Human Risk Management
Your Cybersecurity Is Only as Strong as Your People
Roberto Ishmael Pennino
08/10/2024

Your Cybersecurity Is Only as Strong as Your People

Read More about AI-Native Cybersecurity Human Risk Management
The Email That Could Cost You Everything: Your Essential Guide to Recognizing Phishing in 2024
Roberto Ishmael Pennino
07/10/2024

The Email That Could Cost You Everything: Your Essential Guide to Recognizing Phishing in 2024

Read More about AI-Native Cybersecurity Human Risk Management
How Ready Is Your Workforce for a Real Phishing Attack?
Roberto Ishmael Pennino
01/10/2024

How Ready Is Your Workforce for a Real Phishing Attack?

Read More about AI-Native Cybersecurity Human Risk Management
What is Cybersecurity Human Risk Management? What You Need to Know
Lev Lesokhin
23/09/2024

What is Cybersecurity Human Risk Management? What You Need to Know

Read More about AI-Native Cybersecurity Human Risk Management
Engagement Strategies for Cybersecurity Human Risk Management
Lev Lesokhin
16/08/2024

Engagement Strategies for Cybersecurity Human Risk Management

Read More about AI-Native Cybersecurity Human Risk Management
Enhance Your Phishing Training With Outthink
Lavinia Manocha
02/08/2024

Enhance Your Phishing Training With Outthink

Read More about AI-Native Cybersecurity Human Risk Management
Adaptive Security Awareness Training for Frontline Workers
Lavinia Manocha
26/07/2024

Adaptive Security Awareness Training for Frontline Workers

Read More about AI-Native Cybersecurity Human Risk Management
The Role of Security Awareness Training After IT Outages
Lev Lesokhin
26/07/2024

The Role of Security Awareness Training After IT Outages

Read More about AI-Native Cybersecurity Human Risk Management
Human Risk Management's Eight Dimensions of Secure Behavior Segmentation
Lev Lesokhin
25/07/2024

Human Risk Management's Eight Dimensions of Secure Behavior Segmentation

Read More about AI-Native Cybersecurity Human Risk Management
State-Sponsored Phishing Attacks Target 40,000 Corporate Users: What This Means for Protecting Your Business
Lev Lesokhin
18/07/2024

State-Sponsored Phishing Attacks Target 40,000 Corporate Users: What This Means for Protecting Your Business

Read More about AI-Native Cybersecurity Human Risk Management
Adaptive Security Awareness Training: Unlearning and Relearning Routines
Lev Lesokhin
10/07/2024

Adaptive Security Awareness Training: Unlearning and Relearning Routines

Read More about AI-Native Cybersecurity Human Risk Management
Did You Think Your Password Was Secure? Let’s Talk Password Security
Lev Lesokhin
24/05/2024

Did You Think Your Password Was Secure? Let’s Talk Password Security

Read More about AI-Native Cybersecurity Human Risk Management
Rethinking Security Awareness: Towards a Cybersecurity Human Risk Management Framework
Lev Lesokhin
23/05/2024

Rethinking Security Awareness: Towards a Cybersecurity Human Risk Management Framework

Read More about AI-Native Cybersecurity Human Risk Management
Password Security: Why the UK is Banning Generic Passwords
Lev Lesokhin
17/05/2024

Password Security: Why the UK is Banning Generic Passwords

Read More about AI-Native Cybersecurity Human Risk Management
Instagram Security Awareness Training: A Step-by-Step Guide
Lev Lesokhin
10/05/2024

Instagram Security Awareness Training: A Step-by-Step Guide

Read More about AI-Native Cybersecurity Human Risk Management
Cybersecurity Human Risk Management Forum Kicks Off in London
Lev Lesokhin
18/04/2024

Cybersecurity Human Risk Management Forum Kicks Off in London

Read More about AI-Native Cybersecurity Human Risk Management
Gamification Can Enhance Security Awareness Training – Badges and Leaderboards Are Just the First Step
Rory Attwood
31/01/2024

Gamification Can Enhance Security Awareness Training – Badges and Leaderboards Are Just the First Step

Read More about AI-Native Cybersecurity Human Risk Management