
Shadow AI and Human‑Driven GenAI Risk: Why Organisations Need Human‑Centric AI Governance in 2026
Feb 28

Generative AI has rapidly entered the workplace, but governance and security controls have not kept pace. Across industries, employees are adopting AI tools to improve productivity, often using personal accounts or unapproved platforms outside organisational oversight. This phenomenon, commonly known as “shadow AI,” mirrors earlier shadow IT trends but introduces significantly greater risk because conversational interfaces encourage direct sharing of sensitive information. Surveys show that a majority of enterprise employees using generative AI have entered confidential company data into public systems, including customer details, financial information, and proprietary project material.
The scale of this exposure is growing quickly. Corporate data submissions to AI tools increased dramatically in recent years, with sensitive content such as source code, research data, and customer information increasingly appearing in prompts and file uploads. At the same time, many employees access these tools through personal accounts, obscuring activity from IT and security teams and creating compliance blind spots.
Because AI outputs can be reused, shared, or integrated into business processes, a single risky interaction can propagate widely across documents, codebases, or decisions. Effective GenAI risk management therefore requires shifting from purely technical defences to human-centric governance that addresses how employees actually use AI in real workflows.
This article explores how everyday employee use of generative AI is creating new enterprise risks, from shadow AI adoption and sensitive data exposure to hidden policy violations. It explains why workers bypass approved tools, how routine workflows can become data-loss pathways, and what organisations can do to manage human risk in AI adoption.
How Generative AI Quietly Spread Across the Enterprise
Generative AI has entered organisations at a speed that governance structures were not designed to match. In many cases, adoption did not begin with formal IT rollouts but with employees and business units experimenting with publicly available tools to complete everyday tasks such as drafting documents, analysing data, or writing code. As a result, usage often became widespread before policies, monitoring systems, or risk assessments were introduced.
Enterprise telemetry illustrates how quickly this shift occurred. In a single year, GenAI usage increased by roughly 200%, while prompt activity grew by about 500%, with tools being used across multiple departments at once. These patterns indicate that generative AI moved rapidly from experimentation into routine operations.
Network data further shows the breadth of the ecosystem that employees are accessing. One report identified more than 6,500 distinct GenAI domains and over 3,000 AI applications observed in enterprise traffic, creating a highly fragmented landscape that is difficult to manage centrally.
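To make that fragmentation concrete, here is a minimal Python sketch of how a security team might surface GenAI destinations in an exported web-proxy log. The CSV layout, the "host" column, and the genai_domains.txt feed are illustrative assumptions, not a reference to any particular vendor's telemetry.

```python
import csv
from collections import Counter

def load_genai_domains(path: str) -> set:
    """Load a line-delimited list of known GenAI service domains (assumed feed)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def summarise_genai_traffic(proxy_log_csv: str, genai_domains: set) -> Counter:
    """Count requests per GenAI domain in a proxy log export with a 'host' column."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            parts = host.split(".")
            # Match the host or any parent domain against the feed.
            for i in range(len(parts) - 1):
                candidate = ".".join(parts[i:])
                if candidate in genai_domains:
                    hits[candidate] += 1
                    break
    return hits

if __name__ == "__main__":
    domains = load_genai_domains("genai_domains.txt")  # assumed file names
    for domain, count in summarise_genai_traffic("proxy_log.csv", domains).most_common(20):
        print(f"{domain}\t{count}")
```

Even a rough summary like this tends to reveal far more distinct services in active use than most teams expect.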
Gartner research on GenAI risk underscores how much of this activity occurs outside formal oversight. A Gartner survey of cybersecurity leaders found that 69% of organisations suspect or have evidence of employees using prohibited public GenAI tools. This rapid uptake of unsanctioned tools is identified as a major contributor to security and compliance risk.
Generative AI is no longer experimental; it is already embedded in everyday work across departments. But widespread use alone is not the primary risk. The real issue begins with what employees are actually putting into these systems.
How Routine AI Use Is Exposing Sensitive Data
For most employees, generative AI tools feel like just another productivity application, not a place where sensitive information could escape the organisation. As a result, confidential material is often entered into these systems during normal work activities. Surveys indicate how common this behaviour has become. One large study found that 57% of employees who use GenAI admitted to entering confidential company data into public tools, including customer records, internal documents, proprietary plans, and financial information.
The types of information shared show that exposure is not limited to a single category of data. Employees reported entering personal or employee details (31%), project or product information (29%), customer data (21%), and financial information (11%). This suggests that multiple forms of sensitive business information are being handled through external AI systems.
Risk increases further when users upload files rather than short prompts. Documents, spreadsheets, and source code can contain large volumes of confidential material in a single interaction. Analysis of enterprise activity found that more than 20% of uploaded files to generative AI tools included sensitive corporate data.
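As a rough illustration of what such analysis involves, the following Python sketch shows a pre-submission check that flags obviously sensitive content before a prompt or file is sent to an external tool. The patterns and sample text are illustrative assumptions, not a complete data-loss-prevention rule set.

```python
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_like": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "confidential_marking": re.compile(
        r"\b(confidential|internal only|for official use only)\b", re.I
    ),
}

def scan_text(text: str) -> dict:
    """Return a count of matches per pattern; any match means 'review before sending'."""
    findings = {}
    for name, pattern in SENSITIVE_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[name] = len(matches)
    return findings

if __name__ == "__main__":
    sample = "Customer: jane.doe@example.com, card 4111 1111 1111 1111 - CONFIDENTIAL"
    print(scan_text(sample))  # flags the email, the card-like number, and the marking
```

In practice, checks like this tend to sit in a gateway or enterprise AI proxy rather than relying on individual users to run them.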
Gartner's research points in the same direction, noting that employees are using public GenAI tools in ways existing policies and controls were not designed to manage, including unsafe handling of sensitive information. It identifies unmanaged human interaction with GenAI as a growing driver of organisational risk.
Much of this sensitive data exposure happens through tools organisations may not even know are being used. To understand the scale of the problem, it is important to look beyond approved platforms and examine the hidden layer of AI activity taking place across the enterprise.
Shadow AI: The Invisible Layer of Enterprise Activity
Not all enterprise AI use happens through approved platforms. A significant portion occurs outside official oversight, a phenomenon commonly referred to as “shadow AI.” This includes employees using public or unsanctioned tools for work tasks without informing IT or security teams. Research suggests this behaviour is both widespread and deliberate. One survey found that about 59% of employees use unapproved AI tools at work, and many acknowledge keeping this usage hidden from their employers.
Personal accounts play a major role in enabling this activity. Instead of logging in through enterprise-managed systems, employees often access AI services using private email accounts, which bypass corporate logging, monitoring, and contractual protections. Security reporting indicates that roughly 68% of workers use personal logins for work-related AI tasks, effectively placing these interactions outside organisational visibility.
On 29 January 2026, a high-profile incident demonstrated how this risk can materialise even at the highest levels of cybersecurity leadership. Madhu Gottumukkala, the acting director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), reportedly uploaded sensitive government contracting documents into a public version of ChatGPT. Although the files were not classified, they were marked “For Official Use Only,” indicating they were not intended for public disclosure. The uploads triggered automated security alerts within federal systems and prompted an internal review by the Department of Homeland Security to assess potential exposure.
Because shadow AI activity is hidden, organisations often discover it only after problems occur. Enterprise telemetry shows measurable consequences, with companies reporting an average of more than 200 GenAI-related policy violations per month, including incidents involving source code and regulated data exposure.
Gartner identifies unmanaged use of public generative AI tools as a growing blind spot for security teams, noting that employees are adopting these tools faster than organisations can govern them. This creates situations where sensitive activities occur beyond established controls.
Shadow AI does not emerge in a vacuum. Employees turn to unofficial tools for specific reasons, often tied to how work is structured and what resources are available to them. Understanding those motivations is key to addressing the problem effectively.
Why Employees Bypass Approved Tools
When employees use unapproved AI tools, it is rarely an attempt to sidestep security controls. More often, it is simply the fastest way to get work done. Generative AI can dramatically reduce the time needed for writing, analysis, coding, and routine tasks, so people reach for whatever tool helps them move forward, especially when deadlines are tight or workloads are high. Surveys consistently show that productivity and efficiency are the main drivers behind AI use in the workplace.
Approved enterprise tools, however, do not always offer the same capabilities or ease of use as widely available consumer applications. If official solutions feel slower, more restricted, or poorly integrated into everyday workflows, employees naturally gravitate toward alternatives that allow them to complete tasks with less friction. Over time, this creates a gap between what organisational policy expects and what real work actually requires, a gap that becomes difficult to close through enforcement alone.
Limited training makes the situation worse. Only about 24% of employees report receiving mandatory guidance on safe AI use, leaving most people to rely on their own judgment about what is acceptable. Many are not even sure what the rules are: around 44% say they do not know whether their organisation has an AI policy at all, which makes consistent compliance unlikely.
As generative AI adoption has accelerated, organisations have struggled to provide clear, practical guidance at the same pace, leaving employees to make day-to-day decisions about tool use on their own, a situation that increases the likelihood of risky behaviour even when intentions are not malicious.
Even when the reasons are understandable, the technology itself introduces risks that traditional controls were never designed to handle. Generative AI changes how information is shared, processed, and stored, creating exposure pathways that did not previously exist.
Why GenAI Creates Unique Data Exposure Pathways
Generative AI creates a different kind of risk, mainly because of how people use it. Instead of filling out structured fields or uploading carefully selected data, employees interact with these tools by describing problems in their own words. That usually means sharing context, sometimes a lot of it, so the system can give a useful answer. This shift from structured input to open conversation has effectively widened the organisational attack surface, because everyday problem-solving can now involve sending sensitive details outside the company without anyone intending to do so.
These systems also reward more information with better results. To refine a draft, debug code, or analyse a situation, users tend to include background such as internal processes, client details, or proprietary ideas. Security agencies refer to this pattern as “conversational oversharing,” where the design of the interface itself encourages disclosure that would rarely happen in traditional software.
What happens to that information afterwards is another concern. Public AI providers may retain prompts and responses for purposes like debugging, safety checks, or improving their models, meaning the data can persist outside organisational control long after the interaction ends. Much of this transfer occurs through routine work activities such as pasting text to draft reports, analysing documents, or generating code, effectively turning normal productivity tasks into a channel for data exfiltration.
The risk does not come only from standalone AI tools. Many everyday applications now include embedded AI features that may send user inputs to external services without triggering traditional monitoring systems, further reducing visibility into how data is being handled.
These pathways are not merely theoretical vulnerabilities. When sensitive information flows through them, the impact can be felt across legal, operational, and financial dimensions of the organisation.
Organisational Consequences of Human-Driven AI Risk
The risks associated with generative AI are no longer theoretical; they are showing up as real business problems. A major concern is that these risks are driven less by the technology itself and more by how employees use it. Rapid adoption has introduced new ways for sensitive information to leave the organisation, influence decisions, or create compliance exposure, often through routine work activities rather than deliberate misconduct.
One immediate consequence is regulatory and legal risk. When employees submit personal data, customer information, or other regulated material to external AI systems, organisations may unknowingly breach privacy obligations or contractual requirements. Because these interactions can occur outside approved platforms, they may not be visible until after the exposure has already happened.
Intellectual property is another area of concern. Employees frequently use AI tools to debug code, refine product ideas, or analyse internal documents, which can lead to proprietary information being shared externally. Global intellectual property bodies caution that AI interactions can become an unintended channel for disclosure of protected material.
Operational impacts can be just as significant. Decisions influenced by inaccurate or fabricated AI outputs can affect reports, forecasts, or communications, introducing risks that are difficult to detect until consequences emerge. At the same time, organisations are reporting a steady rise in policy violations linked to AI use, with enterprise telemetry indicating hundreds of incidents per month as adoption expands.
If the consequences are significant, the next challenge is identifying where the risk is most likely to originate. In practice, exposure is closely tied to patterns of behaviour rather than specific tools alone.
Behavioural Indicators of High-Risk AI Use
Not everyone who uses generative AI creates the same level of risk. What matters far more is how people use it. Gartner’s cybersecurity research points out that exposure increases when employees interact with AI tools outside approved controls, making behaviour a stronger predictor of risk than the technology itself.
One common pattern is heavy reliance on public AI tools instead of company-approved systems. These platforms often sit completely outside corporate visibility, so anything entered into them may leave the organisation without logging, monitoring, or contractual protection. Security reports show employees are accessing thousands of different AI services from corporate networks, many of which security teams don’t actively manage.
Risk also rises when employees use personal accounts for work. Logging into AI tools with private email addresses bypasses enterprise safeguards entirely, meaning organisations cannot see what data is being shared or how it is used. Surveys suggest this is a common practice across workplaces.
Another concern is uploading or pasting sensitive information without checking what it contains. Documents and codebases can hold far more confidential material than users realise, and studies show a significant portion of files submitted to AI tools include sensitive data.
Finally, problems escalate when AI outputs are used directly in important decisions without verification. Gartner notes that unmanaged human use of GenAI can influence business processes in ways organisations cannot easily control.
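One way to operationalise these indicators is a simple scoring heuristic that highlights usage patterns needing closer support. The Python sketch below uses assumed weights and field names; it is a triage aid for prioritising training and tooling, not a disciplinary mechanism.

```python
from dataclasses import dataclass

@dataclass
class GenAIUsageProfile:
    uses_unsanctioned_tools: bool   # public tools outside the approved list
    uses_personal_account: bool     # private email login for work tasks
    uploads_unreviewed_files: bool  # files or pastes not checked for sensitive data
    reuses_output_unverified: bool  # AI output fed into decisions without review

def risk_score(profile: GenAIUsageProfile) -> int:
    """Score 0-10; higher means the usage pattern warrants closer support and review."""
    score = 0
    score += 3 if profile.uses_unsanctioned_tools else 0
    score += 3 if profile.uses_personal_account else 0
    score += 2 if profile.uploads_unreviewed_files else 0
    score += 2 if profile.reuses_output_unverified else 0
    return score

if __name__ == "__main__":
    analyst = GenAIUsageProfile(True, True, False, True)
    print(risk_score(analyst))  # 8 -> prioritise guidance and better tooling, not blame
```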
Recognising risky behaviours is only useful if organisations can respond to them effectively. The focus, therefore, shifts from identifying the problem to reducing exposure in ways that still allow employees to benefit from AI.
Reducing Shadow AI and Human Risk
Trying to eliminate shadow AI entirely is rarely effective. Employees can access public tools from any browser or personal device, so outright bans often push usage further underground rather than stopping it. Gartner’s cybersecurity research notes that generative AI adoption is moving faster than traditional control mechanisms, which means organisations need approaches that manage use rather than assume it can be prevented.
A more practical starting point is to ensure approved tools genuinely support how employees complete their day-to-day work. If an enterprise AI solution helps someone draft emails, analyse spreadsheets, prepare reports, or write code as quickly and conveniently as public tools, they are far less likely to look elsewhere. When the official option fits naturally into individual workflows, behaviour shifts without the need for constant enforcement. Clear policies also matter, but they must be specific enough to guide real decisions: which tools are allowed, what data can be shared, and who is accountable for safe use. Vague statements about “using AI responsibly” do little to change behaviour.
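To illustrate what "specific enough" can look like, the sketch below expresses an AI-use policy as structured data rather than a vague statement. The tool names, data categories, and owners are placeholders, not recommendations.

```python
# Illustrative policy structure only; every value is a placeholder assumption.
AI_USE_POLICY = {
    "approved_tools": ["enterprise-copilot", "internal-llm-gateway"],
    "prohibited_tools": ["public chatbots accessed through personal accounts"],
    "data_allowed": ["public marketing copy", "anonymised internal drafts"],
    "data_prohibited": ["customer records", "source code", "financial data", "personal data"],
    "accountability": {
        "policy_owner": "CISO office",
        "exceptions_approved_by": "data protection officer",
        "violations_reported_to": "security operations",
    },
    "review_cycle_months": 6,
}
```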
Training is another key piece, especially when it reflects real work scenarios rather than abstract rules. Surveys show employees respond better to concrete examples of safe and unsafe use, and organisations that provide this kind of guidance see stronger compliance. Experts emphasise that governance and education must develop alongside adoption for AI to be used safely at scale.
Finally, visibility is essential. Organisations cannot manage risks they cannot see, so many are shifting from blocking tools to monitoring how data moves to and from AI services. This approach recognises that generative AI is already embedded in workflows and focuses on reducing harm rather than trying to eliminate use altogether.
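A minimal sketch of that monitoring posture is shown below: rather than blocking, it raises an alert when an outbound request to a known GenAI destination appears to carry sensitive content. The event shape, the destination list, and the keyword check are simplified assumptions.

```python
import re

# Assumed example destinations and a deliberately crude sensitivity hint.
GENAI_DESTINATIONS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
SENSITIVE_HINT = re.compile(r"\b(confidential|customer|ssn|api[_ ]?key)\b", re.I)

def review_events(events) -> list:
    """Return alert records for GenAI-bound requests with sensitive-looking payloads.

    Each event is assumed to be a dict with 'user', 'host', and 'payload' keys.
    """
    alerts = []
    for event in events:
        host = event.get("host", "").lower()
        if host in GENAI_DESTINATIONS and SENSITIVE_HINT.search(event.get("payload", "")):
            alerts.append({
                "user": event.get("user"),
                "host": host,
                "action": "notify_security_and_user",  # educate and follow up, not auto-block
            })
    return alerts

if __name__ == "__main__":
    sample = [{"user": "a.lee", "host": "chat.openai.com",
               "payload": "summarise this confidential contract"}]
    print(review_events(sample))
```

Alerts that prompt a conversation with the user also tend to surface why the approved tool was bypassed, which feeds directly back into the guidance and tooling discussed above.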
Tactical measures can reduce immediate risks, but long-term resilience requires a broader approach. Sustainable AI adoption depends on governance models that account for human behaviour, organisational culture, and evolving technology.
Toward a Human-Centric AI Governance Model
As generative AI becomes woven into everyday work, organisations are discovering that technical controls alone cannot address the risks. Governance increasingly needs to focus on how people actually use these tools: their habits, their decisions, and the context in which AI fits into daily tasks. The NIST AI Risk Management Framework reflects this shift, emphasising that organisations should foster a “culture of risk management” in which policies, technical safeguards, and operational practices reinforce one another rather than operate in isolation.
In practical terms, this means moving away from attempts to prohibit AI entirely and toward managing its safe use. Because employees can access public tools independently, blanket bans tend to be ineffective and may simply drive usage out of sight. Guidance from industry and advisory bodies stresses that controls must align with real workflows; if security measures make work significantly harder, they are likely to be bypassed.
Transparency also plays a crucial role. Encouraging employees to disclose when and how they use AI enables organisations to understand exposure and provide support, whereas secrecy prevents effective risk management. Integrating AI oversight into existing cybersecurity and compliance programs helps avoid fragmented governance structures that are difficult to maintain.
The broader pattern across organisations is that risks arise primarily from human interaction with AI systems rather than from the technology alone, making behaviour and culture central to effective governance. Viewed this way, employees are not only the source of potential risk but also the key to managing it, as they are the ones who ultimately decide how AI is used in practice.
That's a wrap!
Generative AI has transformed employees from passive system users into active participants in data processing, knowledge creation, and decision support. The dominant vulnerability is no longer the software itself but how people interact with it under real-world pressures. Organisations that address AI risk through human-centric governance, rather than purely technical controls, will be far better positioned to capture its benefits while avoiding invisible, large-scale exposure.
Sources
- https://www.probablypwned.com/article/netskope-shadow-ai-genai-data-violations-2026
- https://www.businesswire.com/news/home/20250804371445/en/Menlo-Securitys-2025-Report-Uncovers-68-Surge-in-Shadow-Generative-AI-Usage-in-the-Modern-Enterprise
- https://www.gartner.com/en/newsroom/press-releases/2025-11-19-gartner-identifies-critical-genai-blind-spots-that-cios-must-urgently-address0
- https://www.techmonitor.ai/ai-and-automation/57-enterprise-employees-input-confidential-data-ai-tools
- https://dataconomy.com/2025/02/28/57-percent-of-employees-expose-sensitive-data-to-genai
- https://www.axios.com/2025/07/31/workers-company-secrets-chatgpt
- https://outthink.io/gartner-cybersecurity-insights/
- https://cybernews.com/ai-news/ai-shadow-use-workplace-survey
- https://www.businesswire.com/news/home/20250226490609/en/TELUS-Digital-Survey-Reveals-Enterprise-Employees-Are-Entering-Sensitive-Data-Into-AI-Assistants-More-Than-You-Think
- https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
- https://cloudsecurityalliance.org/artifacts/security-implications-of-chatgpt
- https://www.wipo.int/about-ip/en/artificial_intelligence
- https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2023/ai-governance-framework.pdf
- https://www.linkedin.com/pulse/executive-guide-ai-governance-building-trust-from-data-himanshu-patni
- https://www.scworld.com/news/shadow-ai-on-the-rise-sensitive-data-input-by-workers-up-156
- https://www.telusdigital.com/about/newsroom/telus-digital-survey-reveals-enterprise-employees-use-of-shadow-ai
- https://www.ndtv.com/world-news/trumps-indian-origin-cyber-chief-madhu-gottumukkala-uploaded-critical-files-on-chatgpt-report-10903996/amp/1