
Cybersecurity
What are the risks of Shadow AI, and how can leaders build a culture of safe AI use?
One of the biggest blunders in recent memory dates back to March and April of 2023, when Samsung engineers in South Korea accidentally uploaded proprietary source code to ChatGPT. The incident quickly prompted a temporary company-wide ban on generative AI on work devices and triggered internal investigations and a tightening of internal controls. A classic accidental, avoidable case of Shadow AI that, luckily, didn’t result in large-scale losses.
But in December 2024, the Italian data protection authority closed an investigation into how generative AI applications use personal data. It fined OpenAI 15 million euros after finding that the company had combed through and used people’s personal data to train its LLM. It did so without an adequate legal basis, violating transparency principles and its information obligations towards users.
If Samsung hadn’t initiated the ban or hadn’t picked up on the error, what’s to say that its proprietary information wouldn’t have ended up in the data used to train an LLM? When company data is at risk, having the right tools and controls around you is paramount.
Employees are already bringing their own AI tools to work, according to Microsoft’s 2024 Work Trend Index, and many surveys show that a large share of workers admit to feeding sensitive work data into public AI platforms without their employers’ approval. This article will address these issues: what Shadow AI is, the driving factors behind it, and ways to combat it.
What is Shadow AI?
Shadow AI is the unauthorised or ungoverned use of AI tools inside business workflows. This includes employees using personal LLM accounts, like ChatGPT, to draft emails; browser AI extensions acting on corporate data; and AI features inside mainstream SaaS products, like Canva, Notion, or Grammarly, being used to process company information. It isn’t the work of hostile external actors; more often than not it’s well-intentioned employees trying to work faster or be helpful. Most industries don’t restrict the software tools their employees use, so employees tend to reach for the tools they’re comfortable with, without a proper understanding of how that could affect the workplace. The harm is more often caused by the negligence of a well-intentioned employee than by an outside attack.
Is Shadow AI growing too quickly?
There are several reasons why Shadow AI is a rising problem in the workplace, and most of them stem from a lack of understanding and guidance, which is easily fixable.
Here are the top 5 reasons:
- Accessibility: Many AI tools are free, frictionless, and very easy to use or install. So, employees can sign up in seconds and get the benefit they’re looking for. Remember, close to 75% of workers bring their own AI tools.
- Embedded features: AI is popping up inside everyday SaaS applications, like Canva and Grammarly, and detection by IT teams hasn’t caught up yet.
- Productivity pressure: Teams are generally measured on speed and output, and AI tools help shorten cycles and deliver quicker wins.
- Cultural factors: Hybrid or remote working environments make it harder to micromanage employees, and the temptation to use shortcuts is higher.
- Fear of missing out (FOMO) and talent dynamics: Nobody wants to fall behind, and if everyone uses these tools to be more efficient, not using them could mean losing your job.
What are the risks of Shadow AI?
I’m sure you can already think of a few risks. Here are the main ones we know about:
Data leakage and IP exposure
Every unsupervised AI query is a potential breach. When employees feed customer lists, technical diagrams, or proprietary code into a public AI, that data can be stored indefinitely and may even surface in responses to other users. That’s what happened when Samsung’s semiconductor engineers inadvertently leaked critical code and confidential notes to ChatGPT, information the company could not retract. Such leaks may violate NDAs or export controls and give competitors a direct look at trade secrets. In effect, each prompt carries added risk because the data may also be used to train AI models that others access.
Regulatory non-compliance
Shadow AI often sidesteps the ways data is governed, like when employees input personal or sensitive data into AI without proper consent. Under the EU’s General Data Protection Regulation (GDPR), mishandling personal data can trigger fines up to 4% of global turnover, and upcoming laws like the EU AI Act will impose strict transparency and audit requirements on AI systems, which is almost impossible to enforce on rogue tools. Non-compliance might also mean violating sector rules, like HIPAA, if patient data is exposed via AI. Regulators have shown that they will issue multi-million-euro penalties for breaches like these, which leaves your business open to huge risks.
Hallucinations and misinformation
AI tools are known to “hallucinate”, producing plausible but incorrect information, and users often pass that misinformation on when they assume the output is correct. If employees act on these outputs unchecked, business decisions may be based on false premises. According to Gartner, these hallucinations can compromise decision-making and, in turn, harm the company. Imagine an employee using ChatGPT for compliance advice or market analysis: a fabricated answer could misinform strategy, lead to regulatory filings based on fake data, or even result in legal liability.
Embedded bias
AI models often reflect biases in their training data, which can inadvertently skew outcomes. One Stanford study found that when ChatGPT generated hypothetical resumes, it systematically portrayed women as younger and less experienced and men as older and more experienced. In a recruitment or credit-scoring setting, biases like that can lead to discrimination or unfair decisions. Because Shadow AI is unsupervised, these biases may go unchecked, potentially exposing the organisation to ethical and legal risks.
Lack of understanding
Most generative AIs are “black boxes” with internal reasoning that’s opaque, even to experts. IBM warns that without transparency, organisations struggle to audit how AI arrived at a conclusion. In a regulated environment, if something like an automated report or analysis is used in a decision and turns out to be wrong, the company may be unable to explain or justify it to auditors or customers.
The missing audit trail of AI decisions means there’s no evidence of human rationale in the process, which can undermine compliance with laws that require accountability for automated decisions.
Contaminating the supply chain
Shadow AI could carelessly introduce third-party code or dependencies into a system without any vetting process. A developer might use an unverified open-source AI code assistant to auto-generate code, which could pull in latent vulnerabilities or even code snippets that belong to someone else.
Content produced by generative AI might also infringe on copyrighted works or violate open licences. Businesses should move forward with caution and get the training needed to avoid third-party licence violations.
Reputation damage and cost
Shadow AI incidents can be incredibly costly. Aside from the regulatory fines, there are direct remediation costs like forensics, legal fees, or credit monitoring, and other soft costs. According to a Cybsafe and National Cybersecurity Alliance (NCA) study of over 7,000 people across multiple demographics, 38% of workers share sensitive work information with AI without their employer knowing, and more than 50% of participants hadn’t received any training on safely using AI. A PR fallout from a leaked secret or AI-generated mistake could destroy a brand.
Larger target area
Unmanaged AI tools can create blind spots for security teams; according to the CSA, any external AI service that connects to corporate data becomes a new attack vector. Each Shadow AI tool is another way for hackers to exploit the business, and training against that is vital.
Equip your workforce with the judgement they need to use AI responsibly.
Download the AI Literacy Report
How is governing Shadow AI different from Shadow IT?
Shadow IT is the use of unsanctioned software or cloud services, like personal Dropbox or unapproved SaaS apps, without the employer’s knowledge, and it has been on everyone’s radar for a while. Shadow AI, by contrast, is far more nuanced and comes with unique challenges. It is much harder to see, because a ChatGPT query or an embedded AI feature doesn’t leave much of a trail, making it harder to follow than unauthorised apps trying to get into a system designed to keep them out.
Palo Alto Networks explains that Shadow AI is “a GenAI security risk focused on unauthorised AI tools”, which can impact decisions in unpredictable ways. You’ll need access to various tools and controls, because new AI tools appear daily and operate at the speed of conversational prompts, far outpacing the change controls that worked for IT in the last decade.
Why trying to stop Shadow AI is unrealistic
When you understand human nature, you’ll know that despite the dangers, an outright ban on all AI tools isn’t a winning strategy. When something gets banned, it tends to be driven underground, which makes it even harder to control. Another reason not to ban AI is that it can be highly productive, with 97% of workers from one study believing that AI boosts their productivity levels and makes their job easier.
A blanket prohibition risks a backlash of resentment, or attrition among staff who see peers and competitors gaining an edge with AI and don’t want to fall behind. Tech also moves too fast to police perfectly, with new chatbots, plugins, and AI features in SaaS apps appearing every week; even if you block today’s tools, another one will pop up tomorrow. As the Cloud Security Alliance bluntly puts it, culture and governance are the solution, rather than elimination. In short, trying to stamp out Shadow AI entirely will only stifle innovation and create conflict. The goal should be enabling these tools safely rather than suppressing them, so that employees can keep using AI, equipped with guidelines and proper judgement, instead of pushing the practice underground.
Equip your workforce with the judgement they need to use AI responsibly.
Download the AI Literacy Report
A people-first approach to governing Shadow AI
Managing Shadow AI effectively means you need to empower your people with the right tools, policies, and training they need. Here are four key focus areas:
1. Define responsible use guardrails
First, create clear policies and an acceptable use framework for AI, like an AI Acceptable Use Policy that classifies AI tools into Approved, Limited, or Prohibited categories and spells out which types of data may be used in AI prompts. These guardrails should be co-created with employees, so they address real needs. Rather than blanket bans, specify that employees can use certain approved AI services, like an enterprise ChatGPT or internal analytics model, while defining what must never be uploaded, like unredacted PII, IP code, etc. Clarity like this makes rules more enforceable and understandable.
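To make this concrete, here is a minimal sketch of how such a policy could be encoded in a machine-readable form. The tool names, data categories, and default rules below are purely hypothetical examples, not a recommendation of any particular product or classification scheme.

```python
# Illustrative sketch only: tool names, data categories, and rules are
# hypothetical examples of how an AI Acceptable Use Policy might be encoded.
from enum import Enum

class ToolStatus(Enum):
    APPROVED = "approved"      # enterprise-grade, contractually covered
    LIMITED = "limited"        # allowed for public or non-sensitive data only
    PROHIBITED = "prohibited"  # never allowed on corporate devices

# Example classifications an organisation might maintain (hypothetical).
TOOL_POLICY = {
    "enterprise-llm.internal": ToolStatus.APPROVED,
    "grammar-assistant-plugin": ToolStatus.LIMITED,
    "personal-chatbot-account": ToolStatus.PROHIBITED,
}

# Data categories that must never leave the organisation via AI prompts.
RESTRICTED_DATA = {"unredacted_pii", "source_code", "customer_lists", "trade_secrets"}

def is_prompt_allowed(tool: str, data_tags: set[str]) -> bool:
    """Return True if sending data with these tags to this tool fits the policy."""
    status = TOOL_POLICY.get(tool, ToolStatus.PROHIBITED)  # unknown tools default to prohibited
    if status is ToolStatus.PROHIBITED:
        return False
    if status is ToolStatus.LIMITED and data_tags & RESTRICTED_DATA:
        return False
    return True

# Example: drafting content containing unredacted PII in a "limited" tool is blocked.
print(is_prompt_allowed("grammar-assistant-plugin", {"unredacted_pii"}))  # False
```

The point of a structure like this is that the same classifications employees read in the policy document can also drive automated checks, so the rules stay consistent wherever they are enforced.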
2. Increase visibility into usage
You can’t secure what you can’t see. Invest in monitoring and discovery tools that identify AI traffic and shadow apps, like deploying scanners and CASBs to flag unusual connections to AI endpoints, or specialised Data Loss Prevention (DLP) that inspects data leaving the network for AI-related patterns. You could also use identity logs and browser plugin analytics to uncover personal AI accounts being accessed on corporate devices. The aim is to map the entire Shadow AI attack surface so you can apply controls wherever they’re needed.
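As a rough illustration, the sketch below shows the kind of check a security team might run over outbound proxy logs. The log format, endpoint domains, and data patterns are all assumptions made for the example; a real deployment would rely on proper CASB or DLP tooling rather than a script like this.

```python
# Illustrative sketch only: the log format and AI endpoint list are assumptions,
# not a specific vendor's product or a complete detection rule set.
import re

# Hypothetical domains associated with public AI services.
AI_ENDPOINT_PATTERNS = [
    re.compile(r"chat\.example-ai\.com"),
    re.compile(r"api\.example-llm\.io"),
]

# Simple patterns that might indicate sensitive data in an outbound request body.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like pattern
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),   # leaked credentials
]

def flag_shadow_ai(log_lines):
    """Yield (line, reasons) for proxy log entries that look like unmanaged AI use."""
    for line in log_lines:
        reasons = []
        if any(p.search(line) for p in AI_ENDPOINT_PATTERNS):
            reasons.append("traffic to public AI endpoint")
        if any(p.search(line) for p in SENSITIVE_PATTERNS):
            reasons.append("possible sensitive data in request")
        if reasons:
            yield line, reasons

# Example usage with two hypothetical proxy log entries.
sample = [
    "user=alice dest=chat.example-ai.com body='summarise Q3 customer_lists.csv'",
    "user=bob dest=intranet.local body='team lunch poll'",
]
for entry, why in flag_shadow_ai(sample):
    print(why, "->", entry)
```

Even a coarse signal like this helps you see where unmanaged AI use is happening, which is the prerequisite for the allow-listing and provisioning steps that follow.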
3. Approve and provision trusted AI tools
Rather than leaving your staff with consumer-grade solutions, give them some sanctioned alternatives that meet security and compliance needs, like the organisations that are rolling out enterprise-grade LLMs. An “allow list” of approved AI tools gives employees safe options while discouraging rogue solutions. If deep customisation is needed, think about hosting private LLMs or fine-tuning models on your own data. By giving employees a secured AI platform with clear usage logs and data controls, you steer Shadow AI toward a controlled environment.
4. AI-focused awareness and training
Always remember that people are your first line of defence; train your staff on AI fundamentals so they can judge what is safe to share and when to be sceptical of outputs. IBM found that 60% of employees said hands-on training would boost their ability to use AI effectively. You should educate staff on privacy risks, like what kinds of data must not go into any AI tool, the concept of hallucinations, and how to use approved tools properly. By focusing on people and processes, organisations can turn an uncontrollable problem into a managed opportunity, where employees stay productive with AI, but do so within a framework that protects the business.
In the end, human-centred risk needs human-focused controls
We won’t solve Shadow AI by pointing fingers or through technology alone. What we need is education. Companies that include AI literacy and ethical guidelines in their culture will manage Shadow AI far better than those that enforce blanket bans. If you start by teaching your staff the judgment and digital skills needed to use AI responsibly, you’re less likely to put the business at risk and you create a culture that understands these risks already. This protects against the compliance and security pitfalls we’ve discussed, but it also turns AI into a competitive asset. If a workforce is confident with AI, it’s a more innovative and efficient workforce. Adopting AI literacy today is an excellent business advantage in the future, and great insurance against future pitfalls related to Shadow AI.











