Cybersecurity

Inside leadership’s misplaced sense of control over AI in the workplace

25 November 2025 · 6 min read

Artificial Intelligence (AI) is embedded in everyday workflows, often invisibly. Employees are experimenting, automating, and optimising in ways that blur the boundary between ingenuity and recklessness. The spreadsheet that once required manual formulas now runs on AI prompts, and the presentation that used to take days can be drafted in minutes with generative tools. Yet many leaders still picture AI as a contained technology: used sparingly, monitored closely, and limited to authorised tools.

That perception is outdated. The real challenge today isn’t whether organisations are “using” AI, but how deeply it’s woven into decision-making, creativity, and communication, often without leadership visibility. The gap between what leaders think they know about AI use and what’s really happening at work is widening fast.

AI literacy is now a baseline skill, like password discipline in the early days of cybersecurity. Without it, even the most security-aware organisations risk blind spots that compromise data and performance. Regaining control starts not with policy, but with understanding.

What SoSafe learnt from talking to CISOs

Conversations with CISOs and senior security leaders across Europe reveal a consistent sense of unease about how much AI has already escaped traditional governance structures. “AI is already here. The question now is whether our employees know how to use it responsibly,” one leader pointed out during a closed-door roundtable hosted by The SASIG Group and SoSafe. Here’s what SoSafe has observed through these discussions and behavioural insights:

Misunderstanding AI’s nature

Many leaders assume employees use AI like Google, typing queries and receiving answers. They don’t always realise that generative tools can retain prompts and may use them to train future models, so anything an employee types in, including sensitive company data, can leave the organisation’s control.

Lack of employee awareness

Employees rarely know what can safely be entered into AI tools. According to a 2024 CybSafe/NCA study, 38% of employees admit to sharing confidential data with AI platforms without approval.

Missing critical thinking skills

CISOs agree that staff must learn to evaluate AI outputs critically. Generative models can hallucinate up to 82% of the time, yet most employees lack the skills to verify outputs.

Overestimating technical barriers

Some executives still think AI literacy requires coding expertise. In reality, it’s about understanding how AI tools process information and where biases or risks arise.

Compliance detachment

The EU AI Act is viewed by many as a compliance checklist, not an educational priority. This reveals a critical disconnect between regulatory compliance and human readiness.

Across the board, SoSafe found that while leaders want responsible AI adoption, few have built the structures to monitor or nurture it. It’s not that they lack intent; they lack visibility.

Equip your workforce with the judgement they need to use AI responsibly.

Download the AI Literacy Report

The state of AI in the workplace

Data paints a striking picture: 92% of companies plan to increase AI investments, yet only 1% describe their AI rollouts as mature. Meanwhile, nearly all employees are familiar with generative AI, and adoption is far higher than leaders assume. McKinsey’s “Who Is Using AI at Work?” found that employees report usage rates three times higher than management believes.

This gap in perception is widening precisely because AI is so accessible. Employees don’t need IT approval to use ChatGPT, Notion AI, or Copilot. Most simply log in with personal accounts. Microsoft’s 2024 Work Trend Index notes that 78% of employees bring their own AI tools to work, and almost 60% rely on unmanaged apps.

With accessibility comes risk. Over 50% of employees cite cybersecurity, inaccuracy, or privacy concerns when using AI, yet lack training on how to mitigate them. And while 71% of employees trust their employer to deploy AI safely, only a small fraction of employers provide guidance that matches that trust.

In short, AI is already transforming work faster than governance can keep up. Policy updates lag behind daily practice, training is optional when it should be foundational, and as AI accelerates, the trust gap between leaders and employees becomes a security risk in its own right.

What’s already out of leaders’ control

Leaders often talk about “controlling” AI, but control is an illusion in this context. Even the most rigorous policies can’t contain tools designed for rapid, decentralised use.

Here’s what’s already outside the perimeter:

  • Shadow AI: Employees bring personal AI accounts into the workplace, using them for brainstorming, translation, summarisation, or coding assistance. 78% of companies already show signs of unapproved AI use, according to Microsoft’s 2024 survey.
  • Consumerisation of AI: Tools like ChatGPT and Gemini evolve monthly, often faster than IT departments can evaluate them. This makes traditional enforcement nearly impossible.
  • Data leaks through good intentions: Employees often paste snippets of real data into prompts, thinking it’s harmless. But those inputs can be stored or reproduced in future outputs, potentially breaching confidentiality (see the screening sketch after this list).
  • Vendor dependency: Many organisations assume that if a vendor labels an AI product as “secure”, it must be so. But trust without verification is risky, especially when model training or data storage locations are opaque.
  • Cultural pressure to innovate: Teams feel encouraged to “move fast with AI”, often without understanding its implications, and innovation races can quickly turn into compliance headaches.
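
That data-leak risk is also the most tractable to reduce. As a minimal sketch, and not a substitute for a proper data loss prevention (DLP) tool, an organisation could screen prompts for obviously sensitive patterns before they reach an external model. Everything below is a hypothetical illustration, not a description of any specific product:

```python
import re

# Illustrative patterns only; a real deployment would rely on a dedicated
# DLP engine with far broader coverage. All names here are hypothetical.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive data detected in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt(
    "Summarise this complaint from jane.doe@example.com about invoice 4411."
)
if findings:
    print("Hold prompt for review; it appears to contain:", ", ".join(findings))
else:
    print("Prompt passed screening.")
```

Even a crude filter like this turns an invisible leak into a teachable moment: the employee learns what not to paste, and the security team learns where guidance is missing.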

The lesson? Control achieved through restriction is fragile. Real control begins with comprehension: understanding how AI behaves, where data ends up, and how humans interact with it.

What leaders can (and must) take control of

While total control is impossible, meaningful influence is not. Leaders can shape how their workforce uses AI safely by replacing fear with literacy.

1. Build AI literacy programmes

Training shouldn’t focus on coding or algorithms, but on human understanding: how AI tools process information, what “hallucination” means, and how to recognise bias. Teach employees about prompt design, data awareness, and critical thinking. These basics prevent most unsafe behaviour.

2. Create clear, evolving guidelines

Adopt an AI Acceptable Use Policy that evolves alongside the technology, as outlined by FairNow. Define what data can be used, where tools are permitted, and when human review is mandatory. Clarity reduces misuse far more effectively than fear.
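
To make that clarity concrete, the core of such a policy can even be expressed as machine-checkable rules. The sketch below is a hypothetical encoding; the tool names, data categories, and review triggers are assumptions for illustration, not recommendations from FairNow:

```python
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    """A deliberately small model of an AI Acceptable Use Policy."""
    approved_tools: set[str] = field(
        default_factory=lambda: {"copilot-enterprise", "internal-llm"})
    banned_data: set[str] = field(
        default_factory=lambda: {"customer_pii", "source_code", "financials"})
    review_required: set[str] = field(
        default_factory=lambda: {"external_communication", "legal_text"})

    def check(self, tool: str, data_category: str, use_case: str) -> str:
        """Classify a proposed use of AI against the policy."""
        if tool not in self.approved_tools:
            return "blocked: tool not approved"
        if data_category in self.banned_data:
            return "blocked: this data category may not be shared"
        if use_case in self.review_required:
            return "allowed, with mandatory human review"
        return "allowed"

policy = AIUsePolicy()
print(policy.check("copilot-enterprise", "marketing_copy", "external_communication"))
# -> allowed, with mandatory human review
```

Writing the rules down this precisely also forces ambiguity out into the open: if a real case can’t be classified, it’s the policy, not the employee, that needs updating.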

3. Embed continuous learning

AI training isn’t a one-off module. It should live within the organisation’s existing learning and development ecosystem, refreshed as models evolve. Regular refresher sessions keep knowledge relevant and encourage curiosity rather than compliance fatigue.

4. Share accountability

Safe AI use isn’t an IT-only issue. Managers should model responsible prompting, review outputs, and encourage open discussions about mistakes. In psychologically safe teams, employees are more likely to raise red flags before problems escalate.

5. Create feedback loops

Use monitoring tools and employee surveys to review how AI is being used across departments. Where gaps emerge, adapt policies in real time. Treat AI literacy as a dynamic, data-driven process, not a static rulebook.

Ultimately, control doesn’t come from command. It comes from competence. Leaders who enable understanding empower their teams to use AI responsibly, without stifling innovation or creativity.

From command to curiosity: the leadership shift AI demands

The rise of generative AI has created a leadership paradox. On one hand, AI can enhance productivity, creativity, and speed. On the other hand, it introduces unpredictable risks. Trying to control AI through restriction alone is like managing the internet in the 1990s: you can’t block progress, only guide it.

AI literacy, then, is the new cyber hygiene. It’s as fundamental as knowing how to spot a phishing email or manage a password. The leaders who recognise this will move from reactive enforcement to proactive empowerment, and they’ll see training not as a compliance exercise but as fundamental learning.

Forward-looking leaders are already taking this path. They’re sitting in on AI workshops alongside junior staff, asking questions, experimenting safely, and acknowledging that awareness, curiosity, and broadening employees’ skill sets are core leadership competencies.

In this new landscape, understanding is control, and curiosity is your organisation’s best defence.

Explore SoSafe’s AI literacy resources and build the foundation for safe innovation. Give employees the skills to recognise bias, question outputs, and protect sensitive data while keeping your organisation compliant. Start building AI readiness now.

Get your AI literacy toolkit
