
Cyberthreats
Tackle unsafe prompting habits in your workplace before they turn into a data security problem
In a common scenario, an employee receives an urgent, confidential email with several actions to complete. To save time, they paste the email, or parts of it, into ChatGPT or another LLM. The shortcut speeds up their work, but it also sends sensitive company information outside the organisation’s controlled environment. In some cases, the LLM in use, or others connected to it, could even be trained on that data.
Every prompt sent to an external AI tool adds to the organisation’s attack surface, and studies indicate that around 8.5% of employee prompts contain sensitive or regulated information.

This article outlines how everyday prompting habits create hidden security gaps and what leaders can do to establish safe and reliable AI use across their organisation.
What is prompting?
Prompting means entering a query or instruction into a generative AI tool to receive an output. Asking ChatGPT to summarise a report or pasting a paragraph and requesting bullet points are both examples. In simple terms, a prompt is a request made to an LLM or generative AI system.
Prompts usually contain an instruction or question and may also include context, inputs, or examples. These elements help guide the model. Strong prompts are clear, specific, and give the AI a defined task. Effective prompting can involve specifying format, tone, or context, sometimes referred to as prompt engineering. For example, instead of saying “write about our project”, a more precise prompt would be “summarise the Q3 project report in two bullet points”.
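To make the difference concrete, here is a minimal sketch of how the vague and the specific prompt might each be sent to a model through an API. It assumes the OpenAI Python SDK; the model name and the report_text variable are placeholders for illustration, not a recommendation of any particular tool.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder for report text that has already been checked and anonymised.
report_text = "..."

# Vague prompt: gives the model no task, format, or context to work with.
vague_prompt = "write about our project"

# Specific prompt: defined task, output format, and tone, as described above.
specific_prompt = (
    "Summarise the Q3 project report below in two bullet points, "
    "using a neutral tone suitable for an executive update.\n\n"
    f"Report:\n{report_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```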
However, even a well-crafted prompt can still contain confidential information, which makes careful prompt design essential.
The hidden risk of everyday AI use
Employees often paste internal data into AI tools for convenience, which introduces blind spots in the organisation’s security controls.
A report showed that:
- 8.5% of workplace AI prompts contained sensitive information.
- Of these, 46% involved customer data such as billing details and login credentials.
- 27% included employee information such as personal IDs and payroll records.
- The remainder contained legal, financial, or security-related details.
Routine AI use can unintentionally expose emails, contracts, source code, or financial data if it is not carefully controlled.
Widespread data leakage
One report found that 77% of employees admitted to pasting company information into AI prompts, and 82% had done so from personal accounts. Each query can act as a small data exfiltration event that sits outside traditional Data Loss Prevention (DLP) controls.
Unsanctioned AI tools
Many employees use free or personal AI accounts. One analysis showed that about two-thirds of ChatGPT users rely on free plans, where input data is often reused for model training. In the same study, 54% of sensitive prompts were submitted through ChatGPT’s free tier. Using consumer chatbots without organisational oversight means data may be stored or processed externally, bypassing company governance entirely.
False sense of privacy
Many employees assume AI chats are private and temporary. One survey found that 24% believe their prompts are erased, and 75% said they would still use AI even if every prompt was saved. If chats are retained, it becomes more likely that the data will eventually be used to train the model.

Credentials and secrets
It is not uncommon for employees to paste real secrets into AI tools. In one study, 19% of professionals admitted to entering actual login credentials, which could then appear in AI outputs or logs.
Regulatory and audit risks
Sending personal or regulated data to an external AI system can violate GDPR, HIPAA, or other confidentiality requirements. Even data that appears anonymised can sometimes be re-identified. Once information is sent to an AI tool, the organisation loses the audit trail and cannot reliably delete or retract it.
Training and culture gap
Many organisations still lack clear AI guidance. In a study, 70% of workers reported receiving no formal training on safe AI use, and 44% said their employer has no AI policy at all.
When prompts become the leak surface
Generative AI introduces a new data exfiltration channel. In one study, 26% of professionals had already submitted sensitive company data to an AI tool, 19% had shared credentials, and 38% had entered proprietary product details. Most of these incidents involved free or personal AI services, which is consistent with findings that about two-thirds of ChatGPT users rely on the free tier. Free AI tools commonly reuse prompt data to train their models.
Modern chatbots can also retain information across sessions. Data that appears deleted in one interaction can resurface later due to model memory. This means every ungoverned prompt becomes an outbound data stream beyond corporate oversight. Traditional security controls such as email filters, network monitors, and DLP systems rarely monitor these channels, which increases the risk of untracked data exposure.
Discover how to build AI literacy and create a culture of responsible AI use across your organisation. Download our latest report on AI literacy and workforce training for secure, compliant AI adoption.
Prompt hygiene and reducing exposure
Prompt hygiene means treating every AI query as sensitive and taking the right precautions. Before submitting a prompt, users should remove or mask private information, such as replacing client names or numbers with placeholders like “CLIENT”. Yet 17% of workers say they never anonymise their inputs, even though this simple habit alone could prevent many avoidable leaks.
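As a rough sketch of what this masking step can look like in practice, the snippet below replaces a few common identifier patterns with placeholders before a prompt is sent. The patterns, the CU-prefixed client ID format, and the example text are illustrative assumptions; real tooling would use your organisation’s own identifier formats.

```python
import re

# Illustrative patterns only: adapt to the identifiers your organisation actually uses.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "CLIENT_ID": re.compile(r"\bCU-\d{6}\b"),  # assumed internal client ID format
}

def mask_prompt(text: str) -> str:
    """Replace likely sensitive values with placeholders before the text leaves the organisation."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw = "Please draft a reply to jane.doe@example.com about the invoice for client CU-123456."
print(mask_prompt(raw))
# -> "Please draft a reply to [EMAIL] about the invoice for client [CLIENT_ID]."
```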
It also helps to rely on approved tools. Employees should use company-sanctioned AI platforms that do not retain prompts rather than free consumer chatbots.
A recent study found that 52% of workers had received no training on AI safety. Teaching staff to pause and check each prompt, much like proofreading an email, makes safe prompting an intuitive habit and significantly reduces the size of the leak surface.
Practical steps to secure AI use in the workplace
The most effective way to reduce AI-related risks is to align people, processes, and technology.
Organisations should focus on the following:
- Build AI literacy: Provide clear training on secure AI use, explain which data types are off-limits, and teach staff how to anonymise prompts. Short workshops, cheat sheets, or online modules can bridge the gap. When employees understand why certain information cannot be shared, they are more likely to follow best practices.
- Create clear policies: Define which AI tools are approved and what data is prohibited, such as customer PII, financial information, and source code. Use plain language and real examples, for instance: “never paste a client’s credit card number or internal code into a chatbot”. Clear policies enable employees to use AI confidently and safely.
- Use secure AI platforms: Standardise on enterprise-grade AI services rather than unsanctioned consumer chatbots. Business platforms typically allow data retention controls, while free tools often reuse user inputs for training. Using approved or on-premise systems keeps sensitive prompts off public infrastructure.
- Monitor and enforce: Apply DLP and content controls to AI interactions. Modern security tools can inspect text sent to chatbots or APIs, making it possible to flag prompts containing client IDs or internal project names (a minimal illustration follows this list). Real-time warnings can help prevent risky submissions at the moment they occur.
- Create a security-focused culture: Leaders should champion safe AI use and promote shared responsibility across security, IT, legal, and operational teams. Reinforce that safeguarding data benefits everyone. When employees trust that the organisation manages AI responsibly, they are more likely to follow the guidelines.
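To illustrate the monitoring point above, here is a minimal sketch of the kind of pre-submission check a DLP-style control might run over prompt text. The rule names, the patterns, and the “Project Falcon” codename are hypothetical examples, not any vendor’s actual rule set.

```python
import re

# Illustrative rules; a real DLP tool would ship with far richer, organisation-specific policies.
RULES = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible credential": re.compile(r"(?i)\b(password|api[_ ]?key|token)\b\s*[:=]\s*\S+"),
    "internal project name": re.compile(r"\bProject\s+Falcon\b"),  # hypothetical codename
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of rules the prompt violates, so risky submissions can be flagged or blocked."""
    return [name for name, pattern in RULES.items() if pattern.search(prompt)]

violations = check_prompt("Summarise the Project Falcon roadmap. password: hunter2")
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
```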
When these measures work together, organisations can close the leak surface and enable teams to use generative AI with far greater confidence.
The takeaway for leaders
Generative AI can make people significantly more productive, but only when it is used responsibly. The real insight is that prompting risks stem as much from people and processes as from technology. Employees need to learn to pause and think before they prompt.
Instead of banning AI tools, leaders should build AI literacy, establish clear policies, and introduce simple safeguards. This creates a human-centred approach that lets staff use AI confidently without putting sensitive information at risk. Over time, prompt safety becomes instinctive. By embedding secure prompting into everyday workflows, organisations can realise the benefits of AI while keeping their data protected.
Check out resources to fast-track safe AI adoption across your workforce
Explore our expert resources on rolling out safe and responsible AI use across your organisation. You will find Gartner’s analysis, the AI literacy guide, the webinar replay with actionable steps, and more.












