AI delegation is a security decision. Treat it like one.

Rob Daly · 11 March 2026 · 5 min read

The most consequential AI security decision made in your organisation this week probably wasn’t made by your security team. It was made by someone who wanted their agent to work better, so they gave it a bit more access.

That’s the risk I keep coming back to. Not because the offensive threat isn’t real, it is, but because attackers usually succeed when they find access already waiting for them. With AI agents embedded in workflows, the entry point has shifted. It’s no longer just the inbox. It’s every permission an agent has been granted along the way.

The offensive side is already operational

Not that long ago, agentic AI still felt early: something threat actors were testing rather than relying on. That became harder to argue when reports described an AI system carrying most of the tactical workload in a state-linked intrusion. This is operational use, not experimentation.

Scaling a cyber campaign used to mean adding more people behind keyboards. Agentic AI removes much of that grind. An agent can run phishing campaigns at a scale no human team could match, conduct social reconnaissance that would have taken analysts weeks, sustain multi-channel campaigns for months, and hand off cleanly to the next agent in a chain, all without anyone actively steering it. We’ve seen the tactics before but never at this pace and scale.

Most incident response models assume there’s a bit of breathing room between signal and consequence. Something looks off, it gets triaged, and someone steps in. Agent-driven execution reduces that gap. Once conditions are met, actions trigger automatically and in parallel. When trying again is cheap, attackers don’t need a perfect campaign, they can run lots of decent ones, see what works, and keep adjusting.

Threat intelligence has to work at that speed.

The question is: When something new appears, can you adjust controls fast enough to make the next attempt harder?

That’s the external pressure. It’s not the only one.

Delegation expands authority

The same pattern that created Shadow IT is now playing out with AI agents, except the permissions being granted informally aren’t just access to an unsanctioned app. They’re access to live systems, inboxes, and decision workflows. Most deployments start cautiously. Draft a response. Pull some data. Then it feels natural to let the agent update the record, trigger the workflow, or finish the job on its own.

Authority doesn’t accumulate overnight. It builds through small, successful decisions. An agent drafts such a good response to a message that it seems natural to broaden its permissions and let it send the message too. The output improves, so the change stays.

Before long, agents are running under permission models, granted by trusting humans, designed with human behaviour in mind. Humans hesitate, get interrupted, and sanity-check decisions. Agents simply execute and continue, moving through connected systems with whatever reach their identity allows. If one of those agents is compromised, the damage is defined by that reach. And the more automated the workflow, the faster it unfolds. Because it’s using valid credentials and authorised actions, the activity can look legitimate while it’s happening.

Least privilege for agents is therefore a critical design decision. From where I sit, that means being deliberate about what an agent can actually do. If it only needs to complete a specific task, that’s all it should be able to do, for as long as it needs to do it. And if we need to pull that access back, we should be able to do it without the whole workflow falling over.

I’m wary of how easily “can see” turns into “can act”. If we don’t make those boundaries explicit, the system won’t enforce them for us.
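As a concrete sketch of what a deliberate boundary could look like, here is one hypothetical shape for a task-scoped grant: “can see” and “can act” are separate scopes, access expires with the task, and revocation is a clean switch rather than a workflow-breaking change. The `Grant` class and scope names are invented for illustration, not a reference to any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    """A task-scoped permission: what the agent may do, on what, until when."""
    scopes: frozenset          # "can see" and "can act" are distinct scopes
    expires_at: datetime       # access ends when the task does
    revoked: bool = False      # pulling access back is a flag, not a redesign

    def allows(self, scope: str) -> bool:
        return (not self.revoked
                and scope in self.scopes
                and datetime.now(timezone.utc) < self.expires_at)

# Grant only what the task needs, only for as long as it needs it.
grant = Grant(scopes=frozenset({"tickets:read"}),
              expires_at=datetime.now(timezone.utc) + timedelta(hours=1))

assert grant.allows("tickets:read")        # the agent can see
assert not grant.allows("tickets:update")  # but cannot act

grant.revoked = True                       # access pulled back cleanly
assert not grant.allows("tickets:read")
```

The point of the sketch is the shape, not the code: acting is a separate, explicit grant from seeing, and every grant has an expiry and an off switch from day one.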

I think about this in terms of decision rights. Teams are usually comfortable letting an agent execute while keeping judgement with a person. But as workflows expand, execution starts to include choice. The agent isn’t just doing the task, it’s deciding how to do it within its permissions. At that point, you’ve shifted more authority than you intended, even if nothing looks dramatically different on paper. Most organisations haven’t mapped where that line sits. Until they do, it keeps moving.

Human-in-the-loop is not a safety net

A lot of organisations say, “we keep a human in the loop.” I don’t doubt the intention, but the loop rarely survives the volume. When decisions are generated faster than a person can properly review them, the focus shifts to keeping things moving. The business still expects progress, so approvals become quicker and more routine. And if the reviewer can’t clearly see what the agent did, which tools it used, or what it was about to do next, that review doesn’t help either. At that point, having a human in the loop doesn’t add real control. If you can’t reliably catch issues at the end of the workflow, you have to be more deliberate about how authority is set up at the start.

Think about what happens when an AI agent is triaging customer escalations, flagging contracts for review, or approving outbound communications. At a modest scale, a single agent can surface more decisions in an hour than a reviewer can meaningfully assess in a day. The first thing that goes is scrutiny. The second is accountability. If everything gets approved, nothing is really being reviewed. Which means the control has to move earlier. Not to the review queue, but to the design of the system itself.
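One way to make “control at design time” concrete is to put the decision in the authorisation step itself, so high-risk actions never reach a rubber-stamp queue. A minimal sketch, assuming a simple action-name policy; the action names and risk tiers are invented for illustration:

```python
# Hypothetical pre-execution gate: the policy decides before the agent acts,
# instead of a human reviewing a flood of completed actions afterwards.
GRANTED = {"draft_reply", "classify_ticket"}                # routine, task-scoped
REQUIRES_HUMAN = {"send_external_email", "approve_refund"}  # never auto-approved

def authorise(action: str) -> str:
    """Return 'allow', 'escalate', or 'deny' before anything executes."""
    if action in REQUIRES_HUMAN:
        return "escalate"   # a person makes this decision, with full context
    if action in GRANTED:
        return "allow"
    return "deny"           # not explicitly granted means not possible

assert authorise("draft_reply") == "allow"
assert authorise("approve_refund") == "escalate"
assert authorise("delete_records") == "deny"
```

The design choice is that escalation is rare by construction: the reviewer sees only the decisions that genuinely need judgement, instead of approving everything to keep the queue moving.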

From reaction to design

For years, security awareness has focused on detection at the end. Spot the phish. Report the anomaly. That made sense when the main risk entered through a click.

What I’m seeing now is different. As authority moves into workflows, the real risk shifts upstream. It’s no longer just about whether someone clicks a bad link. It’s about how delegation is set up, what permissions are granted, and what an agent is allowed to do once it’s connected to real systems. Those choices determine what the system can do on your behalf. If they’re treated as routine, access expands step by step without anyone explicitly deciding to widen it.

Part of the issue is how differently people experience these tools. Those who rarely use them underestimate them. Those who use them every day can start to trust them too quickly. In both cases, delegation decisions get made casually. The solution is to make delegation intentional.

Agents defending against other agents is where this is heading. But even then, those systems will operate within the limits we set. The organisations that adapt won’t just be faster at detecting attacks. They’ll have built environments where delegation is treated with the same rigour as any other security control. That means solving both sides of the problem: the technical design of permissions and the human awareness of what those permissions allow.

About the author

Rob Daly
CTO

Rob Daly is Chief Technology Officer at SoSafe, where he leads the technology, product and design vision. With two decades of experience building and scaling technical teams across startups and global organisations, his work spans security, AI adoption and the human side of technology.
