The following article originally appeared on S McCallum’s blog. It is republished here with the author’s permission.
AI agents and rogue traders pose similar insider threats to employers.
Specifically, we can expect companies to deploy agentic AI at scale and with insufficient oversight. This creates the conditions for a certain flavor of long-term problems, which in turn creates new risk exposure for both the companies involved and anyone dealing with them. A bot and a rogue trader can do significant, and sometimes existential, damage to companies that employ them.
The main difference is scale: rogue traders operate in investment banks, while agentic AI will be deployed across a wide range of companies and industry sectors. Agentic AI may therefore create more problems than rogue traders and put a greater amount of capital at risk.
I call this risk exposure ROT — Rogue Operator Threat — and this document is a brief explanation of what it is and how to address it.
(I almost called it a RAT, with the A standing for “agent,” but then I realized it applied to any kind of automated system. So I expanded the scope to include “operator.”)
To set the stage, let’s take a trip to the trading floor:
Understanding the rogue trader
Rogue trader scandals follow the same story:
- The trader incurs losses due to bad trades.
- They hide those losses while making new trades in an attempt to recover.
- New deals also lose money, digging a deeper hole.
- The cycle repeats.
This cycle continues until they are caught, at which point the bank suffers a significant loss (sometimes in the billions of dollars) and the trader faces legal repercussions.
The story of Barings Bank provides a concrete example. For three years, trader Nick Leeson had been recording fraudulent trades in an attempt to cover his mounting losses. This only became apparent when the Kobe earthquake turned the markets against his recent positions and the losses could no longer be hidden. Leeson’s £800 million ($1.3 billion) hole drove Barings into bankruptcy in just three days.
At this point you may wonder: How can a professional trading operation allow so many bad trades to go undetected? How can a trader falsify records? Aren’t trading floors high-tech operations full of electronic audit trails?
The answer is: it’s complicated.
Trading desks do keep records, yes. But no system is perfect. Every time a fraudulent trading scandal emerges, it turns out that there were lapses in risk controls. A sufficiently motivated trader – especially one desperate to hide his mistakes – will find these loopholes and exploit them, continuing his losing streak in plain sight until he can bring in real money to fill the fake ledgers.
That break-even moment, however, never arrives. For this reason, employers face financial, reputational, and sometimes legal problems.
The ROT threat from AI agents
As with a trader, an AI agent works on behalf of its parent company and is given room to work independently so that it can accomplish its tasks.
The danger is that in the rush to deploy agentic AI, companies are likely to give bots more leeway than necessary. We have already seen cases where bots deleted emails and wiped production databases. There are undoubtedly other stories that never made the news.
At least those issues were detected in near-real time. Businesses facing ROT are exposed to additional, longer-term problems, in which a bot accumulates losses or inflicts greater damage over an extended period. In these cases, problems are detected only by accident and/or too late.
Consider, for example, an agent that creates false data records to reflect (nonexistent) sales orders. This can continue until some external event, such as investor due diligence or a budget review, forces someone to check those records against reality.
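A periodic reconciliation can surface this kind of fabrication long before an external event does. The sketch below is a hypothetical illustration (the record layout, field names, and data source are assumptions, not from the article): it compares order IDs in the agent’s ledger against an independent system of record and flags anything that exists only on the agent’s side.

```python
# Hypothetical reconciliation check: flag records the agent created
# that have no counterpart in an independent system of record.
# The "order_id" field and the ERP export format are assumptions.

def reconcile(agent_records, system_of_record):
    """Return order IDs present in the agent's ledger but absent
    from the independent source -- candidates for fabricated entries."""
    agent_ids = {r["order_id"] for r in agent_records}
    trusted_ids = set(system_of_record)
    return sorted(agent_ids - trusted_ids)

# Example: order "A-103" appears only in the agent's ledger.
agent_ledger = [{"order_id": "A-101"}, {"order_id": "A-102"}, {"order_id": "A-103"}]
erp_orders = ["A-101", "A-102"]
print(reconcile(agent_ledger, erp_orders))  # ['A-103']
```

The key design choice is that the comparison source must be outside the agent’s reach; a check against records the bot itself can edit proves nothing.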
Avoiding ROT: mitigating the threat
How can you narrow your exposure to ROT’s downside risks? Preventive measures are key. Strong risk controls, a narrow scope of authority, and monitoring can catch problems from rogue operators long before they become an existential threat.
In the wake of rogue-trader scandals, trading shops have tightened risk controls and segregation of duties to create a system of checks and balances. (This keeps traders from recording their own fake trades.) Firms also require traders to take time off, since fraudulent activity may surface when the perpetrator isn’t there every day to keep the scheme running.
Adapting these insights to agentic AI, a company can monitor and limit the scope of bot activity (for example, requiring human approval to place more than 10 requests per hour). It can also periodically purge the agent’s memory so that unwanted evolved behaviors don’t accumulate, or swap in entirely new bots to pick up where previous ones left off. And, following the usual refrain of “never let bots run unattended,” the company can assign people to verify everything the bot does. Trust, but verify.
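The rate-limit-plus-approval guardrail mentioned above can be sketched in a few lines. This is a minimal illustration under assumed names (`ApprovalGate`, the 10-per-hour budget, and the decision strings are hypothetical, not from any real agent framework): actions within the hourly budget proceed automatically, and anything beyond it is escalated for human review instead of executing silently.

```python
import time
from collections import deque

class ApprovalGate:
    """Allow at most `limit` agent actions per sliding window; anything
    beyond that is held for human approval instead of running silently."""

    def __init__(self, limit=10, window_seconds=3600):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # times of recently approved actions

    def request(self, action, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return "auto-approved"   # within budget: run the action
        return "needs-human-review"  # over budget: escalate to a person

gate = ApprovalGate(limit=10)
decisions = [gate.request(f"action-{i}", now=1000 + i) for i in range(12)]
print(decisions.count("auto-approved"), decisions.count("needs-human-review"))  # 10 2
```

The point of the design is the failure mode: when the budget runs out, the bot doesn’t error out or keep going, it hands the decision to a human, which is exactly the check a rogue operator would otherwise route around.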
This will not prevent an AI agent from making mistakes. But sufficiently frequent guardrails and checks should limit the scope of the damage a bot inflicts. As with a rogue trader, the ROT problem is not about a single mistake; it’s about letting errors compound undetected.