Bitget and SlowMist Identify Security Risks as AI Trading Agents Execute Autonomous Transactions

VICTORIA, Seychelles — Bitget, recognized as the world's largest Universal Exchange, has partnered with blockchain security firm SlowMist to publish a joint research report examining the security risks that emerge as artificial intelligence systems move from analytical roles to autonomous trade execution in financial markets.

The Shift to Agentic Trading

The report identifies a fundamental transformation occurring in trading technology. As AI systems move beyond advisory functions into direct execution of trades, they enter what researchers term an "agentic phase." This evolution creates entirely new categories of security vulnerabilities that traditional financial security models were never designed to address.

"AI is no longer just interpreting markets; it's actively participating," explained Gracy Chen, CEO of Bitget. "This fundamental shift changes the entire nature of risk assessment. The critical question has evolved from how intelligent these systems are to how safely they can be permitted to operate within financial ecosystems."


Immediate and Irreversible Consequences

According to the research findings, once AI systems move from advisory roles into execution capabilities, errors and security exploits are no longer isolated incidents: they can trigger immediate, irreversible financial consequences. In cryptocurrency markets particularly, where transactions settle almost instantly, a compromised or misdirected AI agent can execute damaging trades faster than any human intervention could prevent.

The report emphasizes that agent-based trading systems introduce novel attack surfaces across multiple technological layers. These vulnerabilities range from model inputs and decision-making processes to execution pathways and capital management protocols.

New Attack Vectors and Systemic Vulnerabilities

Specific risks identified in the research include:

  • Prompt injection attacks that can influence AI decision-making processes
  • Malicious plugins that can alter agent behavior and trading patterns
  • Over-permissioned APIs that can expose capital to unintended or unauthorized actions

These vulnerabilities are further compounded by the always-on nature of autonomous trading agents, which operate continuously without direct human oversight or intervention. The report frames these not as isolated technical issues but as systemic challenges requiring comprehensive security redesign.
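The over-permissioning risk in particular lends itself to a concrete illustration. The sketch below shows one common mitigation pattern, an explicit action allowlist with per-order caps, so that a hijacked or misbehaving agent simply cannot invoke capital-moving operations outside its declared scope. All function and action names here are hypothetical; this is not Bitget's actual API.

```python
# Hypothetical sketch: constraining an agent's API key to an explicit
# allowlist of actions, so an over-permissioned credential cannot expose
# capital to unintended operations. Names and limits are illustrative.

ALLOWED_ACTIONS = {"get_price", "get_balance", "place_limit_order"}

MAX_ORDER_NOTIONAL = 10_000  # per-order cap limits the blast radius of a bad decision


def authorize(action: str, params: dict) -> bool:
    """Reject any action outside the agent's declared scope."""
    if action not in ALLOWED_ACTIONS:
        # Withdrawals, transfers, etc. are never in scope for a trading agent.
        return False
    if action == "place_limit_order" and params.get("notional", 0) > MAX_ORDER_NOTIONAL:
        return False
    return True
```

In this pattern the allowlist lives outside the model entirely, so even a successful prompt injection can only choose among pre-approved, size-bounded actions.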

Architectural Solutions and Security Frameworks

Bitget's proposed approach reflects this paradigm shift in security thinking. The platform has implemented a multi-layered architecture that separates intelligence functions, execution capabilities, and asset authorization into distinct, isolated components. This design significantly reduces the likelihood that any single point of failure could trigger unintended or harmful trades.

The system incorporates permission structures based on least-privilege access principles, with transaction simulation and verification processes introduced before any execution is finalized. These controls ensure that even as AI agents operate with increasing autonomy, their operational scope remains clearly defined and appropriately constrained.
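The separation the report describes can be sketched in code: an intelligence layer proposes trades, an execution layer simulates and verifies them, and an authorization layer holds the only signing power. This is a minimal illustration of the pattern, not Bitget's implementation; every class, method, and threshold below is an assumption.

```python
# Illustrative sketch of isolating intelligence, execution, and asset
# authorization into distinct components, with simulation before any
# execution is finalized. All names and limits are hypothetical.

from dataclasses import dataclass


@dataclass
class Proposal:
    """Output of the intelligence layer: a suggested trade, nothing more."""
    symbol: str
    side: str        # "buy" or "sell"
    quantity: float


class Authorizer:
    """Least-privilege asset layer: can sign only pre-approved symbols."""
    def __init__(self, approved_symbols):
        self.approved = set(approved_symbols)

    def sign(self, p: Proposal) -> bool:
        return p.symbol in self.approved


class Executor:
    """Execution layer: simulates a proposal before requesting a signature."""
    def __init__(self, authorizer: Authorizer, max_qty: float):
        self.authorizer = authorizer
        self.max_qty = max_qty

    def simulate(self, p: Proposal) -> bool:
        # Stand-in for a dry run against order books or on-chain state.
        return 0 < p.quantity <= self.max_qty and p.side in ("buy", "sell")

    def execute(self, p: Proposal) -> str:
        if not self.simulate(p):
            return "rejected: failed simulation"
        if not self.authorizer.sign(p):
            return "rejected: outside authorized scope"
        return "executed"
```

Because the model only ever produces a `Proposal`, a compromised intelligence layer cannot bypass the simulation gate or sign transactions on its own, which is the single-point-of-failure property the report emphasizes.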

Closed-Loop Security Models

SlowMist's analysis reinforces the necessity for what researchers term a "closed-loop security model." This approach addresses risks comprehensively before, during, and after trade execution. The framework incorporates continuous monitoring systems, bounded permission structures, and verifiable transaction flows that move security from a reactive process to an embedded system design principle.
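The before/during/after structure of such a closed loop can be made concrete with a small sketch: a pre-trade check enforces bounded permissions, trades are appended to a verifiable local log during execution, and a post-trade reconciliation halts the agent if reported fills drift from the ledger. The class, thresholds, and halting policy below are illustrative assumptions, not SlowMist's framework.

```python
# Minimal sketch of a closed-loop control: check before, log during,
# reconcile after each trade, and halt the agent on any anomaly.

class ClosedLoopMonitor:
    def __init__(self, daily_limit: float):
        self.daily_limit = daily_limit
        self.spent = 0.0
        self.halted = False
        self.audit_log = []

    def pre_trade(self, notional: float) -> bool:
        """Before: bounded permissions — refuse trades past the daily cap."""
        return not self.halted and self.spent + notional <= self.daily_limit

    def record(self, trade_id: str, notional: float):
        """During: append to a verifiable log as the trade executes."""
        self.spent += notional
        self.audit_log.append((trade_id, notional))

    def post_trade(self, reported_total: float):
        """After: reconcile reported fills with the local ledger; halt on drift."""
        if abs(reported_total - self.spent) > 1e-9:
            self.halted = True
```

The key design choice is that a reconciliation failure disables all future pre-trade checks, making security an embedded property of the loop rather than a reactive, after-the-fact process.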

The findings point toward a broader financial reality in which AI agents become increasingly integrated into trading operations, asset management strategies, and on-chain activities. As this integration deepens, the traditional boundary between user intent and system execution grows harder to define.


In this emerging environment, system reliability will no longer be determined solely by performance metrics or profitability, but by how effectively these intelligent systems can operate within carefully controlled and continuously monitored limits. The research serves as both a warning and a roadmap for financial institutions navigating the complex intersection of artificial intelligence and autonomous trading technologies.