
Legal Context

How existing legal frameworks reinforce the containment approach.

The Authorization Gap

Existing legal frameworks assume that authorization implies human judgment. Agents break that assumption.

Under frameworks like the UK Payment Services Regulations, a payment is "authorized" only if the payer grants explicit consent, typically verified through Strong Customer Authentication (SCA). But this legislation assumes human judgment is exercised at the moment of authentication.

When an AI agent autonomously navigates a checkout flow using a pre-authorized wallet, the transaction is technically authenticated — yet no human exercised situational judgment about that specific purchase.
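The gap can be made concrete with a minimal sketch. All names here are hypothetical (this is not BoundBound's or any processor's real API): a wallet key delegated at authentication time can sign any charge the agent later constructs, and verification checks only the credential, never the purchase context.

```python
import hashlib
import hmac

# Hypothetical delegated wallet key, issued once when the human authenticated.
SECRET = b"wallet-delegated-key"

def sign_charge(merchant: str, amount_cents: int) -> str:
    """The agent signs whatever charge it decides to make."""
    msg = f"{merchant}:{amount_cents}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def processor_accepts(merchant: str, amount_cents: int, sig: str) -> bool:
    """The processor verifies the credential only -- not the judgment behind it."""
    expected = sign_charge(merchant, amount_cents)
    return hmac.compare_digest(expected, sig)

# An unexpected merchant, an unexpected amount: the signature is valid,
# so the transaction is "authenticated" -- yet no human judged this purchase.
sig = sign_charge("unfamiliar-merchant.example", 99_900)
assert processor_accepts("unfamiliar-merchant.example", 99_900, sig)
```

The point of the sketch is that cryptographic validity and situational human judgment are independent properties, and the legal definition of "authorized" was written assuming they coincide.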

Fraud Liability Gap

If an agent is socially engineered into authorizing a transfer, the transaction may be deemed legally valid because the correct cryptographic credentials were used. Legal concepts like "gross negligence" and "deception" are calibrated for human psychology and break down when applied to autonomous code.

EU Product Liability Directive

The EU's revised Product Liability Directive (2024/2853) explicitly classifies software and AI systems as "products" subject to strict liability, and introduces rebuttable presumptions that ease the injured party's burden of proof.

The emerging legal consensus: outsourcing a decision to an AI does not outsource the liability.

Why This Reinforces Containment

If the deployer is legally liable for agent actions regardless of the agent's autonomy, then the deployer's primary risk management tool is not the agent's reputation — it is the containment architecture that bounds the agent's maximum impact.
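What "bounds the agent's maximum impact" means can be sketched in a few lines. This is an illustrative example, not BoundBound's implementation: an enforcement layer outside the agent caps cumulative spend at a hard ceiling, so the deployer's worst-case loss is known before the agent runs.

```python
class SpendContainment:
    """Hypothetical containment bound: a hard ceiling on cumulative spend,
    enforced outside the agent, regardless of the agent's own reasoning."""

    def __init__(self, max_total_cents: int):
        self.max_total_cents = max_total_cents
        self.spent_cents = 0

    def authorize(self, amount_cents: int) -> bool:
        """Approve a charge only if it keeps total spend inside the bound."""
        if self.spent_cents + amount_cents > self.max_total_cents:
            return False  # denied -- the bound holds even if the agent is deceived
        self.spent_cents += amount_cents
        return True

guard = SpendContainment(max_total_cents=10_000)  # $100 hard ceiling
assert guard.authorize(6_000)       # within the bound
assert not guard.authorize(5_000)   # would exceed it: contained
assert guard.spent_cents == 6_000   # the denied charge never accumulated
```

The key design property is that the bound is enforced by the containment layer, not by the agent: a socially engineered or malfunctioning agent can exhaust the ceiling, but never exceed it.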

Legal liability makes containment quality a fiduciary obligation, not just an engineering preference.
