Layered Agent Sandbox Security Orchestrator
A proposal for controlled AI agent usage
A tool that puts coding agents inside locked-down containers with command controls and a full audit trail
March 2026
Before the ban, agents helped us with real work.
The ban made sense — agents shouldn't have access to things they don't need.
What if we could give them access to only what they need and block everything else?
Even if an inner layer has gaps, the outer layers catch what slips through.
LASSO configures all three layers with one command, monitors them from a dashboard, and logs everything.
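To make "logs everything" concrete, here is a minimal sketch of the kind of structured audit record such a tool could emit per agent action. The function name, fields, and JSONL path are illustrative assumptions, not LASSO's actual API.

```python
# Sketch of one audit-trail entry per agent action (illustrative, not LASSO's real API).
import json
import time

def audit_log(action: str, layer: str, decision: str, path: str = "audit.jsonl") -> dict:
    """Append one structured audit entry as a JSON line and return it."""
    entry = {
        "ts": time.time(),      # when the action was attempted
        "action": action,       # what the agent tried to run
        "layer": layer,         # which layer made the call
        "decision": decision,   # OK / WARN / BLOCKED
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = audit_log("curl https://example.com", "sandbox", "BLOCKED")
```

Append-only JSON lines keep the trail easy to tail, grep, and feed into a dashboard.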
| Action | Layer 1 — Instructions | Layer 2 — Config | Layer 3 — Sandbox |
|---|---|---|---|
| Access SSMS database | [WARN] Agent told not to | [BLOCKED] sqlcmd blocked | [BLOCKED] Port 1433 blocked |
| Read repo source code | [OK] Allowed | [OK] Allowed | [OK] Allowed |
| Run Python scripts | [OK] Allowed | [OK] Allowed | [OK] Allowed |
| curl external API | [WARN] Agent told not to | [BLOCKED] curl restricted | [BLOCKED] Network blocked |
| rm -rf / | [WARN] Agent told not to | [BLOCKED] rm blocked | [BLOCKED] Command blocked |
| Read git log | [OK] Allowed | [OK] Allowed | [OK] Allowed (audited) |
Green rows = safe actions that still work · Red-tinted rows = blocked at multiple layers
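The table above boils down to a defense-in-depth rule: every layer evaluates the action independently, and the strictest verdict wins. A minimal sketch, assuming tiny illustrative blocklists (real configurations would be far richer):

```python
# Illustrative defense-in-depth check: three independent layers, strictest verdict wins.
INSTRUCTION_WARNS = {"sqlcmd", "curl", "rm"}   # Layer 1: agent is told not to
CONFIG_BLOCKS     = {"sqlcmd", "curl", "rm"}   # Layer 2: tool config denies
SANDBOX_BLOCKS    = {"sqlcmd", "curl", "rm"}   # Layer 3: container denies

def check(command: str) -> str:
    """Return the effective decision for one command line."""
    binary = command.split()[0]
    decisions = [
        "WARN" if binary in INSTRUCTION_WARNS else "OK",    # Layer 1
        "BLOCKED" if binary in CONFIG_BLOCKS else "OK",     # Layer 2
        "BLOCKED" if binary in SANDBOX_BLOCKS else "OK",    # Layer 3
    ]
    # A block at any layer is final; a lone warning never grants access.
    for verdict in ("BLOCKED", "WARN", "OK"):
        if verdict in decisions:
            return verdict
```

Because each layer is checked separately, a gap in one list changes nothing as long as another layer still says BLOCKED.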
Minimal setup — Podman is already on our machines.
pip install works on our machines, or ask IT for a one-time setup.
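As a rough illustration of what the Layer 3 sandbox could look like under Podman, a wrapper might assemble a locked-down `podman run` invocation. The flags below are standard Podman options; the particular set and the image name are assumptions, not LASSO's shipped defaults.

```python
# Sketch: build (but don't execute) a locked-down `podman run` command line.
# The flags are real Podman options; this particular combination is illustrative.
def sandbox_argv(repo_dir: str, image: str = "python:3.12-slim") -> list[str]:
    return [
        "podman", "run", "--rm",
        "--network", "none",        # no outbound network at all
        "--cap-drop", "ALL",        # drop every Linux capability
        "--read-only",              # immutable root filesystem
        "--pids-limit", "256",      # cap process count (stops fork bombs)
        "-v", f"{repo_dir}:/work",  # only the repo is visible inside
        "-w", "/work",
        image, "bash",
    ]
```

Building the argv in one place makes the sandbox policy auditable: the dashboard can show exactly which restrictions every agent session was launched with.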
Questions?