Clause 5 — Three-Layer Governance Architecture
(Normative)
5.1 The Claim
Safe, accountable deployment of autonomous AI agents requires three independently necessary governance layers. Each layer answers a question the others cannot.
5.2 Layer Definitions
Constitutional AI (Substrate). The AI model’s training-time values and character. This is not a layer of the Intent Stack but the foundation that makes all other governance possible. It provides the behavioral floor: the model SHALL NOT perform harmful actions regardless of organizational context or process configuration. This layer is universal — it produces the same character regardless of deployment context. It is the contribution of AI model providers (notably Anthropic, whose Constitutional AI framework is the most extensively developed).
Intent Stack (Governance Context). Runtime governance specifying what is delegated, by whom, under what authority, with what constraints. The Intent Stack (v1.2, intentstack.org) defines four governance context layers (L4 Intent Discovery through L1 Runtime Alignment) and five Intent Primitives (Purpose, Direction, Boundaries, End State, Key Tasks) that constitute the irreducible governance content at every delegation interface. The Intent Stack answers: “Who delegated WHAT, under WHAT AUTHORITY, with WHAT CONSTRAINTS?”
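The five Intent Primitives can be illustrated as a minimal record type. This is a non-normative sketch: the field names mirror the primitive names above, but the concrete types, example values, and any serialization are assumptions, not defined by this clause or by the Intent Stack specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentPrimitives:
    """Illustrative (non-normative) carrier for the five Intent Primitives
    required at every delegation interface: Purpose, Direction, Boundaries,
    End State, Key Tasks."""
    purpose: str            # WHY the work is delegated
    direction: str          # the principal's guidance on approach
    boundaries: list[str]   # constraints the delegate must not cross
    end_state: str          # what "done" means to the principal
    key_tasks: list[str]    # the irreducible work items

# Hypothetical delegation context, for illustration only:
ctx = IntentPrimitives(
    purpose="Reduce invoice-processing backlog",
    direction="Automate matching; escalate ambiguous cases",
    boundaries=["No payment above $10,000 without human approval"],
    end_state="Backlog under 24 hours",
    key_tasks=["match invoices", "flag exceptions"],
)
```

Because the record is frozen, a delegation context cannot be mutated after issuance, which matches the intuition that governance content is set at the delegation interface.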
BPM/Agent Stack (Execution Structure). Execution governance specifying how authorized work gets done with structure, roles, decisions, exceptions, and accountability. This specification. The BPM/Agent Stack formally governs three execution governance concerns — Orchestration, Integration, and Execution — that together specify the complete execution lifecycle for governed agent work. The BPM/Agent Stack answers: “HOW does authorized work get executed — with what process, what roles, what decision logic, what exception handling, and what audit trail?”
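The three execution governance concerns can be sketched as an interface that a governed execution engine would implement. This is non-normative: the method names, signatures, and dict-based records are assumptions chosen for illustration, not part of this specification.

```python
from abc import ABC, abstractmethod

class ExecutionGovernance(ABC):
    """Illustrative (non-normative) shape of the three BPM/Agent Stack
    concerns: Orchestration, Integration, Execution."""

    @abstractmethod
    def orchestrate(self, specification: dict) -> list[dict]:
        """Orchestration: decompose authorized work into coordinated
        agent tasks with roles and decision points."""

    @abstractmethod
    def integrate(self, task: dict) -> dict:
        """Integration: bind a task to the external systems it needs,
        within the task's governing constraints."""

    @abstractmethod
    def execute(self, task: dict) -> dict:
        """Execution: perform the work and emit an audit record."""
```

Keeping the three concerns as separate methods mirrors the claim that they are distinct dimensions of the execution lifecycle rather than one undifferentiated "run" step.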
5.3 Orthogonality
The three layers are orthogonal: each addresses a concern that the others do not, and none duplicates the others’ content.
| Dimension | Constitutional AI | Intent Stack | BPM/Agent Stack |
|---|---|---|---|
| Temporal scope | Training time | Runtime (continuous) | Execution time (per-process) |
| Deployment scope | Universal (all deployments) | Organizational (per-deployment context) | Operational (per-execution instance) |
| Governs | Model character | Delegation context | Execution structure |
| Nature | Trained values | Governed intent | Governed process |
| Disciplinary origin | AI safety research | Governance theory, deontic logic, fiber bundle mathematics | BPM discipline (ABPMP CBOK, BPMN, DMN) |
| Provides | Behavioral floor | Governance context (why, what, who, under what authority) | Execution infrastructure (how, with what roles, decisions, exceptions) |
| Governance concerns | Substrate (below layers) | Four layers: Intent Discovery (L4), Intent Formalization (L3), Specification (L2), Runtime Alignment (L1) | Three concerns: Orchestration, Integration, Execution |
The orthogonality of the Intent Stack and BPM/Agent Stack is a structural consequence of their independent disciplinary origins. The Intent Stack was derived from first principles through formal decomposition and mathematical validation. The BPM/Agent Stack is translated from an independent discipline (BPM) that developed without awareness of intent governance theory. Neither tradition contaminated the other. The clean stitching point (Clause 7) exists because the two concerns are genuinely independent — governance context and execution structure are orthogonal dimensions of the same deployment problem.
Preservation principle. When extending either specification, test whether the extension creates overlap with the other. If the extension addresses governance context, it belongs in the Intent Stack. If it addresses execution structure, it belongs in the BPM/Agent Stack. If it addresses the connection between them, it belongs in the stitching mechanism (Clause 7). Extensions that blur this boundary SHOULD be treated as architectural warnings.
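The preservation principle is a three-way routing test, which can be sketched as a small function. This is non-normative; the category labels are paraphrased from the principle above, and the string-based interface is an illustrative assumption.

```python
def classify_extension(addresses: str) -> str:
    """Illustrative (non-normative) routing per the preservation
    principle: route a proposed extension by the concern it addresses."""
    routing = {
        "governance_context": "Intent Stack",
        "execution_structure": "BPM/Agent Stack",
        "connection": "Stitching mechanism (Clause 7)",
    }
    if addresses not in routing:
        # An extension that fits none of the three cleanly blurs the
        # boundary and SHOULD be treated as an architectural warning.
        return "architectural warning"
    return routing[addresses]
```

The point of the sketch is that the routing is exhaustive: every well-formed extension lands in exactly one of the three homes, and anything else is a signal to stop and reconsider the design.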
5.4 Completeness
The claim is that all three layers are necessary and jointly sufficient for governed agent deployment:
- Without Constitutional AI, agents have no behavioral floor — organizational governance and process structure cannot prevent a fundamentally misaligned model from causing harm.
- Without the Intent Stack, agents have no governance context — execution structure cannot determine whether work was authorized, by whom, or under what constraints.
- Without the BPM/Agent Stack, agents have no execution structure — governance context cannot specify how work gets done with roles, decisions, exceptions, and accountability.
The three-layer architecture does not claim to address all concerns in AI deployment (data quality, model selection, infrastructure reliability, etc.). It claims to address the governance concerns: character, context, and execution structure.
5.5 Seven Governance Concerns
Together, the Intent Stack and BPM/Agent Stack address seven governance concerns across the full governance lifecycle:
| Concern | Specification | Question Addressed |
|---|---|---|
| Intent Discovery (L4) | Intent Stack | What does this principal actually intend? |
| Intent Formalization (L3) | Intent Stack | How do we represent this intent in machine-processable form? |
| Specification (L2) | Intent Stack | Given this intent, what shall we actually do? |
| Runtime Alignment (L1) | Intent Stack | Is what is happening aligned with what was intended? |
| Orchestration | BPM/Agent Stack | How do we coordinate multiple agents to execute this specification? |
| Integration | BPM/Agent Stack | How do governed agents connect to the systems they need? |
| Execution | BPM/Agent Stack | How does the actual work get done within governing constraints? |
The four governance context concerns (Intent Stack) are vertically composed as layers — each layer’s output constitutes governing input for the layer below. The three execution governance concerns (BPM/Agent Stack) operate within the governing context established by the Intent Stack’s four layers, receiving authorized work from Intent Stack L2 (Specification) and providing evidence back to Intent Stack L1 (Runtime Alignment).
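The handoff described above — authorized work flowing from Intent Stack L2 (Specification) into the three execution concerns, with evidence returned to Intent Stack L1 (Runtime Alignment) — can be sketched as a minimal loop. This is non-normative; the function name, record fields, and stubbed integration step are illustrative assumptions.

```python
def run_governed_work(specification: dict) -> dict:
    """Illustrative (non-normative) lifecycle: Intent Stack L2 output
    enters execution; evidence flows back to Intent Stack L1."""
    evidence = []

    # Orchestration: coordinate the authorized work into agent tasks.
    tasks = [{"id": i, "step": s} for i, s in enumerate(specification["steps"])]

    for task in tasks:
        # Integration: bind the task to the systems it needs (stubbed here).
        task["system"] = specification.get("system", "none")
        # Execution: perform the work within governing constraints and
        # emit an audit record for each task.
        evidence.append({"task": task["id"], "status": "completed"})

    # Evidence returns to Intent Stack L1 (Runtime Alignment), where it is
    # compared against what the principal intended.
    return {"evidence": evidence}

result = run_governed_work({"steps": ["match", "flag"], "system": "erp"})
```

The asymmetry in the sketch is deliberate: authorization flows downward once, while evidence is accumulated per task and returned as a batch, matching the "receives authorized work, provides evidence back" description above.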