Clause 4 — Terms and Definitions
(Normative)
This clause defines terms used throughout this specification. Definitions here are normative: every normative clause that uses a defined term uses it in exactly the sense defined here. Defined terms are rendered in bold at their first use in each normative clause.
Terms are organized by concept cluster rather than alphabetically, because understanding the definitions requires understanding how terms relate to each other. An alphabetical index follows at the end.
For Intent Stack terminology (Intent Primitives, Intent Sources, governance interfaces, trust calibration, etc.), see the Intent Stack glossary. This glossary defines terms that are specific to this specification or that carry a BPM-discipline meaning agent practitioners need explained.
1. Foundational Terms
BPM (Business Process Management)
The professional discipline of discovering, analyzing, designing, implementing, monitoring, and continuously improving business processes. BPM is codified in the ABPMP BPM Common Body of Knowledge (CBOK v4.0), standardized through OMG notation standards (BPMN, DMN, CMMN), and validated through decades of enterprise-scale operation across banking, healthcare, manufacturing, government, and insurance. BPM treats processes as organizational assets to be managed, measured, and improved — not as informal workflows that happen to exist.
In this specification, BPM provides the source discipline for execution governance. Every structural element in the BPM/Agent Stack — the governed activity model (Clause 8), process structure elements (Clause 9), and governance infrastructure (Clause 10) — derives from established BPM practice. The specification’s contribution is the formalization of the bridge between this proven discipline and AI agent architectures, not the invention of new governance patterns.
Distinguished from: Workflow automation (a technology category; BPM is a management discipline that may or may not use automation). Business Process Reengineering (a specific change methodology from the 1990s; BPM is the ongoing discipline). Task management (tracking individual work items; BPM governs the structure, roles, and decisions within which work occurs).
First appears in: Foreword.
BPM/Agent Stack
This specification — the execution governance layer for AI agent architectures. The BPM/Agent Stack formally governs three execution governance concerns: Orchestration (how multiple agents are coordinated), Integration (how governed agents connect to external systems), and Execution (how authorized work gets done within governing constraints). It is the second pillar of a two-specification governance architecture, complementing the Intent Stack’s governance context with execution structure derived from BPM discipline.
Distinguished from: A BPM platform (this specification does not propose building traditional BPM software). An agent framework (this specification provides the governance layer that frameworks like LangGraph, CrewAI, and Anthropic’s agent architecture are missing, not a replacement for them). The Intent Stack (which governs what is delegated and under what authority; the BPM/Agent Stack governs how authorized work gets executed).
First appears in: Foreword.
Three-Layer Architecture
The claim that safe, accountable, autonomous AI agent deployment requires three independently necessary governance layers: Constitutional AI (substrate — training-time values and character, provided by model providers), the Intent Stack (governance context — what is delegated, by whom, under what authority), and the BPM/Agent Stack (execution structure — how authorized work gets done with process, roles, decisions, exceptions, and accountability). Each layer answers a question the others cannot. None substitutes for either of the others.
Distinguished from: A software stack (these are governance layers, not technology layers). A hierarchy (the layers are orthogonal — each addresses independent concerns, not nested authority levels). Two-layer models (omitting any layer creates a specific governance gap: without Constitutional AI, no behavioral floor; without Intent Stack, no governance context; without BPM/Agent Stack, no execution structure).
First appears in: Introduction §I.3. Fully specified in Clause 5.
Execution Governance
The per-activity and per-process controls that operate after identity and authorization are established. Execution governance specifies how authorized work gets done: with what process structure, what responsibility assignments, what decision logic, what exception handling, and what audit trail. This specification decomposes execution governance into three concerns: Orchestration, Integration, and Execution.
Distinguished from: Governing intent (which is the Intent Stack’s domain — specifying what is authorized and under what constraints). Access control (which determines whether an agent may act; execution governance determines how an authorized agent acts). Observability (which monitors what happened; execution governance structures what happens so that monitoring is meaningful).
First appears in: Introduction §I.1.
Orchestration
One of three execution governance concerns claimed by this specification. How multiple agents are coordinated to execute specifications, including delegation level translation (cascading governance context from higher to lower delegation interfaces), swimlane-based responsibility assignment (which agent or human owns each activity), and knowledge provisioning as a governance act (what information each participant receives and why).
Distinguished from: Intent Stack governance layers, which address governing intent rather than execution coordination. Workflow automation (Orchestration is governance infrastructure for coordination, not a specific technology). Multi-agent communication (the mechanism by which agents exchange messages; Orchestration governs when, why, and under what constraints coordination occurs).
First appears in: Introduction §I.4. Specified in Clause 5 and throughout.
Integration
One of three execution governance concerns claimed by this specification. How governed agents connect to external systems — APIs, tools, MCP servers, databases, and external services — with typed system attributes on governed activities, governed access scope determined by governing intent, and governance context propagation through integrations. Integration governance ensures that an agent’s connections to external systems carry forward the governance constraints established at higher layers.
Distinguished from: The general software engineering use of “integration” (connecting systems). API management (a technology concern; Integration is a governance concern — under what authority and with what constraints does an agent use an external system?). MCP (Model Context Protocol — a mechanism for providing tools and context to agents; Integration governs how MCP access is itself governed).
First appears in: Introduction §I.4. Specified in Clause 5 and the Systems attribute in Clause 8.
Execution (Governance Concern)
One of three execution governance concerns claimed by this specification. How actual work is performed within the full governance context, including process instantiation (starting a governed process with assigned participants), live state management (tracking progress through the process), and governance-quality audit trails (structured evidence of what happened, who did it, and under what authority).
Distinguished from: Intent Stack L1 (Runtime Alignment), which assesses whether execution outcomes align with governing intent — a governance context concern, not an execution concern. The general concept of “executing code” or “running an agent” (Execution as a governance concern adds structure, roles, and accountability to the act of doing work). Task completion (which is a point event; Execution as a governance concern spans the full lifecycle from instantiation through evidence production).
First appears in: Introduction §I.4. Specified in Clause 5.
Orthogonality
The design property of the Intent Stack and BPM/Agent Stack: neither duplicates nor complicates the other. The Intent Stack governs why and under what authority. The BPM/Agent Stack governs how — with what process, roles, logic, and controls. They meet at a clean stitching point (Clause 7) where governance intent becomes actionable execution specification. Orthogonality is a structural consequence of the two specifications’ independent disciplinary origins — the Intent Stack was derived from governance theory and formal mathematical decomposition; the BPM/Agent Stack was translated from BPM discipline. Neither tradition contaminated the other.
When extending either specification, the test is whether the extension creates overlap with the other. If it does, the extension belongs in the stitching mechanism, not in either specification independently. Extensions that blur this boundary should be treated as architectural warnings.
Distinguished from: Independence (the specifications are not independent — they connect through the stitching mechanism). Complementarity (a weaker claim; orthogonality means the concerns are structurally perpendicular, not merely complementary). Modularity (a software engineering concept; orthogonality here is an architectural property of governance concerns).
First appears in: Foreword. Specified in Clause 5 §5.3.
Seven Governance Concerns
The complete set of governance questions addressed by the two companion specifications. Four governance context concerns are specified by the Intent Stack: Intent Discovery (L4 — what does the principal actually intend?), Intent Formalization (L3 — how is intent represented in machine-processable form?), Specification (L2 — what shall we actually do?), and Runtime Alignment (L1 — is execution aligned with intent?). Three execution governance concerns are specified by this specification: Orchestration, Integration, and Execution. Together, these seven concerns span the full governance lifecycle from intent discovery through execution evidence production.
Distinguished from: The Five Intent Primitives (which describe governance content — Purpose, Direction, Boundaries, End State, Key Tasks; the seven governance concerns describe governance questions that must be answered). The four Intent Stack layers (which are a subset — the governance context portion of the seven concerns).
First appears in: Introduction §I.4. Specified in Clause 5 §5.5.
Governance Configuration
The specific arrangement of governance elements at a deployment’s delegation interfaces. A governance configuration specifies: how the Five Intent Primitives are instantiated (explicit formal specification or implicit conversational context), where humans are positioned (throughout, at edges, at checkpoints, or absent), what quality gates apply (human judgment, automated evaluation, or hybrid), and what trust calibration is in effect. Different governance configurations produce different agent deployment patterns (agent species). The governance configuration is the structural explanation for why species differ.
Distinguished from: Configuration file (a software artifact; governance configuration is a structural property of a deployment). Security policy (which addresses access control; governance configuration addresses the full governance relationship). Settings (which are implementation details; governance configuration is an architectural pattern).
First appears in: Clause 6 §6.1.
2. The Process Discipline
ABPMP BPM CBOK
The Association of Business Process Management Professionals’ Common Body of Knowledge (version 4.0). This is the primary professional authority for the BPM discipline — the BPM profession’s equivalent of the PMBOK for project management or the BABOK for business analysis. The CBOK organizes BPM knowledge into nine Knowledge Areas: Process Modeling, Process Analysis, Process Design, Process Performance Management, Process Transformation, Process Organization, Enterprise Process Management, BPM Technologies, and Process Implementation. This specification draws primarily on Process Modeling (the governed activity model), Process Analysis (performance attributes), and Process Design (process structure elements).
Distinguished from: BPMN 2.0 (a specific notation standard; the CBOK is the broader body of professional knowledge). A textbook (the CBOK is a professional standards document maintained by a global professional association). ISO standards (the CBOK is a professional body of knowledge, not an international standard, though it references and builds on ISO standards).
First appears in: Foreword.
OMG (Object Management Group)
The international, open-membership technology standards consortium that publishes BPMN 2.0, DMN 1.0, and CMMN 1.0 — the three notation standards referenced normatively by this specification. Founded in 1989, the OMG is responsible for widely adopted modeling standards including UML (Unified Modeling Language) and has hundreds of member organizations across industry, government, and academia. When this specification references “OMG standard,” it means a formally adopted specification maintained through the OMG’s consensus-based standards process.
Distinguished from: A vendor (the OMG is a standards consortium, not a product company). W3C (which governs web standards; the OMG governs modeling and middleware standards). ISO (which is a national-standards-body consortium; the OMG is a technology-industry consortium, though some OMG standards are also adopted as ISO standards).
First appears in: Clause 3.
BPMN 2.0 (Business Process Model and Notation)
The OMG standard (formal/2011-01-03) for graphically representing business processes. BPMN provides a standardized notation — a visual language — for modeling processes with precisely defined semantics for activities, gateways, events, swimlanes, and flows. Every structural element in this specification’s process structure (Clause 9) derives from BPMN 2.0. The standard defines both a graphical notation (how process models look) and an execution semantics (what the graphical elements mean computationally), making it suitable for both human communication and machine execution.
BPMN 2.0 is the most widely adopted process modeling standard globally, supported by dozens of modeling and execution tools. It provides the structural vocabulary for process decomposition, exception handling, escalation routing, and activity-level governance.
Distinguished from: Flowcharting (informal, without defined semantics; BPMN elements have precise computational meaning). UML Activity Diagrams (a general-purpose software modeling notation; BPMN is specifically designed for business process modeling with richer process semantics). BPM (the discipline; BPMN is the notation standard used within BPM practice). BPMN 1.x (earlier versions with different semantics; this specification references BPMN 2.0 specifically).
First appears in: Foreword. Normatively referenced in Clause 3.
DMN 1.0 (Decision Model and Notation)
The OMG standard (formal/2016-06-01) for modeling and executing business decisions. DMN provides a framework for separating decision logic from process flow — decisions are modeled as reusable, testable artifacts rather than being embedded in process routing or (in the agent context) inferred by an LLM at runtime. DMN’s core construct is the decision table: a structured representation of inputs, outputs, and rules with defined evaluation semantics (hit policies). DMN also provides the FEEL expression language for specifying input and output expressions.
In this specification, DMN provides the foundation for the deterministic/probabilistic separation: decisions that require reproducible, auditable evaluation (compliance checks, classification, routing logic) use DMN decision models rather than probabilistic LLM inference.
Distinguished from: BPMN (which models processes; DMN models decisions that processes invoke). Business rules engines (implementation technology; DMN is a modeling standard). If/else logic in code (imperative programming; DMN decision tables are declarative and auditable). LLM judgment (probabilistic inference; DMN evaluation is deterministic — given the same inputs, the same decision is produced every time).
First appears in: Foreword. Normatively referenced in Clause 3.
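The deterministic evaluation semantics described above can be illustrated with a minimal sketch. The class and rule names below (`Rule`, `DecisionTable`, `route_request`) are illustrative, not drawn from the DMN specification; the sketch models a single UNIQUE hit policy, under which at most one rule may match and the same inputs always produce the same output.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # input entry (a FEEL-like test)
    output: dict                        # output entry

class DecisionTable:
    """Sketch of DMN-style deterministic evaluation (UNIQUE hit policy)."""

    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def evaluate(self, inputs: dict) -> dict:
        matches = [r.output for r in self.rules if r.condition(inputs)]
        if len(matches) > 1:
            raise ValueError("UNIQUE hit policy violated: multiple rules match")
        if not matches:
            raise ValueError("no rule matched")
        return matches[0]

# A routing decision: high-risk requests go to a human reviewer.
route_request = DecisionTable([
    Rule(lambda i: i["risk"] == "high", {"route": "human_review"}),
    Rule(lambda i: i["risk"] == "low",  {"route": "auto_approve"}),
])

print(route_request.evaluate({"risk": "high"}))  # {'route': 'human_review'}
```

Given the same inputs, `evaluate` produces the same decision every time — the property that distinguishes a Business Rule Task from LLM inference.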
CMMN 1.0 (Case Management Model and Notation)
The OMG standard (formal/2014-05-05) for modeling adaptive, knowledge-worker-driven work that does not follow a predetermined sequence. Where BPMN models structured processes (this happens, then this happens), CMMN models cases — situations where available actions depend on the case’s evolving state and on the judgment of the knowledge worker (or agent) managing the case. CMMN’s key constructs include cases (the overall container), stages (groupings of related activities), sentries (entry and exit criteria that guard when activities become available), and discretionary items (activities available at the knowledge worker’s discretion rather than by process prescription).
In agent discourse, Anthropic’s distinction between “workflows” (predefined paths) and “agents” (dynamic, model-directed execution) maps directly to the BPM distinction between BPMN-type structured processes and CMMN-type adaptive case management. This specification focuses on BPMN-type structured processes; CMMN-type adaptive patterns are identified as an open question (Clause 14, Q3).
Distinguished from: BPMN (structured, predetermined paths; CMMN is adaptive, knowledge-worker-driven). Ad hoc processes (unstructured; CMMN provides structure for adaptive work through sentries and stages). Agentic behavior (a technology implementation; CMMN is a governance model for adaptive work regardless of whether the worker is human or AI).
First appears in: Foreword. Normatively referenced in Clause 3.
Process
A structured, repeatable sequence of activities that transforms inputs into outputs to achieve a defined objective. In BPM discipline, a process is an organizational asset — something designed, documented, measured, and improved — not an informal series of steps that happens to occur. A process has defined entry conditions (when it starts), participants (who does the work), activities (what work is done), decision points (where the path branches), exception handling (what happens when things go wrong), and completion criteria (how you know it’s done).
In agent architectures, a process is the structural equivalent of a governed workflow — but with explicit responsibility assignment, typed decision routing, structured exceptions, and audit trail. The BPM/Agent Stack’s governed process model is what agents currently lack: the execution structure that makes agent work accountable, auditable, and improvable.
Distinguished from: Workflow (often used loosely to mean any sequence of steps; a process in BPM has formal governance attributes). Pipeline (a data engineering concept — serial data transformation; a process has branching, exception handling, and human involvement). Procedure (step-by-step instructions; a process is the organizational structure within which procedures execute). Task (a unit of work within a process, not the process itself).
Process Model
A representation of a process — the template or blueprint that defines the process’s structure, activities, decision points, roles, and flows. A process model is designed, versioned, and improved. It exists independently of any specific execution. Multiple process instances may execute from a single process model simultaneously, just as multiple runs of a program execute from a single codebase.
Distinguished from: Process instance (a specific execution of a process model — see below). Flowchart (an informal visual; a process model in BPMN has precise computational semantics). Documentation (which describes what should happen; a process model in BPMN 2.0 can be directly executed).
Process Instance
A specific, running execution of a process model. When a process model is instantiated, it becomes a process instance with its own state (which activity is currently executing), its own participants (which agents or humans are assigned), its own data (the specific inputs being processed), and its own history (the audit trail of what has happened in this particular execution). Multiple instances of the same process model may run concurrently with different participants, different data, and different paths through the process based on gateway conditions.
Distinguished from: Process model (the blueprint; an instance is a specific execution of that blueprint). Session (a Claude Code session is a conversational context; a process instance is a governed execution with formal lifecycle). Thread (a computing concept; a process instance has governance attributes — roles, decisions, exceptions, audit — not just execution state).
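The model/instance split can be sketched as follows. This is an illustrative simplification (the names `ProcessModel`, `ProcessInstance`, and `advance` are hypothetical, not from this specification): one model, many concurrent instances, each with its own state, participants, data, and history.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    """The blueprint: designed, versioned, execution-independent."""
    name: str
    activities: list[str]   # ordered activity names (simplified to a sequence)

    def instantiate(self, participants: dict, data: dict) -> "ProcessInstance":
        return ProcessInstance(model=self, participants=participants, data=data)

@dataclass
class ProcessInstance:
    """A specific running execution with its own state and audit trail."""
    model: ProcessModel
    participants: dict      # e.g. {"Review": "human:alice"}
    data: dict
    instance_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    current: int = 0        # index of the currently executing activity
    history: list = field(default_factory=list)

    def advance(self):
        activity = self.model.activities[self.current]
        self.history.append((activity, self.participants.get(activity)))
        self.current += 1

model = ProcessModel("invoice-approval", ["Draft", "Review", "Approve"])
a = model.instantiate({"Review": "agent:claude"}, {"invoice": 101})
b = model.instantiate({"Review": "human:alice"}, {"invoice": 102})
a.advance()   # instances progress independently from the same model
```

Both instances execute from the same blueprint, but each carries its own identity, assignments, and history — just as multiple runs of a program execute from a single codebase.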
RFC 2119
An Internet Engineering Task Force (IETF) document that defines the obligation keywords used in technical specifications: SHALL (absolute requirement), SHALL NOT (absolute prohibition), SHOULD (recommended, with justified departure permitted), SHOULD NOT (not recommended), and MAY (optional). This specification uses these keywords with the meanings defined in RFC 2119 throughout its normative clauses (Clause 2 §2.2). The convention ensures that readers can distinguish between requirements, recommendations, and options without ambiguity.
Distinguished from: Casual language (in normative clauses, “shall” carries legal-style obligation force, not conversational intent). ISO obligation keywords (similar but not identical conventions). Programming language keywords (RFC 2119 keywords govern specification compliance, not code execution).
First appears in: Clause 2 §2.2.
3. The Activity Model
Activity
The fundamental unit of work in a process model. An activity is something that gets done — a step, a task, an action. In BPMN 2.0, activities are the building blocks of processes: each activity has defined inputs (what it needs to begin), outputs (what it produces), a performer (who or what does the work), and a type (User Task, Service Task, Business Rule Task, etc.) that determines how the work is accomplished. Activities are connected by sequence flows that define execution order and separated into swimlanes that define responsibility.
In this specification, the plain BPMN activity is extended into a governed activity — an activity carrying 21 typed governance attributes (Clause 8) that specify not just what work is done but who is accountable, what data flows through, what risks apply, and what governance documents are linked.
Distinguished from: Task (in many agent frameworks, “task” means a prompt-and-response exchange; an activity in BPM is a governed unit of work with typed attributes, defined inputs/outputs, and explicit responsibility). Step (informal; an activity has formal semantics). Function (a code concept; an activity is a process concept with governance attributes, not just execution logic).
First appears in: Foreword. Specified in Clause 8.
Governed Activity
An activity carrying the full set of BPM/Agent Stack governance attributes — 21 typed attributes organized into four attribute families (Role, Data Lineage, Performance, Risk) plus Governance and Documentation attributes, as specified in Clause 8. A governed activity contrasts with the fundamental units in current agent frameworks, which carry only mechanism attributes (prompt, tools, data/state) without governance structure.
A conformant Activity Model implementation SHALL carry all 21 attributes. Attributes MAY have null values where not applicable to a specific activity instance, but the attribute structure SHALL be present.
Distinguished from: A plain BPMN activity (which has type and performer but not the full 21-attribute governance model). A task in agent frameworks (which carries a prompt and tool access but no responsibility assignment, data lineage, performance tracking, or risk attributes). A function call (which has inputs and outputs but no governance structure).
First appears in: Clause 8.
Activity Attributes
The 21 typed governance attributes carried by every governed activity, organized into four attribute families plus Governance and Documentation attributes:
| Family | Attributes | Derived From |
|---|---|---|
| Role (4 attributes) | Participant, Accountable Owner, Consulted, Informed | RACI matrix |
| Data Lineage (5 attributes) | Suppliers, Inputs, Outputs, Customers, Systems | SIPOC + infrastructure |
| Performance (5 attributes) | Cost, Work Time, Wait Time, Total Time, Value-Add | Value Stream Mapping |
| Risk (2 attributes) | Risk, Problems | ISO 31000 |
| Governance & Documentation (5 attributes) | Documentation, Attachments, Policy Links, Comments, Custom Fields | BPM CBOK |
Each attribute family brings a specific governance capability to agent execution that current frameworks lack entirely.
Distinguished from: Configuration parameters (which control behavior; activity attributes govern the organizational context of work). Metadata (which describes data; activity attributes govern execution). Tool parameters (which specify how to call a function; activity attributes specify who is accountable, what risks apply, and what governance documents are linked).
First appears in: Clause 8.
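A minimal sketch, assuming a Python representation, of the 21-attribute structure in the table above. The class name `GovernedActivity` and the field types are illustrative; per Clause 8, individual attributes may hold null (`None`) values, but the attribute structure itself must be present.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class GovernedActivity:
    # Role family (RACI)
    participant: Optional[str] = None
    accountable_owner: Optional[str] = None   # SHALL be a human (Clause 8 §8.1)
    consulted: list[str] = field(default_factory=list)
    informed: list[str] = field(default_factory=list)
    # Data Lineage family (SIPOC + infrastructure)
    suppliers: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    customers: list[str] = field(default_factory=list)
    systems: list[str] = field(default_factory=list)
    # Performance family (Value Stream Mapping)
    cost: Optional[float] = None
    work_time: Optional[float] = None
    wait_time: Optional[float] = None
    total_time: Optional[float] = None
    value_add: Optional[bool] = None
    # Risk family (ISO 31000)
    risk: Optional[str] = None
    problems: list[str] = field(default_factory=list)
    # Governance & Documentation
    documentation: Optional[str] = None
    attachments: list[str] = field(default_factory=list)
    policy_links: list[str] = field(default_factory=list)
    comments: list[str] = field(default_factory=list)
    custom_fields: dict[str, Any] = field(default_factory=dict)
```

The contrast with current agent frameworks is visible in the field list itself: a framework task carries a prompt, tools, and state; a governed activity additionally carries responsibility, lineage, performance, risk, and documentation attributes.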
User Task
A BPMN 2.0 activity type that requires human action to complete. In agent architectures, a User Task represents a human-in-the-loop step: review, approval, judgment call, or any work that must be performed by a human rather than an agent. User Tasks are the process-level mechanism for implementing the Intent Stack’s trust calibration — lower-trust deployments have more User Tasks (more human checkpoints); higher-trust deployments have fewer.
Distinguished from: Service Task (automated execution; User Task requires human action). Manual Task (physical-world work outside any system; User Task is system-mediated human action). A prompt to the user (informal; a User Task has formal governance attributes, defined inputs and outputs, and accountability).
First appears in: Clause 8 §8.6.
Service Task
A BPMN 2.0 activity type representing automated system execution. In agent architectures, a Service Task maps to: agent tool invocation, API calls, MCP server interactions, and any work performed by an agent or system without requiring human action. Service Tasks are the default activity type for agent-executed work.
Distinguished from: User Task (which requires human action). Script Task (which specifically executes code; Service Task is broader — any automated system execution). A tool call (implementation mechanism; a Service Task is the governance wrapper that specifies who is accountable for the tool call, what inputs it receives, what risks apply, and what audit trail it produces).
First appears in: Clause 8 §8.6.
Business Rule Task
A BPMN 2.0 activity type that executes structured decision logic — specifically, DMN decision table evaluation. A Business Rule Task produces deterministic, reproducible, auditable results: given the same inputs, the same decision is produced every time. This is the process-level mechanism for the deterministic/probabilistic separation: decisions that require reproducible evaluation (compliance checks, classification, routing logic, threshold evaluation) use Business Rule Tasks, not LLM inference.
Distinguished from: Service Task (general automated execution; a Business Rule Task specifically executes decision logic). LLM inference (probabilistic; Business Rule Task evaluation is deterministic). Hardcoded conditions (which are embedded in code; Business Rule Tasks reference externalized, versioned decision models that are independently auditable).
First appears in: Clause 8 §8.6. See also: Decision Models in Clause 10 §10.3.
Script Task
A BPMN 2.0 activity type that executes code directly — a script, a program, a computation. In agent architectures, a Script Task maps to agent code execution: running Python, SQL queries, data transformations, or any computational work that is deterministic and does not require external service invocation.
Distinguished from: Service Task (which invokes external services; Script Task runs code locally). Business Rule Task (which evaluates decision logic; Script Task runs arbitrary code). A code cell in a notebook (informal; a Script Task has governance attributes and sits within a governed process).
First appears in: Clause 8 §8.6.
Send Task
A BPMN 2.0 activity type that dispatches a message to another participant. In agent architectures, a Send Task represents an agent sending output to another agent, human, or system. The Send Task completes when the message is dispatched — it does not wait for a response. Send Tasks participate in message flows between swimlanes.
Distinguished from: Receive Task (which waits for input; Send Task dispatches output). Service Task (which invokes a service and typically receives a response; Send Task is fire-and-forget). A function return (Send Task dispatches to a separate participant, not to the caller).
First appears in: Clause 8 §8.6.
Receive Task
A BPMN 2.0 activity type that blocks until a message arrives from an external source. In agent architectures, a Receive Task represents an agent waiting for input — a human response, another agent’s output, an external system’s callback, or any asynchronous delivery. The Receive Task completes only when the expected message is received.
Distinguished from: Send Task (which dispatches; Receive Task waits). A polling loop (implementation mechanism; a Receive Task is a governance-level construct that defines what the agent is waiting for and from whom). User Task (which requires human action; a Receive Task waits for a message that may come from any source).
First appears in: Clause 8 §8.6.
Manual Task
A BPMN 2.0 activity type representing work performed by a human in the physical world, outside of any system. In agent architectures, Manual Tasks are rare — they represent steps like “physically inspect the server” or “sign the printed document” that cannot be performed by an agent or through a system interface.
Distinguished from: User Task (system-mediated human action; Manual Task is physical-world action). Service Task (automated system execution). Offline work (informal; a Manual Task is formally tracked within the process even though its execution occurs outside the system).
First appears in: Clause 8 §8.6.
Subprocess
A BPMN 2.0 construct for governed decomposition — a nested process model with its own activities, gateways, events, and governance attributes, connected to the parent process through a governed interface. In agent architectures, a subprocess maps to sub-agent delegation: an agent spawns a sub-agent with defined inputs, expected outputs, boundary constraints, escalation triggers, and accountability.
A subprocess SHALL inherit all Boundary constraints from its parent process and MAY add additional constraints appropriate to its scope. A subprocess SHALL NOT relax any Boundary established by its parent. This is the process-level expression of the Intent Stack’s Boundaries monotonicity: constraints accumulate as authority is delegated downward, never diminish.
Distinguished from: A function call (which is a mechanism; a subprocess is a governance construct with its own complete process model). A child thread (which shares the parent’s context; a subprocess has its own governance interface with explicit inputs, outputs, and boundaries). Task decomposition in agent frameworks (which is typically informal; a subprocess has formal governance structure).
First appears in: Clause 8 §8.6 and Clause 9 §9.5.
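Boundary monotonicity can be expressed as a simple set relation. The sketch below is illustrative (the function names and string-valued constraints are assumptions, not part of the specification): a subprocess's boundaries must be a superset of its parent's — constraints may be added, never dropped.

```python
def spawn_subprocess(parent_boundaries: set[str], added: set[str]) -> set[str]:
    """Child boundaries = all parent Boundaries plus any added constraints."""
    return parent_boundaries | added

def violates_monotonicity(parent: set[str], child: set[str]) -> bool:
    """True if the child relaxed (dropped) any Boundary established by its parent."""
    return not parent.issubset(child)

parent = {"no-external-email", "read-only-prod"}
child = spawn_subprocess(parent, {"sandbox-only"})

assert not violates_monotonicity(parent, child)
# A child that silently drops "read-only-prod" violates monotonicity:
assert violates_monotonicity(parent, {"no-external-email", "sandbox-only"})
```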
4. Responsibility and Data Lineage
RACI Matrix
A responsibility assignment framework that classifies the involvement of participants in each activity into four roles: Responsible (performs the work), Accountable (owns the outcome and has authority to approve), Consulted (provides input before the work is done — two-way communication), and Informed (notified after the work is done or when decisions are made — one-way communication). The RACI matrix is an established management tool, widely used in project management, organizational design, and process management.
In this specification, RACI provides the Role attribute family for the governed activity model (Clause 8, §8.1). Every governed activity carries four role attributes derived from RACI: Participant, Accountable Owner, Consulted, and Informed. The critical governance contribution: in governed agent deployments, the Accountable Owner SHALL always be a human — agents may be Responsible for execution, but humans retain accountability.
Distinguished from: RASCI (a variant adding “Supportive” for team members who assist the Responsible party). DACI (Driver, Approver, Contributor, Informed — a decision-making variant). An org chart (which shows reporting relationships; RACI shows per-activity responsibility assignments). Permissions (which determine access; RACI determines accountability and communication obligations).
First appears in: Foreword. Specified in Clause 8 §8.1.
Participant
The Role attribute derived from RACI’s “Responsible” role. The agent role or human actor that actually executes a specific activity. In a governed agent deployment, the Participant performs the work — writes the code, makes the API call, drafts the document. Multiple agents may participate across different activities within the same process, each assigned to their respective swimlanes.
Distinguished from: Accountable Owner (who owns the outcome; the Participant does the work). Pool (a BPMN structural element representing a participant’s boundary; Participant here is a governance attribute on an activity). User (an informal term; Participant is a formally assigned governance role).
First appears in: Clause 8 §8.1.
Accountable Owner
The Role attribute derived from RACI’s “Accountable” role. The human or humans answerable when an execution step is questioned. In governed agent deployment, the Accountable Owner SHALL always be a human. Agents may be Responsible (they do the work), but humans retain Accountability (they own the outcome). This is a governance requirement, not a capability limitation — it ensures that every agent action has a human who can be asked “why did this happen?” and “was this acceptable?”
Distinguished from: Participant (who does the work; the Accountable Owner answers for the outcome). Manager (an organizational role; Accountable Owner is a per-activity governance assignment). Approver (who authorizes; accountability extends beyond approval to encompass outcome responsibility).
First appears in: Clause 8 §8.1.
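The human-accountability requirement can be enforced at assignment time rather than checked after the fact. A sketch under assumed types (`Actor`, `ActivityRoles`, and the example names are all hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    is_human: bool

@dataclass
class ActivityRoles:
    participant: Actor          # RACI "Responsible": does the work
    accountable_owner: Actor    # RACI "Accountable": answers for the outcome
    consulted: list[Actor] = field(default_factory=list)
    informed: list[Actor] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Governance requirement from this clause: an agent may be
        # the Participant, but the Accountable Owner SHALL be human.
        if not self.accountable_owner.is_human:
            raise ValueError("Accountable Owner must be a human")

roles = ActivityRoles(
    participant=Actor("code-review-agent", is_human=False),
    accountable_owner=Actor("Dana (eng manager)", is_human=True),
    consulted=[Actor("security-agent", is_human=False)],
)
```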
SIPOC
A data lineage framework from Lean/Six Sigma practice that maps the flow of materials and information through a process by identifying five elements: Suppliers (who provides input), Inputs (what data or materials enter), Process (the transformation itself), Outputs (what is produced), and Customers (who consumes the output). SIPOC originated in Total Quality Management and is widely used in Six Sigma process improvement as a high-level process mapping tool.
In this specification, SIPOC provides the Data Lineage attribute family for the governed activity model (Clause 8, §8.2). Every governed activity carries five data lineage attributes derived from SIPOC: Suppliers, Inputs, Outputs, Customers, and Systems (an extension for the infrastructure dimension). The governance contribution: agent activities have explicit, typed data contracts rather than unstructured context injection.
Distinguished from: Data flow diagrams (a software modeling notation; SIPOC is a process management tool). ETL pipelines (an implementation pattern; SIPOC is a governance framework for understanding data lineage). Input/output specifications (SIPOC adds the relationship dimension — who provides input and who consumes output, not just what the data is).
First appears in: Foreword. Specified in Clause 8 §8.2.
Suppliers (SIPOC Attribute)
The Data Lineage attribute identifying upstream processes, agents, or humans providing input data to a governed activity. Suppliers establish explicit provenance — where the data came from and what governed its production. In agent architectures, a Supplier might be another agent’s output, a human’s specification, an MCP server’s response, or a database query result.
Distinguished from: Inputs (what enters the activity; Suppliers identify who or what provides that input). Dependencies (a software concept; Suppliers is a governance attribute that establishes data provenance and accountability).
Inputs (SIPOC Attribute)
The Data Lineage attribute specifying typed, vocabulary-controlled data entering a governed activity. Inputs SHALL be explicit data contracts — structured, typed specifications of what the activity receives — not unstructured context injection. In agent architectures, this replaces the current pattern of passing everything through a system prompt or context window without distinguishing governed input from ambient information.
Distinguished from: Context (the full information available to an LLM at inference time; Inputs are the specific, governed data elements an activity needs to do its work). Parameters (implementation-level; Inputs are governance-level data contracts). Prompt (Inputs are typed governance artifacts; the prompt is one mechanism for communicating them).
Outputs (SIPOC Attribute)
The Data Lineage attribute specifying typed, vocabulary-controlled data produced by a governed activity. Outputs SHALL be explicit deliverables — structured, typed specifications of what the activity produces — not unstructured LLM responses. In agent architectures, this replaces the current pattern of accepting whatever the agent generates without type checking or vocabulary compliance.
Distinguished from: Response (an LLM generates responses; a governed activity produces typed Outputs). Results (informal; Outputs are formally specified deliverables). Artifacts (may be informal; Outputs are vocabulary-controlled and typed).
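The contrast between governed data contracts and unstructured context can be sketched concretely. In this illustration (field names, the vocabulary, and the `review` activity are all hypothetical), the activity receives a typed Input and its result is validated into a typed, vocabulary-checked Output rather than accepted as raw text:

```python
from dataclasses import dataclass

VERDICTS = {"approved", "revise"}  # controlled vocabulary for this output

@dataclass(frozen=True)
class DraftReviewInput:     # governed Inputs: explicit, typed contract
    document_id: str
    policy_version: str
    supplier: str           # SIPOC Suppliers: provenance of the input

@dataclass(frozen=True)
class DraftReviewOutput:    # governed Outputs: typed deliverable
    document_id: str
    verdict: str
    rationale: str

def review(inp: DraftReviewInput) -> DraftReviewOutput:
    # Agent execution would happen here; whatever it generates is
    # validated into the typed Output, not passed through unchecked.
    verdict = "approved"
    assert verdict in VERDICTS  # vocabulary compliance check
    return DraftReviewOutput(inp.document_id, verdict, "meets policy")
```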
Customers (SIPOC Attribute)
The Data Lineage attribute identifying downstream processes, agents, or humans consuming a governed activity’s outputs. Customers establish delivery accountability — who receives the output and what they expect. In agent architectures, a Customer might be the next agent in an orchestration chain, a human reviewer, a downstream process, or an external system.
Distinguished from: Users (informal; Customers in SIPOC identifies the specific downstream consumer of a specific activity’s output). Audience (a content concept; Customers is a process governance concept establishing delivery accountability). Informed (a RACI role for notification; Customer is a SIPOC role for output consumption).
Systems (Activity Attribute)
The Data Lineage attribute identifying MCP servers, APIs, tools, and external services that the agent invokes during a governed activity. Systems extends the traditional SIPOC model (which subsumes infrastructure under its “Process” element without itemizing it) to capture the specific infrastructure an agent interacts with. Each system SHALL be typed, governed, and auditable — the governance infrastructure must know what external systems an agent is accessing and under what authority.
Distinguished from: Tools (a mechanism concept in agent frameworks; Systems is a governance attribute that adds authorization, audit, and governance context to tool access). Integrations (a broader concept; Systems identifies the specific external services used by a specific activity).
First appears in: Clause 8 §8.2.
5. Process Structure
Gateway
A process element that controls the branching and merging of execution paths. Gateways are decision points in a process — where the path splits based on conditions (branching) or where multiple paths converge back together (merging). In BPMN 2.0, gateways have typed semantics: each gateway type has precisely defined behavior for how it branches and how it merges. This typing is a critical governance property — the routing logic at each decision point is explicit, auditable, and unambiguous.
In agent architectures, gateways replace the current pattern of letting the LLM decide “what to do next” at every decision point. Some decisions should be deterministic (use a gateway with Business Rule Task conditions); others genuinely require LLM judgment. The BPM/Agent Stack provides infrastructure for both; the choice at each gateway is a governance decision.
Distinguished from: If/else logic in code (imperative; gateways are declarative process elements with typed semantics). Router agents (an agent framework pattern that uses LLM inference for all routing; gateways distinguish between deterministic and adaptive routing). Decision points (informal; gateways in BPMN have precise computational semantics).
First appears in: Clause 9 §9.3.
Exclusive Gateway (XOR)
A gateway type where exactly one outgoing path is followed based on conditions. When a process reaches an exclusive gateway, the conditions on each outgoing sequence flow are evaluated, and the path whose condition evaluates to true is taken; a default flow handles the case in which no condition is true. If the conditions are deterministic (evaluated through a Business Rule Task or DMN decision table), the routing is reproducible and auditable. In BPMN notation, an exclusive gateway is represented by a diamond shape with an “X” marker.
In agent architectures, exclusive gateways model deterministic routing decisions — “if the document passes compliance review, proceed to publication; otherwise, route to revision.” The condition evaluation should typically be a Business Rule Task, not LLM inference, to ensure reproducibility.
Distinguished from: Inclusive gateway (where one or more paths may be taken; exclusive gateway takes exactly one). Parallel gateway (where all paths are taken; exclusive gateway takes only one). If/else (imperative code; an exclusive gateway is a declarative process element).
First appears in: Clause 9 §9.3.
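The routing semantics above can be sketched as an ordered condition list with a default flow (the conditions and path names are illustrative, not drawn from this specification). The key property is determinism: the same context always yields the same path.

```python
def exclusive_gateway(context: dict) -> str:
    """XOR routing: conditions evaluated in order, exactly one path
    taken, with a default flow when no condition is true."""
    flows = [
        ("publish", lambda c: c["compliance"] == "pass"),
        ("revise",  lambda c: c["compliance"] == "fail"),
    ]
    for path, condition in flows:
        if condition(context):
            return path          # exactly one outgoing path is followed
    return "manual-triage"       # default flow: no condition matched

assert exclusive_gateway({"compliance": "pass"}) == "publish"
```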
Parallel Gateway (AND)
A gateway type where all outgoing paths are executed simultaneously. When used for branching, a parallel gateway starts all outgoing paths at once — no conditions are evaluated because all paths are taken. When used for merging (synchronization), a parallel gateway waits until all incoming paths have completed before the process continues. In BPMN notation, a parallel gateway is represented by a diamond with a “+” marker.
In agent architectures, parallel gateways model fan-out to multiple sub-agents with synchronization — “research these three topics simultaneously, then combine results when all are complete.”
Distinguished from: Exclusive gateway (one path; parallel gateway takes all paths). Inclusive gateway (one or more paths based on conditions; parallel gateway takes all paths unconditionally). Spawning multiple threads (implementation mechanism; a parallel gateway is a governance construct with defined synchronization semantics).
First appears in: Clause 9 §9.3.
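The fan-out-and-synchronize pattern can be sketched with `asyncio` (the `research` sub-agent is a hypothetical stand-in): all branches start unconditionally, and the merge point holds the process until every branch has completed.

```python
import asyncio

async def research(topic: str) -> str:
    await asyncio.sleep(0)       # stand-in for sub-agent work
    return f"notes on {topic}"

async def parallel_gateway(topics: list[str]) -> list[str]:
    # Branch: all outgoing paths start at once; no conditions evaluated.
    # Merge: gather() is the synchronization point -- execution
    # continues only when every branch has completed.
    return await asyncio.gather(*(research(t) for t in topics))

results = asyncio.run(parallel_gateway(["pricing", "competitors", "risks"]))
```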
Inclusive Gateway (OR)
A gateway type where one or more outgoing paths are executed based on conditions. Unlike the exclusive gateway (exactly one path) or the parallel gateway (all paths), the inclusive gateway evaluates conditions on each outgoing path and takes all paths whose conditions evaluate to true. At least one path must be taken. When merging, the inclusive gateway waits for all active incoming paths to complete. In BPMN notation, an inclusive gateway is represented by a diamond with an “O” marker.
In agent architectures, inclusive gateways model selective parallel execution — “run quality checks on all applicable dimensions: security if the change touches authentication, performance if the change touches the data layer, accessibility if the change touches the UI.”
Distinguished from: Exclusive gateway (exactly one path). Parallel gateway (all paths unconditionally). A conditional fan-out (informal; an inclusive gateway has formal merge semantics — it knows to wait only for paths that were actually activated).
First appears in: Clause 9 §9.3.
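The selective-parallel semantics, including the formal merge behavior (wait only for branches that were actually activated), can be sketched as follows; the quality-check names mirror the example above and the `change` fields are illustrative:

```python
import asyncio

async def check(name: str) -> str:
    await asyncio.sleep(0)       # stand-in for a quality-check sub-agent
    return f"{name}: ok"

async def inclusive_gateway(change: dict) -> list[str]:
    # Evaluate the condition on each outgoing path; activate every
    # path whose condition is true.
    branches = []
    if change["touches_auth"]:
        branches.append(check("security"))
    if change["touches_data_layer"]:
        branches.append(check("performance"))
    if change["touches_ui"]:
        branches.append(check("accessibility"))
    if not branches:             # at least one path must be taken
        branches.append(check("baseline"))
    # Merge: wait only for the branches that were actually activated.
    return await asyncio.gather(*branches)
```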
Event-Based Gateway
A gateway type where the process waits for one of several external events to occur, and the first event to arrive determines which path is taken. Unlike condition-based gateways (which evaluate expressions), event-based gateways respond to things that happen: a message arrives, a timer fires, a signal is received. Only one event can “win” — the first event to occur activates its path and cancels the others.
In agent architectures, event-based gateways model reactive waiting — “wait for either the API response, the timeout, or a cancellation signal, and handle whichever comes first.”
Distinguished from: Exclusive gateway (which evaluates conditions; event-based gateway waits for events). Polling (implementation mechanism; an event-based gateway is a declarative wait-for-event construct). Race condition (an uncontrolled concurrency hazard; an event-based gateway provides deterministic resolution for exactly this pattern of competing events).
First appears in: Clause 9 §9.3.
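The first-event-wins semantics can be sketched with `asyncio` primitives (the competing events here are hypothetical; the short sleep stands in for an API response arriving before a long timeout): the winning event activates its path, and the losing waits are cancelled.

```python
import asyncio

async def api_response() -> str:
    await asyncio.sleep(0.01)    # API responds quickly in this sketch
    return "api"

async def timeout_timer() -> str:
    await asyncio.sleep(5)       # timer that would fire much later
    return "timeout"

async def event_based_gateway() -> str:
    tasks = [asyncio.create_task(api_response()),
             asyncio.create_task(timeout_timer())]
    # Wait for whichever event occurs first; only one event can "win".
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()            # the competing paths are cancelled
    return done.pop().result()

assert asyncio.run(event_based_gateway()) == "api"
```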
Swimlane (Pool and Lane)
The BPMN 2.0 mechanism for visual and semantic separation of responsibilities across participants in a process. A Pool represents a participant boundary — a distinct actor (organization, agent, human, system) in a process. Pools are separated from each other, and communication between pools uses message flows rather than sequence flows. A Lane is a subdivision within a pool, used to organize activities by role, department, or function within a single participant’s scope.
In agent architectures, swimlanes define which agent or human is responsible for each activity — the structural expression of the RACI matrix’s Responsible role. Responsibility SHALL be explicit (assigned by the process model), not inferred by the LLM at runtime.
Distinguished from: Roles (an organizational concept; swimlanes are process-level responsibility assignments). Permissions (access control; swimlanes define responsibility, not access). Agent assignment in orchestration frameworks (typically informal; swimlanes are formal, visual, and auditable).
First appears in: Clause 9 §9.1.
Milestone
A named checkpoint in a process that marks phase completion and optionally requires authorization before the process continues. In BPMN, milestones are typically represented as intermediate events or gateways that act as authorization gates. In agent architectures, milestones are governance checkpoints where human review, approval, or authorization is required — the operational expression of the Intent Stack’s governance interfaces within a process.
Distinguished from: Task completion (a milestone marks a phase boundary, not just the end of a single task). Status update (informational; a milestone may require authorization to proceed). Sprint boundary (an Agile concept; a milestone is a process-level authorization gate).
First appears in: Clause 9 §9.2.
Sequence Flow
A BPMN 2.0 element that defines explicit execution order within a single participant’s scope. Sequence flows are the arrows connecting activities, gateways, and events within a swimlane — they specify what happens next and optionally carry conditions that must be true for the flow to be followed. In agent architectures, sequence flows replace emergent execution ordering (where the LLM decides what to do next) with deterministic ordering where the process logic is known.
Distinguished from: Message flow (communication between participants; sequence flow is ordering within a participant). Data flow (movement of data; sequence flow is execution ordering). Control flow in code (imperative; sequence flow is declarative process ordering with optional conditions).
First appears in: Clause 9 §9.6.
Message Flow
A BPMN 2.0 element that defines communication between separate participants (across pool boundaries). Message flows carry typed payloads — structured data conforming to the controlled vocabulary (Clause 10.1) — between agents, between agents and humans, or between agents and external systems. Untyped, unstructured inter-agent communication SHOULD be treated as a governance gap.
Distinguished from: Sequence flow (execution ordering within a participant; message flow is communication between participants). API call (implementation mechanism; a message flow is a governance-level construct that specifies who communicates, what payload is sent, and what vocabulary governs the exchange). A prompt (unstructured; message flow payloads are typed).
First appears in: Clause 9 §9.6.
Governed Decomposition
The practice of breaking a process into subprocesses where each subprocess has its own complete governance interface — authorization scope, acceptance criteria, exception handling, boundary constraints, and audit trail. Governed decomposition ensures that delegation to sub-agents is structured and accountable rather than ad hoc. Each subprocess inherits all Boundary constraints from its parent and may add additional constraints but may never relax them (Boundaries monotonicity).
Distinguished from: Task decomposition (in agent frameworks, typically informal splitting of work; governed decomposition creates a formal governance interface at each level). Divide and conquer (an algorithm design pattern; governed decomposition is a governance pattern). Microservices (a software architecture pattern; governed decomposition is process-level governance of delegation).
First appears in: Clause 9 §9.5.
6. Events and Exception Handling
Event (BPMN)
An occurrence during process execution that affects the flow of the process. In BPMN 2.0, events have three positions in the process lifecycle — Start (triggers the process), Intermediate (occurs during the process), and End (terminates the process or a path) — and specific types that define the nature of the occurrence (Timer, Error, Escalation, Signal, Message, Compensation). Each event type has defined handling semantics: a Timer event fires at a scheduled time, an Error event is caught by an error handler, an Escalation event routes to a higher authority.
In agent architectures, the BPMN event taxonomy replaces the blunt instruments in current frameworks (generic retry, maxIterations, timeout) with structured exception handling where each failure type has a defined response. Different kinds of things can go wrong, and each kind requires a different response.
Distinguished from: Log events (observability records; BPMN events are process control constructs that affect execution flow). Exceptions in programming (code-level error handling; BPMN events span the full taxonomy of process occurrences including timers, messages, and signals — not just errors). Notifications (one-way information; BPMN events trigger process behavior).
First appears in: Clause 9 §9.4.
Timer Event
A BPMN event triggered by time — a specific date/time, a duration, or a recurring cycle. In agent architectures, Timer events model: timeout behavior (if the agent hasn’t responded in 30 seconds, escalate), scheduled re-execution (run this quality check every hour), and periodic polling (check for new data every 5 minutes).
Distinguished from: Wait/sleep (an implementation mechanism; a Timer event is a governance construct with defined behavior when the timer fires). Deadline (a project management concept; Timer events are process-level temporal triggers). Cron job (an operating system scheduler; Timer events are process-level, governed, and carry an audit trail).
Error Event
A BPMN event representing a failure condition during process execution. In agent architectures, Error events provide typed error handling — tool failure, API timeout, model refusal, resource exhaustion, and data validation failure are structurally different failure modes requiring different responses. Error events SHALL have typed handling: an implementation SHALL NOT treat all errors as equivalent.
Distinguished from: Exception in code (which may be caught and handled silently; an Error event is a process-level occurrence that is visible in the audit trail). Retry (an implementation tactic; an Error event triggers a defined handling response which may or may not include retry). Bug (a defect; Error events handle expected failure modes).
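Typed handling can be sketched as per-type dispatch (the error taxonomy follows the failure modes named above; the handler responses are illustrative, not normative). The contrast is with a single generic retry loop that treats every failure the same way:

```python
class ProcessError(Exception): ...
class ToolFailure(ProcessError): ...
class ModelRefusal(ProcessError): ...
class ResourceExhaustion(ProcessError): ...

# Per-type handlers: each failure mode has its own defined response.
HANDLERS = {
    ToolFailure:        lambda e: "retry-with-backoff",
    ModelRefusal:       lambda e: "escalate-to-human",
    ResourceExhaustion: lambda e: "pause-and-requeue",
}

def handle(error: ProcessError) -> str:
    for error_type, handler in HANDLERS.items():
        if isinstance(error, error_type):
            return handler(error)
    return "escalate-to-human"   # unknown failure modes go up, not around

assert handle(ToolFailure("search API returned 500")) == "retry-with-backoff"
```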
Escalation Event
A BPMN event representing controlled elevation to a higher authority when the current participant cannot resolve a situation within its authorized scope. In agent architectures, Escalation events model: human-in-the-loop escalation when agent confidence is low, routing to a senior agent when a decision exceeds the current agent’s authority, and governance boundary enforcement when an agent encounters a situation outside its authorized scope.
The critical principle: an agent that cannot resolve a situation within its authorized scope SHALL escalate rather than improvise. Escalation is not failure — it is governance working correctly.
Distinguished from: Error event (failure handling; Escalation is controlled elevation, not failure). Notification (one-way information; Escalation transfers responsibility to a higher authority). Asking for help (informal; Escalation is a formal governance mechanism with defined triggers and defined recipients).
Signal Event
A BPMN event representing a broadcast notification — a message sent to all interested participants rather than to a specific recipient. In agent architectures, Signal events model cross-agent broadcasts: “the data refresh is complete” (all agents waiting on fresh data can proceed), “the deployment is locked” (all agents must pause deployment activities), or “a security incident has been detected” (all agents must enter restricted mode).
Distinguished from: Message event (point-to-point; Signal is broadcast). Notification (informal; a Signal event triggers process behavior in all listening process instances). Pub/sub (an implementation pattern; a Signal event is a process-level governance construct).
Message Event
A BPMN event representing point-to-point communication between specific participants. In agent architectures, Message events model structured inter-agent communication with typed payloads — one agent sending specific, governed data to another specific agent. Message events interact with message flows (Clause 9.6) to define the communication structure between participants.
Distinguished from: Signal event (broadcast; Message is point-to-point). An API call (implementation; a Message event is a governance construct that specifies what is communicated and between whom). Chat message (unstructured; Message events carry typed payloads conforming to the controlled vocabulary).
Compensation Event
A BPMN event that triggers rollback or undo logic when a completed activity needs to be reversed. In agent architectures, Compensation events model: undo operations (delete a created file, revert a code change), cancel operations (cancel an API request, revoke an issued credential), and cleanup operations (remove temporary resources, close connections). Compensation handlers define what “undoing” a specific activity means.
Distinguished from: Error handling (responding to failure; Compensation reverses successful but no-longer-wanted work). Rollback in database terms (a specific implementation; Compensation is a process-level concept that may involve multiple systems and actions). Ctrl+Z (an informal metaphor; Compensation is a governed process with its own audit trail).
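The compensation pattern can be sketched as a handler stack (the activities here append to a log purely for illustration): each activity registers its undo handler only on successful completion, and compensation reverses completed work in last-in, first-out order.

```python
log: list[str] = []
compensations: list = []         # undo handlers, registered on success

def run(do, undo) -> None:
    do()
    compensations.append(undo)   # handler registered only after success

def compensate_all() -> None:
    # Undo completed work in reverse order of completion (LIFO).
    while compensations:
        compensations.pop()()

run(lambda: log.append("file created"), lambda: log.append("file deleted"))
run(lambda: log.append("cred issued"),  lambda: log.append("cred revoked"))
compensate_all()                 # revokes the credential, then deletes the file
```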
Structured Exception Handling
The process-level approach to handling failures, escalations, and unexpected conditions using BPMN’s typed event taxonomy rather than generic retry loops. Each failure type (tool failure, API timeout, model refusal, resource exhaustion) is a distinct event type with a defined handling response. This contrasts with the blunt instruments in current agent frameworks — generic retry with maxIterations, or timeout-and-fail — which treat all exceptions as equivalent.
Structured exception handling includes: typed error events with per-type handlers, escalation events for conditions exceeding agent authority, compensation events for rollback when needed, and timer events for timeouts. The result is auditable, type-safe exception handling where the response to each kind of failure is explicitly defined.
Distinguished from: Try/catch (code-level; structured exception handling operates at the process level). Retry logic (one tactic within exception handling; structured exception handling is the full governance framework). Guardrails (preventive; exception handling is reactive — what happens when things go wrong despite guardrails).
First appears in: Clause 9 §9.4.
7. Decision Governance
Decision Table (DMN)
A structured representation of decision logic in DMN 1.0. A decision table consists of: input columns (the conditions being evaluated), output columns (the results produced), and rows (rules — each row specifies a combination of input conditions and the corresponding output). Decision tables separate decision logic from process flow, making decisions independently testable, versionable, and auditable.
In agent architectures, decision tables replace the current pattern of using LLM inference for decisions that should be deterministic. A compliance classification, an escalation routing rule, or a threshold evaluation should produce the same result every time given the same inputs — this requires a decision table, not probabilistic inference.
Distinguished from: Decision tree (a branching structure; a decision table is a matrix of inputs and outputs). Business rules engine (implementation technology; a decision table is a modeling construct). If/else chains (imperative code; decision tables are declarative and separately auditable).
First appears in: Clause 10 §10.3.
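A decision table reduces to rules-as-data plus a deterministic evaluator. This sketch uses a hypothetical compliance-classification table (inputs, outputs, and rules are illustrative) evaluated in the UNIQUE style, where exactly one rule must match:

```python
# One row per rule: (contains_pii, region) -> classification.
RULES = [
    (True,  "EU", "restricted"),
    (True,  "US", "controlled"),
    (False, "EU", "standard"),
    (False, "US", "standard"),
]

def classify(contains_pii: bool, region: str) -> str:
    matches = [out for pii, reg, out in RULES
               if pii == contains_pii and reg == region]
    # UNIQUE-style evaluation: exactly one rule matches any input.
    assert len(matches) == 1, "overlapping or missing rules are an error"
    return matches[0]

assert classify(True, "EU") == "restricted"  # same input, same output, always
```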
Hit Policy
A DMN concept that specifies how a decision table resolves when multiple rules match the same input. The hit policy is declared at the decision table level and determines the evaluation behavior. DMN defines several hit policies, summarized below; the two most relevant to this specification, UNIQUE and FIRST, are listed first:
| Hit Policy | Semantics | Agent Application |
|---|---|---|
| UNIQUE (U) | Exactly one rule matches any input; overlapping rules are an error | Classification — each input belongs to exactly one category |
| FIRST (F) | Rules are evaluated in priority order; the first match wins | Escalation routing — highest-priority matching rule determines the response |
| ANY (A) | Multiple rules may match, but all must produce the same output | Validation — multiple conditions may apply but the answer must be consistent |
| COLLECT (C) | Multiple rules may match; all outputs are collected as a list | Aggregation — gather all applicable responses |
| PRIORITY (P) | Multiple rules may match; the output with highest priority wins | Prioritized selection — multiple options, best one chosen |
| RULE ORDER (R) | Multiple rules may match; outputs listed in rule order | Ordered listing — sequence matters |
The hit policy makes explicit what is implicit in most decision logic: what happens when more than one rule applies? In agent architectures, this explicitness is critical for auditability — the evaluation strategy is declared, not hidden in code.
Distinguished from: Evaluation strategy in code (implicit and embedded; hit policy is explicit and declared). Conflict resolution (a broader concept; hit policy specifically governs multi-match behavior in decision tables). Precedence rules (a partial concept; hit policy covers the full range of multi-match behaviors, not just priority).
First appears in: Clause 10 §10.3.
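The multi-match question can be made concrete with one rule set evaluated under two hit policies (a hypothetical escalation-routing table; severity thresholds and outputs are illustrative). The overlapping rules below are valid under FIRST, where priority order resolves the overlap, and would be an error under UNIQUE:

```python
# Rules as (predicate, output), listed in priority order.
RULES = [
    (lambda sev: sev >= 9, "page-on-call"),
    (lambda sev: sev >= 5, "open-ticket"),
    (lambda sev: sev >= 0, "log-only"),
]

def evaluate(policy: str, severity: int) -> str:
    matches = [out for pred, out in RULES if pred(severity)]
    if policy == "UNIQUE":
        # Overlapping rules are an error under UNIQUE.
        assert len(matches) == 1, "UNIQUE: exactly one rule must match"
        return matches[0]
    if policy == "FIRST":
        return matches[0]        # highest-priority matching rule wins
    raise ValueError(f"unsupported hit policy: {policy}")

assert evaluate("FIRST", 9) == "page-on-call"   # three rules match; first wins
```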
Decision Requirements Diagram (DRD)
A DMN construct that models the dependencies between decisions — which decisions require the outputs of other decisions as inputs. A DRD shows how complex decisions decompose into simpler sub-decisions and how those sub-decisions connect to input data and knowledge sources. In agent architectures, DRDs make explicit the decision dependency chain that is often implicit: “before I can decide X, I need the results of decisions Y and Z.”
Distinguished from: Decision table (which models a single decision’s logic; a DRD models how multiple decisions relate). Flowchart (which models process flow; a DRD models decision dependencies). Dependency graph in software (which tracks code dependencies; a DRD tracks decision dependencies).
FEEL (Friendly Enough Expression Language)
The expression language defined by DMN for specifying input expressions, output expressions, and conditions in decision tables. FEEL is designed to be readable by business stakeholders while remaining formally executable — a middle ground between natural language (ambiguous) and programming languages (opaque to non-programmers). FEEL supports comparisons, ranges, boolean logic, string operations, date/time calculations, and list operations.
Distinguished from: SQL (a database query language; FEEL is specifically designed for decision table expressions). Python/JavaScript (general-purpose programming languages; FEEL is a domain-specific language for decision modeling). Natural language (ambiguous; FEEL is formally executable while remaining human-readable).
Deterministic/Probabilistic Separation
The structural principle that decisions requiring reproducible, auditable evaluation — compliance classification, threshold checks, routing rules, boundary enforcement — SHALL use deterministic decision models (DMN decision tables), while decisions requiring adaptive, context-sensitive judgment — observation, analysis, hypothesis generation, creative synthesis — SHOULD use LLM inference. This is not a prohibition of LLM judgment but a governance insistence that the right tool governs the right kind of decision.
The separation is critical because LLMs are probabilistic: given the same input, they may produce different outputs. For decisions where reproducibility and auditability matter, probabilistic variance is a governance deficiency, not a feature.
Distinguished from: Determinism vs. stochasticity (a mathematical concept; this is a governance principle about which decision-making mechanism is appropriate for which kind of decision). Anti-AI sentiment (the separation embraces LLM judgment for decisions where it excels; it constrains LLM judgment only for decisions that require deterministic evaluation). Rules-based vs. ML (a technology categorization; deterministic/probabilistic separation is a governance architecture that uses both, appropriately).
First appears in: Introduction §I.5. Specified in Clause 9 §9.3 and Clause 10 §10.3.
8. Performance and Risk
Value Stream Mapping (VSM)
A Lean manufacturing technique for analyzing the flow of materials and information through a process to identify value-adding steps versus waste (non-value-adding steps). Value Stream Mapping originated in the Toyota Production System and was formalized in the Lean manufacturing literature. It measures each step in a process against five dimensions: cost, work time (active processing), wait time (queuing and delays), total time (end-to-end lead time), and value-add classification (does this step add value for the customer?).
In this specification, VSM provides the Performance attribute family for the governed activity model (Clause 8, §8.3). Every governed activity carries five performance attributes derived from VSM: Cost, Work Time, Wait Time, Total Time, and Value-Add. These attributes enable performance analysis of agent processes — which steps are expensive, which are slow, and which add no value to the end deliverable.
Distinguished from: Process mining (automated discovery of processes from event logs; VSM is a manual analysis technique). Profiling (measuring code execution performance; VSM measures process-level performance including human and organizational factors). KPI dashboards (which display metrics; VSM is an analysis technique for identifying waste and improvement opportunities).
First appears in: Foreword. Specified in Clause 8 §8.3.
Cost (Performance Attribute)
The Performance attribute measuring the direct cost per execution of a governed activity. In agent architectures: token cost, API cost, compute cost, and external service cost per execution. Cost tracking at the activity level enables Value Stream Analysis — identifying which steps in a process are disproportionately expensive relative to the value they add.
Work Time (Performance Attribute)
The Performance attribute measuring active processing time — the duration during which the agent or human is actually performing work on the activity. Work Time excludes queue time, dependency waits, and overhead. In agent architectures: the wall-clock time the agent spends actively processing, excluding time waiting for human input, API responses, or upstream activities.
Distinguished from: Wait Time (time spent not working — queuing, waiting for dependencies). Total Time (the end-to-end duration including both Work Time and Wait Time). CPU time (a compute metric; Work Time is a process metric).
Wait Time (Performance Attribute)
The Performance attribute measuring non-productive time — queue time, dependency waits, human-in-the-loop latency, and external API response time. Wait Time is the gap between when an activity could start and when it does start, plus any pauses during execution. In Lean terminology, Wait Time is a primary source of waste — it adds to lead time without adding value.
Distinguished from: Work Time (active processing; Wait Time is inactive). Idle time (a system metric; Wait Time is a process metric that includes organizational delays). Latency (a networking concept; Wait Time encompasses all non-productive delays including human and organizational factors).
Total Time (Performance Attribute)
The Performance attribute measuring end-to-end duration of a governed activity, including Work Time, Wait Time, and all overhead. In Lean terminology, this is “lead time” — the elapsed time from when a work item enters the activity to when it exits. Total Time is the metric the customer experiences: how long the whole thing took.
Distinguished from: Work Time (just the active processing portion). Processing time (often used synonymously with Work Time). Cycle time (sometimes means Total Time, sometimes means Work Time — Total Time avoids this ambiguity).
Value-Add (Performance Attribute)
The Performance attribute classifying whether a governed activity adds value to the end deliverable or constitutes overhead, rework, or waste. In Lean thinking, “value-add” is defined from the customer’s perspective: does this step transform the work product in a way the customer would pay for? Steps that the customer would not value — internal approvals that add no quality, redundant reviews, format conversions, unnecessary data transformations — are non-value-add (waste).
In agent architectures, Value-Add classification enables identifying which steps in an agent process are essential and which are organizational overhead. This is particularly important as organizations add governance checkpoints — each checkpoint should be assessed for whether it adds genuine governance value or is ceremonial.
Distinguished from: Quality (a broader concept; Value-Add specifically asks “does the customer value this step?”). Efficiency (how well work is done; Value-Add asks whether the work should be done at all). ROI (a financial measure; Value-Add is a per-step process classification).
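The five Performance attributes form a simple record. The sketch below is illustrative only: the class and field names are hypothetical, not mandated by Clause 8, but it captures the structural relationship that Total Time must cover at least Work Time plus Wait Time.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PerformanceAttributes:
    """Illustrative record of the five VSM-derived attributes (Clause 8, §8.3)."""
    cost: float        # direct cost per execution (tokens, API calls, compute)
    work_time: float   # active processing time, in seconds
    wait_time: float   # queuing, dependency waits, human-in-the-loop latency
    total_time: float  # end-to-end lead time: the duration the customer experiences
    value_add: bool    # does this step add value from the customer's perspective?

    def __post_init__(self) -> None:
        # Lead time can never be shorter than work plus wait.
        if self.total_time < self.work_time + self.wait_time:
            raise ValueError("total_time must cover work_time + wait_time")

    @property
    def flow_efficiency(self) -> float:
        """Fraction of lead time spent actually working, a standard Lean ratio."""
        return self.work_time / self.total_time if self.total_time else 0.0


# A step that works for 30s but waits 90s: flow efficiency is only 24%.
step = PerformanceAttributes(cost=0.42, work_time=30.0, wait_time=90.0,
                             total_time=125.0, value_add=True)
```

A low flow efficiency like this is precisely the waste signal VSM is designed to surface: most of the lead time is Wait Time, not work.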
ISO 31000
The international standard for risk management, published by the International Organization for Standardization. ISO 31000 provides a framework for identifying, analyzing, evaluating, and treating risks. The framework defines a systematic process: risk identification (what could go wrong?), risk analysis (how likely and how severe?), risk evaluation (does this risk warrant action?), and risk treatment (what controls mitigate the risk?).
In this specification, ISO 31000 provides the Risk attribute family for the governed activity model (Clause 8, §8.4). Every governed activity carries two risk attributes: Risk (failure modes, likelihood, severity, and controls in place) and Problems (active issues, edge cases, or failure patterns currently affecting the activity). Activities with high-severity risk ratings SHALL have documented mitigation controls.
Distinguished from: NIST AI RMF (an AI-specific risk framework; ISO 31000 is a general risk management standard applicable to any domain). Risk register (a project management tool; ISO 31000 is the framework within which risk registers are maintained). Security assessment (one type of risk analysis; ISO 31000 encompasses all types of organizational risk).
First appears in: Foreword. Specified in Clause 8 §8.4.
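A minimal sketch of the Risk attribute family, assuming a hypothetical `RiskAttribute` record; the severity scale and field names are illustrative, but the validation rule mirrors the normative requirement that high-severity risks carry documented mitigation controls.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskAttribute:
    """Illustrative ISO 31000-style risk record for a governed activity (Clause 8, §8.4)."""
    failure_mode: str          # risk identification: what could go wrong?
    likelihood: float          # risk analysis: probability estimate in [0, 1]
    severity: Severity         # risk analysis: how severe if it happens?
    controls: list[str] = field(default_factory=list)  # risk treatment

    def validate(self) -> None:
        # High-severity risk ratings SHALL have documented mitigation controls.
        if self.severity is Severity.HIGH and not self.controls:
            raise ValueError(
                f"high-severity risk '{self.failure_mode}' has no controls")


risk = RiskAttribute("hallucinated citation", 0.05, Severity.HIGH,
                     controls=["citation verification gateway"])
risk.validate()  # passes: a mitigation control is documented
```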
9. Governance Infrastructure
Controlled Vocabulary
A centralized repository of authorized terms with definitions, types, visibility settings, and enforcement, maintained for each execution domain. In agent architectures, a controlled vocabulary prevents semantic drift across agents, sessions, and execution contexts. When Agent A’s output becomes Agent B’s input, both SHALL use the same terms for the same concepts. Without vocabulary constraints, LLMs — as probabilistic text generators — may refer to the same concept differently across interactions, accumulating semantic inconsistency that degrades process integrity.
Distinguished from: Glossary (informational; a controlled vocabulary is normative and enforced). Taxonomy (a classification hierarchy; a controlled vocabulary is a flat list of authorized terms with definitions). Ontology (a formal knowledge representation; a controlled vocabulary is simpler — authorized terms and their definitions, without formal logic or reasoning capabilities). Prompt engineering (an informal practice; a controlled vocabulary is governance infrastructure that constrains what terms agents may use).
First appears in: Clause 10 §10.1.
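Enforcement can be sketched as a membership check against the authorized term set. Everything here (the vocabulary entries, the `check_terms` helper) is hypothetical, illustrating the mechanism rather than prescribing an implementation.

```python
# Hypothetical controlled vocabulary for a single execution domain: each
# authorized term carries a definition, maintained centrally.
VOCABULARY = {
    "claim":      "A request for payment under a policy.",
    "adjuster":   "The role accountable for claim evaluation.",
    "settlement": "The approved payout amount for a claim.",
}


def check_terms(terms: set[str]) -> set[str]:
    """Return the terms NOT present in the controlled vocabulary."""
    return {t for t in terms if t.lower() not in VOCABULARY}


# "payout" is not an authorized term: the semantic drift the vocabulary
# catches before Agent A's output becomes Agent B's input.
unauthorized = check_terms({"claim", "payout"})
```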
Policy Linkage
The ability to link governance documents — regulatory requirements, corporate policies, operational procedures, compliance standards — to any process element (activity, gateway, subprocess, or entire process) so that the relevant governance constraints are available at point of execution. Policy linkage replaces the current agent framework pattern of conflating governance constraints with execution instructions in system prompts. Instead, governance documentation is separated from execution instructions and linked with per-step precision.
The governance contribution: an agent does not need to “know” the full regulatory landscape. It needs access to the specific policies that govern this step.
Distinguished from: System prompt instructions (which conflate governance with execution; policy linkage separates them). Compliance documentation (which exists somewhere; policy linkage makes it accessible at the point of execution). RAG (Retrieval Augmented Generation — a mechanism for retrieving information; policy linkage is a governance construct that ensures the right governance documents are linked to the right process elements).
First appears in: Clause 10 §10.2.
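A minimal sketch of policy linkage as a mapping from process element identifiers to governance documents, resolved at the point of execution; the element IDs and document paths below are invented for illustration.

```python
# Hypothetical per-step policy links: governance documents are attached to
# individual process elements, not dumped wholesale into a system prompt.
POLICY_LINKS: dict[str, list[str]] = {
    "activity:approve_claim": ["policy/claims-authority.md",
                               "reg/state-insurance-code.md"],
    "gateway:fraud_check":    ["policy/fraud-referral.md"],
}


def policies_for(element_id: str) -> list[str]:
    """Return only the policies governing THIS step, not the full landscape."""
    return POLICY_LINKS.get(element_id, [])
```

The agent executing `gateway:fraud_check` receives one policy document, with per-step precision, rather than the entire regulatory corpus.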
Governance Scope Boundary
An organizational container for related process artifacts that defines permission boundaries and inheritance. In agent architectures, governance scope boundaries establish domain-scoped authority: a “Finance” scope contains finance processes with finance-specific vocabulary, policies, and access controls; a “Customer Support” scope operates within different constraints. Scope boundaries prevent cross-domain contamination and enforce the principle that agent authority is bounded — an agent authorized for customer support processes cannot modify finance processes.
Distinguished from: Security perimeter (an infrastructure concept; governance scope boundaries are organizational governance containers). Namespace (a code organization concept; governance scope boundaries carry governance semantics including permission and inheritance). Organizational boundary (which may be informal; governance scope boundaries are formal governance containers with defined permissions).
First appears in: Clause 10 §10.4.
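A scope-boundary check can be sketched as a permission lookup; the agent identifiers and scope names below are hypothetical.

```python
# Hypothetical grants: each agent holds authority only within named
# governance scopes.
AGENT_SCOPES: dict[str, set[str]] = {
    "support-agent-7": {"customer-support"},
    "finance-agent-2": {"finance"},
}


def may_modify(agent_id: str, scope: str) -> bool:
    """An agent may modify process artifacts only inside a scope it holds."""
    return scope in AGENT_SCOPES.get(agent_id, set())


# Cross-domain contamination is blocked: a support agent cannot touch
# finance processes, and an unknown agent holds no authority at all.
allowed = may_modify("support-agent-7", "customer-support")   # True
blocked = may_modify("support-agent-7", "finance")            # False
```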
Audit Trail
The complete history of all changes to every process element, with attribution and rollback capability. In agent architectures, the audit trail records: who did what, when, under what authority, with what inputs, producing what outputs, and whether the action was aligned with governing intent. The audit trail is a primary output of execution governance — it provides the structured evidence that flows upward through the stitching mechanism (Clause 7 §7.3) to support Intent Stack L1 (Runtime Alignment) alignment assessment.
The audit trail SHALL be append-only for governance-critical events. Process modifications SHOULD be version-controlled with full attribution.
Distinguished from: Logs (operational records; an audit trail is governance evidence with attribution and rollback capability). Observability (monitoring what happens; an audit trail provides accountability for what happened). Version history (tracking changes; an audit trail records not just what changed but who changed it, under what authority, and whether it aligned with governing intent).
First appears in: Clause 10 §10.5.
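An append-only audit trail can be sketched as a hash-chained log: entries can be added and read, never updated or deleted, and each entry commits to its predecessor so tampering is detectable. The class and field names are illustrative, not normative.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    """Hypothetical append-only audit trail: who did what, when, under what
    authority, with what inputs and outputs (in the spirit of Clause 10 §10.5)."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, actor: str, action: str, authority: str,
               payload: dict) -> str:
        entry = {
            "seq": len(self._entries),
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor, "action": action, "authority": authority,
            "payload": payload,
            # Chain each entry to its predecessor so tampering is detectable.
            "prev": self._entries[-1]["hash"] if self._entries else None,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._entries.append(entry)
        return entry["hash"]

    def entries(self) -> list[dict]:
        return list(self._entries)  # read-only copy; no update or delete API


trail = AuditTrail()
first = trail.append("planner-agent", "decompose", "delegation:project",
                     {"tasks": 3})
```

The absence of any update or delete method is the point: append-only is enforced by the interface, not by convention.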
Derived Documentation
Documentation auto-generated from the process model rather than manually authored. The process model is the source of truth; documentation is a derived projection. When the process model changes, documentation regenerates automatically. This is a direct instance of the Intent Stack’s “source state over derived state” principle: maintain the source (the process model), derive artifacts (documentation, reports, compliance summaries) on demand.
Distinguished from: Written documentation (manually authored; derived documentation is auto-generated). API documentation (generated from code; derived documentation is generated from process models). Reports (which may be manual; derived documentation is an automatic projection of the process model into narrative form).
First appears in: Clause 10 §10.6.
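Derivation can be sketched as a pure projection from the process model into narrative text; the model shape and output layout below are assumptions. The point is that editing the model, not the document, is how documentation changes.

```python
# Hypothetical process model: the source of truth from which documentation
# is regenerated on demand.
process_model = {
    "name": "Claim Intake",
    "activities": [
        {"id": "a1", "name": "Validate claim", "owner": "intake-agent"},
        {"id": "a2", "name": "Route to adjuster", "owner": "router"},
    ],
}


def derive_docs(model: dict) -> str:
    """Project the process model into a narrative document."""
    lines = [f"# Process: {model['name']}", ""]
    for act in model["activities"]:
        lines.append(f"- **{act['name']}** ({act['id']}), owner: {act['owner']}")
    return "\n".join(lines)
```

Because `derive_docs` is a pure function of the model, documentation can never drift from the process it describes: change the source, regenerate the projection.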
10. Agent Deployment Patterns
Agent Species
An empirically observed cluster of agent deployment patterns sharing governance configuration characteristics. The term “species” (adapted from Jones, 2026) captures the observation that production agent deployments naturally cluster into distinct patterns — not because anyone designed a taxonomy, but because certain governance configurations are repeatedly useful. This specification identifies five species (Clause 6): Coding Harness (Individual), Coding Harness (Project-Scale), Dark Factory, Auto Research, and Orchestration Framework. Each species is explained by its governance configuration across the Five Intent Primitives, the position of humans, and the degree of BPM/Agent Stack contribution needed.
Distinguished from: Agent type (a classification by capability; agent species is a classification by governance configuration). Framework (a technology choice; species is a deployment pattern). Architecture (a system design; species is a governance characterization of how a deployment is structured).
First appears in: Introduction §I.2. Specified in Clause 6.
Coding Harness (Individual)
An agent species where a single LLM agent operates as a developer substitute. The human provides tasks, the agent executes, the human reviews output. The simplest agentic pattern. Governance configuration: informal intent communication, human judgment throughout, single delegation interface, minimal BPM/Agent Stack contribution. The RACI matrix is trivial (developer = Accountable, agent = Responsible). Process structure adds overhead without proportional value at this scale. The simplicity is the feature.
Distinguished from: Coding Harness (Project-Scale) (which adds a planner agent managing multiple executors — structurally different because the agent is the manager, not the human). An IDE plugin (a specific implementation; the Coding Harness is a governance characterization of a deployment pattern).
First appears in: Clause 6 §6.2.1.
Coding Harness (Project-Scale)
An agent species where a planner agent decomposes project-level work and manages executor agents. The agent is the manager, not the human. The human operates at edges and checkpoints. This is the governance complexity threshold — the point at which informal governance breaks down and structured process governance becomes necessary. The BPM/Agent Stack’s Orchestration concern enters here: swimlanes for planner-executor responsibility separation, gateways for task routing, and subprocesses for governed decomposition.
Distinguished from: Coding Harness (Individual) (single agent, single interface, human throughout; Project-Scale has multiple agents, N+1 interfaces, and humans at edges). Orchestration Framework (specialized agents with per-handoff review; Project-Scale has a planner managing general-purpose executors).
First appears in: Clause 6 §6.2.2.
Dark Factory
An agent species with near-zero human involvement between specification input and evaluation-passing output. Humans are at the edges only: design and intent at the top, evaluation and review at the end. The middle is autonomous. The critical quality gate is Intent Stack L2 (Specification) — if intent is not formalized well at L2, the dark factory produces the wrong thing correctly. The BPM/Agent Stack contribution is full: governed activities, typed gateways, structured exception handling, escalation events, subprocess decomposition, and audit trail.
The name comes from manufacturing — a “lights-out factory” that operates without human presence on the production floor.
Distinguished from: Fully autonomous AI (a dark factory has humans at the edges for design and evaluation; it is autonomous in the middle, not everywhere). Automated testing (one component; a dark factory is an entire governed production pipeline). Batch processing (a computing concept; a dark factory is a governance pattern for agent autonomy within bounded scope).
First appears in: Introduction §I.2. Specified in Clause 6 §6.2.3.
Auto Research
An agent species where an agent optimizes for a metric through iterative experimentation — not producing software but climbing a hill toward an optimum. Descended from classical machine learning techniques and formalized by Karpathy (2026). The frozen metric IS the Boundaries primitive — the agent cannot modify its own evaluation function. The constrained action space IS boundary enforcement. Auto research works BECAUSE the agent cannot relax its own constraints.
Distinguished from: Research by an agent (general information gathering; Auto Research is a specific optimization loop pattern). A/B testing (comparing alternatives; Auto Research iteratively improves toward a metric). Machine learning training (which optimizes model weights; Auto Research optimizes artifacts or configurations toward a metric using an agent, not gradient descent).
First appears in: Introduction §I.2. Specified in Clause 6 §6.2.4.
Orchestration Framework
An agent species where multiple LLMs with specialized roles coordinate through handoffs — researcher, writer, editor, reviewer. Heavy human involvement at every transition point. The BPM/Agent Stack contribution is at its maximum here because orchestration IS process management: swimlanes for role separation, gateways for routing decisions, events for exception handling, message flows for governed handoffs, subprocesses for governed decomposition, controlled vocabulary for semantic consistency, and decision models for deterministic routing logic.
Distinguished from: A single multi-tool agent (which has one role and many tools; an orchestration framework has multiple specialized agents). Coding Harness (Project-Scale) (which has a planner managing general executors; an orchestration framework has specialized agents coordinating through handoffs). LangGraph/CrewAI (specific implementations; Orchestration Framework is a governance characterization of the deployment pattern).
First appears in: Introduction §I.2. Specified in Clause 6 §6.2.5.
Governance Complexity Gradient
The observation that the five agent species, when arranged by BPM/Agent Stack contribution, form a monotonic gradient from minimal governance infrastructure (Coding Harness (Individual)) to maximum governance infrastructure (Orchestration Framework). The gradient has a structural interpretation: BPM/Agent Stack contribution scales with the number and complexity of governance interfaces in the deployment pattern. Species with few interfaces need minimal execution governance; species with many interfaces need rich execution governance. This gradient MAY serve as a proxy measure for governance complexity.
Distinguished from: Difficulty (governance complexity is structural, not experiential). Maturity model (which implies progression; the gradient is descriptive — some deployments genuinely need minimal governance). Overhead (governance complexity is not overhead — it is the necessary structure for the deployment pattern’s governance requirements).
First appears in: Clause 6 §6.3.
11. Architecture and Connection
Stitching Mechanism
The structural connection between the Intent Stack and BPM/Agent Stack — the bidirectional interface where governance context becomes actionable execution specification. The primary stitching point is the Key Tasks primitive (fifth of the Five Intent Primitives) operationalized at Intent Stack L2 (Specification). Key Tasks defines what work is authorized; L2 translates that authorization into actionable direction; the BPM/Agent Stack’s governed process model IS the execution specification that implements that direction.
The stitching mechanism is bidirectional: intent flows downward from governance context through Key Tasks into governed processes, and structured evidence flows upward from execution through the audit trail into Intent Stack L1 (Runtime Alignment) for alignment assessment.
Distinguished from: An API (implementation mechanism; the stitching mechanism is an architectural connection between two governance specifications). A plugin interface (extensibility mechanism; the stitching mechanism connects two peer specifications, neither subordinate). Integration (the stitching mechanism is the governance architecture’s structural joint, not a technology integration).
First appears in: Clause 7.
Key Tasks (as Stitching Primitive)
The fifth Intent Primitive, defined by the Intent Stack as the authorized scope of work at a governance interface. In the context of the stitching mechanism, Key Tasks is the primary structural joint connecting governance context to execution structure. Each authorized Key Task, as operationalized through Intent Stack L2 (Specification), becomes the entry point for a governed process in the BPM/Agent Stack. Key Tasks defines WHAT work is authorized; the BPM/Agent Stack defines HOW that authorized work gets executed.
For the full definition of Key Tasks as an Intent Primitive, see the Intent Stack glossary.
Distinguished from: Tasks in agent frameworks (prompt-response exchanges; Key Tasks is a governance primitive that defines authorized scope). A to-do list (Key Tasks is a governance mechanism defining authorized work, not a work plan). Work breakdown structure (a project management artifact; Key Tasks is an Intent Primitive at every governance interface).
First appears in: Clause 7 §7.2.
Boundary Propagation
The mechanism by which the Boundaries primitive (third of five Intent Primitives) propagates from the Intent Stack through governance interfaces into BPM/Agent Stack execution through multiple channels: policy links, vocabulary constraints, gateway conditions, event triggers, and activity restrictions. Boundary propagation is a secondary stitching mechanism that operates in parallel with the primary Key Tasks stitch.
Distinguished from: Inheritance in code (Boundary propagation is governance inheritance, not object-oriented inheritance). Access control lists (implementation mechanism; Boundary propagation is governance-level constraint inheritance). Rule propagation in business rules engines (technology-specific; Boundary propagation is architecture-level governance).
First appears in: Clause 7 §7.4.
Monotonic Accumulation
The structural property of the Boundaries primitive: constraints can only be added, never removed, as they propagate through governance interfaces. A subprocess SHALL NOT authorize what its parent process prohibits. A downstream delegation interface SHALL NOT relax constraints established by an upstream interface. Monotonic accumulation is what makes Boundaries the only Intent Primitive where Constitutional Intent always overrides — once a constraint is established, it cannot be removed by any lower-level authority.
In the BPM/Agent Stack, monotonic accumulation is enforced at every subprocess boundary: each subprocess inherits all Boundary constraints from its parent and may add additional constraints but may never relax them.
Distinguished from: Additive permissions (where permissions are granted; monotonic accumulation is about constraints — things that must NOT happen). Scope narrowing (an informal concept; monotonic accumulation is a formal structural property with mathematical grounding in the Intent Stack’s join-semilattice model). Access control inheritance (typically allows override; monotonic accumulation does not).
First appears in: Foreword. Specified in Clause 7 §7.4 and Clause 9 §9.5.
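Monotonic accumulation is exactly the join (set union) of the join-semilattice over constraint sets, which a few lines make concrete; the constraint strings here are invented for illustration.

```python
# Propagation by set union: a child scope can add constraints but has no
# operation available to remove what its parent established.
def propagate(parent_constraints: frozenset[str],
              added: frozenset[str]) -> frozenset[str]:
    """Child constraints = parent constraints joined with any new ones."""
    return parent_constraints | added


root = frozenset({"no PII in outputs"})
child = propagate(root, frozenset({"read-only database access"}))
grandchild = propagate(child, frozenset())  # adds nothing, relaxes nothing

# Every ancestor constraint survives at every depth of decomposition.
assert root <= child <= grandchild
```

Using `frozenset` makes the structural property visible in the types: constraint sets are immutable, and the only combining operation is union, so relaxation is unrepresentable.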
Evidence Return Path
The upward flow through the stitching mechanism from BPM/Agent Stack execution to Intent Stack L1 (Runtime Alignment). The BPM/Agent Stack SHALL provide structured evidence — audit trails, performance metrics, exception records, decision logs, completion artifacts, and boundary compliance records — sufficient for alignment assessment. Evidence quality is a primary execution governance concern: execution that produces poor evidence is governance-deficient regardless of outcome quality.
Distinguished from: Reporting (informational; the evidence return path provides governance evidence for alignment assessment). Logging (operational records; the evidence return path provides structured, typed governance evidence). Feedback loop (implies correction; the evidence return path enables alignment assessment, which may or may not lead to correction).
First appears in: Clause 7 §7.3.
Holdout Principle
A validation mechanism adapted from machine learning’s holdout set methodology: acceptance criteria that the implementing agent never sees during execution. If the implementing agent can see the acceptance criteria, it can optimize for passing them rather than genuinely respecting the governance boundary. Keeping validation criteria external ensures the implementation serves the governance intent, not the evaluation’s specifics.
In this specification, the holdout principle applies to: IP classification boundaries (the authoring agent never sees the leakage detection scenarios), process compliance boundaries (the executing agent never sees the compliance test cases), and delegation authority boundaries (the agent never sees the authority-exceeding test patterns).
Distinguished from: Test suite (which the developer can see and optimize for; holdout criteria are hidden from the implementer). Quality gates (which are visible checkpoints; the holdout principle specifically requires invisibility to the implementer). Acceptance testing (which may be visible; the holdout principle requires the criteria to be external and hidden).
First appears in: Clause 12 §12.1.
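The principle can be sketched as an evaluator that holds its criteria privately: the implementing agent receives only the task, never the checks. The class and the leakage check below are hypothetical.

```python
# Hypothetical holdout evaluator: acceptance criteria live with the
# evaluator and are never serialized into the implementing agent's context.
class HoldoutEvaluator:
    def __init__(self, hidden_criteria):
        self._criteria = hidden_criteria  # invisible to the implementer

    def task_for_agent(self) -> str:
        # Only the task crosses the boundary; no criteria leak here.
        return "Summarize the incident report."

    def evaluate(self, output: str) -> bool:
        return all(check(output) for check in self._criteria)


evaluator = HoldoutEvaluator([
    lambda out: len(out) > 0,
    # A leakage check the agent cannot see, and so cannot optimize against.
    lambda out: "confidential" not in out.lower(),
])
passed = evaluator.evaluate("Pump 3 failed at 02:14; maintenance dispatched.")
```

Because the agent only ever sees `task_for_agent()`, it can satisfy the criteria only by genuinely respecting the governance boundary, not by fitting to the tests.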
Information Scent
Expert-detectable signals in published material indicating depth behind what is shown. The concept comes from information foraging theory (Pirolli and Card, 1999): just as animals follow scent trails to food, information seekers follow signals of information quality to deeper knowledge. In this specification, information scent is a design property of the IP boundary — the Publishable tier of the three-tier IP classification is explicitly designed to create expert-detectable scent. Domain experts examining the public material should detect that there is substantial depth behind what is shown, leading to engagement rather than reproduction.
Distinguished from: Marketing (information scent is detected by experts, not created for mass appeal). Documentation quality (which aims to be complete; information scent aims to signal depth). SEO (which optimizes for search engines; information scent optimizes for expert recognition of depth).
First appears in: Clause 13 §13.3.
12. Context, Memory, and Intent
Context (Structural Concept)
The information available to an LLM at inference time — system prompt, conversation history, tool results, retrieved documents, MCP server outputs. Context is the vehicle for intent communication, not intent itself. You can have rich context with zero governance (the current state of most agent deployments). The Intent Stack’s contribution is structuring what goes into context so it carries governed intent, not just information.
Each agent species has a different context architecture: individual coding harness uses human-curated conversational context; dark factory uses formal specification as context; auto research uses frozen metric plus research direction; orchestration framework uses per-role context at each handoff. The context architecture is a consequence of the governance configuration, not an independent design choice.
For the full treatment of context’s role in governance, see Clause 11.
Distinguished from: Intent (the governance content communicated through context). Memory (the mechanism for persisting context across sessions). Knowledge (a broader concept; context is specifically what is available at inference time). Prompt (one component of context; context includes conversation history, tool results, and retrieved documents as well).
Memory (Structural Concept)
The mechanism for information persistence across sessions — conversation history, vector stores, knowledge bases, auto-memory. Memory is mechanism, not governance. The critical distinction: how information persists (memory) versus what should be remembered, with what authority, under what constraints (governance). Native memory mechanisms handle persistence; governance infrastructure handles what the memory system should capture and how captured information governs future behavior. The separation is architectural, not incidental.
For the full treatment of memory’s role in governance, see Clause 11.
Distinguished from: Intent (the governance content that memory persists). Context (what is available at inference time; memory is what persists across sessions). Knowledge base (an implementation; memory is the governance concept of persistence). Learning (which implies behavioral change; memory is storage, governed by separate governance infrastructure).
Intent (Structural Concept)
The content of governance at a delegation interface. Not “what the user wants” — the complete governance specification decomposed into five irreducible primitives (Purpose, Direction, Boundaries, End State, Key Tasks), originating from four sources (Constitutional, Discovered, Cultivated, Emergent). Intent is relational (constituted between entities), processual (evolving through governance relationships), and normative (carrying prescriptive force). Context is how intent is communicated. Memory is how intent persists. Neither is intent itself.
For the full definition and decomposition, see the Intent Stack glossary.
Distinguished from: Desire (subjective; intent in the Intent Stack is a structural governance concept). Instruction (too specific; intent is the full governance specification at a boundary). Goal (too abstract to govern against directly). “What the user wants” (a colloquial reduction; intent is the complete five-primitive governance specification).
First appears in: Clause 11.
13. Additional Terms
Conformance Target
A defined subset of this specification against which an implementation can claim conformance. This specification defines three conformance targets: Activity Model (Clause 8 — the 21 governed activity attributes), Process Structure (Clause 9 — typed gateways, structured exception handling, governed decomposition, explicit flows), and Governance Infrastructure (Clause 10 — controlled vocabulary, policy linkage, decision models, scope boundaries, audit trail). An implementation MAY conform to one or more targets independently.
Distinguished from: Compliance (meeting regulatory requirements; conformance is meeting specification requirements). Certification (which implies third-party verification; conformance is a self-declared claim with documented deviations). Feature checklist (which counts features; conformance requires the structural properties defined by the target, not just the presence of features).
First appears in: Clause 2 §2.1.
Self-Referential Governance
The observation that this specification’s own public/private boundary (the IP classification) is a governance instance of the architecture it describes. The three-tier IP classification (Published, Publishable, Protected) is a Boundaries primitive applied to a governance interface. The IP Classification Document is Intent Stack L3 (Intent Formalization). The scope definition is Intent Stack L2 (Specification). The holdout validation scenarios are Intent Stack L1 (Runtime Alignment). The authoring agent operates within governing constraints inherited from above — a BPM/Agent Stack execution instance.
Distinguished from: Dogfooding (using your own product; self-referential governance is the structural observation that the governance architecture applies to its own publication). Meta-governance (governance of governance processes; self-referential governance is a specific instance where the specification governs its own boundary). Recursion in code (a programming concept; self-referential governance is an architectural property).
First appears in: Clause 13.
Five Intent Primitives
Purpose, Direction, Boundaries, End State, Key Tasks — the five irreducible structural elements present at every governance interface. The primitives describe governance content: what must be governed at any principal-agent relationship. They are defined, decomposed, and analyzed by the Intent Stack (Clause 5 of the companion specification). This specification references them extensively — they appear at every delegation interface, they structure the agent species analysis (Clause 6), and the Key Tasks primitive is the primary stitching point connecting the two specifications (Clause 7).
For the full definition and decomposition of each primitive, see the Intent Stack glossary.
Distinguished from: Governance concerns (the seven concerns describe governance questions; the five primitives describe governance content). Governance layers (which describe governance structure; the five primitives describe what governance contains). Values (which are abstract; the five primitives are structural elements with specific roles).
First appears in: Introduction §I.4. Referenced throughout.
Four Governance Context Layers
The Intent Stack’s four governance concerns arranged in vertical composition: L4 Intent Discovery (what does the principal actually intend?), L3 Intent Formalization (how is intent represented in machine-processable form?), L2 Specification (given this intent, what shall we actually do?), and L1 Runtime Alignment (is execution aligned with intent?), with Constitutional AI as the substrate beneath L1. Intent flows downward from discovery to alignment; evidence flows upward from alignment to discovery.
For the full specification, see the Intent Stack specification.
Distinguished from: The three execution governance concerns (which are this specification’s domain). OSI network layers (the Intent Stack layers are governance concerns, not protocol layers). Software architecture layers (the Intent Stack layers are governance layers, not technology layers). The Five Intent Primitives (which describe governance content; the four layers describe governance structure).
Three Execution Governance Concerns
This specification’s three concerns: Orchestration, Integration, and Execution. Together with the four governance context layers, these constitute the seven governance concerns of the complete two-specification architecture. The three execution governance concerns are the formal scope of this specification — they define what this specification governs and what it does not.
Distinguished from: The four governance context layers (the Intent Stack’s domain). Operational concerns (a broader category; the three execution governance concerns are specifically scoped to process-level execution governance). Infrastructure concerns (which may include hosting, networking, etc.; the three execution governance concerns are governance concerns, not technology infrastructure concerns).
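The arithmetic of the complete two-specification architecture can be stated directly. The concern names come from this clause and the preceding entry; the list containers are illustrative only.

```python
# Intent Stack: the four governance context layers (companion specification).
CONTEXT_LAYERS = [
    "Intent Discovery", "Intent Formalization",
    "Specification", "Runtime Alignment",
]

# This specification: the three execution governance concerns.
EXECUTION_CONCERNS = ["Orchestration", "Integration", "Execution"]

# Together they constitute the seven governance concerns.
SEVEN_GOVERNANCE_CONCERNS = CONTEXT_LAYERS + EXECUTION_CONCERNS
```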
Alphabetical Index
| Term | Section |
|---|---|
| ABPMP BPM CBOK | 2 |
| Accountable Owner | 4 |
| Activity | 3 |
| Activity Attributes | 3 |
| Agent Species | 10 |
| Audit Trail | 9 |
| Auto Research | 10 |
| Boundary Propagation | 11 |
| BPM (Business Process Management) | 1 |
| BPM/Agent Stack | 1 |
| BPMN 2.0 | 2 |
| Business Rule Task | 3 |
| CMMN 1.0 | 2 |
| Coding Harness (Individual) | 10 |
| Coding Harness (Project-Scale) | 10 |
| Compensation Event | 6 |
| Conformance Target | 13 |
| Context (Structural Concept) | 12 |
| Controlled Vocabulary | 9 |
| Cost (Performance Attribute) | 8 |
| Customers (SIPOC Attribute) | 4 |
| Dark Factory | 10 |
| Decision Requirements Diagram | 7 |
| Decision Table (DMN) | 7 |
| Derived Documentation | 9 |
| Deterministic/Probabilistic Separation | 7 |
| DMN 1.0 | 2 |
| Error Event | 6 |
| Escalation Event | 6 |
| Event (BPMN) | 6 |
| Event-Based Gateway | 5 |
| Evidence Return Path | 11 |
| Exclusive Gateway (XOR) | 5 |
| Execution (Governance Concern) | 1 |
| Execution Governance | 1 |
| FEEL | 7 |
| Five Intent Primitives | 13 |
| Four Governance Context Layers | 13 |
| Gateway | 5 |
| Governance Complexity Gradient | 10 |
| Governance Configuration | 1 |
| Governance Scope Boundary | 9 |
| Governed Activity | 3 |
| Governed Decomposition | 5 |
| Hit Policy | 7 |
| Holdout Principle | 11 |
| Inclusive Gateway (OR) | 5 |
| Information Scent | 11 |
| Inputs (SIPOC Attribute) | 4 |
| Integration | 1 |
| Intent (Structural Concept) | 12 |
| ISO 31000 | 8 |
| Key Tasks (as Stitching Primitive) | 11 |
| Manual Task | 3 |
| Memory (Structural Concept) | 12 |
| Message Event | 6 |
| Message Flow | 5 |
| Milestone | 5 |
| Monotonic Accumulation | 11 |
| OMG (Object Management Group) | 2 |
| Orchestration | 1 |
| Orchestration Framework | 10 |
| Orthogonality | 1 |
| Outputs (SIPOC Attribute) | 4 |
| Parallel Gateway (AND) | 5 |
| Participant | 4 |
| Policy Linkage | 9 |
| Process | 2 |
| Process Instance | 2 |
| Process Model | 2 |
| RACI Matrix | 4 |
| Receive Task | 3 |
| RFC 2119 | 2 |
| Script Task | 3 |
| Self-Referential Governance | 13 |
| Send Task | 3 |
| Sequence Flow | 5 |
| Service Task | 3 |
| Seven Governance Concerns | 1 |
| Signal Event | 6 |
| SIPOC | 4 |
| Stitching Mechanism | 11 |
| Structured Exception Handling | 6 |
| Subprocess | 3 |
| Suppliers (SIPOC Attribute) | 4 |
| Swimlane (Pool and Lane) | 5 |
| Systems (Activity Attribute) | 4 |
| Three Execution Governance Concerns | 13 |
| Three-Layer Architecture | 1 |
| Timer Event | 6 |
| Total Time (Performance Attribute) | 8 |
| User Task | 3 |
| Value-Add (Performance Attribute) | 8 |
| Value Stream Mapping | 8 |
| Wait Time (Performance Attribute) | 8 |
| Work Time (Performance Attribute) | 8 |