Clause 11 — Context, Memory, and Intent

11. Context, Memory, and Intent

(Normative)

11.1 The Overloading Problem

Three concepts — context, memory, and intent — are routinely conflated in discourse about AI agents. People say “give the agent more context” when they mean “communicate intent better.” They say “the agent needs memory” when they mean “intent should persist across sessions.” The conflation obscures structural relationships that matter for governance.

11.2 Structural Disambiguation

Context is the vehicle for intent communication. It is the information available to an LLM at inference time — system prompt, conversation history, tool results, retrieved documents, MCP server outputs. Context is not intent. You can have rich context with zero governance (the current state of most agent deployments). The Intent Stack’s contribution is structuring what goes into context so it carries governed intent, not just information.

Each agent species has a different context architecture. Individual coding harness: human-curated conversational context. Dark factory: formal specification as context. Auto research: frozen metric plus research direction as context. Orchestration framework: per-role context at each handoff. The context architecture is a consequence of the governance configuration, not an independent design choice.
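The species-to-architecture relationship above is a fixed mapping, not a free parameter, which can be made explicit in a small sketch (the identifiers are hypothetical labels for the four species named in the text):

```python
# Context architecture as a consequence of governance configuration:
# each agent species determines its architecture; it is not chosen freely.
CONTEXT_ARCHITECTURES: dict[str, str] = {
    "individual_coding_harness": "human-curated conversational context",
    "dark_factory": "formal specification as context",
    "auto_research": "frozen metric plus research direction as context",
    "orchestration_framework": "per-role context at each handoff",
}

def context_architecture(species: str) -> str:
    """Look up the architecture implied by a governance configuration."""
    return CONTEXT_ARCHITECTURES[species]
```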

Memory is the mechanism for persistence. It is persistent information across sessions — conversation history, vector stores, knowledge bases, auto-memory. Memory is mechanism, not governance. The Intent Stack distinguishes mechanism (how information persists) from governance (what should be remembered, with what authority, under what constraints).

This distinction was operationally validated in Session 0051 of Intent OS development: native memory mechanisms (auto-memory) handle persistence; governance infrastructure handles what the memory system should capture and how captured information governs future behavior. The separation is architectural, not incidental.

Intent is the content of governance. It is not “what the user wants” — it is the complete governance specification at a delegation interface, decomposed into five irreducible primitives (Purpose, Direction, Boundaries, End State, Key Tasks). Intent originates from four sources (Constitutional, Discovered, Cultivated, Emergent). Intent is relational (constituted between entities), processual (evolving through governance relationships), and normative (carrying prescriptive force). Context is how intent is communicated. Memory is how intent persists. Neither is intent itself.
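The five primitives and four sources can be rendered as a type, to make the decomposition concrete. The class and field names below are a hypothetical sketch derived directly from this clause, not a normative schema:

```python
from dataclasses import dataclass
from enum import Enum

class IntentSource(Enum):
    """The four intent sources named in 11.2."""
    CONSTITUTIONAL = "constitutional"
    DISCOVERED = "discovered"
    CULTIVATED = "cultivated"
    EMERGENT = "emergent"

@dataclass
class Intent:
    """A complete governance specification at one delegation interface,
    decomposed into the five irreducible primitives."""
    purpose: str            # why the delegated work exists
    direction: str          # how the work should be approached
    boundaries: list[str]   # what must not be done
    end_state: str          # what "done" looks like
    key_tasks: list[str]    # the essential work items
    source: IntentSource = IntentSource.CONSTITUTIONAL
```

Note what the type does not contain: no conversation history (that is context) and no persistence fields (that is memory), matching the closing claim that neither is intent itself.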

11.3 Implications for Agent Architecture

The context/memory/intent distinction has practical implications for each layer of the governance architecture:

| Layer | Context | Memory | Intent |
| --- | --- | --- | --- |
| Constitutional AI | Training data and RLHF examples | Model weights | Trained values and character |
| Intent Stack | Governance interface content | Governance evidence store | Five primitives at each delegation interface |
| BPM/Agent Stack | Per-activity documentation, policy links, input data | Process instance history, audit trail | Process specification and governance attributes |

Agent frameworks that conflate context with intent will produce agents that are well-informed but ungoverned. Agent frameworks that conflate memory with intent will produce agents that remember everything but serve nothing. The three-layer architecture provides the structural framework for keeping these concepts properly separated while ensuring they work together.