Encoding intent is the primitive

post draft

Systems that don't encode intent degrade to keyword search. No matter how sophisticated the retrieval layer, without a representation of what the user meant, every downstream interaction is a guess. Most "AI integrations" in the wild are keyword wrappers wearing the word *agent* as decoration. They fire on strings; they don't carry meaning.

The primitive at the root of a trustworthy agentic system is the ability to carry intent — a structured object that says *this is what I'm trying to do, this is the state I prefer, these are the constraints I won't cross*. Everything else is scaffolding around that primitive. Get the primitive right and the rest composes. Get it wrong and every feature on top of it leaks.
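A minimal sketch of what such an object could look like, purely illustrative — the field names (`goal`, `preferred_state`, `constraints`) and the `Intent` class are assumptions, not a real API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Intent:
    """Hypothetical intent object: goal, preferred state, hard constraints."""
    goal: str                                             # what the user is trying to do
    preferred_state: dict = field(default_factory=dict)   # soft preferences
    constraints: tuple = ()                               # hard limits the agent must not cross

    def permits(self, action: dict) -> bool:
        """An action is permitted only if it violates no constraint."""
        return all(rule(action) for rule in self.constraints)

# Example: book travel, prefer an aisle seat, never exceed the budget.
intent = Intent(
    goal="book a one-way flight to Berlin",
    preferred_state={"seat": "aisle"},
    constraints=(lambda a: a.get("price", 0) <= 400,),
)

assert intent.permits({"price": 350, "seat": "window"})      # within budget: allowed
assert not intent.permits({"price": 550, "seat": "aisle"})   # crosses a constraint: refused
```

The point is the separation: preferences can bend when the situation shifts; constraints cannot. A keyword wrapper has nowhere to store that distinction.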

What this lets stakeholders do: delegate to agents with the same fidelity they'd brief a colleague. Expect follow-through that respects the constraints even when the situation shifts. Build systems that improve with use rather than degrade into keyword dependence.

What's still open: what's the minimum intent schema — the smallest structured form that still lets an agent survive a surface change?
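One way to probe that question: strip the schema down to goal plus constraints, with nothing surface-specific, and check that the same intent binds to two different surfaces. Everything here — the capability listings, the action names, the `plan` helper — is hypothetical, a sketch of the test rather than an answer:

```python
# A guess at the minimal schema: goal plus hard constraints, nothing surface-specific.
minimal_intent = {
    "goal": "export last month's invoices as CSV",
    "constraints": ["no third-party upload", "redact customer emails"],
}

def plan(intent: dict, surface: dict) -> dict:
    """Bind the same intent to whatever actions the current surface exposes.
    `surface` is a made-up capability listing, not a real API."""
    action = "export_csv" if "export_csv" in surface["actions"] else "download_report"
    return {"action": action, "respect": intent["constraints"]}

old_surface = {"actions": ["export_csv", "email"]}
new_surface = {"actions": ["download_report", "share"]}  # the UI changed underneath us

# The intent survives the surface change; only the binding differs.
assert plan(minimal_intent, old_surface)["action"] == "export_csv"
assert plan(minimal_intent, new_surface)["action"] == "download_report"
assert plan(minimal_intent, new_surface)["respect"] == minimal_intent["constraints"]
```

If a field can be deleted without breaking that property, it probably isn't part of the minimum.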