CHAPTER · 01
What grounding is, precisely
An answer is grounded if its citation resolves to the live underlying memory, its freshness stays within a drift budget, and it inherits the operator's actor identity.
The word "grounded" has become overloaded. Some vendors mean "retrieval-augmented"; others mean "prompt-engineered on customer data". We propose a tighter definition: an AI answer is grounded if and only if (a) its citation is a live pointer to the underlying memory, (b) its answer is bounded by a drift budget, and (c) its actor identity is inherited from the operator asking.
All three conditions must hold. Vendors that satisfy only (a) can still hallucinate within the resolution of the pointer; vendors that satisfy only (b) can still misrepresent fresh data; vendors that satisfy only (c) can still fail an audit, because an inherited identity without a live citation proves nothing about what was actually read.
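As a minimal sketch of the three-part test (every name and field below is hypothetical; the text defines the conditions, not an API), grounding can be expressed as a single predicate that fails if any one condition fails:

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical answer record; field names are illustrative, not from the text.
@dataclass
class Answer:
    citation_resolves_live: bool   # (a) citation is a live pointer to the memory
    age: timedelta                 # how old the answer currently is
    drift_budget: timedelta        # (b) maximum age the answer may carry
    actor_id: str                  # identity the answer was computed under

def is_grounded(answer: Answer, operator_actor_id: str) -> bool:
    """All three conditions must hold; failing any one fails grounding."""
    return (
        answer.citation_resolves_live              # (a) live citation
        and answer.age <= answer.drift_budget      # (b) within drift budget
        and answer.actor_id == operator_actor_id   # (c) inherited actor identity
    )
```

The conjunction makes the "all three must hold" claim mechanical: there is no partial credit for satisfying one clause.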
CHAPTER · 02
The citation standard
Citations must resolve to the same record the operator sees, via the same identity, in one hop.
We model citations as URIs against the event stream, with an optional projection suffix. A citation like "aixys://event/18f2…?as=actor/priya" means "read this event, viewed through the lens of actor priya". The UI can render the citation as a clickable link; the auditor can subscribe to all events touched by a given actor.
Citations that are strings, or that resolve via heuristics, fail this standard. They are persuasive decoration, not grounding.
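A sketch of what "resolving in one hop" can look like, using only the URI shape the text gives. The parser below is an assumption, not a specified API, and the event id used in the example is an invented placeholder (the original id is elided in the text):

```python
from urllib.parse import urlsplit, parse_qs

def parse_citation(uri: str) -> dict:
    """Split an event citation of the form aixys://event/<id>[?as=actor/<name>]
    into its event id and optional actor projection."""
    parts = urlsplit(uri)
    if parts.scheme != "aixys" or parts.netloc != "event":
        raise ValueError(f"not an event citation: {uri}")
    event_id = parts.path.lstrip("/")
    query = parse_qs(parts.query)
    actor = query.get("as", [None])[0]  # e.g. "actor/priya", or None if absent
    return {"event_id": event_id, "actor": actor}

# "18f2c0de" is an invented placeholder id, not the one elided in the text.
print(parse_citation("aixys://event/18f2c0de?as=actor/priya"))
# → {'event_id': '18f2c0de', 'actor': 'actor/priya'}
```

Because the citation is a structured URI rather than a free-form string, both the clickable UI link and the auditor's per-actor subscription can be built from the same parse.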
CHAPTER · 03
The drift budget
Every AI answer carries a freshness window; the UI must communicate it.
Even grounded answers go stale. A lead qualification that was true at 9:00 may not be true at 10:00. We require every AI feature to declare a drift budget — the maximum age an answer may carry before the UI re-renders it.
This is a UX contract as much as a technical one. Our convention is a small drift indicator on every AI-authored block, showing the age and the next recompute time.
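A sketch of the state that could back such an indicator, reusing the 9:00 lead-qualification example from the text. The helper class and the 30-minute budget are assumptions for illustration, not a specified design:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DriftBudget:
    """Hypothetical helper backing a per-block drift indicator."""
    computed_at: datetime   # when the AI answer was produced
    budget: timedelta       # maximum age before the UI must re-render

    def age(self, now: datetime) -> timedelta:
        return now - self.computed_at

    def next_recompute(self) -> datetime:
        # What the indicator shows as the next recompute time.
        return self.computed_at + self.budget

    def is_stale(self, now: datetime) -> bool:
        return self.age(now) > self.budget

# Answer computed at 9:00 with an assumed 30-minute budget; by 10:00 it is stale.
b = DriftBudget(datetime(2024, 1, 1, 9, 0), timedelta(minutes=30))
print(b.is_stale(datetime(2024, 1, 1, 10, 0)))  # → True
```

Declaring the budget as data, rather than burying it in rendering logic, is what lets the indicator show both the answer's age and its next recompute time from one source of truth.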