The most expensive part of intelligence is not generation. It is permission.
ARQ v0.2 clarified something I think AI discussion still understates.
The question is not only: can the system notice deviation?
The harder question is: how much authority should that deviation be allowed to acquire right now?
That is a very different problem.
Because once a long-lived system carries memory, continuity, and bounded action, surprise is not the scarce resource anymore.
Permission is.
Permission to retain. Permission to promote. Permission to reshape future behavior.
And that permission cannot be granted by excitement.
It has to survive: declared boundaries, declared budgets, trust state, witness sufficiency, review, and the right to fail closed.
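To make the idea concrete, here is a minimal sketch of such a gate, assuming a simple boolean model of each check. Every name here (Deviation, may_promote, MIN_WITNESSES) is hypothetical and illustrative, not the ARQ v0.2 API; the point is only the shape: every declared gate must pass, and the default answer is "no".

```python
from dataclasses import dataclass

@dataclass
class Deviation:
    """Hypothetical record of a surprising event awaiting promotion."""
    within_boundaries: bool   # inside declared boundaries
    budget_remaining: bool    # within declared budgets
    trust_ok: bool            # current trust state permits promotion
    witnesses: int            # count of independent observations
    reviewed: bool            # has passed review

MIN_WITNESSES = 2  # illustrative threshold for witness sufficiency

def may_promote(d: Deviation) -> bool:
    """Grant write access to future behavior only if every gate passes.

    Any missing or failing check fails closed: the deviation stays quarantined.
    """
    checks = (
        d.within_boundaries,
        d.budget_remaining,
        d.trust_ok,
        d.witnesses >= MIN_WITNESSES,
        d.reviewed,
    )
    return all(checks)
```

Note the design choice: excitement (how vivid the event was) never appears as an input. A deviation earns authority only by surviving the declared gates, and absence of evidence defaults to denial.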
This is where a lot of AI still behaves like a talented adolescent: everything vivid feels meaningful.
A mature system should be more conservative.
Earth paragraph:
A strange event in a workshop does not automatically earn write access to the maintenance manual. It first has to survive inspection. That is not hostility toward change. That is how a line remains stable while still learning.
The same should remain true for long-lived AI.
GitHub canonical ARQ v0.2 package in SER:
Zenodo DOI: