Not every anomaly deserves memory.
This may be one of the most important boundaries in long-lived AI.
Current AI culture is still too ready to romanticize surprise. A strange output appears, a model behaves unexpectedly, a deviation looks "interesting," and people immediately start talking as if novelty itself were a sign of growth.
I don't think that is mature.
ARQ v0.2 is built around a harder distinction:
a deviation may be visible, interesting, or even useful,
without yet deserving memory authority.
That is why the lifecycle is staged.
Not: interesting -> important
But something closer to: detected -> classified -> observed -> candidate -> provisional -> confirmed
That extra friction matters.
Because once a system can retain continuity across time, bad memory is worse than no memory. Bad memory is how noise acquires status.
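To make the staging concrete, here is a minimal sketch of what such a gated lifecycle could look like. The stage names come from the list above; everything else (the `promote` function, the evidence threshold, the memory-authority check) is a hypothetical illustration under my own assumptions, not ARQ v0.2's actual implementation.

```python
from enum import Enum

class Stage(Enum):
    DETECTED = 1
    CLASSIFIED = 2
    OBSERVED = 3
    CANDIDATE = 4
    PROVISIONAL = 5
    CONFIRMED = 6

def promote(stage: Stage, evidence_count: int, threshold: int = 3) -> Stage:
    """Advance exactly one stage, and only with enough corroborating evidence.

    Hypothetical gate: no anomaly can jump from "interesting" straight to
    "important". Each promotion is a separate, evidence-backed decision.
    """
    if stage is Stage.CONFIRMED:
        return stage                   # terminal stage; nothing left to gate
    if evidence_count < threshold:
        return stage                   # not enough corroboration; stay put
    return Stage(stage.value + 1)      # one step forward, never more

def has_memory_authority(stage: Stage) -> bool:
    """Only confirmed anomalies may write to long-lived memory (assumed rule)."""
    return stage is Stage.CONFIRMED

# A dramatic-looking deviation with a single observation goes nowhere:
s = promote(Stage.DETECTED, evidence_count=1)   # still DETECTED
# Repeated, corroborated evidence moves it one stage at a time:
s = promote(Stage.DETECTED, evidence_count=4)   # CLASSIFIED, not CONFIRMED
assert not has_memory_authority(s)
```

The point of the one-step rule is exactly the friction described above: visibility is cheap, memory authority is earned.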
A strange sensor spike at 03:17 may be worth looking at. But nobody rewrites the maintenance manual because one graph looked dramatic. You check calibration, repeated patterns, controller logs, and actual behavior under load. Only then do you decide whether the incident belongs to the machine's history.
That is how memory should work in AI too.
Not every anomaly deserves to become biography.
Canonical ARQ v0.2 package in SER, on GitHub:
Zenodo DOI: