The EU AI Act is landing in the real world - and the timing is not accidental.

Last week, China's Lunar New Year broadcast showed humanoid robots performing coordinated martial arts and acrobatics.

The point isn't "wow". The point is: embodied systems make responsibility visible. When software gets a body, oversight stops being a debate and becomes a procedure.

Here's the part people miss: the AI Act is not a panic button. It's a compliance timeline + evidence discipline.

Timeline (high-level):

  • 2 Feb 2025: bans on prohibited practices + AI literacy obligations apply.
  • 2 Aug 2025: governance rules + obligations for general-purpose AI (GPAI) models apply.
  • 2 Aug 2026: the Act becomes broadly applicable, including key transparency duties.
  • 2 Aug 2027: extended transition ends for high-risk AI embedded in regulated products.

So the practical question is no longer "will we regulate?" but "can you prove what happened?"

My operating frame is simple:

identity -> auditable privileges -> enforceable budgets -> tamper-evident witness trail -> fail-closed defaults.

Not "trust me" automation.
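To make the frame concrete, here is a minimal sketch of that chain in Python. All names and the data model are my own illustration, not from any released code: a gate checks identity, privilege, and budget before allowing an action, appends every decision to a hash-chained witness trail, and denies by default when anything is missing.

```python
import hashlib
import json
import time

class Gate:
    """Illustrative sketch: identity -> auditable privileges ->
    enforceable budgets -> tamper-evident witness trail -> fail-closed."""

    def __init__(self, privileges, budgets):
        self.privileges = privileges   # identity -> set of permitted actions
        self.budgets = dict(budgets)   # identity -> remaining action budget
        self.trail = []                # hash-chained witness records
        self._prev = "0" * 64          # genesis hash for the chain

    def _witness(self, record):
        # Chain each record to the previous one so edits are detectable.
        record["prev"] = self._prev
        record["ts"] = time.time()
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.trail.append(record)
        self._prev = digest

    def request(self, identity, action):
        # Fail closed: unknown identity, missing privilege,
        # or exhausted budget all mean "deny".
        allowed = (
            identity in self.privileges
            and action in self.privileges[identity]
            and self.budgets.get(identity, 0) > 0
        )
        if allowed:
            self.budgets[identity] -= 1
        self._witness({"id": identity, "action": action, "allowed": allowed})
        return allowed

    def verify_trail(self):
        # Recompute the hash chain; any tampering breaks verification.
        prev = "0" * 64
        for rec in self.trail:
            if rec["prev"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The point of the sketch is the shape, not the crypto: denials are recorded just like approvals, so the trail answers "can you prove what happened?" for both.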

I also published a clean-code release that closes the loop from protocol to implementation: Ester Clean Code (local-first, long-lived entity core).

Repo is in the pinned comment / Featured.