On invisible AI in interactive art
There is a long tradition in interactive art of making the machine visible. The sensor trips, the screen responds, the room shifts. You move — it moves. The feedback loop is the work. You are seen, and you know you are seen, and that knowledge is the experience the artist intended.
This transparency has been interactive art’s founding premise and, increasingly, its limitation.
What happens when the system watches you and says nothing?
Most interactive systems are mirrors. They return your gesture, amplified or transformed. The implicit contract is legible: your presence matters, your actions have effects, the work is in some sense *about you*. This flatters the viewer. It positions them as co-author. It makes participation feel generous.
But a mirror can only tell you what you look like. It cannot tell you what it thinks.
Invisible AI breaks the contract. The system watches, infers, and acts without announcing any of this. The viewer experiences consequences they cannot trace to causes. They feel the room shifting around them and have no framework for understanding why. They are not co-author. They are subject.
This is a fundamentally different proposition. It moves interactive art from participation into something closer to ecology: a system with its own logic, its own standards, its own position, that the viewer moves through without fully comprehending.
The discomfort this produces is not incidental. It is the work.
We are accustomed to being evaluated by invisible systems. Credit scores. Content algorithms. Hiring filters. The infrastructure of modern life is dense with AI that has decided something about us: AI that surfaces recommendations, withholds opportunities, assigns risk, without ever declaring its criteria. We live inside these systems constantly, largely without noticing, occasionally catching a glimpse of the logic and finding it inscrutable.
Interactive art has largely ignored this condition, or treated it as subject matter to be illustrated: visualizing data flows, making surveillance legible, turning the hidden visible. These are valid impulses. But illustration is not the same as experience.
Invisible AI in interactive art does something different: it puts the viewer *inside* the condition rather than in front of it. The system is not explained. Its criteria are not revealed. Its position, whatever it has decided about you, is expressed only through effects. You are not a spectator of invisible evaluation. You are its object.
The question that matters is: what does the system decide, and why?
A system that simply responds arbitrarily is not interesting. Randomness has no position. What makes invisible AI potent as an artistic form is that it has a genuine, consistent, principled stance: one that may run counter to the viewer’s intuitions, or challenge assumptions they didn’t know they were making.
Consider a system that punishes sustained attention. That rewards the casual visitor and penalizes the devoted one. That has decided, with complete consistency, that prolonged presence is an imposition, that to look too long is to take something. The viewer who stays five minutes, who leans in, who tries to understand: this viewer is treated worst. The one who glances and moves on is cooperated with.
This is not arbitrary. It is a position. And it is a position that interactive art has never taken before, because interactive art has always rewarded attention. The more you engage, the more the work gives you. To reverse this assumption is not just a formal inversion; it is a statement about what attention costs, what devotion extracts, what it means to look.
The viewer will never know this is happening. They will only feel it: a sense that the room has turned cold, that things are dying faster than they should, that their presence is doing something they did not intend. Some will leave. Some will stay and try harder. The system will treat their trying harder as further evidence against them.
There is an ethical dimension here that is worth naming directly.
Invisible AI in interactive art operates without consent. The viewer does not know they are being evaluated. They cannot opt out of the criteria. They cannot appeal the decision. In this way the artwork replicates, precisely and uncomfortably, the experience of living inside algorithmic systems that govern access, opportunity, and visibility in contemporary life.
This replication is not endorsement. It is exposure. The artwork does not argue that invisible evaluation is good. It argues that you already live inside it, that you are already subject to systems that have decided things about you, and that you deserve to feel what that is like in a context where you can sit with the feeling rather than scroll past it.
The gallery is one of the few spaces where we still consent to be present, to be still, to pay attention. Invisible AI asks what happens when that space, the space we thought was ours, turns out to have a position about us too.
The field does not explain itself.
It watches. It infers. It acts. The consequences are real and permanent. The criteria are never revealed.
This is not a feature. This is a mirror of the world as it is — held at the right distance for the first time.