
I recently had the honor of being selected for Contextual, a juried exhibition that highlights artists exploring the edges of technology, emotion, and human interaction. Being part of this exhibit was both creatively fulfilling and intellectually energizing.
I presented my interactive AI piece, Interwoven Expressions, which uses emotion recognition to generate real-time typographic visuals. The installation senses facial expressions and vocal tone, then responds with generative patterns that mirror those emotional inputs. It turns the viewer into an active participant and co-creator of the art.
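To give a flavor of how an emotional reading might drive typographic visuals, here is a minimal sketch. This is not the installation's actual code: the valence/arousal inputs, the parameter names, and the specific mappings are all illustrative assumptions.

```python
def emotion_to_typography(valence: float, arousal: float) -> dict:
    """Map a valence/arousal reading (each in [-1, 1]) to visual parameters.

    Illustrative mapping only: higher arousal drives heavier stroke weight
    and faster animation; valence shifts the hue from cool (negative)
    toward warm (positive).
    """
    # Clamp inputs to the expected range.
    valence = max(-1.0, min(1.0, valence))
    arousal = max(-1.0, min(1.0, arousal))

    hue = 200 - 160 * (valence + 1) / 2      # 200 (cool blue) .. 40 (warm orange)
    weight = 100 + 700 * (arousal + 1) / 2   # font weight 100 .. 800
    speed = 0.2 + 1.8 * (arousal + 1) / 2    # animation speed multiplier

    return {"hue": hue, "weight": round(weight), "speed": speed}
```

A renderer could then feed these parameters into whatever drawing framework it uses; the point is simply that each emotional dimension maps continuously onto a visual one, so the output shifts smoothly as the viewer's expression changes.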
As part of the program, I also gave a talk on emotion detection and AI. I shared a few key ideas I’ve been exploring in my work:
- How computer vision models pick up on subtle micro-expressions and translate them into data signals.
- The importance of ethical frameworks when working with affective computing—especially in public, creative environments.
- A live demo in which attendees experienced the AI’s emotional interpretation process firsthand.
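On the first point above, one practical detail is that frame-by-frame expression predictions are noisy, so they are usually smoothed before being treated as a stable "signal." The sketch below shows one common approach, an exponential moving average; the classifier outputs and emotion labels are assumed, and only the smoothing step is shown.

```python
class EmotionSmoother:
    """Exponential moving average over per-frame emotion probabilities.

    Raw frame-level predictions from a vision model tend to flicker;
    smoothing them yields a signal stable enough to drive generative
    visuals without jitter. (Illustrative sketch, not production code.)
    """

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                  # weight given to each new frame
        self.state: dict = {}               # smoothed probability per label

    def update(self, frame_probs: dict) -> dict:
        """Blend one frame's probabilities into the running average."""
        for label, p in frame_probs.items():
            prev = self.state.get(label, p)  # seed with the first reading
            self.state[label] = (1 - self.alpha) * prev + self.alpha * p
        return dict(self.state)

    def dominant(self) -> str:
        """Return the label with the highest smoothed probability."""
        return max(self.state, key=self.state.get)
```

A single confused frame then barely moves the smoothed state, while a sustained expression gradually takes over, which is roughly how a micro-expression becomes a usable data signal rather than a one-frame blip.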
What stood out most was the curiosity and openness of the audience. We had great conversations about how AI might evolve from simply analyzing us to actually engaging with us on an emotional level. It was an opportunity to reflect on where we draw the line between technology as tool and technology as collaborator.
I’ve always felt that AI, when used thoughtfully, can do more than compute—it can connect. It can mirror our moods, respond to our presence, and even inspire new emotional experiences. That belief continues to guide my work as I look for ways to make our interactions with machines feel more human, not less.
I’m grateful to have been part of Contextual and look forward to where this dialogue between emotion and technology leads next.