Janus (Interactive Video). GIF screen capture (2022)
As a professor of studio art, I have relied on web-based technologies such as Zoom, Miro, and Discord to maintain some continuity of education with my students during the COVID-19 pandemic. We were able to cobble together a virtual approximation of an MFA studio experience until the world returned to normal. This, of course, has not happened. Instead, I continue to mentor students as they complete their final projects online, have their thesis shows online, graduate online, and say goodbye online. These simple web-based technologies are no longer temporary structures for communication; they have become the substrate of Being in the age of COVID-19. As my daily experience becomes increasingly virtual, the impact is surprisingly physical, which is to say, I feel awful.
Janus (Interactive Video). Screen recording (2022)
My digital experience is an embodied one with deeply complicated physical repercussions. However, the digital avatars I inhabit do a poor job of embodying my physical state. I am often reduced to a video thumbnail inside a teleconference window grid, or my entire self is reconstituted as a moving cursor in an online collaborative whiteboard.
Janus reflects my first attempt to rethink the digital forms we inhabit and the virtual spaces where we meet. It is a computer program that splits a video call into two separate A/V feeds and uses them to drive an AI facial-tracking model. The mirrored surface geometry of each face reflects the video feed of its speaker, while the audio signal is converted into a displacement map that augments the opposite speaker's face. All of this happens in real time and is presented on a monitor with a MIDI controller and headphones. A viewer can orbit around the two faces and position the camera using the MIDI controller. The sound is spatial, and the volume is affected by the camera's position relative to each speaker. Hidden along the Z-axis are the two referenced video feeds, presented in black and white and facing opposite directions.
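The paragraph above describes two mappings at the heart of the program: each speaker's audio becomes a displacement map that deforms the other speaker's face, and the viewer's camera position scales each speaker's volume. The actual Janus implementation is not shown here; the following is a minimal Python sketch of those two mappings only, under stated assumptions. The function names (audio_to_displacement, distance_volume) and parameters (grid_size, strength, ref_distance), and the FFT-based spectral mapping, are illustrative choices rather than details taken from the work.

import numpy as np

def audio_to_displacement(samples: np.ndarray, grid_size: int = 64,
                          strength: float = 0.05) -> np.ndarray:
    """Convert one block of audio samples into a 2D displacement map.

    The spectral envelope of the block is spread over a square grid whose
    values could then push the opposite speaker's face mesh along its normals.
    """
    # Rough spectral envelope of the incoming audio block.
    spectrum = np.abs(np.fft.rfft(samples))
    # Resample the envelope onto grid_size frequency bands.
    bands = np.interp(
        np.linspace(0, len(spectrum) - 1, grid_size),
        np.arange(len(spectrum)),
        spectrum,
    )
    # Tile the bands into a square map and normalize to a usable range.
    disp = np.tile(bands, (grid_size, 1))
    disp /= disp.max() + 1e-9
    return disp * strength

def distance_volume(camera_pos: np.ndarray, face_pos: np.ndarray,
                    ref_distance: float = 1.0) -> float:
    """Inverse-distance attenuation: the closer the camera is to a face,
    the louder that speaker's audio plays."""
    d = np.linalg.norm(camera_pos - face_pos)
    return float(np.clip(ref_distance / max(d, ref_distance), 0.0, 1.0))

if __name__ == "__main__":
    # Stand-in for one 1024-sample block of speaker A's audio.
    block = np.random.default_rng(0).normal(size=1024)
    disp_map = audio_to_displacement(block)             # applied to speaker B's face
    vol_a = distance_volume(np.array([0.0, 0.0, 3.0]),  # viewer's camera position
                            np.array([0.0, 0.0, 0.0]))  # speaker A's face position
    print(disp_map.shape, round(vol_a, 3))

In a real-time setting these two functions would run once per audio/video frame, with the camera position supplied by the MIDI controller's orbit input rather than hard-coded values.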
Janus (Interactive Video). Screen stills (2022)
Janus (Interactive Video). Installation View. Artscape Gibraltar Point, Toronto Canada (August 2022)
Janus (Interactive Video). Installation View. Artscape Gibraltar Point, Toronto Canada (August 2022)
Janus Diagram (Paint, Pen, Paper). A hand-drawn computational diagram of the Janus program.