BIG OILED MACHINE – Audiovisual Performance
Generative Motion Design, Live Performance, Stable Diffusion
'Big Oiled Machine is a multimedia art project that explores the interplay between a bustling city and its residents. Are we the driving cogs, perpetually maintaining this urban engine, or does the city, in reality, provide the stability that sustains us?'
– Narciso

With this narrative in mind, the performance aims to offer each person in the audience an opportunity to reflect on their individual role within this system. How do our unique experiences, filled with emotions, struggles, and growth, fuel our environment, and how does our environment, in turn, shape us?
Project Scope
Musical Performance: Narciso
Generative Visuals: Yan He & Luka Truhlar

2023
The performance took place at the UltraSuperNew Gallery in Harajuku, Tokyo, and featured a live soundtrack of partly improvised music produced and performed by Narciso. A blend of real-time and pre-rendered generative visuals was created and controlled by Yan and me, with handycam footage shot by Narciso serving as the basis for the visuals.
Sections of the performance roughly visualized in graph form
'Big Oiled Machine' is made up of five consecutive sections. These sections are meant to represent the course of an ordinary day in the life of a person living in a hectic city like Tokyo.

1: 'Wakening', snoozy & drowsy mood, slow pace
2: 'Transforming', (de)motivating mood, increasing pace
3: 'Performing', hectic mood, fast pace
4: 'Tumbling', tired mood, distorted pace
5: 'Exhaling', dizzy but calming (happy & depressing) mood, decreasing pace

These five sections served as a shared orientation, allowing us to work independently towards a common theme in both the music and the visual design. To leave room for reacting to the outcome in real time and improvising authentically, Narciso chose not to see the visuals before the performance. Likewise, Yan and I were given only a rough musical outline for reference and could not predict the final, partly improvised soundtrack.
Sorting footage and assigning to different sections based on content and mood
The footage was altered mainly in TouchDesigner. Visual effects prepared in advance could be layered onto the footage at any time during the performance, which allowed us to adapt to the improvised music. The visuals also included AI-generated material created with an image-to-image Stable Diffusion model.
The result is an interplay between the acoustic and the visual that creates a different experience each time it is performed. Being part of an experimental project like this has once again opened my eyes to the potential of digital art to bring people together and create memorable experiences.