Domenico Sciajno
music, sounds and visions

 

Sonoplastic

Sonoplastic is an audiovisual performance based on gesture analysis to produce and control sounds and images.

ABSTRACT

Musicians' gestures during performance have historically depended on the ergonomics and functionality of musical instruments: most gestural effort involves the body parts responsible for activating the exact pitch, on the exact spot, with the exact pressure, at the exact moment, and only a small part is devoted to the desired expression and meaning.

Now that technology opens up new scenarios, it is time for a paradigm shift: eliminating the dichotomy between interpretation (body movements intended merely to control instruments that set air particles in motion) and the corporeal experience and mental representation of movement that generates and elaborates creative processes in a sense-giving activity. In other words, bringing body-related gestures as close as possible to sound-related gestures.

In order to achieve this, the first step is not to stop at what technology most immediately offers us, such as mapping XYZ values to the control parameters of predefined interfaces of virtual instruments and sequencers; it can certainly be more inspiring to dwell on the metaphoric potential that new technologies can offer us.

In Sonoplastic I use the tracking and mapping of my two hands' gestures not only to detect their location in three-dimensional space or to recognize predefined gestures, but rather to create a sensitive environment in which different properties of movement are detected.

That is: frequency, density, coarseness, consistency, character, grain, flexibility, roughness, pattern, smoothness, stiffness, strategy, warp and woof, rather than disposition, form, organization, quantity, scheme and structure.

In other words, to think about how to generate and process sound through the metaphor of the plastic manipulation of fabric in space, rather than through its cutting and its 'ready-to-wear' wrapping.

SONOPLASTIC

Musical gesture has always existed, and its peculiarity and meaning (aesthetic or technical) have changed with the history of music, of musical instruments and of musical social contexts.
Technology nowadays allows us to easily capture movement and address its coordinates to any kind of digital device for computational and/or actuation purposes.
This increasingly convinces me to put special emphasis on the possibility of transforming the ethereal yet perceptively physical power of corporeal gesture into creativity and reactivity within an unpredictable experience, rather than into a mere mapping of X-Y-Z values to control parameters in a predetermined user interface.

Musicians' gestures during performance have historically depended on the ergonomics and functionality of musical instruments: most gestural effort involves the body parts responsible for activating the exact pitch, on the exact spot, with the exact pressure, at the exact moment, and only a small part is devoted to the desired expression and meaning.

Now that technology opens up new scenarios, it is time for a paradigm shift: eliminating the dichotomy between interpretation (body movements intended merely to control instruments that set air particles in motion) and the corporeal experience and mental representation of movement that generates and elaborates creative processes in a sense-giving activity. In other words, bringing body-related gestures as close as possible to sound-related gestures.

In order to achieve this, the first step is not to stop at what technology most immediately offers us, such as mapping XYZ values to the control parameters of predefined interfaces of virtual instruments and sequencers.

It would be extremely limiting to use a technology so metaphorically rich simply to replicate what already exists and performs its job perfectly... we would find ourselves using a large but invisible touchless mouse, operating in ethereal three-dimensional space to remotely control, with fatigue and a lack of precision, a graphical user interface on a computer monitor.

To draw a paradox from the 'analog world', it would be like playing a melody in tune on a contrabass by pressing the strings against the fingerboard with compressed air rather than directly with the fingers, while it is much more interesting to hear what sonic processes the instrument yields when subjected to a jet of compressed air!

On a practical level it is impossible to provide a universal recipe, but it can certainly be useful to dwell on the metaphoric potential that new technologies can offer us.

In Sonoplastic I use the tracking and mapping of my two hands' gestures not only to detect their location in three-dimensional space or to recognize predefined gestures, but rather to create a sensitive environment in which different properties of movement are detected.

That is: frequency, density, coarseness, consistency, character, grain, flexibility, roughness, pattern, smoothness, stiffness, strategy, warp and woof, rather than disposition, form, organization, quantity, scheme and structure.

In other words, to think about how to generate and process sound through the metaphor of the plastic manipulation of fabric in space, rather than through its cutting and its 'ready-to-wear' wrapping.

Technically, I started with one of the most straightforward approaches to video sensing: frame-differencing with a mean filter, which yields a reading of the amount of movement detected. Using this technique within the Max/MSP/Jitter environment, I am able to map the amount of motion in the scene where my hands are moving to specific algorithms that I conceived and keep enhancing.
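The frame-differencing technique named above can be sketched outside of Max/MSP/Jitter as well. The following is a minimal illustrative sketch in Python with NumPy, assuming grayscale frames arrive as 8-bit arrays; the function name and kernel size are my own choices, not part of the actual patch.

```python
import numpy as np

def motion_amount(prev_frame, curr_frame, kernel=3):
    """Frame differencing with a mean (box) filter.

    Returns a scalar in [0, 1]: the average absolute pixel change
    between two grayscale frames, after smoothing the difference
    image with a small box filter to suppress pixel-level noise.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    # Mean filter: average over a kernel x kernel neighborhood,
    # implemented as a sum of shifted copies of the padded image.
    pad = kernel // 2
    padded = np.pad(diff, pad, mode="edge")
    smoothed = np.zeros_like(diff)
    for dy in range(kernel):
        for dx in range(kernel):
            smoothed += padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
    smoothed /= kernel * kernel
    # Normalize by the 8-bit range so the reading is frame-size independent.
    return float(smoothed.mean() / 255.0)
```

The single scalar this produces is the kind of "amount of motion" reading that can then be routed to sound-generating algorithms.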

The complexity of tracking moving pixels, together with the even more ambitious undertaking of feeding the corresponding data to algorithms that transform them into sonemes through the metaphorization of properties like density, coarseness, consistency, etc., generates a rather unpredictable instability.

Such instability in my environment does not represent a side effect; rather, it became an opportunity for emergences capable of giving life to a reverse-engineered version of the machine learning paradigm: organism learning, the timeless ability of living organisms to adapt to environments and contexts. This brings to the performance unpredictable and reactive elements otherwise impossible to create with algorithms or to conceive deterministically.

It is a clearly reactive activity: the corporeal and gestural response to a mental stimulation of movement and of sound deployment, which gathers sense through synergetic confrontation within a techno-digital environment.

As far as image processing is concerned, it is not directly 'mapped' to the gestures, as happens in many touchless eye-candy gameplays. I opted to interpose a meaningful and contextual filter, a kind of salient-leap detector embodied by the sound produced through the gestures: before being projected, the images undergo changes, their basic elements (brightness, color, saturation, hue, dislocation and relocation) being sensitive to the fundamental parameters of the sound generated and distributed at that moment. Sound waves alter the visual data stream in the same way molecules are transformed by sound contracting and expanding air particles in space.
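One possible shape for such a sound-driven image filter is sketched below: a toy Python/NumPy mapping in which the audio buffer's RMS amplitude scales brightness and its spectral centroid permutes the color channels. The specific mappings, function name and parameters are hypothetical illustrations of the idea, not the actual Jitter processing.

```python
import numpy as np

def sound_modulates_frame(frame, samples, rate=44100):
    """Let the current audio buffer perturb the video frame.

    frame:   H x W x 3 float RGB array with values in [0, 1]
    samples: mono audio buffer covering the same time window
    Toy mapping: RMS amplitude scales brightness; the spectral
    centroid (a rough 'brightness of timbre' measure) rolls the
    RGB channels as a crude stand-in for a hue shift.
    """
    # Loudness: root-mean-square of the buffer.
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Spectral centroid: magnitude-weighted mean frequency.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))
    # Louder sound -> brighter image; silence leaves it unchanged.
    out = np.clip(frame * (1.0 + rms), 0.0, 1.0)
    # Higher centroid -> larger channel rotation (0, 1 or 2 steps).
    shift = int((centroid / (rate / 2)) * 3) % 3
    return np.roll(out, shift, axis=2)
```

The point of the sketch is the architecture: the image never reads the gesture data directly, only the parameters of the sound the gestures produced.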

Gestural, sonic and visual processing is achieved through a program I wrote in Max/MSP/Jitter that allows me to give life to a sensitive space, the entire performative area, in which all the elements that inhabit it (performer, sound, light, audience, video projection and architectural/structural elements) exert a reciprocal influence.

Keywords

audiovisual performance
gesture analysis
corporeal experience
user interfaces
kinetic/gestural activity
touchless
tracking and mapping of gestures
sensitive environment
frame-differencing with a mean filter
sonemes
unpredictable instability
emergences
reverse engineering
machine learning
organism learning
deterministic
reactivity
synergetic
Max/MSP/Jitter

INTRODUCTION

My primary perspective, forming the basis of a sense-giving activity through the mental representation of movement and corporeal experience, consists of shifting the focus away from user interfaces, be they virtual (graphically/digitally generated) or physical (musical instruments), towards the holistic, meaningful experience of continuous sound and movement in relation to our bodies.
This perspective lays the foundations of my work and experimentation on tracking and mapping kinetic/gestural activity to generate and control music, sounds and images.