My planned sound practice is an interactive space where environmental sounds are generated by users' gestures. By altering their physical orientation within the space, users hear the effect they have on their environment relayed back through sound.
Several of my other works, for Stage 2 practice and for the Production of Space projects, explore human gesture and interaction, and the invisible spaces we affect as we engage in them.
Building on my first Sound Practice assignment, I plan to create gesture-reactive fountains that alter the acoustic environment around the user. Arranged as an array, these fountains could produce a complex, layered composition of sounds.
Rather than PD, I intend to use Processing to capture the users' interactions with each fountain. The current plan is a webcam-based setup, which allows a high-definition reading of the interaction; I have a basic prototype of this working.
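The core of that prototype is frame differencing: comparing successive grayscale webcam frames and treating a large change as a gesture. A minimal sketch of that logic in plain Java is below (the class name, pixel layout, and threshold value are my own illustrative choices, not fixed parts of the design; in Processing the frames would come from the video library's `Capture` class):

```java
// Hypothetical sketch of the frame-differencing step: compare successive
// grayscale webcam frames and report how much the scene has changed.
public class MotionDetect {
    // Mean absolute brightness difference between two frames
    // (pixels as 0-255 grayscale values, same resolution).
    static double frameDifference(int[] prev, int[] curr) {
        long total = 0;
        for (int i = 0; i < curr.length; i++) {
            total += Math.abs(curr[i] - prev[i]);
        }
        return (double) total / curr.length;
    }

    // A gesture is registered when the change exceeds a tuned threshold.
    static boolean gestureDetected(int[] prev, int[] curr, double threshold) {
        return frameDifference(prev, curr) > threshold;
    }

    public static void main(String[] args) {
        int[] still = {10, 10, 10, 10};   // toy 2x2 frame, no movement
        int[] moved = {10, 200, 10, 200}; // same frame after a hand passes
        System.out.println(frameDifference(still, moved)); // 95.0
        System.out.println(gestureDetected(still, moved, 20.0)); // true
    }
}
```

Per-pixel (rather than whole-frame) differencing would also give a rough position for the gesture, which matters once several fountains share one camera.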
An alternative would be to build on the previous project and use analogue LDRs to provide the input data, although this would offer far less interactive resolution.
Once the gestures have been captured, I shall begin experimenting with the acoustic output for the environment. The intention is that the soundscape should relax the user, unless their interaction becomes erratic.
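One simple way to decide whether an interaction is "erratic" is to smooth the per-frame motion readings into a single activity level and compare it against a threshold. The sketch below shows that idea with exponential smoothing; the class name, smoothing factor, and threshold are hypothetical tuning choices, not settled parts of the piece:

```java
// Tracks a smoothed activity level per fountain; sustained fast motion
// pushes the level up, stillness lets it decay back toward calm.
public class MoodTracker {
    double smoothed = 0.0;
    final double alpha;     // smoothing factor (0..1), hypothetical tuning value
    final double erraticAt; // activity level above which the user counts as erratic

    MoodTracker(double alpha, double erraticAt) {
        this.alpha = alpha;
        this.erraticAt = erraticAt;
    }

    // Feed one motion reading per frame; returns the smoothed activity level.
    double update(double motion) {
        smoothed = alpha * motion + (1 - alpha) * smoothed;
        return smoothed;
    }

    // The soundscape stays relaxing while this is false.
    boolean isErratic() {
        return smoothed > erraticAt;
    }
}
```

Because the level decays gradually, a single sharp movement would only briefly disturb the soundscape, while continuous agitation would hold it in the erratic state.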
Using OSC to link Processing and Pure Data, I intend to control the lighting for each fountain to reflect its 'mood' within the environment. If some basic AI can be applied to each fountain, a reactive element could be introduced that attempts to influence the user's interaction.