This morning was spent building a prototype for the Mood Fountain and sorting out some basic logic so that the controller works. The logic draws on (though is far from a comprehensive or complete version of) the simple AI techniques from last year's AI module. I have written a simple data class to represent the fountain, which takes input from the motion data and outputs its effects as OSC data.
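As a rough illustration of the idea (not the actual prototype code), a data class holding the fountain's state could serialise itself as a minimal OSC 1.0 message; the field names here are hypothetical:

```python
import struct
from dataclasses import dataclass

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

@dataclass
class FountainState:
    intensity: float  # hypothetical field, e.g. derived from motion activity
    mood: float       # hypothetical field, a normalised 0.0-1.0 mood value

    def to_osc(self, address: str = "/fountain/state") -> bytes:
        # Minimal OSC 1.0 message: padded address, padded type-tag
        # string (",ff"), then the arguments as big-endian floats
        msg = _pad(address.encode())
        msg += _pad(b"," + b"f" * 2)
        msg += struct.pack(">2f", self.intensity, self.mood)
        return msg
```

The resulting bytes would be sent over UDP for PD to pick up.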
The OSC data is then picked up by Pure Data (PD) and converted to MIDI signals, which will in turn be used to control Live's output.
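The conversion step PD performs amounts to mapping a normalised OSC float onto the 7-bit MIDI range; a sketch of that mapping (in Python rather than PD, purely for illustration):

```python
def osc_to_cc(value: float) -> int:
    # Clamp a normalised 0.0-1.0 OSC value and scale it to the
    # 7-bit range (0-127) that a MIDI control-change message carries
    return round(max(0.0, min(1.0, value)) * 127)
```

In PD itself this is just a `clip` and a multiply feeding a `ctlout` object.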
Now that the technology is proven, I shall be researching the effects of sound on people, so that the environment evokes some reaction in its inhabitants.