Despite the theory, video tracking has turned out not to be a particularly ‘hi-def’ way of getting input. Even with a magnifying glass to give good focus at short distance, the video feed is still rather unpredictable, often producing random sets of data that ruin any form of interaction.
Because of this, I am planning to return to the original idea of using an analogue controller device as the means of input. This will give me much clearer readings and so allow for a finer-tuned set of inter-reactions for the mood fountains.
I shall also be looking into building an array of LEDs, controlled by the iCube, so that each mood fountain’s visual appearance can reflect its mood.
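As a rough illustration of the sort of mapping I have in mind, here is a minimal sketch (in Python, purely hypothetical — the actual iCube/LED wiring is not decided yet) that converts an analogue controller reading in the 0–127 range into an RGB colour for the LED array, running from a calm blue at the low end to an agitated red at the high end:

```python
def mood_to_rgb(reading, max_reading=127):
    """Map an analogue controller reading (0..max_reading) to an RGB colour.

    Hypothetical mood mapping: low readings give a calm blue, high readings
    an agitated red, with green peaking at the midpoint of the range.
    """
    if not 0 <= reading <= max_reading:
        raise ValueError("reading out of range")
    t = reading / max_reading                  # normalise to 0.0..1.0
    red = int(255 * t)                         # grows as the mood intensifies
    blue = int(255 * (1 - t))                  # fades as the mood intensifies
    green = int(255 * (1 - abs(2 * t - 1)))   # peaks in the middle of the range
    return (red, green, blue)
```

So a reading of 0 would light the array pure blue, 127 pure red, and mid-range readings would blend through green — the fine detail of the colour curve is exactly the kind of thing the cleaner analogue input should let me tune.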