‘Sensing Synaesthesia’ Prototype Demonstration
This is a demonstration of an experience I have called ‘Sensing Synaesthesia’: a sensory-based audio-visual experience built within Isadora, which communicates with a MIDI keyboard and Logic Pro X for sound control.
Sensing Synaesthesia Explainer
This video presents the Isadora file, discussing how hardware and software interact to make an interactive experience.
I set out to make a prototype that focused on synaesthesia as a model for developing a multi-sensory environment. My original aim was to develop an experience that explored three of the common sensory combinations that someone with synaesthesia may experience: the linking between sound and colour/shape (known as chromesthesia), the linking between touch and colour/sound, and finally, grapheme-colour synaesthesia, the linking between letters/numbers and colour.
Unfortunately, I only managed to tackle one of these: chromesthesia, the linking of sound and colour/shape (this should’ve been my initial aim in the first place). Despite this, I feel I have created a prototype that will allow me to develop this into a complete experience when given the opportunity to do so.
The first demonstration, which I refer to as the ‘shapes’ experience in the video, is a straightforward representation of how an audio stimulus can provoke a visual response. The tactility of the keyboard presses makes this a very connected experience, where the user, the sound and the visuals are represented as one performer. This is particularly important when exploring synaesthesia: these audio-visual representations exist in the person’s mind, so establishing this connection reproduces that representation. Other touches, such as the random rotation and position of the shapes, make this a dynamic experience, yet I do feel this is where the connection is lost. I want to give the user complete control over the visuals they see, but this randomness, whilst visually appealing and fairly easy to implement, does not provide the connection nor the accuracy I was after. I wish to implement a way of controlling the position of shapes, perhaps using the tempo/speed at which the user is playing, or other physical characteristics of the player themselves (their height could perhaps determine the highest shape on the stage, for example). Of course, I could develop something truly sensory with brain-wave devices or eye-movement trackers that the user wears to position shapes in front of them, but that’s an experiment for another day.
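To make the idea of replacing randomness with user control concrete, here is a minimal sketch, in Python rather than Isadora’s node graph, of how a deterministic mapping might work: pitch drives horizontal position and key velocity drives vertical position. The function name, the normalised 0–1 stage coordinates, and the 88-key note range (MIDI notes 21–108) are my illustrative assumptions, not part of the prototype itself.

```python
# Hypothetical sketch: map a MIDI note to a deterministic stage position,
# so the player (not a random generator) controls where each shape appears.

def note_to_position(pitch: int, velocity: int,
                     low_note: int = 21, high_note: int = 108) -> tuple[float, float]:
    """Map a MIDI note (pitch 0-127, velocity 1-127) to a 0..1 stage position."""
    # Clamp pitch into the assumed 88-key range, then normalise to 0..1.
    pitch = max(low_note, min(high_note, pitch))
    x = (pitch - low_note) / (high_note - low_note)
    # Harder presses place the shape higher on the stage.
    y = velocity / 127
    return (x, y)
```

The same mapping could be built in Isadora by routing a MIDI watcher’s note and velocity values through scaling actors into a shape actor’s position inputs; the point is only that every visual property is traceable back to something the player did.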
The second experience, which I refer to as the ‘particles’ experience in the video, whilst again visually simple, really blurs the line between the digital and the physical. By using projection mapping, the visual representations are now augmented onto the keyboard itself, which adds another dimension to the experience. I found that this made me feel more connected to each key press than in the prior experience, especially when the particles started to land on top of my finger, then fall to the bottom of the key (which, as highlighted in the video, is purely an illusion created by the throw of the projector increasing and decreasing upon each keypress!). Technically, this experience requires some improvement, particularly in the setup of the projector, which should be above the keyboard and the player, as the shadow really ruins the immersion; that is purely a consequence of the limited equipment and space available at the time. Furthermore, the keyboard itself is rather limiting in this respect. Yes, we’re producing sound, but why not use a giant, room-scale keyboard, or tin cans, bottles… anything. Anything that is not connected to a computer, because, as I highlighted in the video, the user is in reality still interacting with software, just through a USB cable. I aimed for this in my initial outcomes, and despite being only one step removed from that digital connectivity, it just does not feel right. Using acoustic instruments and objects within the physical environment would make this experience seem more ‘magical’.
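The falling-particle illusion described above amounts to very simple physics: a particle spawns where the key was pressed and drops until it settles at the key’s base. This is a hedged Python sketch of that logic under my own assumptions (normalised coordinates, a 60 fps frame step, simple gravity); the prototype itself achieves the effect inside Isadora with the projector-throw trick mentioned in the video.

```python
# Illustrative sketch of the 'particle lands on the key' behaviour:
# spawn at the pressed key, fall under gravity, settle at the key's base.

from dataclasses import dataclass

@dataclass
class Particle:
    x: float          # horizontal position over the key (normalised 0..1)
    y: float          # height above the bottom of the key
    vy: float = 0.0   # vertical velocity

    def step(self, dt: float, gravity: float = -9.8) -> None:
        """Advance the particle one frame; it settles at the key's base (y=0)."""
        self.vy += gravity * dt
        self.y += self.vy * dt
        if self.y <= 0.0:       # landed on the bottom of the key
            self.y = 0.0
            self.vy = 0.0

def simulate(p: Particle, dt: float = 1 / 60, frames: int = 120) -> Particle:
    """Run the particle for a fixed number of frames (two seconds at 60 fps)."""
    for _ in range(frames):
        p.step(dt)
    return p
```

However the motion is computed, the key design point stands: the particle’s resting place must coincide with the physical key, which is exactly what projection mapping makes possible.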
The above is an example of a sensory-enriched audio-visual installation named ‘Kinesthesia’ (Prasad, 2019). This was designed to turn music festivals into interactive and visual works of art. Several experiments were conducted in private settings, using nothing more than a laptop running TouchDesigner (which could easily be interchanged with Isadora in my case), an Xbox Kinect and a projector. What was successfully evident with this experience is that players enjoyed the instant feedback the system provided. They enjoyed that it all seemed ‘magic’ to them because they couldn’t see the technology, and that they felt ‘it created a feedback loop of increasing experimentation with body movement’. In the first system, particle systems were used to create the letters, and the ‘velocity of the players movements dictated how fast they move away from the player’ (ibid). This is the type of experience that I mentioned previously, where the tempo/speed at which the music is played could determine the position of the shapes on the Isadora stage. 2,000 people visited this short experiment, and it is reassuring that my current development could be as achievable as this example.
There’s so much more to explore, though. When two notes are played together, the system shows two visual representations (a square and a circle, for example). This is technologically correct – it is what the system should do. However, it is not what our brain says is correct. A chord, a triad or a harmonic is not written to represent two, three or four different sounds – it is designed to be one sound, and based on synaesthesia, this produces an entirely different visual representation, or taste, smell and sense of touch. How do we even account for this? A different visual representation for every chord that could possibly be played is ridiculous, and it becomes predictable. This comes back to giving the user control and agency within this environment, making every interaction they have with the system feel like it is their own brain doing the work, rather than a computer. Maybe exploring narratives that could accompany the experience (i.e. from the synaesthetic perspective of one individual) could help reduce these problems. Or maybe I should decide that this is just not possible, focus on other narratives, and use synaesthesia as a metaphor for what it could be like. It’s difficult, I don’t even know where to start, but I can’t wait… and I love it.
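One possible middle ground between “one shape per note” and “one shape per conceivable chord” is to classify the held notes into a small set of chord qualities and emit a single visual per quality. The sketch below is a hypothetical Python illustration of that idea: the interval patterns are standard music theory (a root-position major triad is 0–4–7 semitones, a minor triad 0–3–7), but the function, the colour labels, and the fallback categories are my own assumptions, and it deliberately only recognises root-position triads.

```python
# Hypothetical sketch: treat simultaneous notes as one percept by reducing
# the held MIDI pitches to a chord quality and returning a single visual label.

MAJOR = {0, 4, 7}   # root, major third, perfect fifth (semitones above root)
MINOR = {0, 3, 7}   # root, minor third, perfect fifth

def chord_visual(held_pitches: set[int]) -> str:
    """Return one visual label for the set of currently held MIDI pitches."""
    if not held_pitches:
        return "none"
    root = min(held_pitches)
    # Intervals above the lowest note, folded into one octave,
    # so octave doublings collapse into the same chord shape.
    intervals = {(p - root) % 12 for p in held_pitches}
    if intervals == MAJOR:
        return "warm-gold"    # one shape/colour for the whole chord
    if intervals == MINOR:
        return "deep-blue"
    if len(intervals) == 1:
        return "single-note"
    return "cluster"          # unrecognised combinations fall back to a default
```

Because intervals are folded into one octave, C–E–G with a doubled C an octave up still reads as one major triad; a full system would also need to handle inversions and extended chords, which is exactly where the design question above gets hard.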
There is one thing I am certain about when developing this experience though – I want it to be shared. I asked my sister her thoughts on the experience after she played the piano. She found it engaging, but she noted that she felt herself responding to the shapes and visualisations that appeared on the screen and projections, which actually influenced the way in which she played the music. This is very interesting to me. We don’t only have sound evoking visuals, but we have visuals evoking sound. It’s a constant loop. This is just from one individual playing by themselves in my bedroom. Imagine a room full of people – a room full of senses that are all cross-wired with each other. It all comes back to this notion of agency within the environment;
‘In some spaces, we are on our own and there are no other people about – or they may be about but we do not know about them. In other spaces we can easily communicate with other people (or artificial agents) and in other spaces there may not be any people now, but there are traces of what they have done. The availability of agents in an information space is another key feature affecting its usability and enjoyment’ (Benyon, 2012).
There’s a type of synaesthesia called mirror-touch synaesthesia, where, yes, people feel the physical sensations of others. This is a bit ‘out there’, I know, but what a fantastic metaphor for what this experience could become. We are not only dealing with our own senses, we are using our vestibular and proprioceptive systems to become aware of other people’s senses too. “Can you smell that?”, said one visitor. “No, but I can see the colour purple glowing on the screen behind you”, replied the other. This opens up another world of shared cognition that is simply too difficult to recreate in any way other than through immersive media.
Benyon, D., 2012. Presence in blended spaces. Interacting with Computers, 24(4), pp.219-226.
Prasad, K., 2019. Kinesthesia: A Multi-Sensory Gesture Driven Playground of the Future. Master of Fine Arts. Interactive Media & Games Division, School of Cinematic Arts, University of Southern California.