Immersive Storytelling

Using Mixed Reality Environments as Stimuli for the Senses


I have already explored the advantages and disadvantages, common practices and debates around using virtual reality (VR) as a narrative device for constructing a stimulating, multi-sensory experience, and concluded that it may not be the right storytelling medium for my desired experience. This post will explore how mixed reality spaces can be used to achieve my desired outcome, focusing on why I should choose this medium rather than exactly how – the latter will be explored later.

It is important to define what a mixed reality environment is:

“Interactive storytelling in a mixed reality environment merges digital and physical information and features. It usually uses an augmentation of the real-world and physically-based interaction. The dramatic storyline of the interactive narrative is influenced by the actions of the user” (p.97, Nakevska et al., 2017)

My initial attraction to this medium is its merging of ‘digital and physical information’ with ‘real-world’ and ‘physically-based’ interactivity (ibid). The initial barrier that VR presented was its limitation of particular senses, such as touch, smell and taste. By incorporating ‘real-world’ objects, it is easier to use the environment to its full potential and create interactivity that engages these senses. ‘RoomAlive’ (Jones et al., 2014), for example, demonstrates a projection-mapped experience that can ‘transform any room into an immersive, augmented entertainment experience’, allowing audiences to use a multitude of their senses to control content so that it appears to ‘seamlessly co-exist within their existing environment’.

RoomAlive experiences (Jones et al., 2014)

It is this notion of ‘co-existing’ that makes mixed reality environments an obvious choice for the type of experience I wish to create. By entering a physical space, the user is fully aware of their senses – they can see their hands and feet, as well as the entrance and exit, and are therefore contextually aware of their position, actions and movement. This is unlike a VR environment, where movement is restricted and feelings of disembodiment and detachment may be present.

Dow (2008) discussed feelings of embodiment in relation to AR Façade (Dow et al., 2006) – an AR experience based on the desktop-based interactive drama Façade, an unconstrained, AI-driven narrative. Dow describes this mode of engagement as ’embodied narrative engagement (ENE)’, whereby feelings of ‘presence’ (‘being in a story world’), agency (’empowerment over unfolding events’) and dramatic involvement (‘the feeling of being caught up in the plot and characters of a story’) are all present (p.4, Dow, 2008). Whilst ‘ENE’ can be replicated in a VR environment, it is important to distinguish between ‘immersion’ and ‘presence’:

“Presence is often used synonymously with immersion, but I want to call out an important distinction highlighted in Murray’s definition of immersion: ‘the sensation of being surrounded by a completely other reality… (taking over) our whole perceptual apparatus …and learning to do the things the new environment makes possible.'” Murray, cited in Dow (p.21, 2008)

The concept of being in control of our ‘perceptual apparatus’ (our hands, feet and so on) is what I alluded to previously. Being fully immersed in a ‘completely other reality’ is something I personally do not find appealing when the aim is to make use of our existing senses in a mixed reality environment. Dow’s ‘ENE’ model, which includes presence, dramatic involvement and agency, is something I wish to explore in much more detail and consider when producing work within this medium.

Embodied Narrative Engagement (ENE) Framework (Dow, p.19, 2008)

Finally, in order to facilitate a truly multi-sensory narrative, it could be argued that the environment should contain other human beings, as their presence is a significant part of our perceptual awareness and sense of proprioception. Besides this, as Zhu et al. (2018) highlight, ‘people who stand outside the virtual world may want to share the same scenes that are shown on the screen of the headset. It is therefore of great importance to merge real and virtual worlds into the same environment, where physical and virtual objects exist simultaneously and interact in real time’. It is debatable whether this impacts the narrative for the user; that depends upon the constructed storyline and the interactivity within the experience. Additionally, the number of virtual reality applications that do attempt to create a shared experience is growing, with examples such as ‘VRChat’ and Meta’s vision becoming mainstream. Nevertheless, it is important to understand and appreciate the positive implications and ease of use that a mixed reality experience can provide for a shared environment.

References:

Zhu, Y., Li, S., Luo, X., et al., 2018. A shared augmented virtual environment for real-time mixed reality applications. Computer Animation and Virtual Worlds, [online] 29(5), p.e1805. Available at: <https://doi.org/10.1002/cav.1805> [Accessed 18 November 2021].

Nakevska, M., van der Sanden, A., Funk, M., Hu, J. and Rauterberg, M., 2017. Interactive storytelling in a mixed reality environment: The effects of interactivity on user experiences. Entertainment Computing, [online] 21, pp.97-104. Available at: <http://dx.doi.org/10.1016/j.entcom.2017.01.001> [Accessed 18 November 2021].

Jones, B., Sodhi, R., Murdock, M., Mehra, R. and Benko, H., 2014. RoomAlive: magical experiences enabled by scalable adaptive projector-camera units. UIST ’14: Proceedings of the 27th annual ACM symposium on User interface software and technology, [online] p.637. Available at: <https://doi.org/10.1145/2642918.2647383> [Accessed 19 November 2021].

Dow, S., 2008. Understanding User Engagement in Immersive and Interactive Stories. Ph.D. thesis. School of Interactive Computing, Georgia Institute of Technology.

Dow, S., Mehta, M., Lausier, A., Harmon, E., MacIntyre, B. and Mateas, M., 2006. AR Façade. Atlanta, Georgia, USA: GVU Center, Georgia Institute of Technology.