VIRTUAL REALITY AS A TOOL TO INVESTIGATE AND PREDICT OCCUPANT BEHAVIOUR IN THE REAL WORLD: THE EXAMPLE OF WAYFINDING

The use of virtual reality (VR) is expanding within the AEC sectors, commonly in design and preconstruction decision-making, including as a tool to test and predict the behaviours of building occupants. The implicit assumption is that the experience of an immersive Virtual Reality Environment is representative of the Real Environment, and that understanding this prior to construction reduces the likelihood and significance of design errors. However, very few studies have validated this basic assumption, and even fewer have made a direct comparison between Virtual and Real building use. One behaviour that influences design is wayfinding, together with the acknowledged effect of familiarity with the layout of a building, and this is the subject of this study. We produced an accurate immersive VR model of part of an existing University building and asked participating students to complete a wayfinding task in both the Real and VR buildings. The results show a quantitative improvement in the route and time taken to complete the task, but highlight differences in behaviour between the two environments, including subtleties of head movement, a tendency to experiment and seek amusement, and a range of responses to the technology from enjoyment to suspicion. Further research is required to explore in more detail the effect of VR technologies on participants' behaviour, and the limitations and potential of VR as a decision-making tool beyond the example of wayfinding that we use. In conclusion, we need to adopt a cautious approach when designing with VR and recognise that the results of experiments such as ours should complement design decisions, rather than act as their sole justification.


INTRODUCTION
It is widely reported that virtual reality (VR) holds potential in the architecture, engineering and construction sectors (AEC) for providing the experience of a building at various stages prior to construction (Whyte and Nikolić 2018; Portman et al. 2015). Most commonly this includes the design phase, as a tool for collaboration and visualisation among AEC professionals and other stakeholders (Mastrolembo Ventura et al. 2020; Maftei and Harty 2015), and the construction phase, as a tool to increase efficiency and identify potential mistakes (Roupé et al. 2016; Castronovo et al. 2013). There is an implicit assumption in accepting these ideas that the representation produced and experienced in the virtual environment is in some way analogous to the real world environment that will be experienced at a corresponding point in space at some later date. Furthermore, especially in its use in the design phase, and with the recognition of the importance of participatory design and stakeholder involvement, VR is seen as a tool for testing and predicting occupant behaviours (Buttussi and Chittaro 2018; Kinateder et al. 2014). Such an approach suggests that in the same way that the physical structure of a building can be examined in a model and its design and construction subjected to informed adjustments, a similar contribution can be made to structures of occupancy (user experiences and behaviours). However, it is not clear to what extent the Virtual experience is representative of the Real experience, nor how we should acknowledge and accommodate those differences. Hence the work reported here, which, although it focusses on a single facet of the building experience (namely wayfinding), has wider implications for the use of VR in AEC, and more specifically the use of VR data as the basis for design decisions.
The paper provides a review of literature relating to the use of VR in design, specifically issues of wayfinding and navigation, and critiques research that has been carried out to examine the usefulness of VR as a tool to investigate wayfinding behaviours. It then goes on to describe an experimental set up that was devised to make a direct comparison between a VR Environment (VRE) and the 'same' Real Environment (RE), by recreating a model of an existing building and asking participants to carry out the same wayfinding task in each. This allowed us to compare data in terms of routes, times taken, observations of behaviour, and participants' experience of the two environments. Our findings suggest that while there are similarities between the quantitative results of both wayfinding tasks, there are significant qualitative differences in terms of the behaviours and experiences, which leads us to recommend caution in using VR as a design tool.

2.1 The Role of VR in Design
AEC has a long tradition of computer-based modelling in both 2D and 3D, so it is not surprising that this has recently extended into new forms of interaction with 3D models, including virtual reality (Whyte and Nikolić 2018). As a visualisation technology, much of the interest has been in presenting complex information to a range of stakeholders in ways that allow easier understanding of the design and construction processes. In particular, this has centred around creating data-rich impressions of a building at future stages, aiming ultimately to provide a method of experiencing the completed building from a user's perspective before the design is finalised or construction begun (Heydarian et al. 2015). This comes at a time when the AEC sector is pressing forward with efforts to introduce BIM systems and values, the traditional basis of which is a collaboratively produced, data-rich 3D model, intended to improve the understanding of design and planning data (Azhar 2012; Eastman et al. 2011). Using digital data created during the project, it is now relatively easy to convert a BIM 3D model to a format suitable for VR, and use the resulting output at different stages of the construction process, from design discussions (Roupé et al. 2016) to site management (e.g. for H&S training, see Sacks et al. 2013).
Where the AEC sectors have struggled generally to apply new visualisation technologies is in the post-construction phase of a building, where there is the potential to become better informed about the use of space and assets, and the behaviours of the occupants. Some studies have used VR to attempt a comparative evaluation of the expectations and realities of a new building (e.g. Westerdahl et al. 2006), and further academic research has examined occupant behaviour in specific simulated circumstances such as emergency evacuation (Vilar et al. 2013; Kinateder et al. 2014). However, the primary use of VR in AEC has been, to date, as a tool in the design phase and particularly as a means of increasing cross-disciplinary and co-design decision making (Portman et al. 2015). With greater awareness of the importance of stakeholder involvement at the concept and design stages (e.g. Simonsen and Robertson 2013) has come a recognition that traditional AEC data presentation tools and techniques can act as barriers to those without the same technical knowledge, especially end users. Beyond the inclusion of stakeholders outside AEC professionals, the wider difficulties of collaborative design processes even within AEC are also well documented, with issues such as confused communication, contested authority, and lack of responsibility (e.g. Luck 2013; Maftei and Harty 2015).
In the search for a bridge between the varying world-views of different stakeholders, and as a more effective or intuitive way to experience the design data, VR systems have been suggested as a potential solution (Heydarian et al. 2015; Castronovo et al. 2013). As a technology that does not depend on specialist knowledge, other than perhaps some experience of the control system, VR may offer a more inclusive and democratic mode of engagement with the design of the built environment, opening up the possibility of a real contribution by the users and owners of buildings to the future of the construction sector.

2.2 Wayfinding
One area with potential for greater use of new digital technologies in understanding the use of buildings is navigation and wayfinding. There is an opportunity to develop tools and techniques to examine wayfinding behaviours and apply that knowledge to the design of new buildings, as well as to improve the designs of existing buildings. Much of the research to date has focused on this emerging technology as an analogue for the real world in wayfinding tasks. However, in many cases the tasks given to the participants were not intended to be 'realistic', being based either on a maze-like experiment (e.g. Dijkstra et al. 2014), or on a modified world view controlling a limited number of variables, such as signage or lighting (Heydarian et al. 2015; Vilar et al. 2014; see also Cubukcu et al. 2005). Understanding the process of wayfinding in this way tends to be seen through the lens of psychology (e.g. Lloyd et al. 2009; Ruddle et al. 1997; Cousins et al. 1983), or presented as an experiment in the perception of the virtual environment (e.g. Dijkstra et al. 2014; Heydarian et al. 2015). This is something of a departure from traditional wayfinding research, which traced user movements through the built environment and adopted a more qualitative perspective by talking to participants to try to understand not only their movements through the world, but also their methods and reasons for doing so (e.g. Cousins et al. 1983; Butler et al. 1993; Gärling et al. 1997; Golledge 1999).
Much of the justification for the need to research wayfinding in the built environment is on the basis that it remains a practical problem for large or complex buildings, such as museums and hospitals (e.g. Hillier and Tzortzi 2011; Rooke et al. 2010; Haq et al. 2005), or stressful situations such as emergency evacuation or accident avoidance (Wyke et al. 2019; Nilsson and Kinateder 2015; Sacks et al. 2013). In such circumstances the response of occupants to the wayfinding task at hand depends on their engagement with the real environment as a whole, including their emotional state and multi-sensory cues to orientation and navigation. In current VR systems these cues are largely absent from the virtual environment, and the test environment itself offers further scope for confusion, since it is likely to present conflicting sensory cues such as displaced sounds or smells. So while the use of abstract or pseudo-real 'mazes' (e.g. Lloyd et al. 2009, comparing a video game version of Nice, France with a simplified version of Birmingham, UK) provides a useful platform to identify wayfinding strategies, such studies are inherently limited by the lack of direct comparison with the same real world environment.

2.3 Representing the Real World in Virtual Environments
As VR technology is emerging and rapidly developing, research into how it is experienced struggles to keep pace. Early experiments in VR wayfinding, such as the 1997 study by Ruddle et al. (see also Darken and Sibert 1996), relied on technologies that struggled to replicate a sense of realism, and tended to see the virtual environment as a novel space to be experienced in its own right. In later studies (e.g. Haq et al. 2005; Kuliga et al. 2015; Paes et al. 2017) the most commonly used methodology has been a 3D model displayed on a monitor or screen (or a set of screens with 3D anaglyph glasses in the case of Paes et al. 2017), in which the participants either passively follow a prescribed route or attempt to complete a navigation task. Some of these studies have attempted to compare behaviours in a 'real' built environment (as opposed to an 'experimental' environment, e.g. Kimura et al. 2017) with a VR recreation, but few have used immersive navigable VREs, and those that have were not specifically navigation studies: Heydarian et al. (2015) used a head mounted display (HMD) to compare everyday tasks in a room, looking specifically at lighting, and Kimura et al. (2017) used a similar set-up to investigate orientation. So while there has been general recognition of the need to understand the relationship between the real world and a virtual recreation of it, the latest digital tools are yet to be extensively examined.
The published research suggests there is a general consensus that Virtual Reality Environments (VREs) do have a place in providing an experimental analogue for the as-built or as-planned Real Environments (REs), but that they produce different results that are sometimes difficult to explain (e.g. Kimura et al. 2017; Skorupka 2009; Haq et al. 2005:402), or apparently contradictory. An example is the conflicting results of research that compared wayfinding strategies between virtual and real environments, with some (e.g. Lloyd et al. 2009) finding no significant differences, while others (e.g. Skorupka 2009) report considerable variation in pathfinding behaviours. This raises the question as to whether the fundamental differences between VREs and REs mean that comparative exercises are likely to be inaccurate and misleading. Nilsson and Kinateder cite a "lack of appropriate validation studies" (2015:13) in evacuation wayfinding, and suggest it is not possible to recreate the same conditions in VR. In their case (evacuating buildings in a fire) there are obvious practical and ethical problems, but the wider point holds true: the assumption that ever-greater visual fidelity equates to more realistic responses and behaviours has not been sufficiently proven (Kuliga et al. 2015; Lloyd et al. 2009; cf. Paes et al. 2017:294; Heydarian et al. 2015:118). However, there seems to be less controversy in suggesting that a virtual model can be useful in gaining familiarity with a specific real environment, rather than making the participant a more skilled navigator generally, as evidenced by research into video gameplay. Participants with greater experience of gaming appear to adapt more quickly to navigation and orientation tasks in VREs (Murias et al. 2016; Richardson et al. 2011), even though this does not easily translate into real equivalents: "Although transfer of spatial knowledge is possible and perhaps likely, fundamental differences in sensory experience between the two learning situations may preclude the general transfer of navigational skills." (Richardson et al. 2011:557-8, italics in original).
The key aim of this research was therefore to compare behaviours in the same virtual and real environments. Based on a fully immersive system in the form of a Head Mounted Display (HMD), and using wayfinding as an example, what are the similarities and differences of behaviour inside a building between a Virtual Reality Environment (VRE) and the same Real Environment (RE), and what are the implications for the emerging use of VRE experiments to provide information in building design?

The Virtual and Real Environments
The setting for the experiment was chosen to be a large building on a University campus: a two-storey building housing several lecture halls, with entrances off a central atrium and one wide set of stairs to the first floor (see Fig. 2). Rooms are numbered sequentially around the central space, with a prefix 'G' for each of the ground floor rooms, and '1' for each of the first floor rooms. Those that can be directly accessed from this central area were included in the VRE model, ignoring rooms with indirect or security-controlled access. In total this left 16 entrances: one of the lecture theatres (G10) is double height and has entrances on both floors; four other rooms have entrances on the ground floor, running from G04 to G10; and on the first floor there are 11 entrances, running from room numbers 102 to 111 and including the upper entrance to G10. Each room has its number prominently displayed above its entrance, and there is a large information panel to the left of the building entrance, near the designated start point, with basic directions such as a heading 'First Floor' with a list of first floor room numbers (see Fig. 1).
A model of the main central area of the building with all entrances was created in Revit, based on floorplans provided by the University Facilities Management team, and further measurements taken on site. The aim was to create a Virtual Environment that was reasonably realistic, within the constraints of time and modelling expertise. The researchers were working in the context of 3D modelling for built environment design and construction, rather than animation or interactive gaming, so emphasis was placed on geometric accuracy, surface texturing, and lighting. Signage relevant to wayfinding was limited but faithfully recreated, including for example the main room information board near the entrance, any ceiling mounted directional signs, emergency evacuation signs, and most importantly the prominent room numbers above each entrance.

FIG. 1: Comparative images of the Real Environment (RE), left, and Virtual Reality Environment (VRE), right, showing: Start, by the entrance (top); Main Stairs (centre); Destination, Room 109 (bottom).
The resulting model met our objective to create a limited VRE that matched reasonably closely the size, shape and colour of the same Real Environment, and allowed participants to move in a commensurate way through the model, without being heavily dependent on prior experience or skills in gaming and VR systems. Fig. 1 shows examples of the final model, with side-by-side comparisons of the VRE (screenshots) and the Real Environment (on site photographs).

Experimental Setup and Limitations
The research was conducted in the form of a repeated experiment where users were asked to complete a wayfinding task twice: first navigating the Virtual Reality Environment model using an Oculus Rift head-mounted display (HMD) with standard Xbox controller, then the same task in the Real Environment by walking through the building. A second group of participants were asked to do the same two tasks but in the reverse order: Real Environment (RE) first, then Virtual Reality Environment (VRE). The wayfinding task was designed to be short, around one minute, to limit the potential for distraction, confusion or loss of motivation in the Real Environment (RE) task, and to reduce the need for complex 3D modelling for the VRE task. Conversely, the task had to offer enough route options to enable an assessment of differences, and some explanation as to why particular choices were made. Having examined the building layout, Room 109 was chosen as the target destination (see Fig. 2).

FIG. 2: Isometric view of the VRE case study building, showing wayfinding task 'Start' and 'Finish' points, and individual wayfinding paths. Note the choice of two primary routes: turning left or right at the top of the stairs.
The rationale behind the selection of Room 109 was that this particular room is slightly hidden, with its entrance set back and the room number partially obscured, especially as participants reach the top of the stairs (see Fig. 3). It is also the furthest room from the entrance that does not require the participant to pass through any doors. This may be why it is the only room with a specific ceiling mounted direction sign ("↓ Lecture Theatre 109", which can be seen in Fig. 1) located immediately outside the entrance, although the sign itself is quite small and is mounted perpendicular to the door, hence difficult to see when facing the door directly or nearly so. Room numbers run sequentially in a clockwise direction around the upper floor circulation space, so that on reaching the top of the stairs participants are faced with room numbers increasing to their right, with 102 immediately in front of them, then 103, 104 etc. moving to the right. Room 109 is near the end of the sequence; slightly confusingly, it actually occurs after 111 and before the upper entrance to G10 (as can be seen in Fig. 3). The upper circulation space is a wide landing, open in the centre to the ground floor and roughly triangular. This means that turning right takes participants along two sides to reach 109, whereas turning left takes them around the one other side (see Fig. 2 above). This provided us with a case study that satisfied the requirement of a relatively simple VRE model, but with enough complexity to provide a number of alternative wayfinding choices: reading the direction sign; the route to and up the stairs; the left/right choice at the top of the stairs; room numbers out of sequence; and the visibility of the entrance.

FIG. 3: Visibility of Room 109 (Destination). Note limited visibility from top of the stairs.
The PC running the VRE and the HMD were set up near the entrance to the building, and hence the 'Start' point for the RE wayfinding task, whilst also being near one entrance to a popular café. This allowed space for the VRE task to be completed, a regular flow of people to ask for participants, and generated interest from students entering the building or going to the café. To ensure a degree of uniformity, we selected participants who had little or no knowledge of the building layout by asking for details of how often they had visited the building, then gave them a short tutorial on the use of the Xbox controller, and time to familiarise themselves with the Oculus Rift HMD. The review of literature suggested that there is little to support differentiation on the basis of gender, but that there may be some evidence that prior experience in gaming is an influencing factor. We mitigated this by limiting and hence simplifying the control options, and allowing time for familiarisation. We are confident this produced reasonable results, since our observations during a pilot study and the main experiment did not suggest a noticeable difference; however, we recognise that this is an area of validation that could be improved upon. A related effect that we did not take into account is the equivalent effect of physical ability in the RE. As discussed below, one or two participants decided that speed was of the essence and were able to move very quickly. We did not gather any data on, for example, participants' fitness or disabilities, and can therefore shed no light on the potential equalising effect of a controlled VR system of locomotion versus the variation in real world physical abilities. These limitations are discussed further in the recommendations for future research, below.
To reduce the influence of prior knowledge of the layout of the building we asked participants to grade their experience from 0 to 4 (none to extensive experience) and limited the participants to those who ranged from none to limited experience of the building. We are confident that we were successful in mitigating this variable, as only two participants (one in each group; see below) knew where the destination room was, both having been to only that room in the building (hence their 'limited experience').
All participants agreed to take part on the day and carried out the tasks immediately. The sample was selected at random, either by passing invitation, or as volunteers after seeing the experiment taking place. The use of an immersive HMD did tend to attract those who were interested in the technology, but very few participants had any previous experience in a HMD, and none who considered themselves to be 'very experienced'. Some potential participants refused our invitation as they felt uneasy or even suspicious, while others were boisterous and competitive, but recognising this, we made specific efforts to reassure and recruit a variety of participants. All participants had some experience of simple gaming controls. Previous research (e.g. Murias et al. 2016) suggests that familiarity with gaming systems, rather than gender, predicts navigational competency in VR, so we do not present data specifically on gender bias. Since this was taking place on a University campus, there was also a very limited age demographic that, although not formally gathered as part of our data, ranged from around 18 to 25 years. So, although we adopted various strategies to mitigate bias and ensure robust data (limiting controller options; reduced experience of the building; range of enthusiasm to scepticism; similar demographic; similar experience of VREs), we recognise that there may be further work required to ensure our assumption of lack of bias is correct, and suggest this opens up a range of new research possibilities in considering each of these variables in more detail.

Data Collection and Analysis
In total 30 participants were recruited, split into two groups. Group 1 carried out the wayfinding task in the Virtual Reality Environment (VRE) first, then the same exercise in the Real Environment (RE); Group 2 carried out the same task in the reverse order. On each occasion, whether in the RE or the VRE, the participant was positioned just inside the entrance to the building (the designated 'Start') and given the same instruction: "Find Room 109". There were no requests to follow a specific route, identify signage, verbalise thoughts, or complete the task as fast as possible. This resulted in a total of 60 journeys.
From the 'Start' location, each participant's RE movement was tracked by following closely and plotting their movement on a printed floorplan, which, along with observations and notes, was used as the basis for a hand-drawn route map for each individual journey completed. Any location where the participant stopped for more than a few seconds was also recorded as a 'Pause'. The same protocol was adopted for the VRE experience, with the researchers observing the movement on screen and using the same printed floorplan, again recording the location of any pauses. Tracking the movement digitally within the VR environment would have been fairly easy, and offered the potential for more data and greater nuance in the analysis. However, equivalent real-world movement tracking was not practical: the technologies that were readily available, especially Bluetooth or RFID monitoring, did not offer accuracy better than 1-2 m, and were not easily capable of collecting data such as direction of travel or eye movements, which would have been possible digitally in the VRE. We prioritised equivalence as far as possible, favouring qualitative data perhaps at the expense of quantitative data; maintaining the same approach in both settings reduces the potential for unintentional bias introduced by different measuring and recording systems. A more detailed and wide-ranging study might find benefit in enhancing the digital tools and standardising the collection of data in both settings (real and VR) in that way.
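Had digital tracking been available in both settings, the 'Pause' protocol described above could have been automated from timestamped position samples. The following sketch is purely illustrative: the function name, the 0.5 m radius and the 3-second threshold are our own assumptions, not parameters used in the study.

```python
def detect_pauses(samples, radius=0.5, min_duration=3.0):
    """Find 'Pauses' in a track.

    samples: list of (t, x, y) tuples, time in seconds, position in metres.
    Returns a list of (start_t, end_t, x, y): spells of at least
    `min_duration` seconds spent within `radius` of an anchor point.
    """
    pauses = []
    i = 0
    while i < len(samples):
        t0, x0, y0 = samples[i]
        j = i + 1
        # Advance j while the participant stays near the anchor point.
        while j < len(samples):
            t, x, y = samples[j]
            if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > radius:
                break
            j += 1
        if samples[j - 1][0] - t0 >= min_duration:
            pauses.append((t0, samples[j - 1][0], x0, y0))
            i = j  # resume after the pause
        else:
            i += 1
    return pauses

# Hypothetical track: ~4 s standing near the origin, then walking away.
samples = [(0, 0.0, 0.0), (1, 0.1, 0.0), (2, 0.1, 0.1),
           (3, 0.0, 0.0), (4, 0.0, 0.0), (5, 2.0, 0.0), (6, 4.0, 0.0)]
pauses = detect_pauses(samples)
# one pause detected, from t=0 to t=4, near the origin
```

Logging at this level in the VRE (and, with suitable indoor positioning, in the RE) would allow pauses to be identified consistently in both settings rather than judged by an observer.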
This provided two sets of data. The first, quantitative, was used to create individual routes, which were digitised using a graphics package, each line with reduced opacity so that overlying lines became darker, to produce visualisations of aggregated sets of data (Fig. 4), supplemented by analysis at key decision points, particularly the choice of direction at the top of the stairs. Similarly, 'Pause Locations' were plotted to provide a pattern of behaviour discussed below. Further information came from the same set of data in the time taken for each task completion, including averages and maximum/minimum times (see Table 1). Each participant was then briefly interviewed after completing both the VRE and RE wayfinding tasks. Notes made during the interview were supplemented by observations made during the navigation exercise, allowing the participant to talk about any aspect of their experience, and the researchers to answer any specific queries or clarify motives. The interview data was divided largely according to the primary topics that were discussed, which in turn came from the review of previous research, focussing on essentially three key themes: use of the technology; the comparative experience of Virtual and Real Environments; and issues of navigation and wayfinding (see 2.2 and 2.3 above). These were collated and summarised to give a sense of the pattern of responses and any recurring comments (see Table 2 below). As with any semi-structured qualitative data, a valid critique would be the difficulty of repeating the results and a question over whether those responses are more broadly representative. By ensuring the focus of the interview was based on three key elements derived from the literature review, we feel confident that the data is valid and useful.
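The aggregation principle behind the route visualisations (overlying semi-transparent lines becoming darker where routes coincide) can equivalently be expressed by counting how many journeys traverse each segment of the floorplan. The sketch below is illustrative only, with hypothetical route data expressed as sequences of named waypoints rather than the digitised polylines actually used.

```python
from collections import Counter

def segment_counts(routes):
    """Count undirected traversals of each segment.

    routes: list of waypoint sequences, e.g. ['entrance', 'stairs', ...].
    Returns a Counter keyed by frozenset({a, b}) so that walking a
    segment in either direction adds to the same tally.
    """
    counts = Counter()
    for route in routes:
        for a, b in zip(route, route[1:]):
            counts[frozenset((a, b))] += 1
    return counts

# Hypothetical journeys: two turn left at the top of the stairs, one right.
routes = [
    ["entrance", "stairs", "left_landing", "room_109"],
    ["entrance", "stairs", "right_landing", "room_111", "room_109"],
    ["entrance", "stairs", "left_landing", "room_109"],
]
counts = segment_counts(routes)
# the entrance-to-stairs segment is traversed by all three journeys
```

Mapped back onto the floorplan, these counts reproduce the darkening effect of Fig. 4 numerically, and would make comparisons between the four journey categories directly quantifiable.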
However, we acknowledge the limitations of the participant pool, especially the age demographic, and do not suggest that these comments are likely to generalise to significantly different populations. This should not detract from the results, since, as we suggest later, good design practice should aim to include the relevant stakeholders who are primary users of the space in question, such as the students in a University building.

RESULTS
By asking all participants to complete the wayfinding task twice, we are able to compare the results of routes and behaviours for participants completing the same task in the virtual and real environments, as well as examining changes with increased familiarity, and differences between first learning the route in the VRE or the RE. Cross-cutting the data in these two directions (VRE/RE and first/second attempt) gives more nuanced insights into the potential role of VR in investigating behaviours in the real built environment. The principal data collected were a record of the routes taken by each participant, which were then aggregated. This can be illustrated by separating the 60 journeys into four categories based on the order, and the environment, in which each participant completed the journey: VRE first, RE first, VRE second, RE second. Each individual journey was recorded on the same floorplan, and subsequently transposed to digital format to create the visualisations below (Fig. 4).

FIG. 4: Plan view of routes taken by individual participants with average time taken. Note the START (building entrance) is lower right, DESTINATION (Room 109) is up the stairs and back towards the front of the building.
We are able to use these to draw some convincing conclusions about the comparative routes taken, but further information recorded during the empirical data gathering also allows us to consider some of the detail behind those results. Table 1 below summarises the average and spread of times taken for each of the four key categories of journey; the choice of direction on reaching the top of the stairs; and the pattern of pauses. These quantitative data support and augment the visual summaries in Fig. 4, but offer a more objective sense of the choices made by participants. Finally, in an effort to understand how participants experienced their two journeys, the interviews conducted after their wayfinding activity provide some subjective insights. In particular, three aspects were chosen as key themes, following issues highlighted in the review of literature: the technology's interface, especially the controls; the experience of being in a VRE; and the process of wayfinding and navigation. To collate the responses we brought these together into recurring themes: any mention of the equipment (the headset or the controller) was grouped together, as was any reference to the participants' experience of or tactics for wayfinding, and any discussion of the experience of the VRE. These are listed in Table 2. Some of these reinforce well-known issues when using this technology, particularly the way that immersion is conditioned by the novelty and design of the VR environment (Nikolic et al. 2019), whilst others support our original suspicions about the complexities and confusions in the specific wayfinding task undertaken.

DISCUSSION
The range of data collected and summarised above allows us to directly compare wayfinding in the 'same' Virtual and Real Environments in a number of ways. In line with the aims described in the Introduction, we consider below the results of routes taken in both environments as well as: the effect of increased familiarity; cues from the environment that guide and influence choice of route; the participants' experiences of completing the task in both environments; and our observations of behaviours and interactions within both environments.

Learning the Route
The primary aim of this project was to investigate the potential for navigating a route in both VR and Real environments, and the comparative effect of experiencing the task on subsequent attempts (Richardson et al. 2011). As described above, the wayfinding task was designed to be relatively simple, but to include choices and the potential for confusion. Given the simple instruction to "Find room 109" from the entrance of the building, we were satisfied that all the participants attempted to complete the task as simply as possible: not necessarily as quickly as possible (although some did become quite competitive), but always aiming for the shortest route. The key issue this exposed was the choice of direction at the top of the stairs: the longer route to the right, and the shorter route to the left. Identifying the stairs as the correct route and moving up them caused delays for only one or two participants in each group. Fig. 4 gives a visual overview of the routes taken and allows us to make some broad comparisons. In the first journeys (whether VR or Real), there is a noticeable difference between the two environments: the majority followed the room numbers around to the right in the VR environment (10/15 = 67%), but only a minority turned right in the Real environment (6/15 = 40%). Admittedly this is quite a small sample, so some caution needs to be exercised in drawing definitive conclusions, but taken together with the comments and observations described below about differences in head turning, it suggests that using the behaviours and responses in an unfamiliar VRE to inform design decisions about an unfamiliar Real environment may be problematic. Our results question the assumption of the VRE as an accurate predictive model of use. In saying this we agree with those researchers who have recommended that caution be exercised when using VR to examine detailed behaviour (Kimura et al. 2017; Richardson et al. 2011; Nilsson and Kinateder 2015; but see Heydarian et al. 2015 for an alternative point of view).
What is less contentious is the effect of increased route familiarity, in either environment, on route choice. In both cases, as is clear in the visual diagrams above (Fig. 4), participants learned from their previous journey. When repeating the journey, not only did the average time taken decrease significantly (see Table 1 for details), but the number who took the shorter route showed a distinct change. In the group who completed the task in VR first, the majority shifted from the long route to the short route when repeating the journey in Reality, the proportion taking the long route dropping from 67% to 20%. Similarly, the proportion who took the long route when completing the Real journey first went from 40% down to 7% in the subsequent VR journey. This mirrors the data on pause frequency and patterns (Table 1), which saw a dramatic drop in the second journey: in the 'VR first / Real second' group (Group 1) the number of people pausing dropped from 10 to 1; in the 'Real first / VR second' group (Group 2) it dropped from 13 to 3. So whether the participants carried out the task first in VR then in Reality, or vice versa, the data point strongly towards the effect of learning the route and using that experience to reduce the effort required when repeating the journey soon afterwards.
Importantly, the result is independent of the mode of learning, whether VR or Real: 14 of all 30 participants chose the shorter route in their first journey, and 26 of 30 chose it in their second. Although the magnitude of improvement suggests a difference between VR and RE, with Group 1 improving by 9% and Group 2 by 25%, these results would seem to be offset by the fact that participants reported the VR experience as feeling slow, despite the system having movement set to average walking speed. This seems to have allowed participants to plan ahead while 'waiting [for the model] to catch up' in VR, while in the RE their physical movement overcame the monotony, experienced in VR, of standing still holding the thumbstick forward. This strongly suggests that immersive VR can be used effectively as a tool to learn wayfinding routes, reducing the time taken and improving decisions on choice of direction. This is a novel result, in that previous studies have either replicated a Real Environment in VR but not investigated navigation (e.g. Westerdahl et al. 2006; Heydarian et al. 2015; Kuliga et al. 2015; Paes et al. 2017), or investigated navigation in a VRE but not replicated the same Real Environment (e.g. Richardson et al. 2011; Darken and Sibert 1996; Kinateder et al. 2014; Lloyd et al. 2009; Ruddle et al. 1997; Dijkstra et al. 2014). Those that have attempted to compare navigation in the same Virtual and Real Environments (Haq et al. 2005; Skorupka 2009) did not use immersive virtual environments, and interestingly both reported results that to some extent contradicted previous research (Haq et al. 2005:402; Skorupka 2009:6).
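The proportions above follow directly from the raw counts reported in the text. As a quick arithmetic check, a minimal sketch (using only the counts stated above; the variable names are our own) reproduces both the per-group percentages and the pooled 14-of-30 and 26-of-30 figures:

```python
# Route-choice counts reported above: 15 participants per group.
# Group 1 = 'VR first / Real second'; Group 2 = 'Real first / VR second'.
GROUP_SIZE = 15
long_route = {
    ("Group 1", "first journey (VR)"): 10,     # 67% took the long route
    ("Group 1", "second journey (Real)"): 3,   # 20%
    ("Group 2", "first journey (Real)"): 6,    # 40%
    ("Group 2", "second journey (VR)"): 1,     # 7%
}

for (group, journey), n in long_route.items():
    print(f"{group}, {journey}: {n}/{GROUP_SIZE} = {n / GROUP_SIZE:.0%} long route")

# Pooled across both groups: short-route choices on first vs second journeys.
short_first = 2 * GROUP_SIZE - (10 + 6)    # 14 of 30
short_second = 2 * GROUP_SIZE - (3 + 1)    # 26 of 30
print(f"First journeys:  {short_first}/30 chose the short route")
print(f"Second journeys: {short_second}/30 chose the short route")
```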

Areas of Decision or Confusion
As discussed in the methodology, the design of the wayfinding task was deliberately intended to provide participants with a number of choices, provoking a range of responses and consequently different routes from the building entrance to Room 109. To complete the task, participants had first to identify a general direction, and had a large information panel near their start point to guide them (seen in Fig. 1 above, near the seated people). The sign does not explicitly state that rooms on the Ground floor have a 'G' prefix and those on the first floor a '1' prefix, but this can be inferred from the list of numbers allocated to each floor. However, almost none of the participants looked at the information panel; instead the majority were able to deduce (presumably from past experience) that Room 109 would be on the first floor, and hence headed towards the conspicuous stairwell directly in front of them. Seven of the 15 participants mentioned this explicitly in their interviews. It can also be seen that one or two people in each group (Virtual and Real) spent some time wandering around the ground floor before heading up the stairs. Other researchers have previously discussed wayfinding strategies, including the use of landmarks, signage, features, cognitive maps, memory, intuition, past experiences etc. (e.g. Golledge 1999; Rooke et al. 2010; Vilar et al. 2013; Dijkstra et al. 2014; Murias et al. 2016), but we sought to explore differences rather than strategies explicitly, so only make brief observations here.
Beyond the initial information panel near the entrance, the signage within the building is very limited. The one sign for the destination room has limited visibility, and in practice only offers direction when within a few metres of it (see Fig. 3). The most obvious guidance offered is the large room numbers over each entrance (seen in Fig. 1), especially on reaching the top of the stairs, where they increase in a clockwise direction (i.e. turning right) from room 102 directly in front. Our observations indicate that there is a much lower tendency to look around by turning the head in a VR HMD; often participants would look almost straight ahead while using the Xbox controls to rotate their view (as reported in Skorupka 2009, although using a desktop VR system; cf. Ruddle et al. 1999). For the participants to realise that Room 109 was behind them, and that turning right to follow the room numbers was the longer route, they had to stop and look around on reaching the top of the stairs. This suggests that one aspect of the use of VR HMDs is noticeably different from real-world behaviour, and is likely to alter the results of actual vs predicted wayfinding. The Oculus Rift HMD is quite light, but it is tethered to the PC by data and power cables (which rub gently against the back of the head), and users were seated or standing, limiting their movement in real-world space.
While it is true to say that the participants were less likely to turn their head when using the HMD, the act of pausing to assess the situation was similar between the two experiences. We expected that the majority of pauses would be associated with the information provided to assist in wayfinding: the main guide notice near the entrance; the pattern of room numbers; the sign to Room 109; as well as checking the various other non-informative signs around the space (toilets, reception etc.), but this proved not to be the case. The distribution of pause locations centred on areas of decision and confusion, rather than areas of information (see Fig. 5). The frequency of pauses on the ground floor was generally low in comparison to the first floor, probably because the majority of participants identified that their route was to the upper floor and the conspicuous staircase gave an obvious direction to follow. The key pause area on the ground floor was at the bottom of the stairs, as participants sought reassurance that the correct route was up the stairs.
The next area of decision, at the top of the stairs, created a concentration of pauses as participants sought information on their next move. It is worth noting that the signage at the top of the stairs does not direct to specific rooms, and to face Room 109 from that point participants have to turn and look in the opposite direction to that in which they arrive (see Fig. 3). As noted above, the entrance to Room 109 is hidden from that point, and the room sign is oriented and sized such that it is also very difficult to see. Similarly, an area of confusion occurred when participants realised that their route seemed to be taking them in a circle, as they reached the corner by rooms 105/106 and had to decide if they were in fact heading in the right direction. This is the point of greatest visibility for Room 109 (Fig. 3), and for many participants it was the point at which they understood the layout of the rooms and identified their destination. They could have turned and retraced their route, but none did, realising that by this stage the remaining portion of their current path was shorter than returning to the top of the stairs and following the 'short route'.

FIG. 5: Location of pauses, with three 'pause concentration' areas.
Having described the pattern and location of the pauses, the important point in the context of this research, comparing wayfinding behaviours between Virtual and Real environments, is that the patterns of behaviour in both were similar. As shown in Table 1, the majority of participants in both the VRE and RE first journeys paused at least once, often twice and occasionally more. There is a potentially significant difference between the two, which may also relate to the point made above about head movement, in that the RE journey participants were slightly more likely to pause than their VRE equivalents. The same is true of the second journeys, where the number of pauses in both cases dropped substantially (by 94% for Group 1 and by 81% for Group 2), but the RE journey still tended to include more pauses. This possible difference would need to be substantiated before we could confidently suggest it is a noteworthy point, but the similarities are striking enough to suggest that an immersive Virtual Reality Environment is a good predictor of the locations of decision and confusion in wayfinding. This agrees with previous research that has suggested the use of VR as a tool for planning and testing building layouts and the use of intuitive and environmental information to guide wayfinding choices (Vilar et al. 2013; Paes et al. 2017; Heydarian et al. 2015).

The Experience of Wayfinding in VR
Although there have been impressive strides towards realism in VR content, especially in the gaming industry, the concept of VR realism is primarily one of visual fidelity. Often this incorporates illusions to trick the user into experiencing a sense of reality, while accommodating limits on the computing power and production effort available to produce the content. Some examples are: overlaying a shading map onto textures (e.g. the brick wall pattern) to create a sense of depth without having to render true 3D geometry; rendering objects in less detail when they are further from the viewpoint; fixing the lighting scheme by permanently 'baking' colours according to how light or dark they appear in shadow, to avoid the need for continuous lighting calculations; adding a false shadow in corners (ambient occlusion); and blurring edges to reduce a jagged appearance (anti-aliasing). The balance between detail and performance is a perennial issue in VR environments, and more so in HMDs since the view needs to be rendered twice (once per eye), but tricks such as these have done a good job of creating a convincing visual environment. The model created here was drawn in Autodesk Revit, and then imported into Revizto, an app based on the Unity gaming engine that allows textures, lighting and movement to be added and experienced in an HMD. As can be seen from the images above (Fig. 1), the geometry of the model is accurate and the visual effect is also reasonably realistic. This was commented on by several of the participants (8 of 15 were 'impressed'), despite the lack of specific details such as some of the notices, the fire safety equipment, and the general debris one finds in a student environment. These impressions are likely to have been tempered by the novelty of the HMD, which generated considerable interest, and many participants were evidently seduced by the technology, with much praise for the quality of the experience (Nikolić et al. 2019).
Others felt the environment was quite 'game-like', although it is not clear whether this reflected the whole experience of completing the task in an HMD, rather than the presentation of the virtual environment per se.
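One of the rendering compromises described above, reducing geometric detail with distance from the viewpoint, can be illustrated in miniature. The sketch below shows distance-based level-of-detail (LOD) selection of the general kind used by game engines; the distance thresholds and mesh names are purely illustrative assumptions, not values taken from the Revizto model used in this study:

```python
import math

# Hypothetical level-of-detail (LOD) table: objects further from the camera
# are drawn with simpler geometry. Thresholds and mesh names are illustrative.
LOD_LEVELS = [
    (5.0, "high_detail_mesh"),      # within 5 m: full geometry
    (20.0, "medium_detail_mesh"),   # 5-20 m: simplified geometry
    (math.inf, "low_detail_mesh"),  # beyond 20 m: coarse proxy
]

def select_lod(camera_pos, object_pos):
    """Pick a mesh variant based on camera-to-object distance."""
    distance = math.dist(camera_pos, object_pos)
    for max_distance, mesh in LOD_LEVELS:
        if distance <= max_distance:
            return mesh
    return LOD_LEVELS[-1][1]  # unreachable given the math.inf sentinel

print(select_lod((0, 0, 0), (3, 0, 0)))   # nearby object: high detail
print(select_lod((0, 0, 0), (50, 0, 0)))  # distant object: low detail
```

The same threshold-table pattern underlies baked light levels and texture mip-mapping: the expensive computation happens once, ahead of time, and the renderer simply selects a precomputed result per frame.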
Where modern VR environments do tend to struggle in their sense of realism is in movement, which has been the subject of numerous innovative attempts in recent years, including 360º treadmills, arm-swing or gait detection, and teleportation, as well as the traditional gliding (Boletsis 2017). For the task of wayfinding, movement, and specifically walking, is of course an important factor. Unlike a typical screen-based virtual environment, the more immersive environments, such as the HMD-based system used here, tend to avoid the 'head-bob' walking motion that gives the user clues to their speed and level of exertion, since it tends to exacerbate motion sickness. Instead the most common forms of movement in an HMD are teleportation (where the user clicks on a point in the distance and is immediately transported there) and smooth traversing (where the user glides across the surface). In our model, glide movement was controlled with a standard Xbox controller, with a speed adjusted to match typical walking pace and direction controlled by the thumbstick. The jump function was left enabled, so users could jump on objects, over railings, up stairs etc. Participants variously commented on the ease (9/15) or difficulty (4/15) of using the controller, but several used the programmed control functions to experience in the VRE what they could not easily experience in reality, for example walking backwards, climbing on furniture, and jumping over the upper barrier. Part of the design of a VRE is a series of decisions on the part of the VR creator about what to allow and what to prevent, in an effort to ensure equivalent real-world behaviours. Despite this, the possibilities enabled in the VRE allowed playfulness and subversion in ways that might be physically possible, but did not happen in the RE.
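The glide movement described above amounts to integrating a velocity derived from thumbstick input and view direction. The following minimal sketch shows one plausible per-frame update; the 1.4 m/s figure is an assumed typical walking speed, since the study reports only that movement was adjusted to match typical walking pace:

```python
import math

# Minimal sketch of thumbstick-driven 'glide' locomotion.
# WALK_SPEED is an assumed typical walking pace, not a value from the study.
WALK_SPEED = 1.4  # metres per second

def glide_step(position, yaw_deg, stick_x, stick_y, dt):
    """Advance the user's (x, z) position for one frame of smooth movement.

    stick_x / stick_y are thumbstick deflections in [-1, 1]: stick_y moves
    forward/back and stick_x strafes, both relative to the view yaw.
    """
    yaw = math.radians(yaw_deg)
    fwd = (math.sin(yaw), math.cos(yaw))     # forward vector from view yaw
    right = (math.cos(yaw), -math.sin(yaw))  # perpendicular strafe vector
    dx = (fwd[0] * stick_y + right[0] * stick_x) * WALK_SPEED * dt
    dz = (fwd[1] * stick_y + right[1] * stick_x) * WALK_SPEED * dt
    return (position[0] + dx, position[1] + dz)

# Holding the stick fully forward for one simulated second at yaw 0
# moves the user roughly 1.4 m straight ahead.
pos = (0.0, 0.0)
for _ in range(60):  # 60 frames of ~16.7 ms each
    pos = glide_step(pos, 0.0, 0.0, 1.0, 1 / 60)
```

Note that nothing in this update conveys effort or rhythm to the user, which is consistent with the monotony and misjudged speed that participants reported.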
Another common comment was that the effect of walking up the stairs was strange, as the glide motion tended to stick occasionally and the alternative jump motion was exaggerated; as can be seen in Table 2, five of the fifteen participants reported some degree of motion sickness. These comments, along with conversations during the task and with bystanders and observers, suggest that the technology itself has an influence on the user. The timings show that for both the first and second journeys the VRE times were slightly faster than the Real times (-6% for the first journey and -14% for the second), and the spread of times was also smaller in the VRE. Despite the data suggesting that the task was completed faster in VR than in Reality, and the speed being set to match an average walking pace, a majority of participants (10/15) commented that it seemed slow. This may be a consequence of the apparent lack of activity and effort (i.e. glide rather than 'head-bob' walking), as well as the actual lack of exertion, since the participants were almost stationary. For the use of VR as a wayfinding tool, this suggests that work would need to be done to ensure the relative speed experienced was adjusted, or in some way accommodated, to give the user a realistic sense of journey speed and distance when compared with the same journey in reality.
The increasing drive towards visual fidelity has traditionally been accompanied by a rhetoric of greater realism and 'presence' (Steuer 1992; Slater and Wilbur 1997; Kilteni et al. 2012; McMahan et al. 2012), and it is only recently that there has been greater acknowledgement that the VR experience is inherently different: knowingly designed for a specific purpose (e.g. Gilbert 2016), with different affordances and limitations (Gibson 1979). Our results support the general assertion that although the experience of wayfinding in an immersive Virtual Environment is in many ways similar to an equivalent task in the same Real Environment, there are subtle differences. These include the mediation of technology (Nikolić et al. 2019), the curated environment, the sense of embodiment (Kilteni et al. 2012), and experiential novelty. The extent to which these differences affect the behaviours of participants in research such as this will depend on more work being done in a wider range of settings, with an explicit goal of correlating responses to nominally equivalent virtual and real environments.

CONCLUSIONS and RECOMMENDATIONS
Recent advances in computing and visualisation hardware are making it increasingly easy to produce 3D modelled architectural content and render it in an immersive Virtual Reality Environment (VRE), and this is a trend that shows no signs of abating. The presumption that increasing the visual realism of these environments, and displaying them in a more immersive way, offers insights into equivalent behaviours in the Real World has important implications and needs to be tested and justified. The importance and study of navigating through the built environment has been widely reported for many years, and wayfinding is therefore an ideal candidate with which to test the application of VR as a tool to predict occupant behaviour.
Our study found distinct similarities between the routes taken during a wayfinding task by participants who undertook it in a real building and by those who undertook it in a VRE model of the same building. Furthermore, experiencing the route in either environment (real or virtual) led to similar changes in learnt behaviour. With greater route familiarity, participants chose the shorter route in both cases, and showed less confusion about which direction to take. At face value, this is positive news for those who wish to use VR to help predict behaviour in building use; however, the experience of the building is substantially different between real and virtual environments. There are the more obvious and perhaps expected differences, such as jumping from an upper balcony and surprise caused by the novelty of immersive VR, but also some subtler differences that may be more important, such as reduced head movement, perceptions of speed and distance, and a tendency to seek amusement or alleviate boredom. These differences deserve greater investigation to tease them out in more detail and to establish their true effect on predictions of behaviour.
As an early piece of research into the comparison between real and virtual environments, this project raises as many questions as it answers, and has exposed the need for future research to tackle a number of outstanding issues. Many of these have been flagged during the presentation and discussion of our results, but they fall essentially into three categories:
• Understanding the effect of the technology on participants. In offering a novel experience, immersion into a virtual world, there is still a sense of participants being seduced by, or suspicious of, the technology. This caused a sense of excitement in some and vulnerability in others, which suggests a wider phenomenon that deserves greater attention.
• Understanding nuances of behaviour in comparing VREs with REs. Our observations were focused largely on wayfinding behaviours and the interaction of people with the VRE system. The possible gamut of behaviours goes far beyond this and was too large a topic for us to investigate fully. We have made specific note of head movements, boredom, and pausing, and suggest further research is needed to investigate issues such as the experience of the VRE by people of limited mobility, and the need to correlate VRE participants with the real building's expected users.
• Understanding the limitations of the technology. A constant difficulty in technological research is rapid change, but we question the necessity of aiming for perfect visual reproduction as promised by ever-increasing computing power. Further research could investigate how realism is perceived and which characteristics of realism influence behaviour. These may include geometry, colour, light and shade, detailed objects, the presence and movement of avatar populations, noise, or other sensations such as smell and heat. Another key difference will always be modes of locomotion, and whether we need to recreate the sensation of exertion and distance travelled, or to accept proxy information from gliding or teleporting.
Taken together, our results and these suggestions lead us to question the need to make people feel 'as if they are there' in a Virtual Environment, without really understanding what makes their behaviour comparable to being in a Real Environment. In conclusion, we need to adopt a cautious approach when designing by VR, and recognise that the results of experiments such as ours should complement design decisions, rather than act as their sole justification.