Horizon CDT Research Highlights


Human-Machine Interface design for navigation systems in future highly automated vehicles

  Chloe Jackson (2016 cohort)

Autonomous vehicles are rapidly becoming part of everyday life, whether as cars on our roads, pods transporting passengers around airports or pedestrianised areas of cities, or farm vehicles ploughing fields while the farmer reads the newspaper in the cab [4].

The context of this research is best explained by beginning with the driving task itself. Michon's model [5] divides the driving task into three levels, starting with the control level – is the driver in physical control of the vehicle? Are their hands on the steering wheel, their feet on the pedals, their eyes on the road? The next level is tactical – is the vehicle within the white lines, and is the distance to the vehicle in front a safe braking distance at the current speed? The final level, strategic, is where the navigation task belongs. There has been a great deal of research into the control and tactical levels, but less at the strategic level.

This research will consider the driver’s navigation task within the context of automation. The driver’s navigation task begins with trip planning, before the vehicle sets off on its journey [1]. Once the journey begins, there are five stages:

  1. Preview – the time and distance to the next manoeuvre, and a picture of that manoeuvre
  2. Identify – pinpoint the location of the next manoeuvre within the road environment, identify the direction of travel, and adjust speed and road position
  3. Confirmation – has the correct manoeuvre been identified, and have there been any navigational errors?
  4. Confidence – has the right route been taken? It is important here to be aware that false confirmation is a common problem.
  5. Orientation – the driver gains awareness of their current location in relation to their surroundings

In future highly automated vehicles, much of the driver’s navigation task will be handled by the vehicle, potentially including the trip planning stage. It is possible that a ‘driver’ could enter a vehicle and ask ‘Take me to the nearest hospital/hotel/restaurant’, treating the vehicle more like a taxi than a vehicle they would drive.

A key area of interest is what happens if the vehicle, for some reason, is no longer able to navigate. This could happen for various reasons: the vehicle may be leaving its Operational Design Domain (ODD), it may have lost signal, or its navigation system may have failed. At this point, the ‘driver’ will be expected to take back control of the navigation task. This project will examine what that handover might look like.

Camilla Grane [2] states that ‘technology is always evolving, and there have been several occasions throughout history when advances have changed human behaviour dramatically’. One of these occasions may well be in progress with the development of autonomous vehicles. An area to investigate is what effect reliance (or over-reliance) on technology might have on people’s ability to locate themselves, how quickly any such effect may appear, and what strategies people employ. People’s spatial cognition skills and strategies differ depending on the scale of space [3]. The four scales of space, as defined by Hegarty et al. [3], are:

• Figural space – ‘small in scale relative to the body, external to the individual, includes both the flat pictorial space and the space of small manipulable objects. Most existing psychometric tests of spatial ability are at this scale of space.’

• Vista space – ‘projectively as large or larger than the body but can be visually apprehended from a single place without appreciable locomotion. It is the space of single rooms, town squares, small valleys, and horizons’.

• Environmental space – ‘large in scale relative to the body and ‘‘contains’’ the individual. It includes the spaces of buildings, neighborhoods, and cities and typically requires locomotion for its apprehension.’

• Gigantic space – ‘(e.g., states, countries, and planets) is larger again and cannot be apprehended from direct locomotion experience. This scale of space is typically apprehended by viewing maps, representations that are themselves figural spaces.’

This research will look for ways to test people’s navigation skills at both the figural and environmental scales. These two scales are of particular interest because people navigate through environmental space using a combination of locomotion and their senses to determine where they are, and they will often use examples of figural-scale space, such as maps, to assist them in this task. A mixed-methods approach will be taken, beginning with a qualitative study whose results will help to inform the design of a method for testing the effects of technology use (among other factors) on people’s ability to navigate. Both studies will be piloted, and the design process may be iterative.

In addition, a state-of-the-art review is underway, including a long-term series of interviews with academics and professionals in relevant fields, with the aim that the testing method and its results may help to inform the design of the navigation systems of the future.


  1. Gary E. Burnett. 1998. ‘Turn right at the King’s Head’: drivers’ requirements for route guidance information. Loughborough University.
  2. Camilla Grane. 2018. Assessment selection in human-automation interaction studies: The Failure-GAM2E and review of assessment methods for highly automated driving. Applied Ergonomics 66: 182–192. https://doi.org/10.1016/j.apergo.2017.08.010
  3. M. Hegarty, A. E. Richardson, D. R. Montello, K. Lovelace, and I. Subbiah. 2002. Development of a self-report measure of environmental spatial ability. Intelligence 30(5): 425–447.
  4. Robert Lea. 2016. Tractors to plough a lonely furrow. The Times. Retrieved 1 March 2017 from http://www.thetimes.co.uk/my-articles/tractors-to-plough-a-lonely-furrow-ltqdxdtdj
  5. John A. Michon. 1985. A critical view of driver behaviour models: what do we know, what should we do? In Human Behavior and Traffic Safety, Leonard Evans and Richard C. Schwing (eds.). Plenum Press, New York, 485–520.

This author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and Ordnance Survey.