Research Perspectives - Tools for Visualisation of Portfolios

EPSRC Database


Source: RCUK EPSRC Data

EP/K011766/1 - Testing view-based and 3D models of human navigation and spatial perception

[Image: Research Perspectives grant details from the EPSRC portfolio - http://www.researchperspectives.org/gow.grants/grant_EPK0117661.png]

Dr A Glennerster EP/K011766/1 - Testing view-based and 3D models of human navigation and spatial perception

Principal Investigator - School of Psychology and Clinical Language Sciences, University of Reading

Scheme

Standard Research

Research Areas

Image and Vision Computing

Vision, Hearing and Other Senses

Start Date

02/2013

End Date

07/2016

Value

£419,878

Similar Grants

Automatically generated list of similar EPSRC grants

Similar Topics

Topics similar to the description of this grant

Grant Description

Summary and Description of the grant

The way that animals use visual information to move around and interact with objects involves a highly complex interaction between visual processing, neural representation and motor control. Understanding the mechanisms involved is of interest not only to neuroscientists but also to engineers who must solve similar problems when designing control systems for autonomous mobile robots and other visually guided devices.

Traditionally, neuroscientists have assumed that the representation delivered by the visual system and used by the motor system is something like a 3D model of the outside world, even if the reconstruction is a distorted version of reality. Recently, evidence against such a hypothesis has been mounting and an alternative type of theory has emerged. 'View-based' models propose that the brain stores and organises a large number of sensory contexts for potential actions. Instead of storing the 3D coordinates of objects, the brain creates a visual representation of a scene using 2D image parameters, such as widths or angles, and information about the way that these change as the observer moves. This project examines the human representation of three-dimensional scenes to help distinguish between these two opposing hypotheses.
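To make the contrast concrete, here is a minimal sketch in Python (illustrative only; the names and numbers are invented and not taken from the project). It stores the same two landmarks first as explicit 3D coordinates and then as a view-based record built only from 2D image parameters, here the visual angle between the landmarks and how that angle changes as the observer takes a small step. The 3D coordinates are used only to simulate what the observer would measure.

```python
import numpy as np

# Hypothesis 1: a world-centred 3D reconstruction.
# The scene is stored as explicit 3D coordinates of objects (metres).
scene_3d = {
    "door":  np.array([2.0, 0.0, 5.0]),
    "chair": np.array([-1.0, 0.0, 3.0]),
}

# Hypothesis 2: a view-based representation.
# No 3D coordinates are stored: the scene is described by 2D image
# parameters (here, the visual angle between two landmarks) plus how
# that parameter changes as the observer moves.
def visual_angle(observer, a, b):
    """Angle (radians) subtended at the observer by points a and b."""
    va, vb = a - observer, b - observer
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.arccos(np.clip(cos, -1.0, 1.0))

observer = np.array([0.0, 0.0, 0.0])
step = np.array([0.1, 0.0, 0.0])          # a small sideways step
angle_here = visual_angle(observer, scene_3d["door"], scene_3d["chair"])
angle_there = visual_angle(observer + step, scene_3d["door"], scene_3d["chair"])

view_based_record = {
    "landmarks": ("door", "chair"),
    "angle": angle_here,                                   # 2D image parameter
    "angle_change_per_metre": (angle_there - angle_here) / 0.1,
}
print(view_based_record)
```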

To do this, we will use immersive virtual reality with freely-moving observers to test the predictions of the 3D reconstruction and 'view-based' models. Head-tracked virtual reality allows us to control the scene the observer sees and to track their movements accurately. Certain spatial abilities have been taken as evidence that the observer must create a 3D reconstruction of the scene in the brain. For example, people are able to view a scene, remember where objects are, walk to a new location and then point back to one of the objects they had seen originally even if it is no longer visible (i.e. people can update the visual direction of objects as they move). However, this capacity does not necessarily require that the brain generate a 3D model of the scene and, to demonstrate this, we will extend view-based models to include this pointing task and others like it. We will then test the predictions of both view-based and 3D reconstruction models against the performance of human participants carrying out the same tasks.
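The following sketch (a generic illustration under assumed coordinates, not the project's actual model) shows why the pointing task is usually taken as evidence for 3D reconstruction: once object coordinates are stored in a world-centred frame, the predicted pointing direction from any new location is simply the vector from the observer to the remembered object, whether or not it is still visible.

```python
import numpy as np

def predicted_pointing_direction(remembered_object, observer_position):
    """Under a 3D-reconstruction account, the pointing direction is the
    unit vector from the observer's current position to the stored
    world-centred coordinates of the remembered object."""
    v = remembered_object - observer_position
    return v / np.linalg.norm(v)

remembered_target = np.array([2.0, 0.0, 5.0])   # stored 3D coordinates (m)
new_position = np.array([3.0, 0.0, 1.0])        # location reached after walking

print(predicted_pointing_direction(remembered_target, new_position))
# unit vector toward the unseen target, regardless of whether it is visible
```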

As well as predicting the pattern of errors in simple navigation and pointing tasks, we will also measure the effect of two types of stimulus change. 3D reconstruction uses 'corresponding points', which are points in an image that arise, for example, from the same physical object (or part of an object) as a camera or person moves around it. Using a novel stimulus, we will keep all of these 'corresponding points' in a scene constant while, at the same time, changing the scene so that the images alter radically when the observer moves. This manipulation should have a dramatic effect on a view-based scheme but no effect at all on any system based only on corresponding points.
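The idea of 'corresponding points' can be illustrated with a minimal pinhole-camera sketch (invented numbers, assuming a simple projection model with no camera rotation): the same physical point projects to different image positions as the observer moves, and it is this pairing of image points across views that a reconstruction-only system relies on.

```python
import numpy as np

def project(point_3d, camera_position, focal_length=1.0):
    """Minimal pinhole projection: the camera looks along +z with no rotation."""
    p = point_3d - camera_position
    return focal_length * np.array([p[0] / p[2], p[1] / p[2]])

table_corner = np.array([1.0, 0.5, 4.0])        # one physical point (m)

image_a = project(table_corner, np.array([0.0, 0.0, 0.0]))
image_b = project(table_corner, np.array([0.5, 0.0, 0.0]))  # after a sideways step

# image_a and image_b are 'corresponding points': different image locations
# in the two views that arise from the same physical point in the scene.
print(image_a, image_b)
```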

Overall, we will have a tight coupling between experimental observations and quantitative predictions of performance under two types of model. This will allow us to determine which of the two models most accurately reflects human behaviour in a 3D environment. One potential outcome of the project is that view-based models will provide a convincing account of performance in tasks that have previously been considered to require 3D reconstruction, opening up the possibility that a wide range of tasks can be explained within a view-based framework.
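One way such a coupling between observations and quantitative predictions might be realised is a straightforward goodness-of-fit comparison. The sketch below is a generic illustration with hypothetical numbers, not the project's actual analysis: each model is scored by how closely its predicted pointing errors match those produced by participants.

```python
import numpy as np

def fit_score(predicted_errors, observed_errors):
    """Root-mean-square discrepancy between a model's predicted pointing
    errors and the errors actually produced by participants."""
    predicted = np.asarray(predicted_errors, dtype=float)
    observed = np.asarray(observed_errors, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical pointing errors (degrees) across four conditions.
human_errors          = [4.0, 6.5, 9.0, 12.0]
view_based_model      = [4.5, 6.0, 9.5, 11.0]
reconstruction_model  = [5.0, 5.0, 5.0, 5.0]

scores = {
    "view-based": fit_score(view_based_model, human_errors),
    "3D reconstruction": fit_score(reconstruction_model, human_errors),
}
print(min(scores, key=scores.get))   # the model whose predictions lie closer to the data
```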

Structured Data / Microdata


Grant Event Details:
Name: Testing view-based and 3D models of human navigation and spatial perception - EP/K011766/1
Start Date: 2013-02-01T00:00:00+00:00
End Date: 2016-07-31T00:00:00+00:00

Organization: University of Reading

Description: The way that animals use visual information to move around and interact with objects involves a highly complex interaction between visual processing, neural representation and motor control. Understanding the mechanisms involved is of interest not only to ...