There are large individual differences in navigation ability; some people never seem to get lost, while others become quite disoriented after traveling only a few meters in an unfamiliar environment. Poor navigation skills have an enormous impact on the richness and quality of life. People who easily become disoriented are less likely to explore unfamiliar environments, and when they must do so, they can experience helplessness, anxiety, frustration, anger, and even depression. The factors that distinguish good navigators from poor ones are not well understood, however, and this places perception and action research at the forefront of a vital research program.
My approach to studying the perceptions and actions involved in navigation focuses on understanding which aspects of navigation people generally accomplish successfully and then investigating the ways in which this natural ability breaks down. Much of my work involves asking people to navigate to nearby locations by walking. This is a powerful technique, as it provides strong muscular self-motion signals and allows one to study distances well beyond arm's reach. Another advantage is that people can walk to targets up to 20 m away with astonishing accuracy and precision, even if they close their eyes before beginning to walk. This allows a very tight experimental focus on non-visual self-motion information and also provides critical insights into where the target's initial location is perceived to be. Vision is frequently unavailable for controlling walking because of occlusion or darkness, so this task is by no means esoteric. The fact that people can do it so well suggests that walking without vision recruits fundamental brain mechanisms, calibrated and fine-tuned over the long history of evolution of our species. A host of complex computations must take place in the brain for us to accurately reach a destination without vision. The questions I address in my research are aimed at characterizing these computations. How does the brain use vision to determine where the destination is in the first place? How does it keep track of where our body is while we walk? How does higher-level cognitive information affect our ability to keep track of our location? And how should we, as scientists, interpret the results of action-based indications of location and other spatial properties of the world?
I attack these issues from two separate but interrelated fronts. One is concerned primarily with investigating the perceptual and cognitive processes underlying locomotion and navigation in healthy (neurologically intact) people. The other seeks to gain insight into the neural underpinnings of these psychological processes by studying the behavior of brain-injured individuals.
How is "perceived distance" represented in the brain?
My work is unique in that I have pioneered methods to test absolute distance perception in brain-injured patients using distances beyond arm's reach. This research is urgently needed, because virtually no studies of this kind have been performed previously. I have developed a comprehensive battery of spatial tests, involving both action-based and visually based responses. This work shows, quite unexpectedly, that egocentric distance perception is extremely robust to brain injury. To date, I have tested patients with unilateral lesions in the posterior parietal lobe as well as patients with medial temporal lobe injuries. These two regions are thought to be highly specialized for spatial processing. Despite strong reasons to expect that these patients would exhibit at least some impairment in distance perception, both groups performed well within normal limits. I have also tested individual patients with damage to portions of the frontal lobe, basal ganglia, lateral temporal lobe, and occipital lobe, and each of these patients also performed well. These surprising findings provide badly needed basic research concerning the effects of brain injury on absolute distance perception, and they suggest that distance perception is subserved by a distributed network of brain structures.
How does the brain monitor our changing location as we walk?
Although much progress is being made in understanding non-visual self-motion sensing in animals, my research program is among the first attempts to investigate the neural mechanisms of this process in humans as they walk. While at George Washington University, I have had the opportunity to study a unique group of patients who have undergone a partial medial temporal lobectomy as therapy for intractable epilepsy. This region is known to be crucial for normal self-motion sensing in rodents, but the homology of function between rat and human medial temporal lobe structures remains very poorly understood. There is thus a pressing need to validate the relevance of animal navigation studies for investigating the neural underpinnings of human navigation. Interestingly, I find that patients with right hemisphere medial temporal lobectomies, but not left, do show deficits in self-motion sensing. Specifically, they systematically underestimate the extent of their self-motion. This indicates that one of the specialized functions of the right medial temporal lobe is keeping track of one's location while walking; it also indicates that there may be more lateralization of function than is present in the rat.
The medial temporal lobes are impacted by a number of biological factors, including epilepsy, Alzheimer's Dementia, depression, and even normal aging. Thus, there is a large population of people who either have or will have abnormalities and/or damage to the medial temporal lobes. By enhancing understanding of the effect of damage to this area, my work stands to pave the way for designing effective therapies for these at-risk populations.
How does cognitive information affect our ability to keep track of our location?
The majority of research concerning how humans navigate in their local environment has focused on the processes of monitoring and integrating various forms of sensory self-motion signals. My recent research is exploring new ground by investigating the higher-level cognitive contributions to this process. This work shows that if we have prior knowledge about what our upcoming walking path will be, there are powerful benefits for our ability to keep track of where we are while navigating without vision. It is as if the prior knowledge allows us to generate expectations about what self-motion signals are likely, and this improves our ability to monitor our location. In this view, as we walk, we compare the incoming self-motion signals with the expected signals, and monitoring this "error signal" significantly improves our ability to estimate our current position. This result has implications for improving models of human navigation and also for improving orientation skills in poor navigators.
How can we tell if an action-based response reflects what a person perceives?
After seeing an object 10 feet away, a person might verbally estimate its distance as 8 feet, and yet walk 12 feet when walking towards it without vision. Does either response reflect what the person perceives? How can we reconcile the two? Questions such as these lie at the core of perception and action research and have many important implications. Space perception theorists have tended to interpret such discrepancies in terms of differences in cognitive processing: for example, one's memory of the "foot" unit may differ from that of the length of a pace, but both responses are assumed to be controlled by a common spatial representation ("perceived distance"). A predominant view in cognitive neuroscience, however, is that such differences reflect multiple spatial representations that are subserved by anatomically separate neural pathways. Although many behavioral results are interpreted as favoring the predominant view, many are in fact consistent with either interpretation. I have reasoned that manipulating distance cue availability provides a means of testing between the two alternatives. Using this method, I found that although verbal and blindfolded walking responses do yield somewhat different numerical values, there is no behavioral dissociation: the two remain tightly linked across a wide variety of viewing conditions. When people make mistakes in their verbal responses, they make the same kinds of mistakes when walking. Studies of this type provide strong evidence that both response types are based on a common perceptual representation, and this representation bears all the hallmarks of conscious perception. This does not rule out the existence of a neural visual stream used to control actions, but it does indicate that blindfolded walking provides a powerful clue about where people perceive the destination's initial location to be.
This, in turn, opens the door for using the walking response in a variety of visual space perception experiments, in both healthy and brain-injured individuals.