UC Berkeley-led research team finds brain's perception depends upon the source of cues it receives
21 November 2002
By Sarah Yang, Media Relations
Berkeley - When the human brain is presented with conflicting information
about an object from different senses, it finds a remarkably
efficient way to sort out the discrepancies, according to
new research conducted at the University of California, Berkeley.
The researchers
found that when sensory cues from the hands and eyes differ
from one another, the brain effectively splits the difference
to produce a single mental image. The researchers describe
the middle ground as a "weighted average" because
in any given individual, one sense may have more influence
than the other. When the discrepancy is too large, however,
the brain reverts to information from a single cue - from
the eyes, for instance - to make a judgment about what is
true.
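To make the idea concrete, the short Python sketch below is purely illustrative and not the model reported in the paper: it combines a visual and a haptic estimate as a reliability-weighted average, with weights assumed to be inversely proportional to each cue's variance, and falls back to the more reliable single cue when the discrepancy grows too large.

    def combine_cues(visual, haptic, visual_var, haptic_var, breakdown=None):
        """Reliability-weighted average of a visual and a haptic estimate.

        Weights are assumed inversely proportional to each cue's variance
        (an assumption for illustration, not the paper's exact model).
        If the two estimates differ by more than `breakdown`, return the
        more reliable single-cue estimate instead of averaging.
        """
        w_visual = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / haptic_var)
        w_haptic = 1.0 - w_visual
        if breakdown is not None and abs(visual - haptic) > breakdown:
            return visual if visual_var < haptic_var else haptic
        return w_visual * visual + w_haptic * haptic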
The
findings, reported Friday, Nov. 22, in the journal Science,
could spur advances in virtual reality programs and remote
surgery applications, which rely upon accurately mimicking
visual and haptic (touch) cues.
[Photo and schematic: Researcher Marc Ernst uses a force-feedback device that provides conflicting visual and haptic cues; below it, a schematic of the device. Photo and drawing courtesy of Martin Banks.]
In a series
of experiments, the researchers divided 12 subjects into two
groups. One group received two different types of visual cues,
while the other received visual and haptic cues. The visual-haptic
group assessed three horizontal bars. Two appeared equally
thick to the eye and hand in all instances, while the third
bar alternately appeared thicker or thinner to the eye or
hand. The group with two visual inputs assessed surface orientation,
with two surfaces appearing equally slanted according to two
visual cues, while a third appeared more slanted according
to one cue and less slanted according to the other.
To manipulate
the sensory cues, the researchers used force-feedback technology
to simulate touch and shutter glasses to simulate 3-D visual
stimuli. Participants in the visual-haptic group inserted
their thumb and forefinger into the device to "feel"
an object projected onto a computer monitor. Through the device, they could both see and feel the virtual object.
"We
found that when subjects grasped an object that felt 54 millimeters
thick but looked as if it were 56 millimeters thick, their
brains interpreted the object as being somewhere in between,"
said James M. Hillis, lead author of the study and a former
graduate student in vision science at UC Berkeley. Hillis,
now a post-doctoral researcher in psychology at the University
of Pennsylvania, worked on the research with Martin S. Banks,
professor of optometry and psychology at UC Berkeley.
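Plugging the sizes from Hillis's example into the illustrative sketch above, with hypothetical variances that give vision slightly more weight than touch, yields a combined estimate between the two single-cue values:

    # Hypothetical reliabilities; the variances are assumptions, not measured values.
    size = combine_cues(visual=56.0, haptic=54.0, visual_var=1.0, haptic_var=1.5)
    print(round(size, 1))  # 55.2 - a "weighted average" closer to the visual size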
"If
the brain is taking in different sensory cues and combining
them to create one representation, then there could be an
infinite number of combinations that the brain is perceiving
to be the same," said Banks. "The brain perceives
a block to be three inches tall, but was it because the eyes
saw something that looked four inches tall while the hands
felt something to be two inches tall? Or, was it really simply
three inches tall? We wanted to know how much we could push
that."
What the
researchers found was that pushing the discrepancies too far
resulted in the brain defaulting to signals from either the
hands or eyes, depending upon which one seemed more accurate.
That means the brain maintains three separate representations
of the object's property. One representation comes from the
combined visual and haptic cues, the second from just the
visual cues, and the third from the haptic cues.
What surprised
the researchers was that this rule did not hold true when
the brain received discrepant cues from the same sense. In
tests where participants used only their eyes, researchers
presented conflicting visual cues regarding the degree of
slant in surfaces appearing before them. One cue - the binocular
disparity - made the surface appear to slant in one direction,
while the other cue - the texture gradient - indicated a different
slant. The participants regularly perceived the "weighted
average" of the visual signals no matter how far the
two cues differed.
"If
the discrepant cues were both visual, the brain essentially
threw the two individual estimates away, keeping only the
single representation of the object's property," said
Hillis.
Why would
the brain behave differently when receiving information from
two senses instead of one? "We rely upon our senses to
tell us about the surrounding environment, including an object's
size, shape and location," Hillis explained. "But
sensory measurements are subject to error, and frequently
one sensory measurement will differ from another."
"There
are many instances where a person will be looking at one thing
and touching another, so it makes sense for the brain to keep
the information from those two sensory cues separate,"
Banks added. "Because people can't look at two different
objects at the same time, the brain can more safely discard
information from individual visual cues after they've been
combined into one representation. The brain is efficient in
that it doesn't waste energy maintaining information that
it will not likely need in real life."
Banks
said that understanding how the brain perceives various sensory
inputs is vital in the development of virtual reality applications,
such as remote surgery technology, in which what the eyes see
and what the hands feel must accurately reflect reality.
"Imagine
a future where the surgeon is in San Francisco, and the patient
is in Nevada," said Banks. "The surgeon is looking
at a monitor to manipulate a robot arm with a surgical instrument
cutting into the patient. The surgeon feels the contact with
the patient with a force-feedback device like the one we used
in our experiment. Knowing how the brain combines visual and
haptic stimuli is the first step in helping researchers develop
better programs that provide accurate touch feedback to the
physician so he or she can actually feel what's going on inside
the patient."
Other
co-authors of the paper are Marc O. Ernst, research scientist
at the Max Planck Institute for Biological Cybernetics in
Germany, and Michael S. Landy, professor of psychology at
New York University. Ernst conducted the visual-haptic experiments
for this study while he was a post-doctoral researcher in
vision science at UC Berkeley. The research was supported
by the Air Force Office of Scientific Research, the National
Institutes of Health, the Max Planck Society and Silicon Graphics.
The
vision research contributes to UC Berkeley's Health Sciences
Initiative, a major push to find innovative solutions to today's
health problems through interdisciplinary collaboration.
###