KEYWORDS: Visualization, Haptic technology, Information visualization, Spatial frequencies, Error analysis, Statistical analysis, 3D acquisition, Sensors, 3D printing, Information fusion
Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the
proximal perception of objects. The interaction of these systems has been the subject of some debate over the
years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their
psychophysical relationship. To better understand the nature of this interaction we have performed a variety of
experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex,
natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured
using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and
haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using
uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was
similar, and unimodal presentation fared better than crossmodal presentation. Also, the spatial frequency of
object features affected performance differentially across the range used in this experiment. The sculpted objects
were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli.
Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual
input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and
comparison experiments yielded similar findings. There was a marked performance difference between the visual
and haptic systems, and these differences were systematically distributed along the range of feature detail. The
existence of non-universal (i.e., modality-specific) representations explains the poor crossmodal performance. Our
current findings suggest that haptic and visual information is either integrated into a multimodal representation, or
that each modality maintains an independent representation between which a reasonably efficient translation is
possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are
effective when comparing objects that differ by a large amount.
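The idea of statistically specifying noisy target shapes in the Fourier domain can be sketched as follows. This is an illustrative sketch only, not the authors' stimulus pipeline: it assumes a band-limited radial perturbation of a unit sphere, and the function name, grid parameterization, and passband values (`f_lo`, `f_hi`) are all assumptions introduced here.

```python
import numpy as np

def bandlimited_surface(n=128, f_lo=2, f_hi=8, amp=0.15, seed=0):
    """Perturb a unit sphere's radius with noise whose 2D Fourier
    spectrum is confined to the band [f_lo, f_hi] (cycles across the
    parameter grid). Hypothetical illustration of Fourier-domain
    shape specification; parameters are assumptions."""
    rng = np.random.default_rng(seed)
    # Flat-amplitude spectrum with random phases inside the passband.
    spec = np.zeros((n, n), dtype=complex)
    fy = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies
    fx = np.fft.fftfreq(n, d=1.0 / n)
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    band = (r >= f_lo) & (r <= f_hi)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    spec[band] = np.exp(1j * phases[band])
    # Back to the spatial domain; keep the real part as the noise field.
    noise = np.fft.ifft2(spec).real
    noise *= amp / np.abs(noise).max()         # peak perturbation = amp
    # Map the (theta, phi) grid of perturbed radii to Cartesian vertices.
    theta = np.linspace(0.0, np.pi, n)         # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n)     # azimuth
    radius = 1.0 + noise
    x = radius * np.sin(theta)[:, None] * np.cos(phi)[None, :]
    y = radius * np.sin(theta)[:, None] * np.sin(phi)[None, :]
    z = radius * np.cos(theta)[:, None] * np.ones(n)[None, :]
    return x, y, z, radius
```

Varying the passband shifts the feature scale: a low `f_hi` yields smooth, broad bumps, while raising `f_lo` and `f_hi` produces the finer surface detail that, per the findings above, favors visual over haptic processing.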