July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   July 2013
iMap Motion: Validating a Novel Method for Statistical Fixation Mapping of Temporal Eye Movement Data
Author Affiliations
  • Yingdi Liu
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Junpeng Lao
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Sébastien Miellet
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Gustav Kuhn
    Department of Psychology, Goldsmiths College, London, United Kingdom
  • Roberto Caldara
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
Journal of Vision July 2013, Vol.13, 796. doi:10.1167/13.9.796
Yingdi Liu, Junpeng Lao, Sébastien Miellet, Gustav Kuhn, Roberto Caldara; iMap Motion: Validating a Novel Method for Statistical Fixation Mapping of Temporal Eye Movement Data. Journal of Vision 2013;13(9):796. doi: 10.1167/13.9.796.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The visual system is equipped with sophisticated machinery for adapting effectively to the world. Where, when, and how human eyes move to gather information from the visual environment is a question that has fascinated scientists for more than a century. However, research on visual cognition and eye movements has relied primarily on static images, which are fairly impoverished representations of real-life situations. For eye movement research at least, this methodological limitation might arise from the difficulty of using videos as visual inputs (since they generate a large quantity of data), but also from the absence of computational tools for performing adequate statistical analyses on 3D datasets (i.e., 2D images over time). To overcome this limitation, we adapted iMap (Caldara and Miellet, 2011), a robust data-driven method that generates statistical fixation maps on 2D images. We developed a novel data-driven method that requires no a priori segmentation of video frames into Regions of Interest and isolates statistically significant fixation contrasts over time: iMap Motion. To validate the technique, we recorded eye movement data in two well-established paradigms involving the viewing of dynamic magic tricks (Kuhn and Tatler, 2005). After extracting fixations, we smoothed the data by convolving them with Gaussian kernels, generating three-dimensional fixation maps (one 2D map per frame). We then applied Random Field Theory to correct for multiple comparisons. Finally, we assessed significant fixation differences in this 3D search space by tracking, over time, the peak intensity of the statistical contrast of interest: success versus failure in detecting the magic tricks. iMap Motion automatically identified the maximum fixation contrasts, which were related to fixations misdirected away from the location of the tricks. These observations establish the method as an effective tool for isolating meaningful fixation differences in temporal eye movement data.
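The core of the pipeline described above (per-frame Gaussian smoothing of fixation maps, then tracking the peak of the between-condition contrast through the resulting 3D search space) can be illustrated with a minimal sketch. Everything in this sketch is hypothetical: the fixation tuple format, the function names, and the parameter values are illustrative choices, not the original iMap Motion implementation, and the Random Field Theory thresholding step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_maps(fixations, n_frames, height, width, sigma=10.0):
    """Build a smoothed 3D fixation map of shape (frames, height, width).

    `fixations` is an iterable of (frame, y, x, duration) tuples -- a
    hypothetical input layout; the abstract does not specify the actual
    data format used by iMap Motion.
    """
    maps = np.zeros((n_frames, height, width))
    for frame, y, x, duration in fixations:
        maps[frame, y, x] += duration  # accumulate fixation durations
    # Smooth each frame with a 2D Gaussian kernel (sigma in pixels).
    for f in range(n_frames):
        maps[f] = gaussian_filter(maps[f], sigma)
    return maps

def peak_contrast(maps_a, maps_b):
    """Per-frame peak of the fixation-difference map between two
    conditions (e.g. trick detected vs. missed): returns the peak
    intensity and its (y, x) location for every frame."""
    diff = maps_a - maps_b
    flat = diff.reshape(diff.shape[0], -1)
    peaks = flat.max(axis=1)
    locs = [np.unravel_index(i, diff.shape[1:]) for i in flat.argmax(axis=1)]
    return peaks, locs
```

In the full method, the difference maps would additionally be converted to a statistical map and thresholded (the abstract uses Random Field Theory for the multiple-comparisons correction) before the peak is tracked over frames.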

Meeting abstract presented at VSS 2013
