August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Cross-modal feature-based attention facilitates spatial transfer of perceptual learning in motion-domain figure-ground segregation
Author Affiliations & Notes
  • Catherine A. Fromm
    Rochester Institute of Technology Center for Imaging Science
  • Krystel R. Huxlin
    Flaum Eye Institute, University of Rochester Medical Center
    University of Rochester Center for Visual Science
  • Gabriel J. Diaz
    Rochester Institute of Technology Center for Imaging Science
    University of Rochester Center for Visual Science
  • Footnotes
    Acknowledgements  NIH 1R15EY031090
Journal of Vision August 2023, Vol.23, 5914. doi:https://doi.org/10.1167/jov.23.9.5914
© ARVO (1962-2015); The Authors (2016-present)
Abstract

This study tested the role of a cross-modal, feature-based attention (FBA) cue in perceptual learning and its spatial transfer. The trained task was figure-ground segregation in the motion domain. The experiment comprised a pre-test, ten days of training, and a post-test. Twelve visually intact participants were immersed in a virtual environment delivered via an HTC Vive Pro Eye head-mounted display. Participants identified the location and motion direction (MD) of a peripheral 10° aperture of semi-coherently moving dots embedded at randomized locations within whole-field random dot motion. The aperture contained both randomly moving dots and signal dots with global leftward or rightward motion. To manipulate motion coherence, a 3-up-1-down staircase adjusted the direction range of the signal dots in response to segregation judgments. The dot stimulus was preceded by a 1-s, white-noise, spatialized auditory cue emitted either from the fixation point (neutral group) or from an emitter moving in the direction of the signal dots at 80°/s along a horizontal arc centered on the fixation point (FBA-cue group). Visual feedback indicated the selected and true aperture locations and the correctness of the MD judgment. Both MD discrimination within the aperture and segregation (localization) ability were measured as direction range thresholds (DRTs). At trained locations, MD DRT improved similarly in the FBA and neutral groups; learning was retained when the pre-cue was removed (ΔDRT from pre-test to post-test: 61±10° (SD) FBA, 74±10° neutral) and transferred to untrained locations (41±10° FBA, 45±10° neutral). DRT for localization also improved in both groups when pre-cues were removed (49±10° FBA, 44±10° neutral), but only the FBA group showed full transfer of learning to untrained locations in the segregation task (32±10° FBA, 23±10° neutral). In summary, transfer occurred for both MD and segregation tasks, but segregation transfer required the presence of the cross-modal FBA cue during training.
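The 3-up-1-down staircase described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the starting direction range, step size, and bounds are assumed values not specified in the abstract. Here "up" widens the direction range (making segregation harder, since signal-dot directions become more variable) after three consecutive correct responses, and each incorrect response narrows it.

```python
# Hedged sketch of a 3-up-1-down staircase on direction range.
# start_range, step, and bounds are illustrative assumptions.
def make_staircase(start_range=20.0, step=10.0, min_range=0.0, max_range=360.0):
    """Track the direction range (deg) of the signal dots.

    Wider range = more variable dot directions = harder segregation.
    Three consecutive correct responses widen the range by one step
    ('up'); each incorrect response narrows it by one step ('down').
    """
    state = {"range": start_range, "n_correct": 0}

    def update(correct):
        if correct:
            state["n_correct"] += 1
            if state["n_correct"] == 3:
                # Three in a row: make the task harder, reset the counter.
                state["range"] = min(max_range, state["range"] + step)
                state["n_correct"] = 0
        else:
            # Any error: make the task easier, reset the counter.
            state["n_correct"] = 0
            state["range"] = max(min_range, state["range"] - step)
        return state["range"]

    return update
```

The direction range at which the staircase oscillates over trials serves as an estimate of the direction range threshold (DRT) reported in the abstract.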
