Abstract
This study investigates how attention samples information from dynamic stimuli. We measured participants' precision in reporting the value of a continuously varying feature (color or orientation) at a specific moment in time, marked by a visual cue (a white circle around the target, presented for 54 ms). Participants first completed a perceptual-matching block and then a working-memory matching block; these familiarized them with the continuous report scale and provided a baseline measure of working memory precision. Three test blocks, counterbalanced across participants, followed: six shapes, arranged in a circle around fixation, rotated or changed color through a circular feature space at a rate of 4 degrees per 27 ms. On each trial a cue was presented around one of the shapes, and participants attempted to reconstruct the value of one or two features at the moment of the cue using continuous report with a mouse and visual feedback. In the orientation-only and color-only blocks one feature was dynamic while the other was held constant and ignored. In the both-feature block, both color and orientation were dynamic and participants reported both. The mean shift and variance of the report errors were analyzed using a mixture model (Zhang & Luck, 2008). In both single- and double-feature reports, participants on average reported the orientation and color values of the cued shape as they appeared 189 ms and 256 ms after the cue, respectively. Surprisingly, reporting two features instead of one added little variance or delay to the report errors. Furthermore, given assumptions about the time required to access features, our results suggest independent access to the two features; these assumptions are formalized in three statistical models comparing simultaneous, sequential, and independent access to two features of a single object.
Meeting abstract presented at VSS 2016
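The mixture-model analysis mentioned above can be sketched as fitting report errors with a mixture of a von Mises distribution (target reports, whose circular mean captures the temporal shift and whose concentration captures precision) and a uniform distribution (guesses), in the style of Zhang & Luck (2008). The sketch below is illustrative only, under assumed parameter values and a simple grid-search fit; the simulated data, grid ranges, and fitting procedure are our own assumptions, not the authors' pipeline.

```python
import numpy as np

def vm_pdf(x, mu, kappa):
    # von Mises density on (-pi, pi]; np.i0 is the modified Bessel function I0
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * np.i0(kappa))

def neg_log_lik(errors, mu, kappa, g):
    # Zhang & Luck-style mixture: (1 - g) target reports + g uniform guesses
    p = (1 - g) * vm_pdf(errors, mu, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(p))

def fit_mixture(errors):
    # coarse grid search for illustration; a real analysis would use MLE
    # via numerical optimization (e.g. scipy.optimize.minimize)
    best = None
    for mu in np.linspace(-np.pi / 2, np.pi / 2, 31):
        for kappa in np.linspace(1.0, 20.0, 30):
            for g in np.linspace(0.0, 0.5, 11):
                nll = neg_log_lik(errors, mu, kappa, g)
                if best is None or nll < best[0]:
                    best = (nll, mu, kappa, g)
    return best[1], best[2], best[3]  # mu (mean shift), kappa, guess rate

# Simulated report errors (assumed values): a 0.3 rad mean shift,
# kappa = 8 precision, and a 10% guess rate.
rng = np.random.default_rng(0)
n = 400
is_guess = rng.random(n) < 0.1
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.3, 8.0, n))

mu_hat, kappa_hat, g_hat = fit_mixture(errors)
```

Here the recovered mean shift `mu_hat` plays the role of the systematic report lag (which, divided by the feature's rate of change, gives the temporal delay such as the 189 ms and 256 ms figures), while `kappa_hat` indexes report precision.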