Abstract
The generalized flash-lag effect (FLE) is a visual phenomenon in which an abruptly flashed stimulus is typically perceived to lag behind a continuously changing visual feature by several tens of milliseconds. To test whether three previously proposed hypotheses (temporal extrapolation, latency difference, and time integration) hold true for situations other than the motion FLE, I measured the time window for binding the content of three different visual attributes (bar orientation, head orientation of 3D face images, and face identity of morphing face images) to a visual flash or a pulse sound (auditory flash), using a reverse correlation technique. Participants (n = 10) viewed a sequential presentation of randomly chosen images and reported the content at the moment a visual or auditory flash appeared (2AFC task, button press). Each participant completed 400 trials per condition. For the visual flash, the peak latencies of the estimated time windows were +43 ms, -13 ms, and -84 ms for bar orientation, face orientation, and face identity, respectively (positive values indicate times after the flash, negative values times before it). Thus, a flash-lead rather than a flash-lag was observed for face identity judgments. For the auditory flash, in contrast, the peak latencies were +47 ms, +74 ms, and +75 ms, showing little dependence on the visual attribute. The half-bandwidth of these time windows was significantly wider for the auditory flash than for the visual flash. Together with results from experiments measuring the FLE of smoothly changing visual features whose change direction flips in midstream, these findings suggest that a temporal integration process, whose time range depends on the visual attribute and the flash modality, underlies the perception of the FLE, i.e., a hybrid of the latency-difference and time-integration hypotheses. Our results also indicate that different temporal mechanisms underlie within-modal and cross-modal binding.
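The abstract does not specify the analysis pipeline; the following is a minimal sketch of how a reverse-correlation estimate of the binding window, its peak latency, and its half-bandwidth might be computed, assuming a binary (+1/-1) coding of both the presented feature and the 2AFC report. All names (feature_traces, reports, times) are hypothetical, not from the original study.

```python
import numpy as np

def reverse_correlation_window(feature_traces, reports, times):
    """
    Estimate the temporal window binding a reported feature to a flash.

    feature_traces : array, shape (n_trials, n_timepoints)
        Feature presented at each frame, coded +1 / -1 (e.g., the two bar
        orientations), time-locked to flash onset.
    reports : array, shape (n_trials,)
        Participant's 2AFC report on each trial, coded +1 / -1.
    times : array, shape (n_timepoints,)
        Frame times relative to the flash, in ms (negative = before flash).
    """
    # Correlate the report with the presented feature at every lag:
    # the resulting profile is the estimated temporal binding window.
    window = np.array([
        np.corrcoef(feature_traces[:, t], reports)[0, 1]
        for t in range(feature_traces.shape[1])
    ])

    # Peak latency: lag at which the stimulus best predicts the report.
    peak_idx = np.argmax(window)
    peak_latency = times[peak_idx]

    # Half-bandwidth: width of the region where the correlation stays
    # above half of its peak value (full width at half maximum).
    above_half = times[window >= window[peak_idx] / 2]
    half_bandwidth = above_half.max() - above_half.min()

    return window, peak_latency, half_bandwidth
```

Under this coding, a positive peak latency corresponds to a flash-lag (the bound content comes from frames after the flash) and a negative peak latency to a flash-lead, matching the sign convention used in the abstract.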
Meeting abstract presented at VSS 2017