Brent Strickland, Brian Scholl; "Event type" representations in vision are triggered rapidly and automatically: A case study of containment vs. occlusion. Journal of Vision 2012;12(9):1103. doi: 10.1167/12.9.1103.
Recent infant cognition research suggests that the mind reflexively categorizes dynamic visual input into representations of "event types" (such as occlusion or containment), which then prioritize attention to relevant visual features, e.g., prioritizing attention to the dimension (height vs. width) that predicts whether a rectangular object will fit inside another in the context of containment, but not occlusion, even when those events are highly visually similar. We recently discovered that this form of "core knowledge" continues to operate in adults' visual processing: using a form of change detection, we showed that the category of an event dramatically influences the ability to detect changes to certain features. In the current study we explored just how event-type representations may be quickly and flexibly triggered by specific visual cues. Subjects viewed dynamic 2D displays depicting repeating events wherein 5 rectangles oscillated horizontally, moving either behind or into 5 horizontally oriented and haphazardly placed containers. Occasionally, a rectangle changed its height or width while out of sight, and observers pressed a key when they detected such changes. Detection was better for height changes than for width changes in containment events, but not in occlusion events (since height predicts fit in horizontal containment events). This was true not only when each individual rectangle always consistently underwent occlusion or containment, but also when each rectangle randomly underwent occlusion or containment during each oscillation. We also independently varied containment vs. occlusion for the disappearance and reappearance of the rectangles, and discovered that enhanced change detection for the "fit"-relevant dimension occurred only when containment cues were present for both the disappearance and reappearance.
Collectively, these and other results indicate that event-type representations are formed and discarded during online visual processing in response to cues that may change from moment to moment.
Meeting abstract presented at VSS 2012