Abstract
Our environment abounds in objects of different orientations. When we want to interact with these objects, e.g., by grasping them, we can do so provided we accurately perceive their orientation. The orientation of an object can be characterized along three independent spatial axes: x (or slant), y (or tilt), and z (or depth). How does the human brain achieve the percept of object orientation: by processing each axis separately, or by processing them conjointly? And do different monocular and binocular depth cues, alone or in combination, affect our perception of depth differently? To answer these questions, we designed a behavioral task in which participants were asked to reproduce the orientation of a target array by adjusting the tilt and slant of a test array. On each trial, random combinations of target orientations on both the x and y axes were generated, and error magnitudes were recorded for each trial and each axis. Across three behavioral experiments, we coupled this task with an incremental number of depth cues: i) only texture gradients were available; ii) texture gradients were coupled with line convergence; iii) texture gradients, line convergence, and binocular disparity were used together. Our results appear to indicate that the coupling strength between x- and y-orientation perception differs depending on which depth cues are available. In a follow-up fMRI study using the same behavioral procedure with texture gradients only, we presented not only trials in which x and y orientations were manipulated together, but also trials in which only one of these axes was manipulated. The neuroimaging data were analyzed with both effective and functional connectivity approaches to further specify the brain areas involved in object orientation perception, as well as their coupling.
Meeting abstract presented at VSS 2018