Abstract
Our ability to recognize familiar objects depends in part on their visual orientation. In the best-known case, we are better able to recognize faces when they appear upright than inverted (e.g., Yin, 1969). We recently showed that in addition to retinal orientation, environmental orientation also influences performance in face processing tasks: participants were better at perceiving and remembering faces when the faces were environmentally upright than when they were environmentally inverted, even when retinal orientation was held constant (Davidenko & Flusberg, 2012). Here we investigated whether this sensitivity to environmental orientation requires a lifetime of exposure to a class of stimuli, or whether it can manifest in a short-term learning paradigm. In an old/new recognition task, we presented participants with novel abstract 2D shapes while they lay horizontally. Between study and test, we manipulated participants' orientation (same as at study or Δ180°) and the shapes' orientation (same as at study or Δ180°), allowing us to detect independent effects of retinal and environmental orientation on recognition performance. We found that retinal orientation significantly affected performance, with mean d'=1.29 when retinal orientation was matched between study and test versus d'=0.87 when it differed by 180° (t(20)=5.5, p<0.0001). Intriguingly, environmental orientation also affected memory performance, with d'=1.20 when environmental orientation was matched versus d'=0.95 when it differed by 180° (t(20)=3.2, p<0.005), consistent with early experiments by Irvin Rock (1957). In a follow-up experiment, we included sitting-upright conditions to test whether environmental orientation is always represented, or only when an observer is in a non-canonical orientation during encoding. We found similar effects of environmental orientation whether subjects learned the shapes while sitting upright or while lying sideways (p>0.5).
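For readers unfamiliar with the sensitivity measure reported above: d' comes from signal detection theory and, in an old/new recognition task, is standardly computed from the hit rate H and false-alarm rate F as

```latex
d' = \Phi^{-1}(H) - \Phi^{-1}(F)
```

where \(\Phi^{-1}\) is the inverse of the standard normal cumulative distribution function. For example, H = .80 and F = .30 give d' ≈ 0.84 − (−0.52) ≈ 1.37. (This is the standard definition, not a detail taken from the abstract itself.)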
These results suggest that environmental orientation plays a role in our visual representations regardless of our orientation during encoding.
Meeting abstract presented at VSS 2013