December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Searching the sock drawer: How do people find pairs?
Author Affiliations & Notes
  • Aoqi Li
    School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, PR China
  • Jeremy M Wolfe
    Brigham & Women’s Hospital, Boston, MA, USA
    Harvard Medical School, Boston, MA, USA
  • Zhenzhong Chen
    School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, PR China
  • Christian NL Olivers
    Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
    Institute for Brain & Behavior Amsterdam, Vrije Universiteit, Amsterdam
  • Footnotes
    Acknowledgements  National Natural Science Foundation of China (Grant No. 62036005), China Scholarship Council
Journal of Vision December 2022, Vol.22, 3871. doi:
In standard visual search tasks, observers compare what they see in visual displays to target representation(s) loaded into memory. Typically, response times increase linearly with visual set size and logarithmically with memory set size. In contrast, consider searching for any matching pair of socks in a jumble. Here, the searcher is free to choose the current target(s), potentially searching for one or several possible target socks at any one moment and then changing search target(s) if no pair is found. Little is known about how people perform such “pair search”. Our observers compared two simultaneously presented arrays of objects for a single match across arrays. They responded by clicking on either member of the pair. Eye movements were recorded. Set size was varied independently for the two arrays. Although RT was a linear function of both the smaller and the larger set size, it was not a monotonic, linear function of the total set size. As an index of memory involvement, we also measured the number of eye movement transitions between the two displays, as well as the number of fixations on an array between those transitions. The number of transitions was a roughly 1:1 function of the smaller set size. In addition, the number of fixations between transitions remained quite small, even for the larger set sizes. These findings reject at least two simple models: observers did not exhaustively memorize one display and then perform a single “hybrid” search through the other array. Nor did observers appear to memorize just one item at a time and then exhaustively search for it before returning to memorize the next target item. Instead, the trade-off between scanning and transitioning likely reflects a trade-off between cheap eye movements and expensive memory.
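The two rejected strategies, and intermediate ones, can be sketched with a toy Monte Carlo simulation. This is an illustrative model of our own construction, not the authors' analysis: a hypothetical searcher memorizes a chunk of `k` items from the smaller array per visit, then exhaustively scans the larger array for a match. The chunking parameter `k`, the exhaustive scan, and the unique-distractor displays are all assumptions.

```python
import random

def simulate_pair_search(n_small, n_large, k=1, seed=None):
    """Toy simulation of a chunked pair-search strategy (illustrative only).

    Memorize up to k items from the smaller array per visit, then scan the
    larger array exhaustively for a match; transition back if none is found.
    Returns (transitions, fixations_on_larger_array).
    """
    rng = random.Random(seed)
    # Exactly one item ("match") is shared; all other items are unique.
    small = ["match"] + [f"s{i}" for i in range(n_small - 1)]
    large = ["match"] + [f"l{i}" for i in range(n_large - 1)]
    rng.shuffle(small)
    rng.shuffle(large)

    transitions = 0
    fixations = 0
    for start in range(0, n_small, k):
        memory = small[start:start + k]   # load the next chunk into memory
        transitions += 1                  # move gaze to the larger array
        for item in large:                # exhaustive scan of the larger array
            fixations += 1
            if item in memory:
                return transitions, fixations
        transitions += 1                  # move gaze back to re-memorize
    raise AssertionError("unreachable: the match is always in some chunk")

# k = n_small mimics "memorize everything, then a single hybrid search";
# k = 1 mimics "memorize one item, exhaustively search, repeat".
for k in (1, 4, 8):
    print(k, simulate_pair_search(8, 16, k=k, seed=1))
```

Under this sketch, `k = n_small` predicts a single transition regardless of the smaller set size, while `k = 1` predicts transitions growing with the smaller set size but many fixations per visit; the data pattern in the abstract (transitions scaling ~1:1 with the smaller set size, yet few fixations between transitions) is consistent with neither extreme, in the spirit of the cheap-eye-movements versus expensive-memory trade-off.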

