October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Building the “Reachspace Database”: a large-scale stimulus set of reachable environments
Author Affiliations
  • Emilie Josephs
    Harvard University
  • Haoyun Zhao
    Harvard University
  • Talia Konkle
    Harvard University
Journal of Vision October 2020, Vol.20, 193. doi:https://doi.org/10.1167/jov.20.11.193
      © ARVO (1962-2015); The Authors (2016-present)


Much of human behavior takes place in the near space: whether for work or for leisure, we spend our days acting upon the reachable environment, using our hands to manipulate objects in the service of a task, such as typing an email, chopping vegetables, or pouring a cup of coffee. While classic frameworks in vision do not draw sharp distinctions between reachable-scale and navigable-scale scenes, we have found that views of these reachspaces dissociate from navigable-scale scenes in both perceptual and neural measures (Josephs & Konkle, 2019; Josephs & Konkle, VSS, 2019). Here, we introduce a large-scale database of reachspace images to facilitate research into the perception of reachable environments. To sample reachspaces, we compiled a list of hobbies, chores, and professions, and used online searches to find images of the reachable environments people view when engaged in those activities. We additionally considered common locations, generated lists of activities performed in those spaces, and collected the corresponding images. We restricted images to those with an appropriate perspective, angle, and distance, reflecting the view of a person performing a task in that space. This process has so far identified 175 reachspace categories, each with 20 or more images (N = 4,267 images in total), with more categories continuously being added. The sampled categories endeavor to reflect broad cultural variability (e.g., Go boards as well as chessboards), a broad sampling of occupations (e.g., sound mixers, office workers, carpenters), and a broad sampling of activities in each location (e.g., kitchens can support chopping vegetables, washing dishes, decorating cakes, mixing ingredients, etc.). Rich and high-quality image sets exist for objects and scenes; here we introduce one for views of the space in between. Overall, this database will provide an important resource for researching visual cognitive processes operating over reachable views of the environment.
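The abstract does not specify how the database is packaged, but a category-per-folder layout is a common convention for stimulus sets of this kind. Assuming that hypothetical layout (`reachspace/<category>/<image>.jpg`; the directory and file names below are illustrative, not from the authors), a minimal sketch of how one might verify the stated inclusion criterion of 20 or more images per category could look like:

```python
from collections import Counter
from pathlib import Path

def summarize_categories(image_paths, min_images=20):
    """Count images per category (assumed to be encoded as the parent
    directory name of each image path) and flag any category that
    falls below the minimum image count."""
    counts = Counter(Path(p).parent.name for p in image_paths)
    underfilled = {c: n for c, n in counts.items() if n < min_images}
    return counts, underfilled

# Hypothetical file listing, standing in for a real directory walk:
paths = [f"reachspace/chessboard/img_{i:03d}.jpg" for i in range(25)]
paths += [f"reachspace/sound_mixing/img_{i:03d}.jpg" for i in range(20)]
paths += [f"reachspace/cake_decorating/img_{i:03d}.jpg" for i in range(12)]

counts, underfilled = summarize_categories(paths)
print(counts["chessboard"])   # 25
print(sorted(underfilled))    # ['cake_decorating']
```

With a real copy of the database, `paths` would instead come from something like `Path(root).rglob("*.jpg")`, and an empty `underfilled` dict would confirm that every category meets the 20-image threshold.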

