September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Analysis of dynamic multispectral video using systems factorial technology (SFT)
Author Affiliations
  • Elizabeth Fox
    Wright State University, Dayton, OH
  • Joseph Houpt
    Wright State University, Dayton, OH
Journal of Vision August 2017, Vol. 17, 567. https://doi.org/10.1167/17.10.567
Abstract

In previous research we used SFT to explore the mechanisms by which individuals combine several multispectral, real-world images. We compared two displays: one presented the single-sensor images side by side, and the other presented a single, algorithmically fused image. SFT provided evidence for the cognitive mechanisms that may underlie a particular pattern of performance. We found that participants' processing of each sensor image declined when multiple images were provided, regardless of how the imagery was combined (or not combined). Nonetheless, individuals were able to process all of the image information simultaneously, sometimes with full integration. In the current work, we explored whether these processing strategies for static imagery generalize to dynamic environments. Dynamic environments contain movement information that is highly correlated across time, and particular aspects of each single-sensor stream may provide redundant or complementary movement information that supports a quick, accurate decision. How multiple sensors are displayed may influence redundancy gains, or may facilitate performance by spatially confining the information to a single visual reference frame. Using short, dynamic video segments, we found speed and accuracy improvements when multiple sensors were presented side by side, relative to single sensors presented alone or algorithmically combined. Our findings agree with the previous static-image results: processing efficiency for each image declines when multiple images are provided, indicating that people can use information from multiple images of a static scene, although with limited capacity. When short video segments are used instead of a static scene, accuracy and response times improve with redundant videos, whether the two video types are displayed next to one another or combined into a single stream. However, these gains are not large enough to reach the accuracy and response-time levels predicted by unlimited-capacity parallel processing.

Meeting abstract presented at VSS 2017
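
The capacity comparison described in the abstract is standard in SFT: observed redundant-target performance is benchmarked against the unlimited-capacity, independent parallel (UCIP) prediction. As a rough illustration (not the authors' analysis code; the function names, simulated data, and time grid below are hypothetical), this Python sketch estimates the OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H is the cumulative hazard of correct response times; C(t) = 1 corresponds to UCIP, and C(t) < 1 to the limited capacity reported here.

import numpy as np

def cumulative_hazard(rts, t_grid):
    # Estimate the cumulative hazard H(t) = -log S(t) from correct
    # response times, using the empirical survivor function S(t) = P(T > t).
    rts = np.asarray(rts)
    survivor = np.array([(rts > t).mean() for t in t_grid])
    survivor = np.clip(survivor, 1e-6, 1.0)  # avoid log(0) in the tail
    return -np.log(survivor)

def capacity_or(rt_redundant, rt_single_a, rt_single_b, t_grid):
    # OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    # C(t) = 1 is the unlimited-capacity parallel (UCIP) benchmark;
    # C(t) < 1 indicates limited capacity, C(t) > 1 super capacity.
    h_ab = cumulative_hazard(rt_redundant, t_grid)
    h_a = cumulative_hazard(rt_single_a, t_grid)
    h_b = cumulative_hazard(rt_single_b, t_grid)
    return h_ab / np.clip(h_a + h_b, 1e-6, None)

# Hypothetical simulated RTs (in seconds); real data would come from the
# single-sensor and redundant (side-by-side or fused) display conditions.
rng = np.random.default_rng(0)
rt_a = rng.gamma(4.0, 0.10, 200)                # sensor A alone
rt_b = rng.gamma(4.0, 0.11, 200)                # sensor B alone
rt_ab = np.minimum(rng.gamma(4.0, 0.10, 200),
                   rng.gamma(4.0, 0.11, 200))   # UCIP-like race model
t_grid = np.linspace(0.2, 1.2, 25)
print(np.round(capacity_or(rt_ab, rt_a, rt_b, t_grid), 2))

In practice, full SFT analyses of this kind, including statistical tests on C(t), are implemented in the sft package for R.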
