Abstract
In previous research we explored the mechanisms by which individuals combine information from several different multispectral, real-world images, using Systems Factorial Technology (SFT). We compared two displays: one presented the single-sensor images side by side, and the other displayed a single, algorithmically combined image. SFT provided evidence about the cognitive mechanisms that may underlie the observed patterns of performance. We found that participants' processing efficiency for each sensor image declined when multiple images were provided, regardless of how we combined (or did not combine) the imagery. Nevertheless, individuals were able to process all of the image information in parallel, sometimes with full integration. In the current work we explored whether these processing strategies for static imagery generalize to dynamic environments. Dynamic environments contain movement information that is highly correlated across time, and particular aspects of each single-sensor video may provide redundant or complementary movement information that an operator can use to make a quick, accurate decision. How multiple sensors are displayed may influence redundancy gains or facilitate performance by spatially confining the information to a single visual reference frame. Using short, dynamic video segments, we found speed and accuracy improvements when multiple sensors were presented side by side, relative to single sensors presented alone or algorithmically combined. Our findings agree with the previous static-image results: processing efficiency for each image declines when multiple images are provided, indicating that people can use information from multiple images of a static scene, although with limited capacity. When short video segments are used instead of a static scene, accuracy and response times improve with redundant videos, whether the two video types are displayed next to one another or combined into a single stream. However, these gains are not large enough to reach the accuracy and response-time levels predicted by unlimited-capacity parallel processing.
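For reference, the unlimited-capacity parallel benchmark in SFT is conventionally formalized with the capacity coefficient (Townsend & Nozawa, 1995); the following is a minimal sketch of its redundant-target (OR) form, assuming this is the statistic underlying the comparisons above rather than a statement of the authors' exact analysis:

$$
C_{OR}(t) = \frac{\log S_{AB}(t)}{\log S_A(t) + \log S_B(t)}
$$

where $S_A(t)$ and $S_B(t)$ are the response-time survivor functions for each sensor presented alone, and $S_{AB}(t)$ is the survivor function for the redundant (dual-sensor) display. An unlimited-capacity, independent, parallel model predicts $C_{OR}(t) = 1$; values below 1 indicate the limited-capacity processing described above, and values above 1 indicate super capacity.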
Meeting abstract presented at VSS 2017