Abstract
As VR gains mainstream traction and is adopted for more serious use cases such as remote monitoring and troubleshooting, a thorough study of perception through such devices becomes important. An advantage that VR has over its 2D counterpart is the large virtual space. However, it remains to be determined empirically how visual search characteristics derived from traditional 2D visual search experiments (~50 objects) scale to immersive 3D scenarios with more numerous objects (~1000). To study this, we designed the classic feature and conjunction search experiment in VR, modelling virtual space with a spherical coordinate system centered at the VR headset's initial position. The target was presented in one of 32 equally sized regions, blocked in 45-degree increments of azimuth and elevation. The target was a red cube embedded among 96, 480, 768, or 1024 distractors that were distributed equally across the 3D regions. Distractors were either green cubes (feature search) or red spheres and green cubes (conjunction search). The task was to find the target as quickly as possible using head and body movements. We analyzed the slopes of reaction time with respect to the number of distractors for each of the 32 regions. Based on data from 25 participants, the typical pattern of feature and conjunction search slopes was observed overall. For regions directly in front of the participants, reaction times were faster in the left than in the right visual field. Even regions behind the observer followed trends similar to those in front. Search in atypical regions, such as close to the toes or directly overhead, showed haphazard characteristics. Though these findings appear robust, we found that occlusion can be a nuisance variable in such search tasks.
Meeting abstract presented at VSS 2018
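To illustrate the region blocking described above (32 regions from 45-degree increments, i.e. 8 azimuth bins x 4 elevation bins), the following minimal Python sketch assigns a point, expressed relative to the headset's initial position, to a region index. The axis conventions, function name, and binning details are our own assumptions for illustration, not the authors' implementation.

    import math

    def region_index(x, y, z):
        """Map a point (relative to the headset's initial position) to one of
        32 regions: 8 azimuth bins x 4 elevation bins, each spanning 45 degrees.
        Assumes y is 'up'; coordinate conventions are illustrative only."""
        azimuth = math.degrees(math.atan2(x, z)) % 360.0                      # 0..360 around the observer
        elevation = math.degrees(math.asin(y / math.sqrt(x*x + y*y + z*z)))   # -90 (toes) .. +90 (overhead)
        az_bin = int(azimuth // 45) % 8                                       # 8 horizontal sectors
        el_bin = min(int((elevation + 90.0) // 45), 3)                        # 4 vertical bands
        return az_bin * 4 + el_bin                                            # region id in 0..31

    # Example: a point straight ahead and slightly above eye level
    print(region_index(0.0, 0.5, 2.0))

With equal distribution across regions, the tested set sizes of 96, 480, 768, and 1024 distractors correspond to 3, 15, 24, and 32 distractors per region, respectively.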