The architecture of the retina imposes certain constraints on visual function, e.g., poor acuity in the peripheral retina and poor discrimination of very fast speeds in the central retina. From an adaptive perspective, it is likely that the distinct retinal regions serve unique roles in action tasks. Until recently, technology has limited our ability to explore the roles of the various retinal regions in tasks performed by mobile observers. With faster computers and graphics boards, and more sensitive head and eye tracking systems, we can now isolate and visually stimulate specific retinal regions even in the presence of observer eye, head, and body movements. We have been exploring the roles of the retinal periphery and central retina in navigation. To do so, we use a wide-field head tracking system to define the observer's point of view within a 3D virtual world, transform that view into a 2D perspective image, mask it, and output the image to a head-mounted display. To ensure that the masked area remains fixed in retinal coordinates, the position of the mask is controlled by the observer's eye position, which is determined from online analysis of eye images. One of the challenges in developing such a system for vision-action research is the need to achieve fast throughput (minimal end-to-end processing time). I will discuss the effects of system delays on human response measures in navigation and present some of our recent findings on the role of the retinal periphery in navigation.
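To make the masking step concrete, the following is a minimal sketch, in Python, of a gaze-contingent mask applied to one rendered frame; the function name, frame size, and pixels-per-degree scale are illustrative assumptions, not details of our actual implementation.

import numpy as np

def apply_gaze_contingent_mask(frame: np.ndarray,
                               gaze_xy: tuple[float, float],
                               radius_px: float,
                               fill: float = 0.0) -> np.ndarray:
    """Blank a circular region of `frame` centered on the current gaze
    point, so the occluded area stays fixed in retinal, not screen,
    coordinates."""
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]               # pixel coordinate grids
    gx, gy = gaze_xy
    inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[inside] = fill                      # occlude the central region;
    return out                              # invert `inside` to occlude the periphery

# Example: occlude a 10-deg-radius central region, assuming ~20 px/deg.
frame = np.random.rand(600, 800)            # stand-in for the rendered 2D view
masked = apply_gaze_contingent_mask(frame, gaze_xy=(400.0, 300.0),
                                    radius_px=10 * 20)

On each frame, the latest gaze estimate from the eye tracker replaces gaze_xy, so the mask follows the eye and the stimulated retinal region remains constant.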
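Because the mask is retinally stable only if the full track-render-mask-display cycle completes quickly relative to eye movements, per-frame delay is worth logging directly. The timing harness below is likewise an illustrative sketch, not our actual code.

import time

def measure_frame_delays(step, n_frames: int = 1000) -> list[float]:
    """Run `step` (one full tracker-read/render/mask/display cycle)
    repeatedly and return the per-frame delay in milliseconds."""
    delays = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        step()
        delays.append((time.perf_counter() - t0) * 1000.0)
    return delays

# Example with a dummy 5 ms step; real use would pass the system's update cycle.
delays = measure_frame_delays(lambda: time.sleep(0.005), n_frames=10)
print(f"mean delay: {sum(delays) / len(delays):.1f} ms")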
Supported by NIH EY07389.