Modern neurophysiological and psychophysical studies of vision are typically based on computer-generated stimuli presented on flat screens. While this approach allows precise delivery of stimuli, it suffers from a fundamental limitation in terms of the maximum achievable spatial coverage. This constraint becomes important in studies that require stimulation of large expanses of the visual field, such as those involving the mapping of receptive fields throughout the extent of a cortical area or subcortical nucleus, or those comparing neural response properties across a wide range of eccentricities. Here we describe a simple and highly cost-effective method for the projection of computer-generated stimuli on a hemispheric screen, which combines the advantages of computerized control and wide-field (100° × 75°) delivery, without the requirement of highly specialized hardware. The description of the method includes programming techniques for the generation of stimuli in spherical coordinates and for the quantitative determination of receptive field sizes and shapes. The value of this approach is demonstrated by quantitative electrophysiological data obtained in the far peripheral representations of various cortical areas, including automated mapping of receptive field extents in cortex that underwent plasticity following lesions.

*Hemispheric screen:* A translucent polycarbonate hemisphere, 90 cm in diameter, was used. Similar hardware can be made to order by most manufacturers of skylights. The thickness of the polycarbonate layer was 5 mm, resulting in a transparency of 14%. The inner surface of the hemisphere was coated with a thin layer of photographer's dulling spray to reduce reflection. With the aid of a custom-made device, lines of longitude and circles of latitude, at 10° intervals, were drawn on the hemisphere with a permanent marker to provide a coordinate system for receptive field locations (Figure 1A). These also served as landmarks for calibrating the projector (see below). To maximize the precision of the method and to avoid geometric distortions, the center of the base of the hemispheric screen needs to be brought to a position in space that corresponds to the nodal point of one of the eyes (or, in experiments requiring binocular stimulation, the midpoint between the nodal points of the eyes). In this way, the prime (or vertical) meridian (*λ* = 0°) and the equator (*φ* = 0°) are positioned directly in front of the animal. In the convention adopted in the present paper (Figure 3A), this means that −90° ≤ *λ* ≤ 0° corresponded to the right visual field and 0° ≤ *λ* ≤ 90° to the left visual field. Similarly, 0° ≤ *φ* ≤ 90° corresponded to the upper visual field and −90° ≤ *φ* ≤ 0° to the lower visual field. In practice, small errors in the positioning of the hemispheric screen relative to the animal's head were not important, as they could be corrected *a posteriori* based on the results of the experiment. For example, the precise location of the vertical meridian could be determined by mapping receptive field locations across the boundary of the primary and secondary visual areas (V1 and V2, respectively) and determining the midpoint of the zone of overlap between the representations in the two cerebral hemispheres (Fritsches & Rosa, 1996; Rosa et al., 1993). The location of the horizontal meridian could be determined by receptive field locations on the dorsal and ventral surfaces of V2 (Rosa, Sousa, & Gattass, 1988; Rosa, Fritsches, & Elston, 1997).

*Projector:* We used an Optoma EP726S DLP (Digital Light Processing) projector (Optoma Technology, Fremont, CA, USA), configured to operate at 640 × 480 resolution and 85-Hz refresh rate, with geometrical correction (i.e., keystoning) turned off. The projector was attached to a custom-made mount that allowed it to be rotated and tilted during the calibration procedure but locked afterward. The projector was placed about 2.1 m away from the hemisphere. Because direct line-of-sight visualization of the projector lamp (2800 ANSI lumens) can become uncomfortable to human vision following prolonged exposure, we interposed several layers of neutral density filter (Kodak, ND 1.00) between the projector's lens and the hemispheric screen, thus reducing the luminance of the projected image. In a dimly illuminated room, stimulus luminance measured on the internal surface of the hemisphere could be reliably varied between 0.31 cd/m^{2} and 4.0 cd/m^{2}, thus allowing stimulus contrast up to 86.0%. The range of contrast achievable with this method can be adjusted according to the needs of the experiments, depending on the exact hardware configuration (including the model of projector, the degree of transparency of the hemispheric screen, and the configuration of filters interposed along the light path). In addition, a small piece of tracing paper (2° × 2°) was attached to the projection center on the hemisphere to further diffuse the bright image of the projector lamp along the line of sight. Finally, an occluder made from thick cardboard was used to block the projector's light at all times between stimulus presentations, to avoid prolonged exposure of the animal's eye to the lamp.
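The quoted maximum contrast follows directly from the two luminance measurements via the Michelson definition; a quick check (an illustration in Python, not part of the original toolchain):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast: (L_max - L_min) / (L_max + L_min)."""
    return (l_max - l_min) / (l_max + l_min)

# luminances measured on the inner surface of the hemisphere (cd/m^2)
print(round(100 * michelson_contrast(4.0, 0.31), 1))  # -> 85.6 (~86%)
```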

*Software:* We used a customized version of Expo (release 1.5.0), designed by Dr. Peter Lennie and others, for stimulus presentation and data acquisition. The various spherical stimuli described below were implemented as Expo routines using the Objective-C programming language and the OpenGL library. The software ran on a Power Macintosh (Apple, Cupertino, CA, USA) with dual 2.8-GHz quad-core Intel Xeon processors and 4 GB of RAM. Two ATI Radeon HD 2600 XT graphics cards were used to drive an LCD monitor (for the Expo user environment) and the projector (for stimulus presentation). Data analysis was performed by software written in *Mathematica* (Wolfram Research, Champaign, IL, USA).

*Electrophysiological recording:* Extracellular single-unit recordings were obtained from anesthetized marmoset monkeys (male and female adults, 350–415 g) following a protocol slightly modified from Bourne and Rosa (2003). In brief, following premedication with diazepam (5 mg/kg) and atropine (0.2 mg/kg), anesthesia was induced by intramuscular injection of Alfaxan (alfaxalone, 10 mg/kg). Following surgery, the animals were anesthetized and paralyzed by an intravenous infusion of sufentanil (6 *μ*g/kg/h) and pancuronium bromide (0.1 mg/kg/h) and were artificially ventilated with a gaseous mixture of nitrous oxide and oxygen (70:30). The electrocardiogram and SpO_{2} level were continuously monitored. Appropriate focus and protection of the cornea from desiccation were achieved by means of contact lenses selected by streak retinoscopy. Visual stimuli were presented monocularly in a dark room to the eye contralateral to the cortical hemisphere from which the neuronal recordings were obtained. The ipsilateral eye was occluded. The data presented in the Results section were obtained from the peripheral representation of V1 (Fritsches & Rosa, 1996), the middle temporal area (MT; Rosa & Elston, 1998), as well as area prostriata—a visual association area located in the rostral calcarine sulcus (Cuénod, Casey, & MacLean, 1964; Palmer & Rosa, 2006; Rosa, Casagrande, Preuss, & Kaas, 1997). In one animal, a lesion of V1 was placed unilaterally at the age of 6 weeks, followed by a recovery period of 12 months, resulting in a “cortical scotoma.” In this case, the method described here was used to estimate the extent of this scotoma, by mapping receptive fields along the perimeter of the lesion zone (Rosa et al., 2000). The experiments were conducted in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes, and all procedures were approved by the Monash University Animal Ethics Experimentation Committee.

The key element of the method is the mapping function *f* (Equation 4), which allows the generation of “distorted” 2D images for geometrical shapes specified in spherical coordinates. When the images are projected onto a translucent hemisphere with a video projector, the intended shapes are recreated on the spherical surface (Figure 2).

A point *p* on the hemisphere can be specified by its *longitude* (−90° ≤ *λ* ≤ 90°) and *latitude* (−90° ≤ *φ* ≤ 90°). For many applications, it is also convenient to use the polar coordinate system, in which the hemisphere is parameterized by *eccentricity* (0° ≤ *r* ≤ 90°) and *polar angle* (−90° ≤ *θ* ≤ 270°). Equation 1 translates from (*λ, φ*) to (*r, θ*).
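This conversion can be sketched in Python; the Cartesian axis assignment here is our assumption, and atan2 returns angles in the range −180° to 180°, which wraps onto the paper's −90° to 270° range:

```python
import math

def sph_to_polar(lam_deg, phi_deg):
    """Longitude/latitude (degrees) -> eccentricity r and polar angle theta.

    (0, 0) is the straight-ahead direction; r is the angular distance
    from it, theta the angle within the frontal plane.
    """
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    x1 = math.cos(phi) * math.sin(lam)   # left-right component (assumed axes)
    x2 = math.cos(phi) * math.cos(lam)   # straight-ahead component
    x3 = math.sin(phi)                   # up-down component
    return math.degrees(math.acos(x2)), math.degrees(math.atan2(x3, x1))

print(sph_to_polar(30.0, 0.0))  # a point on the equator: r = 30 deg, theta = 0 deg
```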

To derive *f*, first consider a special case where the stimulus *p* is a point on the equator (Figure 4A) and the center of projection is at (*λ, φ*) = (0°, 0°). Let *d* be the radius of the hemisphere, *m* the distance between the hemisphere and the (idealized) projector lens, and *n* the distance between the projector lens and the light source. It is then straightforward to determine where the corresponding point *p*′ should fall on the projector's image plane, given that *r* is the longitude of *p* (in radians) and *r*′ is the planar coordinate of *p*′ (in pixels). The general case, where *p* is not restricted to the equator (Figure 4B), is only slightly more complicated if *p* is expressed in spherical polar coordinates *p* = (*r, θ*) and *p*′ in planar polar coordinates *p*′ = (*r*′, *θ*′).

The complete mapping *f*: (*λ, φ*) → (*x, y*) is therefore obtained by composing the coordinate conversion with this projection.
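For illustration, the composition can be written in code. The toy below assumes a simple pinhole model (lateral offset *d* sin *r*, depth *m* + *d*(1 − cos *r*), image radius *r*′ = *n* × lateral/depth); the exact geometry of the paper's Equation 4 may differ, so treat this as a sketch of the composition, not the paper's formula:

```python
import math

def f_map(lam_deg, phi_deg, d, m, n):
    """Toy version of f: (lambda, phi) -> image (x, y).

    d: hemisphere radius, m: lens-to-hemisphere distance,
    n: lens-to-image-plane distance. Pinhole geometry is an assumption.
    """
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    x1 = math.cos(phi) * math.sin(lam)
    x2 = math.cos(phi) * math.cos(lam)
    x3 = math.sin(phi)
    r = math.acos(x2)                    # eccentricity (Equation 1)
    theta = math.atan2(x3, x1)           # polar angle (Equation 1)
    # radial image coordinate under the assumed pinhole geometry
    r_prime = n * d * math.sin(r) / (m + d * (1.0 - math.cos(r)))
    return r_prime * math.cos(theta), r_prime * math.sin(theta)

print(f_map(0.0, 0.0, d=0.45, m=2.1, n=0.02))  # projection center maps to (0.0, 0.0)
```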

While *d* and *m* can be measured directly, *n* is inaccessible. With a hemisphere whose lines of longitude and circles of latitude are already marked on the surface, it is more convenient to record a set of correspondences between the spherical coordinates (*λ, φ*) and the image coordinates (*x, y*), and then estimate the optimal parameters (*d*, *m*, and *n*) by least-squares fitting of the corresponding points to the model expressed by Equation 4. Specifically, with the projector positioned such that the horizontal line crossing the center of the image plane projects to the equator on the hemisphere and the vertical line crossing the center of the image plane projects to the prime meridian (*λ* = 0°), we manually moved a cursor until it was projected onto each of 56 registration points on the hemisphere and recorded the corresponding (*x, y*) coordinates. The registration points were intersections of the lines of longitude and the circles of latitude (Figure 1A). Figure 5A illustrates the correspondence between the spherical coordinates and image coordinates for our setup. The best-fit values for Equation 4 were *d* = 94531300.0, *m* = 414320000.0, and *n* = 1989.81. The errors were smaller than 1° and were probably due to imperfections in the manufacturing of the hemisphere.
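As an illustration of the calibration idea: if *d* and *m* are taken as measured, the remaining parameter *n* has a closed-form least-squares solution, because the model is linear in *n*. The sketch below uses an assumed pinhole form of the projection and synthetic registration points (the paper instead fits all three parameters to its 56 measured points):

```python
import math

def radial_factor(lam_deg, phi_deg, d, m):
    """Geometry term g such that r' = n * g, under an assumed pinhole model."""
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    r = math.acos(math.cos(phi) * math.cos(lam))       # eccentricity
    return d * math.sin(r) / (m + d * (1.0 - math.cos(r)))

def fit_n(points, d, m):
    """Least-squares estimate of n from (lambda, phi, r_pixels) triples."""
    num = sum(rp * radial_factor(l, p, d, m) for l, p, rp in points)
    den = sum(radial_factor(l, p, d, m) ** 2 for l, p, rp in points)
    return num / den

# synthetic registration points generated with a known n
true_n, d, m = 2000.0, 0.45, 2.1
pts = [(l, p, true_n * radial_factor(l, p, d, m))
       for l in (-30, 0, 30) for p in (-20, 0, 20) if (l, p) != (0, 0)]
print(round(fit_n(pts, d, m), 3))  # recovers 2000.0 from noise-free data
```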

In practice, the center of projection can be placed at (*λ, φ*) = (±30°, 0°) instead of (0°, 0°), to provide better coverage of the far periphery of the contralateral visual field (Figure 5B). The *λ* of the stimulus then requires subtraction or addition of 30° to compensate for the effect of the displaced projection center. The projection center can also be moved off the equator, for example to (−30°, −30°), to provide better coverage of the lower visual field (at the expense of upper field coverage). In this configuration, the 3 × 3 rotation matrix that transforms (−30°, −30°) to (0°, 0°) needs to be applied before *f*.
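A sketch of the required rotation (the axis conventions here are our assumption): build the matrix that takes (−30°, −30°) to (0°, 0°) by undoing the longitude and then the latitude, and apply it to every stimulus vertex before the mapping:

```python
import math

def to_cart(lam_deg, phi_deg):
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    return [math.cos(phi) * math.sin(lam),
            math.cos(phi) * math.cos(lam),
            math.sin(phi)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def centering_rotation(lam0_deg, phi0_deg):
    """3x3 rotation taking the spherical point (lam0, phi0) to (0, 0):
    undo the longitude (rotation mixing x1 and x2), then the latitude
    (rotation mixing x2 and x3)."""
    l0, p0 = math.radians(lam0_deg), math.radians(phi0_deg)
    undo_lon = [[math.cos(l0), -math.sin(l0), 0.0],
                [math.sin(l0),  math.cos(l0), 0.0],
                [0.0, 0.0, 1.0]]
    undo_lat = [[1.0, 0.0, 0.0],
                [0.0,  math.cos(p0), math.sin(p0)],
                [0.0, -math.sin(p0), math.cos(p0)]]
    return matmul(undo_lat, undo_lon)

rot = centering_rotation(-30.0, -30.0)
print(matvec(rot, to_cart(-30.0, -30.0)))  # ~ [0, 1, 0], the (0, 0) direction
```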

To display a dot at a given point (*λ, φ*) on the hemisphere, the stimulus generation software simply has to draw a small square at the corresponding (*x, y*) coordinates given by Equation 4. However, connecting dots to form lines and shapes on the hemisphere is more involved than drawing straight lines in the image space, because “straight lines” on the hemisphere (geodesics) correspond to curves in (*x, y*) space (see Figure 5A) and therefore need to be approximated by multiple line segments. Although explicit calculation of geodesics is possible, in most applications it is easier to do the calculations by rotating lines of longitude and circles of latitude. The following provides recipes for programming some of the most basic building blocks of stimuli used in visual physiology. A reference implementation (as C source code) can be found in Supplement I. A short video demonstrating several commonly used stimuli is in Supplement III.

*Squares:* Quadrangles {(*λ, φ*) ∣ *λ*_{1} ≤ *λ* ≤ *λ*_{2}, *φ*_{1} ≤ *φ* ≤ *φ*_{2}} are the natural analogs of planar squares. To project a quadrangle on the hemisphere, note that the corresponding (*x, y*) image is not a rectangle but has curved sides, and therefore needs to be approximated by a polygon. This can be accomplished by evenly sampling the edges of the quadrangle with 10 to 20 vertices on each side, and then projecting the vertices to the image space with *f*. The envelope of a receptive field can be mapped by flashing a small quadrangular patch at each position of an 8 × 8 or 12 × 12 grid (see Figure 7A for an example). The white-noise stimulus used in reverse correlation experiments (Jones & Palmer, 1987a; Marmarelis & Marmarelis, 1978; Ohzawa, DeAngelis, & Freeman, 1996; Rust, Schwartz, Movshon, & Simoncelli, 2005) can also be constructed with quadrangles.
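The edge-sampling step can be sketched as follows; the function returns boundary vertices in spherical coordinates, ready to be passed through *f* (Equation 4) to obtain the curved-sided polygon to rasterize:

```python
def quad_outline(lam1, lam2, phi1, phi2, k=12):
    """Sample the boundary of the spherical quadrangle
    {(lam, phi) | lam1 <= lam <= lam2, phi1 <= phi <= phi2}
    with k vertices per side, in counterclockwise order."""
    pts = []
    for i in range(k):  # bottom edge, left -> right
        pts.append((lam1 + (lam2 - lam1) * i / k, phi1))
    for i in range(k):  # right edge, bottom -> top
        pts.append((lam2, phi1 + (phi2 - phi1) * i / k))
    for i in range(k):  # top edge, right -> left
        pts.append((lam2 - (lam2 - lam1) * i / k, phi2))
    for i in range(k):  # left edge, top -> bottom
        pts.append((lam1, phi2 - (phi2 - phi1) * i / k))
    return pts

outline = quad_outline(70.0, 75.0, -15.0, -10.0)
print(len(outline))  # 48 boundary vertices (12 per side)
```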

*Bars:* A bar is the most basic geometrical shape for stimulus construction. To draw an elongated bar of arbitrary orientation at an arbitrary point on the hemisphere, first sample a polygon representing an elongated quadrangle centered on (*λ, φ*) = (0°, 0°), i.e., {(*λ, φ*) ∣ −(1/2)*L* ≤ *λ* ≤ (1/2)*L*, −(1/2)*W* ≤ *φ* ≤ (1/2)*W*}, where *L* and *W* are the length and width; then rotate the coordinates in 3D around the *x*_{3}-axis to the desired orientation, translate the rotated coordinates on the sphere to the desired location, and calculate the corresponding image coordinates with *f*.

*Moving bars:* Direction tuning of a neuron is often measured by its response to a moving bar (see Figure 7D for an example). The method described above can be used to generate a moving bar on the hemisphere. First, generate a sequence of vertical bars moving from left to right centered around (*λ, φ*) = (0°, 0°) by shifting the longitude. Then, for each bar in the sequence, rotate the vertices to the desired direction and location.

*Gratings:* Drifting sinusoidal gratings are frequently used to characterize the spatiotemporal filtering properties of visual neurons (see Figures 7E and 7F for examples). Such gratings can be approximated on the hemisphere with the aid of *f*. First generate a 2D sinusoidal grating as an OpenGL texture, divide the texture into a 10 × 10 (or finer) mosaic, and then project the squares individually to the framebuffer using OpenGL's texture mapping mechanism, converting the four vertices of each square to the image space using *f*. Caution must be used in interpreting data obtained with this stimulus. Although the generated images provide a good approximation of sinusoidal gratings and can supply useful information about the tuning properties of visual neurons, they are not strictly 2D Fourier components, due to errors in image warping, variation in focus plane, and variation in luminance. For applications where accurate Fourier components are critical, CRTs are more appropriate.

*Dot fields:* Moving dot fields are commonly used to characterize motion selectivity (e.g., Albright, 1984; Newsome & Paré, 1988). Large-field dot patterns can also be used to simulate the global structure of motion in the eye of a moving observer—i.e., optical flow fields (for example, Duffy & Wurtz, 1995). Expanding or contracting dot fields displayed on the hemisphere create a more convincing sense of self-motion than those generated on a CRT. To create uniformly distributed random dots on the hemisphere (Weisstein, 2010), first sample *u* from the uniform distribution *U*(−(1/2)*π*, (1/2)*π*) and *v* from *U*(−1, 1), and then transform them by (*λ, φ*) = (*u*, cos^{−1} *v* − (1/2)*π*). The intuitive method of sampling (*λ, φ*) directly from uniform distributions is undesirable because the unit area on the sphere is a function of *φ*, which leads to a higher concentration of dots near the two poles. The evenly sampled points should be expressed in polar coordinates (Equation 1) so that they can be easily rotated, expanded, or contracted.
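A direct transcription of this sampling recipe:

```python
import math, random

def random_dots_on_hemisphere(n, seed=0):
    """Uniform random dots on the hemisphere, as in the text:
    u ~ U(-pi/2, pi/2), v ~ U(-1, 1), (lam, phi) = (u, arccos(v) - pi/2).
    Returns (longitude, latitude) pairs in degrees."""
    rng = random.Random(seed)
    dots = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2.0, math.pi / 2.0)
        v = rng.uniform(-1.0, 1.0)
        dots.append((math.degrees(u),
                     math.degrees(math.acos(v) - math.pi / 2.0)))
    return dots

dots = random_dots_on_hemisphere(1000)
print(len(dots))  # 1000 dots, with no clustering near the poles
```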

We used the five-parameter Fisher–Bingham distribution (FB_{5}, also known as the Kent distribution) to model the envelope of oval-shaped receptive fields. The FB_{5} distribution is an analog of the Gaussian distribution on the sphere (Kent, 1982). FB_{5} assumes a particularly simple form, with only two parameters (*κ* and *β*), when it is centered at the north pole (*x*_{1}, *x*_{2}, *x*_{3}) = (0, 1, 0) with its major axis parallel to the *x*_{1}-axis and its minor axis parallel to the *x*_{3}-axis. This configuration is called the *standard reference frame*. Let (*λ, φ*) be a point on the hemisphere, which is also expressed as (*x*_{1}, *x*_{2}, *x*_{3}) in Cartesian coordinates (Equation 5).

The density of FB_{5} in the standard reference frame is governed by the *concentration* parameter *κ* ≥ 0, which determines the size of the envelope (the larger *κ*, the smaller the envelope); the *ovalness* parameter 0 ≤ *β* ≤ (1/2)*κ*, which determines the aspect ratio (*β* = 0 is perfectly circular); and the constant *c*, which normalizes the function to a probability density function. For our purpose, *c* can be set to 1/exp(*κ*). Figure 6 illustrates the fb* function for two different parameter combinations.
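In the standard reference frame the FB_{5} density reduces to an exponential in the Cartesian coordinates of the evaluation point. A sketch with *c* = 1/exp(*κ*) as in the text (the (*λ, φ*) → Cartesian convention is our assumption):

```python
import math

def fb5_standard(lam_deg, phi_deg, kappa, beta):
    """FB5 (Kent) density in the standard reference frame: mean direction
    at the north pole (0, 1, 0), major axis x1, minor axis x3.
    With c = 1/exp(kappa), the peak value at the center is 1."""
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    x1 = math.cos(phi) * math.sin(lam)
    x2 = math.cos(phi) * math.cos(lam)
    x3 = math.sin(phi)
    # c * exp(kappa*x2 + beta*(x1^2 - x3^2)) with c = exp(-kappa), folded
    # into a single exponent for numerical stability
    return math.exp(kappa * (x2 - 1.0) + beta * (x1 ** 2 - x3 ** 2))

# peak at the center; slower falloff along the major (x1 / longitude) axis
print(fb5_standard(0.0, 0.0, kappa=100.0, beta=20.0))  # -> 1.0
```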

To center the FB_{5} distribution at any point *p* on the sphere, a rotation matrix Γ is introduced, which rotates *p* to the standard reference frame. The 3 × 3 orthogonal matrix Γ has 3 degrees of freedom; the FB_{5} distribution therefore has 5 free parameters (excluding the scaling factor *c*).

In principle, the response to a quadrangle *D* = {(*λ, φ*) ∣ *λ*_{1} ≤ *λ* ≤ *λ*_{2}, *φ*_{1} ≤ *φ* ≤ *φ*_{2}} should be modeled by the integral of the Fisher–Bingham distribution over *D*. In practice, the integral can be approximated by the product of the density at the center of *D* and the area of *D*, where (*λ**, *φ**) is the center of *D* and *A*(*D*) is the surface area of *D*.
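The area factor has a simple closed form for longitude–latitude quadrangles (the standard Δ*λ* (sin *φ*_{2} − sin *φ*_{1}) formula, which we assume matches the paper's *A*(*D*)). A sketch of the center-times-area approximation:

```python
import math

def quad_area(lam1, lam2, phi1, phi2):
    """Surface area (steradians on the unit sphere) of the quadrangle
    {lam1 <= lam <= lam2, phi1 <= phi <= phi2}."""
    return math.radians(lam2 - lam1) * (
        math.sin(math.radians(phi2)) - math.sin(math.radians(phi1)))

def approx_response(density_at_center, lam1, lam2, phi1, phi2):
    """Integral over D approximated by density at the center times area."""
    return density_at_center * quad_area(lam1, lam2, phi1, phi2)

# equal (lam, phi) extents cover less area near the pole than at the equator
print(quad_area(0, 10, 0, 10) > quad_area(0, 10, 70, 80))  # -> True
```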

Let *E* be the spike-triggering ensemble of a neuron, defined as the centers of the spherical quadrangles that triggered a spike. Multiple occurrences of the same coordinates are allowed in *E*, to reflect the number of spikes triggered by the same stimulus. The center of the FB_{5} distribution is estimated from the mean of *E*. The Cartesian coordinates (*x*_{1}, *x*_{2}, *x*_{3}) of each point are obtained from (*λ, φ*) by Equation 5, and the mean is converted back to spherical coordinates using the two-argument arctangent function (atan2), as defined in the C programming language.
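A sketch of this estimate (averaging unit vectors and converting back with atan2; the axis convention is our assumption):

```python
import math

def spherical_mean(ensemble):
    """Mean direction of a spike-triggering ensemble of (lam, phi) pairs,
    in degrees. Repeated coordinates weight the mean by spike count."""
    sx = sy = sz = 0.0
    for lam_deg, phi_deg in ensemble:
        lam, phi = math.radians(lam_deg), math.radians(phi_deg)
        sx += math.cos(phi) * math.sin(lam)
        sy += math.cos(phi) * math.cos(lam)
        sz += math.sin(phi)
    return (math.degrees(math.atan2(sx, sy)),                  # longitude
            math.degrees(math.atan2(sz, math.hypot(sx, sy))))  # latitude

# two spikes at (70, -10) and one at (74, -10) pull the center toward 70
lam, phi = spherical_mean([(70.0, -10.0), (70.0, -10.0), (74.0, -10.0)])
print(round(lam, 2), round(phi, 2))
```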

Let *H* be a 3 × 3 rotation matrix that rotates the estimated center of *E* to the north pole (*x*_{1}, *x*_{2}, *x*_{3}) = (0, 1, 0), and let *B* = *H*·*S*·*H*′, where *S* is the correlation matrix of *E*.

*B* is then the correlation matrix of *E* rotated to the north pole. We further rotate *E* so that the major axis is aligned with the *x*_{1}-axis, by finding the eigenvectors of the correlation matrix on the *x*_{1}*x*_{3}-plane. Let the eigenvectors of *B*_{13}, the 2 × 2 submatrix of *B* on that plane, be (*e*_{11}, *e*_{12}) and (*e*_{21}, *e*_{22}); these define the in-plane rotation to the standard reference frame.

*Event synchronization* (Quiroga, Kreuz, & Grassberger, 2002) is a simple and robust measure for quantifying the level of quasi-simultaneous events in a pair of spike trains. We use the average value of pairwise synchronization over all pairs of repeated trials to identify stimulus conditions that fall outside the receptive field. Figure 7A illustrates this procedure. The shaded squares in the 8 × 8 grid represent conditions in which event synchronization was higher than the (empirically chosen) threshold of 0.2.
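A minimal sketch of the measure with a fixed coincidence window (the original method of Quiroga et al. uses an adaptive, spike-interval-dependent window, so this is a simplification):

```python
def event_sync(x, y, tau):
    """Event synchronization of two sorted spike-time lists: fraction of
    quasi-simultaneous events, normalized to [0, 1] (1 = identical trains)."""
    def coincidences(a, b):
        c = 0.0
        for t in a:
            for s in b:
                if 0.0 < t - s <= tau:   # spike in a shortly after one in b
                    c += 1.0
                elif t == s:             # exact coincidence counted half
                    c += 0.5
        return c
    if not x or not y:
        return 0.0
    return (coincidences(x, y) + coincidences(y, x)) / (len(x) * len(y)) ** 0.5

print(event_sync([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], tau=0.01))  # -> 1.0
```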

The remaining parameters (*c*, *κ*, and *β*) can be estimated by nonlinear optimization procedures, such as the Levenberg–Marquardt algorithm (Press, Teukolsky, Vetterling, & Flannery, 2007). The quality of the model fit can be evaluated by the coefficient of determination (*r*^{2}).

The length of the receptive field is estimated as the extent of the fitted function along the *x*_{1}-axis that corresponds to values larger than 20% of the maximum. The width is similarly estimated along the *x*_{3}-axis.

The FB_{5} family of distributions generally characterizes the envelopes of V1 receptive fields well (Jones & Palmer, 1987b). However, receptive fields in extrastriate areas are not necessarily oval-shaped. For example, “teardrop-shaped” or “comet-shaped” receptive fields have been reported (Maguire & Baizer, 1984; Pigarev, Nothdurft, & Kastner, 2002; see also Figures 9C–9D). Although some spherical distributions (Wood, 1988) permit skewed contours, they are in practice difficult to work with. Instead, we propose the following approximation procedure: The spike-triggering ensemble is first rotated to the north pole with Equation 16, and then projected onto the 2D plane using the Lambert projection (Equation 20, switching *x*_{2} and *x*_{3}). Firing rate is modeled as the product of the spherical area of the stimulus (Equation 11) and *g*(*u**, *v**), where (*u**, *v**) is the Lambert-projected coordinate of the center of the quadrangle. The bivariate distribution *g*(*u, v*), defined on a disk in the Cartesian plane {(*u, v*) ∣ *u*^{2} + *v*^{2} ≤ 2}, is the product of a univariate skewed normal distribution (Azzalini, 1985) on the major axis (*u*) and a univariate normal distribution on the minor axis (*v*).

Here *N* is the probability density function of the standard normal distribution; *ω*_{u} > 0 and *ω*_{v} > 0 determine the size of the envelope and its aspect ratio, *ζ* shifts the envelope along the major axis, and *α* determines the skewness. The receptive field center is estimated by finding the maximum of the fitted distribution, which is then transformed to a spherical point by inverting the Lambert projection. The width and length of the receptive field can be estimated similarly.

Contour plots of receptive fields can be generated by first computing the contour lines in (*λ, φ*) with a conventional plotting program (in our case, ContourPlot[] and ListContourPlot[] in *Mathematica*), and then transforming the coordinates of the vertices of the contour lines into 3D using Equation 5. This form of representation is most useful for visualizing very large receptive fields (Figure 9C).

The Lambert projection (*λ, φ*) → (*u, v*) is (*u*, *v*) = √(2/(1 + *x*_{3})) (*x*_{1}, *x*_{2}), where (*x*_{1}, *x*_{2}, *x*_{3}) are the Cartesian coordinates of (*λ, φ*).
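A sketch of the equal-area projection and its inverse, applied about the straight-ahead axis (i.e., with *x*_{2} and *x*_{3} switched, as in the receptive field procedure); the axis conventions are our assumption:

```python
import math

def lambert(lam_deg, phi_deg):
    """Lambert azimuthal equal-area projection of the front hemisphere
    onto the disk u^2 + v^2 <= 2, centered on (0, 0)."""
    lam, phi = math.radians(lam_deg), math.radians(phi_deg)
    x1 = math.cos(phi) * math.sin(lam)
    x2 = math.cos(phi) * math.cos(lam)   # axis pointing at (0, 0)
    x3 = math.sin(phi)
    s = math.sqrt(2.0 / (1.0 + x2))
    return s * x1, s * x3

def lambert_inv(u, v):
    """Inverse projection, back to (lam, phi) in degrees."""
    x2 = 1.0 - (u * u + v * v) / 2.0
    s = math.sqrt(2.0 / (1.0 + x2))
    x1, x3 = u / s, v / s
    return math.degrees(math.atan2(x1, x2)), math.degrees(math.asin(x3))

u, v = lambert(40.0, -25.0)
print([round(c, 6) for c in lambert_inv(u, v)])  # -> [40.0, -25.0]
```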

The method was tested in electrophysiological recording experiments in marmoset monkeys (*Callithrix jacchus*). Representative results from one such experiment, recorded from a neuron in the far peripheral representation of V1, are illustrated in Figure 7. The extent of the receptive field was first mapped using a bright square (2.5° × 2.5°) in an 8 × 8 grid centered on (72°, −12°). The responses (raster plot shown in Figure 7A) were fitted to the response model given in Equation 10. The best-fit function (*κ* = 515.1, *β* = 88.3) is plotted in Figures 7B and 7C. The center of the receptive field was estimated as (73.6°, −15.3°), which corresponds to 74.2° in eccentricity (Equation 1). The width and length of the receptive field were also estimated automatically from the best-fit function.

*Aotus trivirgatus*). Brain Research, 31, 85–105.

*Callithrix jacchus*). Brain Research Protocols, 11, 168–177.

*Callithrix jacchus*). Journal of Comparative Neurology, 372, 264–282.

^{−/−} mice. Journal of Neuroscience, 28, 7376–7386.

*Callithrix jacchus*). European Journal of Neuroscience, 25, 1780–1792.

*Callithrix jacchus*): Middle temporal area, middle temporal crescent, and surrounding cortex. Journal of Comparative Neurology, 393, 505–527.

*Pteropus*. Visual Neuroscience, 11, 1037–1057.

*Pteropus poliocephalus* and *Pteropus scapulatus*). Journal of Comparative Neurology, 335, 55–72.