Research Article | December 2010
A simple method for creating wide-field visual stimulus for electrophysiology: Mapping and analyzing receptive fields using a hemispheric display
Hsin-Hao Yu, Marcello G. P. Rosa
Journal of Vision, December 2010, Vol. 10(14):15. doi:10.1167/10.14.15
Abstract

Modern neurophysiological and psychophysical studies of vision are typically based on computer-generated stimuli presented on flat screens. While this approach allows precise delivery of stimuli, it suffers from a fundamental limitation in terms of the maximum achievable spatial coverage. This constraint becomes important in studies that require stimulation of large expanses of the visual field, such as those involving the mapping of receptive fields throughout the extent of a cortical area or subcortical nucleus, or those comparing neural response properties across a wide range of eccentricities. Here we describe a simple and highly cost-effective method for the projection of computer-generated stimuli on a hemispheric screen, which combines the advantages of computerized control and wide-field (100° × 75°) delivery, without the requirement of highly specialized hardware. The description of the method includes programming techniques for the generation of stimuli in spherical coordinates and for the quantitative determination of receptive field sizes and shapes. The value of this approach is demonstrated by quantitative electrophysiological data obtained in the far peripheral representations of various cortical areas, including automated mapping of receptive field extents in cortex that underwent plasticity following lesions.

Introduction
The mosaic of orderly representations of the visual world in the cerebral cortex is one of the hallmarks of mammalian visual systems. Although detailed visuotopic maps of multiple visual areas in various species have been published, and despite the fact that functional imaging has made it possible to record visually evoked BOLD responses from the entire brain, the demand for precise visuotopic mapping using single-unit recording has not diminished. For example, controversy still surrounds the pattern of visual representation in many areas, some areas are yet to be mapped in detail, and the relationship between the visual areas of different species still needs to be resolved (for review, see Rosa & Tweedale, 2005). Furthermore, abnormal visuotopy induced by physical lesions (Calford, Schmid, & Rosa, 1999; Rosa, Tweedale, & Elston, 2000; Wandell & Smirnakis, 2009), developmental interventions (Luhmann, Greuel, & Singer, 1990; Trevelyan, Upton, Cordery, & Thompson, 2007), and genetic manipulations (Haustead et al., 2008; Larsen, Luu, Burns, & Krubitzer, 2009) all demand greater methodological sophistication. However, reviewing one of the earliest papers on visuotopic mapping (Allman & Kaas, 1971) reveals that in many respects little has changed over the intervening decades. 
Although programmability has made cathode ray tube (CRT) monitors ubiquitous in visual experiments, their small coverage area (typically up to 40°, depending on the distance to the eye) limits the size of the stimulus that can be presented, as well as the extent of the visual space that can be explored without relocating the monitor and recalibrating the stimulus. This problem is pronounced in visuotopic mapping experiments in which the entire visual hemifield needs to be stimulated, and is particularly acute when one is dealing with animal models with lateralized eyes (see Rosa & Schmid, 1994 for a review). The traditional method of visuotopic mapping, where the experimenter manually shines a spot of light onto a translucent hemisphere and draws minimal response fields based on audible electrophysiological responses (e.g., Allman & Kaas, 1971; Gattass, Gross, & Sandell, 1981; Rosa, Schmid, Krubitzer, & Pettigrew, 1993), is considerably more convenient than the use of a CRT for its complete coverage of the visual field and its spherical geometry, but it suffers from three major drawbacks. First, the determination of receptive fields is inherently subjective. Although this is usually not a serious limitation when one is trying to map areas where receptive fields are small and where neurons are highly responsive, it can become an issue when the receptive fields are very large or have undergone plastic change due to lesions in the visual pathway (compare Collins, Lyon, & Kaas, 2003; Rosa et al., 2000). Second, experimental results are manually drawn on a spherical surface, making the primary data awkward to work with in terms of digitization and quantitative analysis, such as bias-free interpolation of visuotopic contours (Sereno, McDonald, & Allman, 1994). Finally, once the receptive field is charted, it is not easy to perform additional quantitative tests on the same neuron. Switching back and forth between the hemisphere and the CRT is impractical due to the fact that relocating and recalibrating the apparatus is both time consuming and error prone. The limitations described above also apply to mapping using a tangent screen, with the additional disadvantage that the mapped receptive fields subsequently need to be converted to spherical coordinates. 
In this paper, we describe a technique for stimulus presentation that combines the virtues of both approaches by projecting computer-generated stimuli onto a hemispheric screen, which creates a large coverage space (100° × 75°) for visuotopic mapping experiments as well as for quantitative measurements of tuning curves. Compared to other projector-based techniques, such as the iDome (Bourke, 2005) and the VisionStation (Elumens, Cary, NC, USA; now defunct), the described method does not require additional optical components such as spherical mirrors or fisheye lenses and can be implemented easily, quickly, and economically. We will first describe the procedures for projector calibration and stimulus generation, and then techniques for analyzing data obtained with spherical projection. Some applications of the described setup will then be presented. A video demonstrating the projected stimuli, as well as reference source code, is included in the Supplementary material. 
Methods
Implementation
The design described in the following sections is sufficiently generic that it can be implemented using a wide variety of software and hardware. We have successfully implemented it with a large hemispheric screen (90 cm in diameter) for primate vision (Figure 1A) and, more recently, a smaller one (40 cm in diameter) for rodents (Figure 1B). The data presented in the Results section were obtained with the following setup. 
Figure 1
 
Two implementations of the stimulus projection design described in the paper. (A) A large hemisphere (90 cm in diameter) for primate vision. The dots on the hemisphere indicate intersections of lines of longitude and circles of latitude, at 10° intervals. They are also used for calibrating the projector. The projector (not shown) is positioned 2.1 m away from the hemisphere. (B) A small hemisphere (40 cm in diameter) for rodent vision. The distance between the hemisphere and the projector (arrow) is about 1 m.
Hemispheric screen: A translucent polycarbonate hemisphere, 90 cm in diameter, was used. Similar hardware can be made to order by most manufacturers of skylights. The thickness of the polycarbonate layer was 5 mm, resulting in a transparency of 14%. The inner surface of the hemisphere was coated with a thin layer of photographer's dulling spray to reduce reflection. With the aid of a custom-made device, lines of longitude and circles of latitude, at 10° intervals, were drawn on the hemisphere with a permanent marker to provide a coordinate system for receptive field locations (Figure 1A). These also served as landmarks for calibrating the projector (see below). In order to maximize the precision of the method and to avoid geometric distortions, the center of the base of the hemispheric screen needs to be brought to a position in space that corresponds to the nodal point of one of the eyes (or, in experiments requiring binocular stimulation, the midpoint between the nodal points of the eyes). In this way, the prime (or vertical) meridian (λ = 0) and the equator (φ = 0) are positioned directly in front of the animal. In the convention adopted in the present paper (Figure 3A), this means that −90° ≤ λ ≤ 0° corresponded to the right visual field and 0° ≤ λ ≤ 90° to the left visual field. Similarly, 0° ≤ φ ≤ 90° corresponded to the upper visual field and −90° ≤ φ ≤ 0° to the lower visual field. In practice, small errors in the positioning of the hemispheric screen relative to the animal's head were not important, as they could be corrected a posteriori based on the results of the experiment. For example, the precise location of the vertical meridian could be determined by mapping receptive field locations across the boundary of the primary and secondary visual areas (V1 and V2, respectively) and determining the midpoint of the zone of overlap between the representations in the two cerebral hemispheres (Fritsches & Rosa, 1996; Rosa et al., 1993). The location of the horizontal meridian could be determined by receptive field locations on the dorsal and ventral surfaces of V2 (Rosa, Sousa, & Gattass, 1988; Rosa, Fritsches, & Elston, 1997). 
Projector: We used an Optoma EP726S DLP (Digital Light Processing) projector (Optoma Technology, Fremont, CA, USA), configured to operate at 640 × 480 resolution and 85-Hz refresh rate, with geometrical correction (i.e., keystoning) turned off. The projector was attached to a custom-made mount that allowed the projector to be rotated and tilted in the calibration procedure but locked afterward. The projector was placed about 2.1 m away from the hemisphere. Because direct line-of-sight visualization of the projector lamp (2800 ANSI lumens) can become uncomfortable to human vision following prolonged exposure, we used several layers of neutral density filter (Kodak, ND 1.00) interposed between the projector's lens and the hemispheric screen, thus reducing the luminance of the projected image. In a dimly illuminated room, stimulus luminance measured on the internal surface of the hemisphere could be reliably varied between 0.31 cd/m² and 4.0 cd/m², thus allowing stimulus contrast up to 86.0%. The range of contrast achievable with this method can be adjusted according to the needs of the experiments, depending on the exact hardware configuration (including the model of projector, the degree of transparency of the hemispheric screen, and the configuration of filters interposed along the light path). In addition, a small piece of tracing paper (2° × 2°) was attached to the projection center on the hemisphere to further diffuse the bright image of the projector lamp along the line of sight. Finally, an occluder made from thick cardboard was used to block the projector's light at all times between stimulus presentations, to avoid prolonged exposure of the animal's eye to the lamp. 
Software: We used a customized version of Expo (release 1.5.0) designed by Dr. Peter Lennie and others for stimulus presentation and data acquisition. The various spherical stimuli described below were implemented as Expo routines using the Objective-C programming language and the OpenGL library. The software ran on a Power Macintosh (Apple, Cupertino, CA, USA) with dual 2.8-GHz quad-core Intel Xeon processors and 4 GB of RAM. Two ATI Radeon HD 2600 XT graphics cards were used to drive an LCD monitor (for the Expo user environment) and the projector (for stimulus presentation). Data analysis was performed by software written in Mathematica (Wolfram Research, Champaign, IL, USA). 
Electrophysiological recording: Extracellular single-unit recordings were obtained from anesthetized marmoset monkeys (male and female adults, 350–415 g) following a protocol slightly modified from Bourne and Rosa (2003). In brief, following premedication with diazepam (5 mg/kg) and atropine (0.2 mg/kg), anesthesia was induced by intramuscular injection of Alfaxan (alfaxalone 10 mg/kg). Following surgery, the animals were anesthetized and paralyzed by an intravenous infusion of sufentanil (6 μg/kg/h) and pancuronium bromide (0.1 mg/kg/h) and were artificially ventilated with a gaseous mixture of nitrous oxide and oxygen (70:30). The electrocardiogram and SpO2 level were continuously monitored. Appropriate focus and protection of the cornea from desiccation were achieved by means of contact lenses selected by streak retinoscopy. Visual stimuli were monocularly presented in a dark room to the eye contralateral to the cortical hemisphere from which the neuronal recordings were obtained. The ipsilateral eye was occluded. The data presented in the Results section were obtained from the peripheral representation of V1 (Fritsches & Rosa, 1996), the middle temporal area (MT; Rosa & Elston, 1998), as well as area prostriata—a visual association area located in the rostral calcarine sulcus (Cuénod, Casey, & MacLean, 1964; Palmer & Rosa, 2006; Rosa, Casagrande, Preuss, & Kaas, 1997). In one animal, a lesion of V1 was placed unilaterally at the age of 6 weeks, followed by a recovery period of 12 months, resulting in a “cortical scotoma.” In this case, the method described here was used to estimate the extent of this scotoma, through mapping receptive fields along the perimeter of the lesion zone (Rosa et al., 2000). The experiments were conducted in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes, and all procedures were approved by the Monash University Animal Ethics Experimentation Committee. 
Projection to the hemisphere
The spherical coordinate system is arguably the most natural choice of coordinate system for the study of vision. We describe in this section a function f (Equation 4), which allows the generation of “distorted” 2D images for geometrical shapes specified in spherical coordinates. When the images are projected to a translucent hemisphere with a video projector, the intended shapes are recreated on the spherical surface (Figure 2). 
Figure 2
 
The paper describes a technique that allows stimuli to be designed as geometrical shapes specified in spherical coordinates (λ: longitude, φ: latitude). Function f (Equation 4) transforms the spherical coordinates to image coordinates (x and y) such that when the image is projected to a hemisphere, the intended geometrical shape is produced.
Figure 3 establishes the convention that will be used for the rest of the paper. A point p on the hemisphere can be specified by its longitude (−90° ≤ λ ≤ 90°) and latitude (−90° ≤ φ ≤ 90°). For many applications, it is also convenient to use the polar coordinate system, where the hemisphere is parameterized by eccentricity (0° ≤ r ≤ 90°) and polar angle (−90° ≤ θ ≤ 270°). To translate from (λ, φ) to (r, θ): 
r = ecc(λ, φ) = cos⁻¹(cos λ · cos φ),
θ = ang(λ, φ) = sin⁻¹(sin φ / sin r),   if λ ≥ 0,
θ = ang(λ, φ) = π − sin⁻¹(sin φ / sin r),   if λ < 0.
(1)
To derive f, first consider a special case where the stimulus p is a point on the equator (Figure 4A) and the center of projection is at (λ, φ) = (0°, 0°). Let d be the radius of the hemisphere, m the distance between the hemisphere and the (idealized) projector lens, and n the distance between the projector lens and the light source. It is easy to see that to display point p, the corresponding point p′ on the projector's image plane should be 
r′ = g(r) = n · d · sin r / (n + m + d − d · cos r),
(2)
where r is the longitude of p (in radians), and r′ is the planar coordinate of p′ (in pixels). The general case where p is not restricted to the equator (Figure 4B) is only slightly more complicated if p is expressed in spherical polar coordinates p = (r, θ) and p′ in planar polar coordinates p′ = (r′, θ′): 
θ′ = θ,   r′ = g(r).
(3)
Function f:(λ, φ) → (x, y) therefore is 
x = g(ecc(λ, φ)) · cos(ang(λ, φ)),
y = g(ecc(λ, φ)) · sin(ang(λ, φ)).
(4)
 
Figure 3
 
A point p on the hemisphere can be expressed as (A) its longitude (−90° ≤ λ ≤ 90°) and its latitude (−90° ≤ φ ≤ 90°), or (B) its eccentricity (0° ≤ r ≤ 90°) and its polar angle (−90° ≤ θ ≤ 270°). The center of the two eyes is positioned at (x1, x2, x3) = (0, 0, 0), facing the direction of (0, 0, 1), which is also the point of zero longitude and latitude. The right visual field is represented by −90° ≤ λ ≤ 0° and the left visual field by 0° ≤ λ ≤ 90°. The upper visual field is represented by 0° ≤ φ ≤ 90° and the lower visual field by −90° ≤ φ ≤ 0°.
Figure 4
 
Derivation of the transformation f. (A) In the special case where p is on the equator, the transformation is Equation 2. (B) For an arbitrary point on the hemisphere, f is expressed by Equation 4.
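To make the transformation concrete, the following C sketch (written in the spirit of the reference implementation in Supplement I, but not taken from it) implements Equations 1, 2, and 4. The Projection structure, the function names, and the guard near the projection center are our own additions; the values of d, m, and n are assumed to come from the calibration procedure described in the next section.

#include <math.h>

/* Calibrated geometry of the setup (the values of d, m, and n come from the
   fitting procedure described in the next section). */
typedef struct {
    double d;   /* radius of the hemisphere                              */
    double m;   /* distance from the projector lens to the hemisphere    */
    double n;   /* distance from the light source to the projector lens  */
} Projection;

/* Equation 1: (longitude, latitude) in radians -> (eccentricity, polar angle). */
static void spherical_to_polar(double lambda, double phi, double *r, double *theta)
{
    *r = acos(cos(lambda) * cos(phi));
    if (*r < 1e-9) {                 /* the polar angle is undefined at the centre */
        *theta = 0.0;
        return;
    }
    double s = asin(sin(phi) / sin(*r));
    *theta = (lambda >= 0.0) ? s : M_PI - s;
}

/* Equation 2: eccentricity r (radians) -> radial image-plane coordinate (pixels). */
static double g(const Projection *p, double r)
{
    return p->n * p->d * sin(r) / (p->n + p->m + p->d - p->d * cos(r));
}

/* Equation 4: the mapping f from spherical coordinates to image coordinates. */
static void f(const Projection *p, double lambda, double phi, double *x, double *y)
{
    double r, theta;
    spherical_to_polar(lambda, phi, &r, &theta);
    *x = g(p, r) * cos(theta);
    *y = g(p, r) * sin(theta);
}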
Projector positioning and calibration
Although d and m can be measured directly, n is inaccessible. With a hemisphere whose lines of longitude and circles of latitude are already marked on the surface, it is more convenient to record a set of correspondences between the spherical coordinates (λ, φ) and the image coordinates (x, y), and then estimate the optimal parameters (d, m, and n) by least-squares fitting of the corresponding points to the model expressed by Equation 4. Specifically, with the projector positioned such that the horizontal line crossing the center of the image plane projects to the equator on the hemisphere and the vertical line crossing the center of the image plane projects to the prime meridian (λ = 0°) on the hemisphere, we manually moved a cursor such that it was projected to match the positions of 56 registration points on the hemisphere and recorded the (x, y) coordinates. The registration points we used were intersections of the lines of longitude and the circles of latitude (Figure 1A). Figure 5A illustrates the correspondence between the spherical coordinates and image coordinates for our setup. The best-fit values for Equation 4 were d = 94531300.0, m = 414320000.0, and n = 1989.81. The errors were smaller than 1° and were probably due to imperfections in the manufacturing of the hemisphere. 
Figure 5
 
(A) The free parameters (d, m, and n) in Equation 4 can be estimated by fitting the model to a set of registration points on the hemisphere (expressed by the spherical coordinates λ and φ) and their corresponding image coordinates (x and y). In our particular implementation, the (x, y) coordinates of a small bright square projected to match the 56 registration points (indicated by black dots in the figure) on the hemisphere were recorded. The (x, y) coordinates corresponding to lines of longitude and circles of latitude of the best-fit model are plotted as gray lines. The 640 × 480 image space covers 100° × 75° of visual space on the hemisphere. (B) In many applications, it is useful to direct the center of projection to (±30°, 0°) on the equator to provide better coverage of the far periphery.
The area of coverage of our particular setup was about 100° × 75°. Most experiments in neurophysiology present stimuli in the hemifield contralateral to the recording sites. In such situations, it is useful to aim the projector at (λ, φ) = (±30°, 0°) instead of (0°, 0°) to provide better coverage of the far periphery of the contralateral visual field (Figure 5B). The λ coordinate of the stimulus then needs to be shifted by 30° (subtracted or added, depending on the side) to compensate for the effect of the displaced projection center. The projection center can also be placed off the equator, for example at (−30°, −30°), to provide better coverage of the lower visual field (at the expense of upper field coverage). In this configuration, the 3 × 3 rotation matrix that transforms (−30°, −30°) to (0°, 0°) needs to be applied before f. 
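As an illustration of this step, the following hypothetical helper rotates a stimulus coordinate so that a displaced projection center (λ0, φ0) is carried to (0°, 0°) before f is applied. It uses the Cartesian convention of Equation 5 and the atan2-based conversion back to spherical coordinates (cf. Equation 12); the exact signs depend on the rotation convention adopted, so this should be read as a sketch rather than a drop-in routine.

/* Hypothetical helper: rotate a stimulus coordinate so that a displaced
   projection centre (lambda0, phi0) is carried to (0, 0) before f is applied.
   All angles are in radians. */
static void recenter(double lambda0, double phi0,
                     double lambda, double phi,
                     double *lambda_out, double *phi_out)
{
    /* Equation 5: spherical -> Cartesian. */
    double x1 = cos(phi) * sin(lambda);
    double x2 = sin(phi);
    double x3 = cos(phi) * cos(lambda);

    /* Rotate about the x2-axis by lambda0 (brings the centre's longitude to 0). */
    double y1 = cos(lambda0) * x1 - sin(lambda0) * x3;
    double y2 = x2;
    double y3 = sin(lambda0) * x1 + cos(lambda0) * x3;

    /* Rotate about the x1-axis by phi0 (brings the centre's latitude to 0). */
    double z1 = y1;
    double z2 = cos(phi0) * y2 - sin(phi0) * y3;
    double z3 = sin(phi0) * y2 + cos(phi0) * y3;

    /* Cartesian -> spherical (cf. Equation 12). */
    *lambda_out = atan2(z1, z3);
    *phi_out    = atan2(z2, sqrt(z1 * z1 + z3 * z3));
}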
Stimulus programming
To display a point at (λ, φ) on the hemisphere, the stimulus generation software simply has to draw a small square at the corresponding (x, y) coordinates given by Equation 4. However, connecting dots to form lines and shapes on the hemisphere is more involved than drawing straight lines in the image space, because “straight lines” on the hemisphere (geodesics) correspond to curves in (x, y) space (see Figure 5A) and therefore need to be approximated by multiple line segments. Although explicit calculation of geodesics is possible, in most applications it is easier to do the calculations by rotating lines of longitude and circles of latitude. The following provides recipes for programming some of the most basic building blocks of stimuli used in visual physiology. A reference implementation (as C source code) can be found in Supplement I. A short video demonstrating several commonly used stimuli is in Supplement III. 
Squares: Quadrangles {(λ, φ) | λ1 ≤ λ ≤ λ2, φ1 ≤ φ ≤ φ2} are the natural analogs of planar squares. To project a quadrangle on the hemisphere, note that the corresponding (x, y) image is not a rectangle but has curved sides and therefore needs to be approximated by a polygon. This can be accomplished by evenly sampling the edges of the quadrangle with 10 to 20 vertices on each side, and then projecting the vertices to the image space with f. The envelope of a receptive field can be mapped by flashing a small quadrangle at each position of an 8 × 8 or 12 × 12 grid (see Figure 7A for an example). The white noise stimulus used in reverse correlation experiments (Jones & Palmer, 1987a; Marmarelis & Marmarelis, 1978; Ohzawa, DeAngelis, & Freeman, 1996; Rust, Schwartz, Movshon, & Simoncelli, 2005) can also be constructed with quadrangles. 
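A minimal sketch of this sampling step is given below, building on the f function sketched earlier; the function name and the fixed number of samples per edge are our own choices.

/* Approximate the spherical quadrangle [l1, l2] x [p1, p2] by a polygon in
   image space: sample each edge with SAMPLES vertices and project them with f.
   xs and ys must each hold 4 * SAMPLES values. */
#define SAMPLES 16

static void quad_to_polygon(const Projection *proj,
                            double l1, double l2, double p1, double p2,
                            double *xs, double *ys)
{
    int i, k = 0;
    for (i = 0; i < SAMPLES; i++) {        /* bottom edge: phi = p1            */
        double t = (double)i / (SAMPLES - 1);
        f(proj, l1 + t * (l2 - l1), p1, &xs[k], &ys[k]); k++;
    }
    for (i = 0; i < SAMPLES; i++) {        /* right edge: lambda = l2          */
        double t = (double)i / (SAMPLES - 1);
        f(proj, l2, p1 + t * (p2 - p1), &xs[k], &ys[k]); k++;
    }
    for (i = 0; i < SAMPLES; i++) {        /* top edge: phi = p2, reversed     */
        double t = (double)i / (SAMPLES - 1);
        f(proj, l2 - t * (l2 - l1), p2, &xs[k], &ys[k]); k++;
    }
    for (i = 0; i < SAMPLES; i++) {        /* left edge: lambda = l1, reversed */
        double t = (double)i / (SAMPLES - 1);
        f(proj, l1, p2 - t * (p2 - p1), &xs[k], &ys[k]); k++;
    }
}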
Bars: A bar is the most basic geometrical shape for stimulus construction. To draw an elongated bar of arbitrary orientation at an arbitrary point on the hemisphere, first sample a polygon representing an elongated quadrangle centered on (λ, φ) = (0°, 0°), i.e., {(λ, φ) | −(1/2)L ≤ λ ≤ (1/2)L, −(1/2)W ≤ φ ≤ (1/2)W}, where L and W are the length and width; rotate the coordinates in 3D around the x3-axis to the desired orientation; translate the rotated coordinates on the sphere to the desired location; and then calculate the corresponding image coordinates with f. 
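The following sketch illustrates one way to carry out this sequence of rotations for a single vertex, again building on the earlier f sketch. The helper name and the particular factorization of the “translation” rotation (about the x1-axis, then the x2-axis) are our own choices and depend on the angle conventions of Figure 3.

/* Place one vertex of a bar: start from a vertex (lambda, phi) of the
   canonical quadrangle centred on (0, 0), rotate it about the x3-axis (the
   line of sight) by the orientation `ori`, carry (0, 0) to the bar centre
   (lc, pc), and project with f. */
static void place_bar_vertex(const Projection *proj,
                             double lambda, double phi,   /* canonical vertex  */
                             double ori,                   /* orientation (rad) */
                             double lc, double pc,         /* bar centre        */
                             double *x, double *y)
{
    /* Equation 5: spherical -> Cartesian. */
    double x1 = cos(phi) * sin(lambda);
    double x2 = sin(phi);
    double x3 = cos(phi) * cos(lambda);

    /* Rotate about the x3-axis to the desired orientation. */
    double r1 = cos(ori) * x1 - sin(ori) * x2;
    double r2 = sin(ori) * x1 + cos(ori) * x2;
    double r3 = x3;

    /* Carry (0, 0) to (lc, pc): rotate about x1 by -pc, then about x2 by -lc. */
    double s1 = r1;
    double s2 =  cos(pc) * r2 + sin(pc) * r3;
    double s3 = -sin(pc) * r2 + cos(pc) * r3;
    double t1 =  cos(lc) * s1 + sin(lc) * s3;
    double t2 =  s2;
    double t3 = -sin(lc) * s1 + cos(lc) * s3;

    /* Back to spherical coordinates (cf. Equation 12), then to image space (Equation 4). */
    f(proj, atan2(t1, t3), atan2(t2, sqrt(t1 * t1 + t3 * t3)), x, y);
}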
Moving bars: Direction tuning of a neuron is often measured by its response to a moving bar (see Figure 7D for an example). The method described above can be used to generate a moving bar on the hemisphere. First, generate a sequence of vertical bars moving from left to right centered around (λ, φ) = (0°, 0°) by shifting the longitude. For each bar in the sequence, rotate the vertices to the desired direction and location. 
Gratings: Drifting sinusoidal gratings are frequently used to characterize the spatiotemporal filtering properties of visual neurons (see Figures 7E and 7F for examples). 
Approximations of drifting sinusoidal gratings can be displayed on the hemisphere by “warping” a rectangular patch of grating texture using f. First generate a 2D sinusoidal grating as an OpenGL texture, divide the texture into a 10 × 10 (or finer) mosaic, and then project the squares individually to the framebuffer using OpenGL's texture mapping mechanism, by converting the four vertices of each square to the image space using f. Caution must be used in interpreting data obtained with this stimulus. Although the generated images provide a good approximation of sinusoidal gratings and can supply useful information about the tuning properties of visual neurons, they are not strictly 2D Fourier components, due to errors in image warping, variation in the focus plane, and variation in luminance. For applications where accurate Fourier components are critical, CRTs are more appropriate. 
Dot fields: Moving dot fields are commonly used to characterize motion selectivity (e.g., Albright, 1984; Newsome & Paré, 1988). Large-field dot patterns can also be used to simulate the global structure of motion in the eye of a moving observer—i.e., optical flow fields (for example, Duffy & Wurtz, 1995). Expanding or contracting dot fields displayed on the hemisphere create a more convincing sense of self-motion than those generated on a CRT. To create uniformly distributed random dots on the hemisphere (Weisstein, 2010), first sample u from the uniform distribution U(−(1/2)π, (1/2)π) and v from U(−1, 1), and then transform them by (λ, φ) = (u, cos⁻¹ v − (1/2)π). The intuitive method of sampling (λ, φ) from uniform distributions is undesirable because the unit area on the sphere is a function of φ, which leads to a higher concentration of dots near the two poles. The evenly sampled points should be expressed in polar coordinates (Equation 1) so that they can be easily rotated, expanded, or contracted. 
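A minimal sketch of this sampling scheme (the standard library's rand() is used purely for brevity; a better random number generator would be preferable in practice):

#include <stdlib.h>   /* rand, RAND_MAX */

/* One point uniformly distributed on the sphere: u ~ U(-pi/2, pi/2),
   v ~ U(-1, 1), then (lambda, phi) = (u, acos(v) - pi/2). */
static void uniform_dot(double *lambda, double *phi)
{
    double u = ((double)rand() / RAND_MAX - 0.5) * M_PI;   /* U(-pi/2, pi/2) */
    double v = ((double)rand() / RAND_MAX) * 2.0 - 1.0;    /* U(-1, 1)       */
    *lambda = u;
    *phi    = acos(v) - M_PI / 2.0;
}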
For our purpose, a straightforward implementation of the computation described above without any specialized optimization has been sufficient for a Power Macintosh with 2.8-GHz Intel processors to drive full-field optic flow stimuli without dropping frames. Advanced programmers can take advantage of the texture mapping facilities of the graphics card to move the coordinate transformation computation to the graphics processors. 
Data analysis: Fitting oval-shaped receptive fields
The techniques for analyzing direction and spatial and temporal frequency tuning curves have been described extensively in the literature. This section therefore focuses on the analysis of receptive field maps. The receptive fields of visual neurons can be mapped by flashing a small bright square in an 8 × 8 or 12 × 12 grid. Studies using CRTs for stimulus presentation typically characterize the shape of receptive fields with bivariate Gaussian functions (Jones & Palmer, 1987b). However, since the mapping stimulus in this paper is specified in spherical coordinates, density functions of spherical distributions (Fisher, Lewis, & Embleton, 1987) are needed to reflect the geometry of the sphere. The use of spherical statistics is particularly important for the large receptive fields typically found in peripheral vision (see Figure 9), because the geometry of the space covered by the stimulus is no longer Cartesian. The following describes the principle of analyzing receptive field maps generated using a hemisphere. A reference implementation of the procedure is included in Supplement II
We use the 5-parameter Fisher–Bingham distribution (FB5, also known as the Kent distribution) to model the envelope of oval-shaped receptive fields. The FB5 distribution is an analog of the Gaussian distribution on the sphere (Kent, 1982). FB5 assumes a particularly simple form with only two parameters (κ and β) when it is centered at the north pole (x1, x2, x3) = (0, 1, 0) with its major axis parallel to the x1-axis and its minor axis parallel to the x3-axis. This configuration is called the standard reference frame. Let (λ, φ) be a point on the hemisphere, which is also expressed as (x1, x2, x3) in Cartesian coordinates: 
(x1, x2, x3) = (cos φ · sin λ, sin φ, cos φ · cos λ).
(5)
The density function of FB5 at the standard reference frame is 
fb*(λ, φ) = fb*(x1, x2, x3) = c · exp(κ·x2 + β·(x1² − x3²)).
(6)
The concentration parameter κ ≥ 0 determines the size of the envelope (the larger κ, the smaller the envelope), the ovalness parameter 0 ≤ β ≤ (1/2)κ determines the aspect ratio (β = 0 is perfectly circular), and the constant c normalizes the function to a probability density function. For our purpose, c can be set to 1/exp(κ). Figure 6 illustrates the fb* function for two different parameter combinations. 
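For reference, the density of Equation 6 with c = 1/exp(κ) can be evaluated directly from the spherical coordinates, as in the following sketch (the function name is ours):

/* Equation 6 with c = 1/exp(kappa): the FB5 density in the standard reference
   frame, evaluated at a point given in spherical coordinates (radians).  The
   exponent is written as kappa*(x2 - 1) + ... to keep large kappa well behaved. */
static double fb5_standard(double lambda, double phi, double kappa, double beta)
{
    double x1 = cos(phi) * sin(lambda);   /* Equation 5 */
    double x2 = sin(phi);
    double x3 = cos(phi) * cos(lambda);

    return exp(kappa * (x2 - 1.0) + beta * (x1 * x1 - x3 * x3));
}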
Figure 6
 
Two examples of the Fisher–Bingham distribution in the standard reference frame (Equation 6). Parameters: (A) κ = 5, β = 0; (B) κ = 26, β = 10. Gray level represents the magnitude of the function. White is 1; black is 0.
To center the FB5 distribution at any point p on the sphere, a rotation matrix Γ is introduced: 
fb(λ, φ) = fb(x1, x2, x3) = fb*(x1′, x2′, x3′),
(7)
where 
(x1′, x2′, x3′)ᵀ = Γ · (x1, x2, x3)ᵀ.
(8)
Γ is an orthogonal matrix that rotates p to the standard reference frame. A 3 × 3 rotation matrix has 3 degrees of freedom, so the FB5 distribution has 5 free parameters in total (excluding the scaling factor c). 
The firing rate of a neuron stimulated by a spherical quadrangle D = {(λ, φ) | λ1 ≤ λ ≤ λ2, φ1 ≤ φ ≤ φ2}, in principle, should be modeled by the integral of the Fisher–Bingham distribution over D: 
R_D = ∬_D fb(λ, φ) · cos φ dλ dφ.
(9)
However, the evaluation of the integral can be computationally demanding in terms of numerical optimization. To simplify the problem, we model the firing rate as the product of fb at the center of D and the area of D: 
R_D = c · fb(λ*, φ*) · A(D),
(10)
where (λ*, φ*) is the center of D, and A(D) is the surface area of D: 
A(D) = (λ2 − λ1) · (sin φ2 − sin φ1).
(11)
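The simplified response model of Equations 10 and 11 then reduces to a few lines. In the sketch below we assume, for brevity, that the center of the quadrangle has already been rotated into the standard reference frame by Γ, so the fb5_standard function from the previous sketch can stand in for the rotated density fb of Equation 7; a full implementation would apply Γ explicitly.

/* Equations 10 and 11: the response to the spherical quadrangle
   D = [l1, l2] x [p1, p2] is modelled as the density at the centre of D times
   the surface area of D.  (lc_rot, pc_rot) is the centre of D after rotation
   into the standard reference frame. */
static double predicted_rate(double lc_rot, double pc_rot,
                             double l1, double l2, double p1, double p2,
                             double c, double kappa, double beta)
{
    double area = (l2 - l1) * (sin(p2) - sin(p1));                /* Equation 11 */
    return c * fb5_standard(lc_rot, pc_rot, kappa, beta) * area;  /* Equation 10 */
}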
The Γ matrix can be estimated algebraically (Kent, 1982). In brief, let E be the spike-triggering ensemble of a neuron, defined as the centers of the spherical quadrangles that triggered a spike. Multiple occurrences of the same coordinates are allowed in E to reflect the number of spikes triggered by the same stimulus. The center of the FB5 distribution is estimated by m̄, the 3D vector average of E. The Cartesian coordinates (x1, x2, x3) of m̄ can be converted to (λ, φ) by 
λ = atan2(x1, x3),   φ = atan2(x2, √(x1² + x3²)),
(12)
where atan2() is the two-parameter inverse tangent function defined by the math library of the C programming language. 
Let H be a 3 × 3 rotation matrix that rotates m̄ to the north pole (x1, x2, x3) = (0, 1, 0), and let B = H · S · H′, where S is the correlation matrix of E. B is then the correlation matrix of E rotated to the north pole. We further rotate E so that the major axis is aligned with the x1-axis, by finding the eigenvectors of the correlation matrix on the x1x3-plane. Let 
B = [b11 b12 b13; b21 b22 b23; b31 b32 b33],
(13)
and 
B13 = [b11 b13; b31 b33].
(14)
Let the eigenvectors of B13 be (e11, e12) and (e21, e22), and let 
K = [e11 0 e12; 0 1 0; e21 0 e22].
(15)
Matrix Γ is then 
Γ = K · H.
(16)
 
In practice, this method is sensitive to outliers and therefore can be influenced by irregular bursts of spontaneous firing. The problem is more pronounced when the center of the receptive field is at the edge of the mapping grid—a condition that cannot be avoided for large receptive fields at the edge of the peripheral visual field. To address this issue, the spontaneous firing rate is first subtracted from the firing rate of each condition. The spontaneous firing rate is estimated by averaging the firing rates of conditions that are outside an initial estimate of the receptive field. The spike trains in these conditions are unrelated to the onset and offset of the stimuli and therefore do not have consistent patterns across repeated trials. Event synchronization (Quiroga, Kreuz, & Grassberger, 2002) is a simple and robust measure for quantifying the level of quasi-simultaneous events in a pair of spike trains. We use the average value of pairwise synchronization across all pairs of repeated trials to identify stimulus conditions that fall outside the receptive field. Figure 7A illustrates this procedure. The shaded squares in the 8 × 8 grid represent conditions in which event synchronization was higher than the (empirically chosen) threshold of 0.2. 
Figure 7
 
The receptive field is first mapped with a bright square stimulus (2.5° × 2.5°) in an 8 × 8 grid, centered at (72°, −12°). The stimulus was on for 0.2 s, off for 0.2 s, and was repeated 10 times. (A) The raster plot of the response of a representative neuron in V1. The shaded cells in the grid represent an initial estimation of the receptive field, based on the event synchronization (threshold = 0.2). The response profile was then fitted to a response model based on the Fisher–Bingham distribution (Equation 10). The coefficient of determination (r²) of the fitted model was 0.93, with best-fit parameters κ = 515.1 and β = 88.3. The contour of the best-fit model is plotted in (B) and (C). The receptive field center, eccentricity, width, and length can be calculated from the fitted model. We also projected a moving bar stimulus onto the hemisphere to measure the direction tuning curve of the neuron, as illustrated in (D). The black line connects mean spike rates at each condition, and the error bars represent standard error of the mean. The tuning curve was fitted to the von Mises function (Swindale, 1998), represented by the green line. Similarly, drifting sinusoidal gratings were used to estimate the (E) spatial frequency and (F) temporal frequency tuning curves. The responses were fitted to log-transformed skewed Gaussian functions (Lui, Bourne, & Rosa, 2007). The estimated tuning parameters are compatible with published data collected with conventional CRT stimulation (Yu et al., 2010).
Once the Γ matrix is estimated, the other parameters (c, κ, and β) can be estimated by nonlinear optimization procedures, such as the Levenberg–Marquardt algorithm (Press, Teukolsky, Vetterling, & Flannery, 2007). The quality of model fitting can be evaluated by the coefficient of determination (r²). 
The length and width of the receptive field can be estimated from the fitted function in the standard reference frame. The length of the receptive field is estimated as the spatial extent (in degrees) along the x1-axis over which the fitted function exceeds 20% of its maximum. The width is similarly estimated along the x3-axis. 
Data analysis: Fitting comet-shaped receptive fields
The oval-shaped contours of the FB5 family of distributions generally characterize the envelopes of V1 receptive fields well (Jones & Palmer, 1987b). However, receptive fields in extrastriate areas are not necessarily oval-shaped. For example, “teardrop-shaped” or “comet-shaped” receptive fields have been reported (Maguire & Baizer, 1984; Pigarev, Nothdurft, & Kastner, 2002; see also Figures 9C–9D). Although some spherical distributions (Wood, 1988) permit skewed contours, they are in practice difficult to work with. Instead, we propose the following approximation procedure: The spike-triggering ensemble is first rotated to the north pole with Equation 16, and then projected to the 2D plane using the Lambert projection (Equation 20, switching x2 and x3). Firing rate is modeled as the product of the spherical area of the stimulus (Equation 11) and g(u*, v*), where (u*, v*) is the Lambert-projected coordinate of the center of the quadrangle. The bivariate distribution g(u, v), defined on a disk in the Cartesian plane {(u, v) | u² + v² ≤ 2}, is the product of a univariate skewed normal distribution (Azzalini, 1985) on the major axis (u) and a univariate normal distribution on the minor axis (v): 
g(u, v) = gu(u) · gv(v),
(17)
 
gu(u) = (2/ωu) · N((u − ζ)/ωu) · Φ(α · (u − ζ)/ωu),
(18)
 
gv(v) = N(v/ωv),
(19)
where N is the probability density function of the standard normal distribution and Φ is the cumulative distribution function of the standard normal distribution. Parameters ωu > 0 and ωv > 0 determine the size of the envelope and its aspect ratio, ζ shifts the envelope along the major axis, and α determines the skewness. The receptive field center is estimated by finding the maximum of the fitted distribution, which is then transformed to a spherical point by inverting the Lambert projection. The width and length of the receptive field can be similarly estimated. 
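A sketch of the envelope of Equations 17–19, using the C standard library's erf() for the normal cumulative distribution function (the helper names are ours):

/* Equations 17-19: the comet-shaped envelope, modelled as the product of a
   skewed normal density (Azzalini, 1985) along the major axis u and a normal
   density along the minor axis v. */
static double npdf(double x) { return exp(-0.5 * x * x) / sqrt(2.0 * M_PI); }
static double ncdf(double x) { return 0.5 * (1.0 + erf(x / sqrt(2.0))); }

static double comet_envelope(double u, double v,
                             double omega_u, double omega_v,
                             double zeta, double alpha)
{
    double gu = (2.0 / omega_u) * npdf((u - zeta) / omega_u)
                                * ncdf(alpha * (u - zeta) / omega_u);   /* Eq. 18 */
    double gv = npdf(v / omega_v);                                      /* Eq. 19 */
    return gu * gv;                                                     /* Eq. 17 */
}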
Visualizing receptive field maps
Contour plots of receptive field maps projected to the 3D hemisphere (Figures 7C and 9C) are created by first generating a 2D contour plot in (λ, φ) with a conventional plotting program (in our case, ContourPlot[] and ListContourPlot[] in Mathematica), and then transforming the coordinates of the vertices of the contour lines into 3D using Equation 5. This form of representation is most useful for visualizing very large receptive fields (Figure 9C). 
In many applications, it is more convenient to display spherical data on a 2D plane (Figures 8, 9B, and 9D). The area-preserving Lambert projection (Maling, 1973) can be used for this purpose. The Lambert projection (λ, φ) → (u, v) is 
u = x1 · √(2/(1 + x3)),   v = x2 · √(2/(1 + x3)),
(20)
where (x1, x2, x3) are the Cartesian coordinates of (λ, φ) (Equation 5). 
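Equation 20 is straightforward to implement; a minimal sketch (function name ours):

/* Equation 20: the area-preserving Lambert projection of a spherical point
   (lambda, phi), given in radians, onto the plane. */
static void lambert(double lambda, double phi, double *u, double *v)
{
    double x1 = cos(phi) * sin(lambda);       /* Equation 5 */
    double x2 = sin(phi);
    double x3 = cos(phi) * cos(lambda);
    double s  = sqrt(2.0 / (1.0 + x3));

    *u = x1 * s;
    *v = x2 * s;
}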
Figure 8
 
Results from an experiment that mapped the spatial extent of the scotoma induced by cortical lesion (Rosa et al., 2000). Fifty-one receptive fields recorded from 11 penetrations in V1 are plotted. The ellipses represent the 80% maximal magnitude contours of the fitted Fisher–Bingham functions. Lambert projection (Equation 20) was used to project the spherical ellipses to a 2D plane. Tuning parameters such as direction tuning bandwidth and response latency were measured for each neuron in the sample.
Figure 9
 
Two examples of receptive field maps obtained in extrastriate areas. (A, B) The receptive field of a neuron recorded in area MT, in the same V1-lesioned animal whose scotoma is plotted in Figure 8. The spike trains triggered by the 8 × 8 flashing square (5° × 5° in size, 0.2 s ON, 0.2 s OFF) are plotted in (A), where color represents mean spike rate. The mean spike rates associated with the spherical coordinate of the center of the flashing square are also displayed as a contour plot in (B), using Lambert projection (Equation 20). Superimposed in (B) is the scotoma estimated from data plotted in Figure 8 (dashed red contour). Interestingly, the V1 lesion seems to have torn a hole in one single MT receptive field. (C, D) The receptive field of one neuron recorded in the prostriata, a small cortical area located at the tip of the calcarine sulcus. The receptive field envelope is very large and teardrop-shaped (C). The envelope, fitted to Equation 17, is plotted in (D).
Results
The Methods section describes a simple design for projecting computer-generated stimuli onto a hemisphere, as well as statistical techniques for analyzing the geometry of receptive field maps. For over a year, we have been using the described setup in visuotopic studies of a number of visual areas in the marmoset monkey (Callithrix jacchus). Representative results from one such experiment, recorded from a neuron in the far peripheral representation of V1, are illustrated in Figure 7. The extent of the receptive field was first mapped using a bright square (2.5° × 2.5°) in an 8 × 8 grid centered on (72°, −12°). The responses (raster plot shown in Figure 7A) were fitted to the response model given in Equation 10. The best-fit function (κ = 515.1, β = 88.3) is plotted in Figures 7B and 7C. The center of the receptive field was estimated as (73.6°, −15.3°), which corresponds to 74.2° in eccentricity (Equation 1). The width and length of the receptive field were also estimated automatically from the best-fit function. 
The techniques described in the Methods section for projecting moving bars and drifting gratings on the hemisphere make it possible to quantify the tuning properties of neurons in addition to the receptive field map. Figures 7D–7F illustrate the direction tuning curve, the spatial frequency tuning curve, and the temporal frequency tuning curve of the same neuron, measured with projected stimuli. In mapping experiments, the tuning properties of sampled neurons are rarely quantitatively documented, primarily because mapping is difficult without a hemisphere, whereas the stimuli for measuring tuning curves normally require a CRT monitor. As a consequence, it has been difficult to investigate how tuning properties vary with receptive field location on a large scale. In addition, in studies of cortical map reorganization where normal visuotopy is artificially disrupted, it is highly valuable to correlate receptive field location with tuning properties (Chino, Smith, Kaas, Sasaki, & Cheng, 1995; Giannikopoulos & Eysel, 2006; Rosa et al., 2000). By projecting stimuli onto the hemisphere, the convenience of the hemisphere (large coverage, spherical geometry) does not have to be in conflict with the advantages of the CRT (computer-controlled stimuli and data acquisition). 
In a study of cortical map plasticity, we used spherically projected stimuli to estimate the spatial extent of the scotoma induced by a lesion placed in V1 (the preparation is similar to that described in Rosa et al., 2000). Figure 8 illustrates one such map, plotting 51 receptive fields recorded from 11 penetrations along the edge of the spared visual cortex. Tuning parameters, such as direction tuning bandwidth and response latency, were measured for each neuron in the sample. 
Although the receptive fields of early visual areas can usually be delineated by listening to the spikes triggered by a manually operated light spot or slit, the practice of “handmapping” is inherently subjective. The reliability of individual maps cannot be readily assessed, and it is also difficult to estimate the size and aspect ratio of the receptive field with consistent criteria. Furthermore, retinal and cortical lesions can disrupt the balance of excitation and inhibition in the neural network, which in turn can lead to receptive fields with diffuse boundaries, unusually large sizes, and irregular/disjointed shapes (Gilbert & Wiesel, 1992; Heinen & Skavenski, 1991; Kaas et al., 1990; Schmid, Rosa, & Calford, 1995), making them difficult to map manually. Using repeated presentation of a bright square in an 8 × 8 or 12 × 12 grid, we documented some receptive fields whose unusual shapes were not apparent in initial handmapping. Figure 9A illustrates one such example, showing the receptive field map of a neuron recorded in area MT of the same animal whose lesion-induced scotoma is depicted in Figure 8. The receptive field appears to have a curved profile, with a “gap” in the lower right corner. When the scotoma is superimposed on the map (Figure 9B), it becomes clear that the gap in the MT receptive field is aligned with the scotoma. Detailed maps such as this are difficult to achieve with handmapping alone. 
The visual space explored by the mapping stimulus in Figure 9A was 40° × 40°, which is about the largest spatial extent that a CRT can cover in a typical physiological recording setting. However, receptive fields in extrastriate areas can exceed that size. Receptive field size in the medial superior temporal area (MST), for example, can be as large as 80° in diameter (Desimone & Ungerleider, 1986; Rosa & Elston, 1998). The sheer size of the receptive fields makes mapping using a CRT difficult. Although receptive field boundaries can sometimes be drawn manually on the hemisphere, copying down long contours on a spherical surface and recreating them on a 2D plane is tedious and therefore is typically executed with rough approximations. Consequently, quantitative descriptions of the geometry of large receptive fields are rare (see, however, Motter & Mountcastle, 1981; Steinmetz, Motter, Duffy, & Mountcastle, 1987). Spherical projection addresses these problems. Figures 9C–9D illustrate the receptive field map of a neuron recorded in area prostriata (Cuénod et al., 1964; Rosa, Casagrande et al., 1997), a small cortical area located at the rostral tip of the calcarine sulcus. The width of the receptive field is estimated to be more than 50°. Furthermore, the receptive field is teardrop-shaped with the most sensitive region centered near the edge of the visual field. It is difficult to appreciate the size and the structure of a large receptive field such as this without the use of a computer-generated stimulus projected onto a hemisphere. 
Discussion
We have described a simple and economical system to produce spherical stimuli. Although we have only used this technique in electrophysiological experiments, displays based on a similar principle are also applicable to psychophysical experiments. The ubiquitous use of CRTs has limited the size of stimulus and the visual space that can be explored. As a consequence, the function of peripheral vision and the interaction of visual locations across large distances remain poorly understood. The spherical coordinate system is arguably the most natural choice for visual sciences, and it is surprising that the use of spherical displays is not more commonplace. Experiments using projected large-field stimuli are already revealing surprises about the neglected peripheral vision (e.g., Fujimoto & Yagi, 2007; Thorpe, Gegenfurtner, Fabre-Thorpe, & Bülthoff, 2001). 
The projection technique was inspired by the elegant panoramic display system iDome described by Bourke (2005), which was originally created as a multimedia platform for education and entertainment but has since found applications in neurophysiology (Harvey, Collman, Dombeck, & Tank, 2009). The iDome, based on projection to a spherical mirror, works best with large hemispheres (2 to 3 m in diameter) and therefore cannot be easily accommodated in a typical physiology laboratory with limited space and budget. Although it is possible to scale down the iDome, we found that the optimal placement of the spherical mirror conflicted with the stereotaxic frame and surgical equipment. In contrast, the design described in this paper can be adopted without modifying the physiology setup, because the projector is placed far away from the equipment. This has the additional advantage of preventing the electronic noise and heat produced by the projector from interfering with the physiological sensors. If laboratory space is ample, the spherical mirror approach should be considered because it offers full coverage of visual space. However, a higher level of expertise in software development is required to program the stimulus. The data analysis techniques and the methods for drawing geometrical shapes described in this paper can be used without modification if the spherical mirror design is used. Wide-field visual stimulation can also be achieved by utilizing head-mounted displays. Specialized optical goggles, for example, have been used in fMRI experiments (e.g., Hoffman, Richards, Coda, Richards, & Sharar, 2003). Low-cost, consumer-grade systems are also becoming available, which potentially can be highly valuable for human psychophysics. However, it should be noted that consumer-grade goggles typically do not stimulate the far peripheral visual field and cannot be used directly on animal models. 
The described projection technique has two limitations. First, the visual space that can be stimulated is about 100° × 75°, which is 60.9% of the visual space represented by one hemisphere of the visual cortex. Although 100° of longitudinal coverage is sufficient for the majority of studies in visual neuroscience, 75° of latitudinal coverage leaves parts of the upper and lower visual spaces unreachable. In visuotopic mapping experiments, this region can still be explored by the traditional “handmapping” technique and therefore does not limit the design too severely. This limitation could be circumvented through the use of two projectors, one aimed at the upper visual field and the other directed toward the lower visual field. With the availability of low-cost miniature projectors, the dual-projector approach is becoming very practical. The second limitation is that the spherically projected image is slightly out of focus at the edges (this effect is more noticeable for eccentricities >60°). Because high-resolution stimuli are not critical in mapping experiments, this is not a severely limiting factor. Moreover, in our laboratory conditions at least, the poor spatial resolution in the peripheral visual field means that the slight blur is not perceptible to humans while they maintain fixation at the center of the hemisphere. Projectors using scanning lasers for image formation do not use lenses and therefore do not suffer from this problem. Such miniature laser projectors are becoming commercially available at low cost, and in future implementations, they could be used in place of DLP projectors. 
A final limitation of the present projection technique comes from the direct visualization of the projector's lamp, which can create artifactual luminance transients near the fovea, when the projector is aimed directly at the center of the hemisphere. To circumvent this problem in situations where high precision near the fovea is necessary, a simple and effective solution is to move the projector along an arc parallel to the horizontal meridian, well into the ipsilateral visual field. For example, a placement of the projection center 30° into the ipsilateral hemifield will allow accurate stimulation of the fovea, while at the same time enabling exploration of the peripheral contralateral hemifield up to 80°. Alternatively, in studies where monocular stimulation is sufficient, the projector can be moved so that the lamp's line of sight is directed to the coordinates of the center of the optic disk (approximately 15° along the horizontal meridian of the contralateral hemifield, in most species of simian primate). 
Additional challenges in data analysis are associated with the use of spherically defined stimuli. The local geometry of a small patch on the surface of a sphere is approximately Cartesian, meaning that the small receptive fields recorded in the central representations of early visual areas can be treated as if they were mapped by a regular CRT. However, as the area to be stimulated becomes larger, the Cartesian assumption deviates from the true geometry. Alternatively, spherical statistics (Batschelet, 1981; Fisher et al., 1987) can be employed, which provide a natural and elegant framework for describing spherically defined data. As described in the Methods section, the oval-shaped receptive fields most commonly found in early visual areas can be modeled by the five-parameter Fisher–Bingham distribution, a close analog of the bivariate Gaussian distribution. The computation is only slightly more complicated than for the bivariate Gaussian. In addition, previous studies in extrastriate cortex have reported asymmetrical receptive fields, such as teardrop-shaped receptive fields (Maguire & Baizer, 1984; Pigarev et al., 2002). For these situations, we also propose an approximation procedure that provides a good description of the receptive field size and shape. Plastic changes of the visual representations in the cortex, induced by lesions, genetic manipulations, or even normal developmental processes, can lead to neurons that respond to stimuli falling on two or more isolated regions of visual space (Gilbert & Wiesel, 1992; Haustead et al., 2008; Heinen & Skavenski, 1991; Kaas et al., 1990; Luhmann et al., 1990; Rosa et al., 2000; Schmid et al., 1995). To quantify this type of receptive field, the disjointed regions can be mapped and analyzed individually. In cases where these regions are spatially close, an algorithm for fitting multiple Fisher–Bingham distributions can be used (Peel, Whiten, & McLachlan, 2001). In summary, the methods we describe in the present paper have the potential to provide a much higher level of statistical rigor to studies where asymmetrical or multipeaked receptive field configurations are likely to exist and, in this way, point the way to the resolution of current controversies surrounding the plasticity of cortical representations (Wandell & Smirnakis, 2009). 
Supplementary Materials
Supplementary PDF 
Supplementary Movie 
Supplementary File 
Acknowledgments
The authors would like to acknowledge the contribution of Rowan Tweedale in correcting the text for style and grammar. This study was funded by research grants from the Australian Research Council (DP0878965, SRI1000006) and the National Health and Medical Research Council (491022). Equipment items purchased with funds provided by the Clive and Vera Ramaciotti Foundation and by the ANZ Charitable Trust were vital for the completion of this project. We would also like to thank Janssen-Cilag Pty Limited for the donation of sufentanil citrate, which made these experiments possible. 
Commercial relationships: none. 
Corresponding author: Dr. Hsin-Hao Yu. 
Email: hhyu00@gmail.com. 
Address: Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC 3009, Australia. 
References
Albright T. D. (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. Journal of Neurophysiology, 52, 1106–1130.
Allman J. M. Kaas J. H. (1971). A representation of the visual field in the caudal third of the middle temporal gyrus of the owl monkey (Aotus trivirgatus). Brain Research, 31, 85–105.
Azzalini A. (1985). A class of distributions which includes the normal ones. Scandinavian Journal of Statistics, 12, 171–178.
Batschelet E. (1981). Circular statistics in biology. London: Academic Press.
Bourke P. (2005). Using a spherical mirror for projection into immersive environments (Mirrordome). In Spencer S. N. (Ed.), Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia (pp. 281–284). Dunedin, New Zealand: ACM Press.
Bourne J. A. Rosa M. G. P. (2003). Preparation for the in vivo recording of neuronal responses in the visual cortex of anaesthetised marmosets (Callithrix jacchus). Brain Research Protocols, 11, 168–177.
Calford M. B. Schmid L. M. Rosa M. G. P. (1999). Monocular focal retinal lesions induce short-term topographic plasticity in adult cat visual cortex. Proceedings of the Royal Society of London B: Biological Sciences, 266, 499–507.
Chino Y. M. Smith E. L. Kaas J. H. Sasaki Y. Cheng H. (1995). Receptive-field properties of deafferentated visual cortical neurons after topographic map reorganization in adult cats. Journal of Neuroscience, 15, 2417–2433.
Collins C. E. Lyon D. C. Kaas J. H. (2003). Responses of neurons in the middle temporal visual area after long-standing lesions of the primary visual cortex in adult New World monkeys. Journal of Neuroscience, 15, 2251–2264.
Cuénod M. Casey K. L. MacLean P. D. (1964). Unit analysis of visual input to posterior limbic cortex. I. Photic stimulation. Journal of Neurophysiology, 28, 1101–1171.
Desimone R. Ungerleider L. G. (1986). Multiple visual areas in the caudal superior temporal sulcus of the macaque. Journal of Comparative Neurology, 48, 164–189.
Duffy C. J. Wurtz R. H. (1995). Response of monkey MST neurons to optic flow stimuli with shifted centers of motion. Journal of Neuroscience, 15, 5192–5208.
Fisher N. I. Lewis T. Embleton B. J. J. (1987). Statistical analysis of spherical data. Cambridge, UK: Cambridge University Press.
Fritsches K. A. Rosa M. G. P. (1996). Visuotopic organization of striate cortex in marmoset monkey (Callithrix jacchus). Journal of Comparative Neurology, 372, 264–282.
Fujimoto K. Yagi A. (2007). Backscroll illusion in far peripheral vision. Journal of Vision, 7(8):16, 1–7, http://www.journalofvision.org/content/7/8/16, doi:10.1167/7.8.16.
Gattass R. Gross C. G. Sandell J. H. (1981). Visual topography of V2 in the macaque. Journal of Comparative Neurology, 201, 519–539.
Giannikopoulos D. V. Eysel U. T. (2006). Dynamics and specificity of cortical map reorganization after retinal lesions. Proceedings of the National Academy of Sciences, 103, 10805–10810.
Gilbert C. D. Wiesel T. N. (1992). Receptive field dynamics in adult primary visual cortex. Nature, 356, 150–152.
Harvey C. D. Collman F. Dombeck D. A. Tank D. W. (2009). Intracellular dynamics of hippocampal place cells during virtual navigation. Nature, 461, 941–949.
Haustead D. Lukehurst S. S. Clutton G. T. Bartlett C. A. Dunlop S. A. Arresse C. A. et al. (2008). Functional topography and integration of contralateral and ipsilateral retinocollicular projections of Ephrin-A−/− mice. Journal of Neuroscience, 28, 7376–7386.
Heinen S. J. Skavenski A. A. (1991). Recovery of visual responses in foveal V1 neurons following bilateral foveal lesions in adult monkey. Experimental Brain Research, 83, 670–674.
Hoffman H. G. Richards T. Coda B. Richards A. Sharar S. R. (2003). The illusion of presence in immersive virtual reality during an fMRI brain scan. Cyberpsychology & Behavior, 6, 127–131.
Jones J. P. Palmer L. A. (1987a). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1233–1258.
Jones J. P. Palmer L. A. (1987b). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1187–1221.
Kaas J. H. Krubitzer L. A. Chino Y. M. Langston A. L. Polley E. H. Blair N. (1990). Reorganization of retinotopic cortical maps in adult mammals after lesions of the retina. Science, 248, 229–231.
Kent J. T. (1982). The Fisher–Bingham distribution on the sphere. Journal of the Royal Statistical Society B, 44, 71–80.
Larsen D. D. Luu J. D. Burns M. E. Krubitzer L. (2009). What are the effects of severe visual impairment on the cortical organization and connectivity of primary visual cortex? Frontiers in Neuroanatomy, 3, 30.
Luhmann H. J. Greuel J. M. Singer W. (1990). Horizontal interactions in cat striate cortex: III. Ectopic receptive fields and transient exuberance of tangential interactions. European Journal of Neuroscience, 2, 369–377.
Lui L. L. Bourne J. A. Rosa M. G. P. (2007). Spatial and temporal frequency selectivity of the middle temporal visual area of New World monkeys (Callithrix jacchus). European Journal of Neuroscience, 25, 1780–1792.
Maguire W. M. Baizer J. S. (1984). Visuotopic organization of the prelunate gyrus in rhesus monkey. Journal of Neuroscience, 4, 1690–1704.
Maling D. H. (1973). Coordinate systems and map projections. London: George Philip and Son.
Marmarelis P. Z. Marmarelis V. Z. (1978). Analysis of physiological systems: The white noise approach. New York: Plenum Press.
Motter B. C. Mountcastle V. B. (1981). The functional properties of the light-sensitive neurons of the posterior parietal cortex studied in waking monkeys: Foveal sparing and opponent vector organization. Journal of Neuroscience, 1, 3–26.
Newsome W. T. Paré E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211.
Ohzawa I. DeAngelis G. C. Freeman R. D. (1996). Encoding of binocular disparity by complex cells in the cat's visual cortex. Journal of Neurophysiology, 75, 1779–1805.
Palmer S. M. Rosa M. G. P. (2006). A distinct anatomical network of cortical areas for analysis of motion in far peripheral vision. European Journal of Neuroscience, 24, 2390–2405.
Peel D. Whiten W. J. McLachlan G. J. (2001). Fitting mixtures of Kent distributions to aid in joint set identification. Journal of the American Statistical Association, 96, 56–63.
Pigarev I. N. Nothdurft H.-C. Kastner S. (2002). Neurons with radial receptive fields in monkey area V4A: Evidence of a subdivision of prelunate gyrus based on neuronal response properties. Experimental Brain Research, 145, 199–206.
Press W. H. Teukolsky S. A. Vetterling W. T. Flannery B. P. (2007). Numerical recipes (3rd ed.). Cambridge, UK: Cambridge University Press.
Quiroga R. Q. Kreuz T. Grassberger P. (2002). Event synchronization: A simple and fast method to measure synchronicity and time delay patterns. Physical Review E, 66, 041904.
Rosa M. G. Sousa A. P. Gattass R. (1988). Representation of the visual field in the second visual area in the Cebus monkey. Journal of Comparative Neurology, 275, 326–345.
Rosa M. G. P. Casagrande V. A. Preuss T. Kaas J. H. (1997). Visual field representation in striate and prestriate cortices of a prosimian primate (Galago garnetti). Journal of Neurophysiology, 77, 3193–3217.
Rosa M. G. P. Elston G. N. (1998). Visuotopic organization and neuronal response selectivity for direction of motion in visual areas of the caudal temporal lobe of the marmoset monkey (Callithrix jacchus): Middle temporal area, middle temporal crescent, and surrounding cortex. Journal of Comparative Neurology, 393, 505–527.
Rosa M. G. P. Fritsches K. A. Elston G. N. (1997). The second visual area in the marmoset monkey: Visuotopic organization, magnification factors, architectonical boundaries, and modularity. Journal of Comparative Neurology, 398, 547–567.
Rosa M. G. P. Schmid L. M. (1994). Topography and extent of visual field representation in the superior colliculus of the megachiropteran Pteropus. Visual Neuroscience, 11, 1037–1057.
Rosa M. G. P. Schmid L. M. Krubitzer L. A. Pettigrew J. D. (1993). Retinotopic organization of the primary visual cortex of flying foxes (Pteropus poliocephalus and Pteropus scapulatus). Journal of Comparative Neurology, 335, 55–72.
Rosa M. G. P. Tweedale R. (2005). Brain maps, great and small: Lessons from comparative studies of primate visual cortical organization. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360, 665–691.
Rosa M. G. P. Tweedale R. Elston G. N. (2000). Visual responses of neurons in the middle temporal area of New World monkeys after lesions of striate cortex. Journal of Neuroscience, 20, 5552–5563.
Rust N. C. Schwartz O. Movshon J. A. Simoncelli E. P. (2005). Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46, 945–956.
Schmid L. M. Rosa M. G. P. Calford M. B. (1995). Retinal detachment induces massive immediate reorganization in visual cortex. Neuroreport, 6, 1349–1353.
Sereno M. I. McDonald C. T. Allman J. M. (1994). Analysis of retinotopic maps in extrastriate cortex. Cerebral Cortex, 4, 601–620.
Steinmetz M. A. Motter B. C. Duffy C. J. Mountcastle V. B. (1987). Functional properties of parietal visual neurons: Radial organization of directionalities within the visual field. Journal of Neuroscience, 7, 177–191.
Swindale N. V. (1998). Orientation tuning curves: Empirical description and estimation of parameters. Biological Cybernetics, 78, 45–56.
Thorpe S. J. Gegenfurtner K. R. Fabre-Thorpe M. Bülthoff H. H. (2001). Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience, 14, 869–876.
Trevelyan A. J. Upton A. L. Cordery P. M. Thompson I. D. (2007). An experimentally induced duplication of retinotopic mapping within the hamster primary visual cortex. European Journal of Neuroscience, 26, 3277–3290.
Wandell B. A. Smirnakis S. M. (2009). Plasticity and stability of visual field maps in adult primary visual cortex. Nature Reviews Neuroscience, 10, 873–884.
Weisstein E. W. (2010). Sphere point picking. In MathWorld—A Wolfram Web Resource. Retrieved from http://mathworld.wolfram.com/SpherePointPicking.html
Wood A. T. A. (1988). Some notes on the Fisher–Bingham family on the sphere. Communications in Statistics—Theory and Methods, 17, 3881–3897.
Yu H.-H. Verma R. Yang Y. Tibballs H. A. Lui L. L. Reser D. et al. (2010). Spatial and temporal frequency tuning in striate cortex: Functional uniformity and specialization related to receptive field eccentricity. European Journal of Neuroscience, 31, 1043–1062.
Figure 1
 
Two implementations of the stimulus projection design described in the paper. (A) A large hemisphere (90 cm in diameter) used for primate vision. The dots on the hemisphere mark the intersections of lines of longitude and circles of latitude, at 10° intervals; they are also used for calibrating the projector. The projector (not shown) is positioned 2.1 m away from the hemisphere. (B) A small hemisphere (40 cm in diameter) used for rodent vision. The distance between the hemisphere and the projector (arrow) is about 1 m.
Figure 2
 
The paper describes a technique that allows stimuli to be designed as geometrical shapes specified in spherical coordinates (λ: longitude, φ: latitude). The function f (Equation 4) transforms the spherical coordinates into image coordinates (x and y) such that, when the image is projected onto the hemisphere, the intended geometrical shape is produced.
Figure 3
 
A point p on the hemisphere can be expressed as (A) its longitude (−90° ≤ λ ≤ 90°) and its latitude (−90° ≤ φ ≤ 90°), or (B) its eccentricity (0° ≤ r ≤ 90°) and its polar angle (−90° ≤ θ ≤ 270°). The center of the two eyes is positioned at (x1, x2, x3) = (0, 0, 0), facing the direction (0, 0, 1), which is also the point of zero longitude and latitude. The right visual field is represented by −90° ≤ λ ≤ 0° and the left visual field by 0° ≤ λ ≤ 90°. The upper visual field is represented by 0° ≤ φ ≤ 90° and the lower visual field by −90° ≤ φ ≤ 0°.
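The conventions in this figure map directly onto code. The snippet below (Python/NumPy) converts between longitude/latitude, the corresponding unit vector, and eccentricity/polar angle; the function names, the sign convention for the x1 axis, and the polar angle convention (measured counter-clockwise from the positive x1 axis in the frontal plane) are our own assumptions, not specified in the figure.

```python
import numpy as np

def lonlat_to_vec(lam_deg, phi_deg):
    """(longitude, latitude) in degrees -> unit vector (x1, x2, x3),
    with (0, 0) pointing along the facing direction (0, 0, 1)."""
    lam, phi = np.radians(lam_deg), np.radians(phi_deg)
    return np.array([np.cos(phi) * np.sin(lam),   # horizontal component (sign assumed)
                     np.sin(phi),                 # vertical component (positive = upper field)
                     np.cos(phi) * np.cos(lam)])  # component along the line of sight

def vec_to_lonlat(v):
    """Unit vector -> (longitude, latitude) in degrees."""
    x1, x2, x3 = v
    return np.degrees(np.arctan2(x1, x3)), np.degrees(np.arcsin(np.clip(x2, -1, 1)))

def vec_to_ecc_polar(v):
    """Unit vector -> (eccentricity, polar angle) in degrees.  Eccentricity is the
    angular distance from the point of gaze; the polar angle zero is assumed."""
    x1, x2, x3 = v
    return np.degrees(np.arccos(np.clip(x3, -1, 1))), np.degrees(np.arctan2(x2, x1))
```

For example, lonlat_to_vec(72, -12) gives the direction corresponding to the grid center used in Figure 7.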
Figure 4
 
Derivation of the transformation f. (A) In the special case where p is on the equator, the transformation is Equation 2. (B) For an arbitrary point on the hemisphere, f is expressed by Equation 4.
Figure 5
 
(A) The free parameters (d, m, and n) in Equation 4 can be estimated by fitting the model to a set of registration points on the hemisphere (expressed in the spherical coordinates λ and φ) and their corresponding image coordinates (x and y). In our implementation, a small bright square was projected to match each of the 56 registration points on the hemisphere (black dots in the figure), and its (x, y) coordinates were recorded. The (x, y) coordinates corresponding to lines of longitude and circles of latitude of the best-fit model are plotted as gray lines. The 640 × 480 image space covers 100° × 75° of visual space on the hemisphere. (B) In many applications, it is useful to direct the center of projection to (±30°, 0°) on the equator to provide better coverage of the far periphery.
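The parameter estimation described in this figure is an ordinary nonlinear least-squares problem. The sketch below (Python/SciPy) illustrates only the fitting step: Equation 4 is defined earlier in the paper, so a generic pinhole-style projection stands in for it here, and the placeholder model, parameter names, and initial guesses are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from scipy.optimize import least_squares

def project_model(lam_phi_deg, params):
    """Placeholder for Equation 4: map spherical coordinates (degrees) to image
    coordinates (pixels).  A pinhole-style model with distance d, scales (mx, my),
    and image center (cx, cy) stands in for the paper's actual transformation."""
    d, mx, my, cx, cy = params
    lam = np.radians(lam_phi_deg[:, 0])
    phi = np.radians(lam_phi_deg[:, 1])
    # direction of each registration point on the hemisphere
    v1 = np.cos(phi) * np.sin(lam)
    v2 = np.sin(phi)
    v3 = np.cos(phi) * np.cos(lam)
    x = cx + mx * d * v1 / v3
    y = cy - my * d * v2 / v3
    return np.column_stack([x, y])

def calibrate(lam_phi_deg, xy_pixels):
    """Estimate the model parameters from N registration points:
    lam_phi_deg is N x 2 (longitude, latitude), xy_pixels is N x 2 (x, y)."""
    def residuals(p):
        return (project_model(lam_phi_deg, p) - xy_pixels).ravel()
    p0 = [1.0, 300.0, 300.0, 320.0, 240.0]   # rough guesses for a 640 x 480 image
    return least_squares(residuals, p0).x
```

Once fitted, the same forward mapping can be used to warp stimuli defined in spherical coordinates into the image frame before projection, as illustrated in Figure 2.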
Figure 6
 
Two examples of the Fisher–Bingham distribution in the standard reference frame (Equation 6). Parameters: (A) κ = 5, β = 0; (B) κ = 26, β = 10. Gray level represents the magnitude of the function. White is 1; black is 0.
Figure 7
 
The receptive field was first mapped with a bright square stimulus (2.5° × 2.5°) presented on an 8 × 8 grid centered at (72°, −12°). The stimulus was on for 0.2 s, off for 0.2 s, and was repeated 10 times. (A) Raster plot of the responses of a representative neuron in V1. The shaded cells in the grid represent an initial estimate of the receptive field, based on event synchronization (threshold = 0.2). The response profile was then fitted with a response model based on the Fisher–Bingham distribution (Equation 10). The coefficient of determination (r2) of the fitted model was 0.93, with best-fit parameters κ = 515.1 and β = 88.3. The contour of the best-fit model is plotted in (B) and (C). The receptive field center, eccentricity, width, and length can be calculated from the fitted model. We also projected a moving bar stimulus onto the hemisphere to measure the direction tuning curve of the neuron, as illustrated in (D). The black line connects the mean spike rates for each condition, and the error bars represent the standard error of the mean. The tuning curve was fitted with the von Mises function (Swindale, 1998), shown by the green line. Similarly, drifting sinusoidal gratings were used to estimate the (E) spatial frequency and (F) temporal frequency tuning curves. These responses were fitted with log-transformed skewed Gaussian functions (Lui, Bourne, & Rosa, 2007). The estimated tuning parameters are compatible with published data collected with conventional CRT stimulation (Yu et al., 2010).
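The direction tuning fit in panel (D) can be reproduced with a few lines of standard curve fitting. The sketch below (Python/SciPy) uses a von Mises form of the kind described by Swindale (1998), with made-up example rates; the exact parameterization used in the paper may differ, and the variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_mises_tuning(theta_deg, base, amp, kappa, pref_deg):
    """Direction tuning curve: baseline + amplitude * exp(kappa*(cos(theta - pref) - 1))."""
    dtheta = np.radians(theta_deg - pref_deg)
    return base + amp * np.exp(kappa * (np.cos(dtheta) - 1.0))

# Made-up mean spike rates (spikes/s) at 12 directions of motion, for illustration only.
theta = np.arange(0, 360, 30)
rates = np.array([5.0, 6.0, 9.0, 18.0, 30.0, 24.0, 12.0, 7.0, 5.0, 4.0, 5.0, 5.0])

p0 = [rates.min(), np.ptp(rates), 2.0, theta[np.argmax(rates)]]
params, _ = curve_fit(von_mises_tuning, theta, rates, p0=p0)
base, amp, kappa, pref = params   # preferred direction and tuning width follow from the fit
```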
Figure 8
 
Results from an experiment that mapped the spatial extent of the scotoma induced by cortical lesion (Rosa et al., 2000). Fifty-one receptive fields recorded from 11 penetrations in V1 are plotted. The ellipses represent the 80% maximal magnitude contours of the fitted Fisher–Bingham functions. Lambert projection (Equation 20) was used to project the spherical ellipses to a 2D plane. Tuning parameters such as direction tuning bandwidth and response latency were measured for each neuron in the sample.
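For plotting, spherically defined receptive field contours need to be flattened onto the page. Equation 20 is given earlier in the paper; assuming it corresponds to the standard Lambert azimuthal equal-area projection centered on the point of gaze, a minimal version (Python/NumPy, our function name) is:

```python
import numpy as np

def lambert_equal_area(lam_deg, phi_deg):
    """Lambert azimuthal equal-area projection of (longitude, latitude), centered
    on the point of gaze.  Assumed to correspond to Equation 20; the exact form
    used in the paper may differ."""
    lam, phi = np.radians(lam_deg), np.radians(phi_deg)
    x1 = np.cos(phi) * np.sin(lam)
    x2 = np.sin(phi)
    x3 = np.cos(phi) * np.cos(lam)
    k = np.sqrt(2.0 / (1.0 + x3))
    return k * x1, k * x2
```

An equal-area projection is a natural choice for this figure because it preserves the relative areas of receptive fields across eccentricities, keeping the plotted ellipses directly comparable in size.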
Figure 9
 
Two examples of receptive field maps obtained in extrastriate areas. (A, B) The receptive field of a neuron recorded in area MT, in the same V1-lesioned animal whose scotoma is plotted in Figure 8. The spike trains triggered by the 8 × 8 flashing square (5° × 5° in size, 0.2 s ON, 0.2 s OFF) are plotted in (A), where color represents mean spike rate. The mean spike rates associated with the spherical coordinates of the center of the flashing square are also displayed as a contour plot in (B), using the Lambert projection (Equation 20). Superimposed in (B) is the scotoma estimated from the data plotted in Figure 8 (dashed red contour). Interestingly, the V1 lesion seems to have torn a hole in a single MT receptive field. (C, D) The receptive field of a neuron recorded in area prostriata, a small cortical area located at the tip of the calcarine sulcus. The receptive field envelope is very large and teardrop-shaped (C). The envelope, fitted to Equation 17, is plotted in (D).