Abstract
What determines the shape of the spatiotemporal tuning curves of visual neurons? In V1, direction-selective simple cells have selectivity that is roughly separable in orientation, spatial frequency, and temporal frequency (“frequency separable”). Models for tuning in area MT predict that signals from V1 inputs with suitable spatial and temporal frequency selectivity are combined to create tuning curves organized around tilted planes through the origin, representing stimuli translating at a particular direction and speed (“velocity separable”). In these models, this transformation is critical to build “pattern selective” neurons, which respond best to simple and compound stimuli moving with the preferred velocity. We measured spatiotemporal frequency selectivity in single macaque MT neurons responding to sinusoidal gratings whose drift direction varied while their drift rate was either held constant (frequency separable organization) or varied along the preferred velocity plane (velocity separable). Most MT neurons’ grating tuning was fit equally well by the frequency and velocity separable models, regardless of the degree of pattern selectivity. We also measured responses to plaids (sums of two gratings oriented 120° apart). MT responses to velocity separable plaids were stronger and more broadly tuned than those to frequency separable plaids. Velocity separable model fits to these plaid responses were better than the corresponding frequency separable model fits for almost every cell. Fitting the velocity separable model to gratings alone failed to predict pattern selectivity, whereas fitting to plaids alone predicted pattern selectivity well. We conclude that velocity separable models better describe the responses of most MT cells, though this superiority is only evident when complex stimuli are used to expose the nonlinear elements of these models.
Meeting abstract presented at VSS 2015
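The two sampling schemes described in the abstract can be sketched numerically. The sketch below (a minimal illustration, not the authors' analysis code; the parameter values are hypothetical) contrasts frequency separable sampling, where temporal frequency is held constant across drift directions, with velocity separable sampling, where temporal frequency varies so that each grating's spatiotemporal frequency lies on the tilted plane through the origin defined by the preferred velocity.

```python
import numpy as np

# Hypothetical stimulus parameters (not taken from the abstract).
speed = 8.0         # preferred speed, deg/s
theta_pref = 0.0    # preferred direction, radians
sf = 2.0            # grating spatial frequency, cycles/deg

# Drift directions sampled around the circle.
thetas = np.linspace(0, 2 * np.pi, 8, endpoint=False)

# Frequency separable sampling: temporal frequency held constant.
tf_freq_sep = np.full_like(thetas, speed * sf)

# Velocity separable sampling: temporal frequency varies with direction so
# that each grating's frequency (wx, wy, wt) lies on the preferred-velocity
# plane wt = speed * (cos(theta_pref) * wx + sin(theta_pref) * wy).
tf_vel_sep = speed * sf * np.cos(thetas - theta_pref)

# Verify plane membership for the velocity separable stimuli: the residual
# wt - speed * (cos(theta_pref) * wx + sin(theta_pref) * wy) should be zero.
wx = sf * np.cos(thetas)
wy = sf * np.sin(thetas)
residual = tf_vel_sep - speed * (np.cos(theta_pref) * wx + np.sin(theta_pref) * wy)
print(np.allclose(residual, 0.0))  # True: all points lie on the velocity plane
```

Note that at the preferred direction the two schemes coincide (temporal frequency equals speed times spatial frequency); they diverge increasingly as drift direction moves away from the preferred direction.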