August 2009
Volume 9, Issue 8
Vision Sciences Society Annual Meeting Abstract | August 2009
A semi-automated solution for increasing the reliability of manually defined visual area boundaries
Author Affiliations
  • Sandhitsu Das
    Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania
  • Robyn Oliver
    Department of Psychology, University of Pennsylvania
  • Brian Avants
    Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania
  • Petya Radoeva
    Department of Neuroscience, University of Pennsylvania
  • David Brainard
    Department of Psychology, University of Pennsylvania
  • Geoffrey Aguirre
    Center for Functional Neuroimaging, Department of Neurology, University of Pennsylvania
  • James Gee
    Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania
Journal of Vision August 2009, Vol. 9, 771. doi: https://doi.org/10.1167/9.8.771
Abstract

Increasing the accuracy and reproducibility of visual area (VA) boundary definitions will improve vision studies based on retinotopy. Manual VA definitions are likely to be corrupted by a complex interaction between noisy data and variations in human perception. Semi-automated methods (Dougherty et al., 2003) have the potential to increase the reliability of VA boundaries without sacrificing the validity contributed by a human rater.

We present a template-based method that deforms a canonical retinotopy to polar angle and eccentricity data from fMRI-based retinotopy experiments. VA boundaries traced by human experts are used not only to initialize the canonical map, as in Dougherty et al., but are also incorporated directly into the template fitting through probabilistic curve matching. In contrast, Dougherty et al.'s method is driven only by the image data after initialization. Our method is therefore unique in striking a balance between the user-labeled VA boundaries and the statistically defined quality of the match between the smooth template and the noisy subject data.
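
The abstract does not specify the exact cost function used for template fitting. As a rough illustration only, the Python sketch below shows one way such a trade-off could be expressed: a data-fit term between the deformed template maps and the measured retinotopy values, plus a curve-matching term penalizing distance between the template's VA boundaries and the rater's tracings. The function name, the squared-error data term, and the nearest-point boundary term are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def template_fit_objective(template_angle, template_ecc,
                           data_angle, data_ecc,
                           template_boundary, traced_boundary,
                           data_weight=1.0, boundary_weight=1.0):
    """Hypothetical objective balancing data fit against boundary agreement."""
    # Data term: disagreement between the (deformed) template polar-angle and
    # eccentricity maps and the measured values at the same surface vertices.
    # (Angular wrap-around is ignored in this toy version.)
    data_term = np.mean((template_angle - data_angle) ** 2
                        + (template_ecc - data_ecc) ** 2)

    # Boundary term: for each point on the template's VA boundary, distance to
    # the nearest point on the rater-traced boundary (a crude stand-in for the
    # probabilistic curve matching described in the abstract).
    diffs = template_boundary[:, None, :] - traced_boundary[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
    boundary_term = nearest.mean()

    # An optimizer would seek the deformation minimizing this weighted sum,
    # trading off fit to the noisy data against fidelity to the tracings.
    return data_weight * data_term + boundary_weight * boundary_term
```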

This novel methodology improved overall reliability across three raters. Each rater labeled six visual area borders and the foveal confluence on an inflated 3D surface. Despite an effort to use similar segmentation criteria, tracings by different raters showed considerable variability before template mapping. Template mapping significantly (p=0.002) decreased the variability of the traced borders across a dataset of 12 hemispheres, as measured by the minimum distance sum across VA boundaries. Reliability was highest in dorsal V2 and lowest in ventral V3 both before and after template mapping. In conclusion, combining optimized template-based models with manual tracings of VA borders can improve the accuracy of retinotopic mapping. However, our work also indicates that fundamental issues of inter-rater reliability should be considered more carefully in retinotopy studies, and that more effort on defining optimization and evaluation criteria is required.
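
The abstract names the minimum distance sum as the variability measure but does not give its formula. A minimal sketch, assuming a symmetric sum of nearest-point distances between two raters' tracings of the same border (the function name and exact symmetrization are assumptions), might look like this in Python:

```python
import numpy as np

def min_distance_sum(boundary_a, boundary_b):
    """Symmetric sum of nearest-point distances between two traced borders.

    boundary_a, boundary_b: (N, 3) and (M, 3) arrays of surface coordinates
    sampled along the same visual area border by two different raters.
    """
    # Pairwise Euclidean distances between all points of the two tracings.
    d = np.sqrt(((boundary_a[:, None, :] - boundary_b[None, :, :]) ** 2).sum(axis=-1))
    # Each point's distance to the closest point on the other tracing, summed
    # in both directions so the measure is symmetric in the two raters.
    return d.min(axis=1).sum() + d.min(axis=0).sum()
```

Inter-rater variability for a hemisphere could then be summarized, for example, as the mean of this quantity over all rater pairs and VA boundaries, computed before and after template mapping.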

Das, S., Oliver, R., Avants, B., Radoeva, P., Brainard, D., Aguirre, G., & Gee, J. (2009). A semi-automated solution for increasing the reliability of manually defined visual area boundaries [Abstract]. Journal of Vision, 9(8):771, 771a, http://journalofvision.org/9/8/771/, doi:10.1167/9.8.771.
Footnotes
NIH R01 EY10016.