Vision Sciences Society Annual Meeting Abstract  |   May 2008
An interface between language and vision: Quantifier words and set-based processing
Author Affiliations
  • Justin Halberda
    Johns Hopkins University
  • Tim Hunter
    University of Maryland
  • Paul Pietroski
    University of Maryland
  • Jeffrey Lidz
    University of Maryland
Journal of Vision May 2008, Vol. 8, 234. https://doi.org/10.1167/8.6.234
Abstract

While limits of visual processing are interesting in their own right, these limits take on a deeper meaning where vision integrates with other cognitive systems. It is at this point that limits within vision become limits that can affect the whole of cognition. We present one such case. Subjects viewed briefly flashed arrays of 2–6 colors. Arrays always contained some number of blue dots among other dots. Subjects evaluated the verbal statement, “most of the dots are blue”. The concept MOST requires subjects to evaluate whether the number of blue dots is greater than the number of non-blue dots, but there are multiple ways to specify what counts as a ‘non-blue dot’: Hypothesis 1, these items are specified directly as the ‘yellow, green and red dots’; Hypothesis 2, these items are specified via a negation of the ‘blue dots’ (i.e., ‘non-blue dots’ are computed as ‘all dots’ minus ‘blue dots’). Hypothesis 2 is consistent with prevailing linguistic theory for the word ‘most’. We found that subjects behaved in accord with Hypothesis 2, selecting the blue dots and the superset of all dots. Psychophysical modelling revealed that subjects performed two operations: first taking the difference of two Gaussian numerical representations to compute the cardinality of the remainder set (superset − blue set = non-blue set), then comparing this computed Gaussian with the focused set to evaluate ‘most’ (blue set > non-blue set). These two steps add error to the discrimination, and the Weber fraction for evaluating ‘most’ was twice as large as that for evaluating ‘more’ in a similar task. That is, which word subjects thought they were evaluating changed the observed Weber fraction for this essentially visual discrimination. This difference highlights a case where non-visual cognition (lexical meanings) impacts vision and visual limits (tracking multiple sets) constrain later cognition.
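The two-step account lends itself to a simple simulation. The sketch below is a hedged illustration, not the authors' analysis code: it assumes an approximate-number-system readout in which a set of cardinality n is represented as a Gaussian with mean n and standard deviation w·n, and the helper names, parameter values, and the reuse of one noisy blue estimate at both steps are assumptions introduced here. It shows why routing the decision through a computed non-blue set (superset − blue) should yield a larger fitted Weber fraction than a direct ‘more’ comparison: the subtraction adds the superset's noise to the blue set's noise before the comparison is made.

```python
# Minimal Monte Carlo sketch of the one-step ('more') and two-step ('most')
# comparison routes. Parameter values, helper names, and the reuse of a single
# noisy blue estimate at both steps are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
w = 0.15          # assumed underlying Weber fraction of the ANS
n_trials = 20000  # simulated trials per display

def ans_sample(n, size):
    """Noisy ANS estimate of a set of cardinality n (Gaussian, SD = w * n)."""
    return rng.normal(n, w * n, size)

def p_correct_more(n_blue, n_other):
    """One-step route ('more'): blue and non-blue are each read out directly."""
    blue = ans_sample(n_blue, n_trials)
    other = ans_sample(n_other, n_trials)
    return np.mean(blue > other)

def p_correct_most(n_blue, n_other):
    """Two-step route ('most', Hypothesis 2): non-blue is computed as
    superset minus blue, then compared with the blue estimate.
    The subtraction combines two noise sources before the comparison."""
    total = ans_sample(n_blue + n_other, n_trials)
    blue = ans_sample(n_blue, n_trials)
    non_blue = total - blue
    return np.mean(blue > non_blue)

# Accuracy falls toward chance faster for 'most' than for 'more' as the
# blue : non-blue ratio approaches 1, i.e. the fitted Weber fraction is larger.
for n_blue, n_other in [(10, 5), (10, 8), (10, 9)]:
    print(f"{n_blue} blue vs {n_other} other:  "
          f"'more' = {p_correct_more(n_blue, n_other):.3f}   "
          f"'most' = {p_correct_most(n_blue, n_other):.3f}")
```

Running the sketch, accuracy at the near-equal ratios sits closer to chance under the ‘most’ route than under the ‘more’ route, in line with the larger Weber fraction reported above.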

Halberda, J., Hunter, T., Pietroski, P., & Lidz, J. (2008). An interface between language and vision: Quantifier words and set-based processing [Abstract]. Journal of Vision, 8(6):234, 234a, http://journalofvision.org/8/6/234/, doi:10.1167/8.6.234.