September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Estimating human color-concept associations from multimodal language models
Author Affiliations & Notes
  • Kushin Mukherjee
    University of Wisconsin–Madison
  • Timothy T. Rogers
  • Karen B. Schloss
  • Footnotes
    Acknowledgements: NSF award BCS-1945303 to K.B.S.
Journal of Vision September 2024, Vol.24, 946. doi:https://doi.org/10.1167/jov.24.10.946
Abstract

Color-concept associations are important for many facets of visual cognition, from object recognition to the interpretation of information visualizations. Thus, a key goal in vision science is developing efficient methods for estimating color-concept association distributions over color space. Such methods may also inform how people form associations between colors and abstract concepts despite these color-concept pairs never co-occurring in the natural world. To this end, we investigated the extent to which GPT-4, a multimodal large language model (LLM), could estimate human-like color-concept associations without any additional training. We first collected human association ratings between 70 concepts and a set of 71 colors spanning perceptual color space (UW-71 colors). We then queried GPT-4 to generate analogous ratings when given concepts as words and colors as hexadecimal codes, and compared these association ratings to the human data. Color-concept association ratings generated by GPT-4 were correlated with human ratings (mean r across concepts = .67) at a level comparable to state-of-the-art methods for automatically estimating such associations from images. The correlations between GPT-4 and human ratings varied across concepts (range: r = .08–.93), with the correlation strength itself predicted by the specificity (inverse entropy) of the color-concept association distributions (r = .57, p < .001). Although GPT-4’s performance was also predicted by concept abstractness (r = -.42, p < .001), this effect was dominated by specificity when both factors were entered into a model together (specificity: p < .001, abstractness: p = .25). These results suggest that GPT-4 can be used as a tool for estimating associations between concepts and perceptual properties, like color, with better accuracy for high-specificity concepts. They also suggest that learning both word-to-percept structure and word-to-word structure, as multimodal LLMs do, might be one way to acquire associations between colors and abstract concept words.
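To make the querying procedure concrete, the sketch below shows one way such ratings could be elicited from GPT-4, assuming the OpenAI chat completions API. The prompt wording, rating scale, and response parsing here are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of eliciting a color-concept association rating
# from GPT-4. The prompt, 0-1 scale, and float parsing are assumptions
# for illustration; the abstract does not specify the exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_association(concept: str, hex_color: str) -> float:
    """Ask GPT-4 how strongly a concept is associated with a color."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"On a scale from 0 (not at all) to 1 (very much), how "
                f"much is the concept '{concept}' associated with the "
                f"color {hex_color}? Reply with a single number."
            ),
        }],
        temperature=0,  # deterministic-ish ratings for reproducibility
    )
    return float(response.choices[0].message.content.strip())

# Example: one concept paired with one (hypothetical) UW-71 hex code.
print(rate_association("banana", "#FFF200"))
```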
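Similarly, the per-concept correlations and specificity measure described above could be computed as in the minimal sketch below, assuming each concept has a 71-element vector of ratings (one per UW-71 color). Treating specificity as the negative entropy of the normalized human association distribution follows the abstract's "inverse entropy" description; variable names and the example data are fabricated for illustration only.

```python
# Minimal sketch of the correlation and specificity analysis, assuming
# per-concept rating vectors over the 71 UW-71 colors for both humans
# and GPT-4. Example data below is random and for illustration only.
import numpy as np
from scipy.stats import entropy, pearsonr

def concept_correlation(human: np.ndarray, gpt4: np.ndarray) -> float:
    """Pearson r between human and GPT-4 ratings for one concept."""
    return pearsonr(human, gpt4)[0]

def specificity(human: np.ndarray) -> float:
    """Inverse (negative) entropy of the normalized rating distribution."""
    p = human / human.sum()  # normalize ratings into a distribution
    return -entropy(p)       # higher = more peaked (more specific)

rng = np.random.default_rng(0)
human = rng.random(71)                         # placeholder human ratings
gpt4 = human + 0.1 * rng.standard_normal(71)   # placeholder GPT-4 ratings
print(concept_correlation(human, gpt4), specificity(human))
```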
