Abstract
Pupil size changes are often used as a measure in the context of object recognition and affective processing. However, pupil size changes due to cognitive processes are small in magnitude compared to those evoked by physical stimulus properties, and many existing paradigms do not dissociate these effects. We propose a paradigm that dissociates the effects of stimulus properties from those of cognitive processing by adapting the pupil to the physical stimulus properties before an intact stimulus image is presented. To test this paradigm, we compared the pupil responses of 15 adults living in Canada (age: M = 23.2, SD = 4.33; 10 female) to validated familiar (Canadian) and unfamiliar (European) product images in a passive viewing task. Intact images (3000 ms) were masked by pixel-scrambled versions of the same image (1000 ms before and after) to adjust the pupil to each image’s luminance, contrast, and colours. After preprocessing (e.g., blink interpolation, outlier detection, and downsampling), pupil dilation at every time point within a trial was baseline-corrected by subtracting the median pupil size during the last 500 ms of that trial’s preceding scrambled-image mask. Results indicate that a cognitive brand-familiarity effect was successfully traced at the level of individual participants (up to 90% of participants showed the same effect), with mean pupil size changes of about 15% of each participant’s average dynamic pupil size range during the experiment. A temporal cluster-based bootstrapping analysis identified and dissociated two temporal effects (500–800 and 1400–3000 ms after onset of the intact image) originating from separate product categories. The proposed paradigm successfully traced cognitive effects while precluding typical stimulus-property confounds. It could be applied to any visual image that elicits cognitive responses, such as product images used to understand consumers’ cognitive processing across various viewing, search, and choice tasks.
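For illustration, the subtractive baseline-correction step described above can be sketched in Python as follows, assuming each trial’s preprocessed pupil trace is a NumPy array with matching sample times in milliseconds; the function and variable names (e.g., baseline_correct_trial, intact_onset_ms) are hypothetical and not taken from the study’s analysis code.

```python
import numpy as np

def baseline_correct_trial(pupil, t_ms, intact_onset_ms, baseline_ms=500):
    """Subtractive baseline correction for a single trial.

    pupil           : 1-D array of pupil sizes over the whole trial
                      (scrambled mask -> intact image -> scrambled mask)
    t_ms            : 1-D array of sample times in ms, aligned with `pupil`
    intact_onset_ms : onset of the intact image, i.e. the end of the
                      preceding scrambled-image mask
    baseline_ms     : length of the baseline window (last 500 ms of the mask)
    """
    # Samples falling in the last `baseline_ms` of the preceding mask
    in_baseline = (t_ms >= intact_onset_ms - baseline_ms) & (t_ms < intact_onset_ms)
    # Median of that window (robust to residual artefacts)
    baseline = np.median(pupil[in_baseline])
    # Subtractive correction: each sample becomes a change from baseline
    return pupil - baseline


# Minimal usage example with synthetic data at 50 Hz (hypothetical values):
# mask from -1000 ms, intact image from 0 to 3000 ms, mask until 4000 ms
t_ms = np.arange(-1000, 4000, 20)
pupil = 3.5 + 0.1 * np.random.randn(t_ms.size)   # arbitrary pupil sizes in mm
corrected = baseline_correct_trial(pupil, t_ms, intact_onset_ms=0)
```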