Abstract
Recently we have shown that performing comfortable and uncomfortable reaching acts produces dramatic changes in the perception of facial expressions, consistent with motor-action-induced mood congruency: after a sequence of comfortable rather than uncomfortable reaches, more anger is needed to perceive neutrality in a happy-to-angry morph continuum (Fantoni & Gerbino, 2014) and a lower absolute threshold suffices for the detection of happiness (Fantoni, Cavallero & Gerbino, 2014). Here we asked whether such influences generalize across the entire affect domain or are specific to faces. In two experiments, considering the biphasic representation of motor actions, we tested whether performing comfortable vs. uncomfortable motor actions changes the way in which the valence and arousal components of natural scenes influence our affective choices. To induce comfort/discomfort we used our Motor Action Mood-Induction Procedure (MAMIP), which included two successive visually guided reaching blocks, each involving a sequence of reaches whose depth extent was randomly selected within the 0.65-to-0.75 (comfortable) or the 0.90-to-1.00 (uncomfortable) arm-length range. After each block, participants performed a sequential image-selection task on a randomized set of 27 IAPS natural scenes varying in valence and arousal. The likelihood of affective selection was well described by a combination of sigmoid functions of valence and arousal: per cent selection increased monotonically with valence, with the rate of increase decreasing as arousal grew larger. Importantly, we found a mood-congruent effect following MAMIP: action comfort enhanced the quality of the observer's global experience, with perceived scene neutrality after comfortable reaches requiring less valence and arousal than after uncomfortable reaches. Relative to inaction (Experiment 2), action (Experiment 1) enhanced scene attractiveness, increasing the number of selection choices. We conclude that the influence of action-induced mood is general and not restricted to the domain of facial expressions.
Meeting abstract presented at VSS 2015
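The abstract reports only the qualitative shape of the fitted model. As a purely illustrative sketch, and not the authors' actual equation, one parameterization consistent with that description, in which the probability of selecting a scene rises sigmoidally with valence v while the valence slope \beta(a) shrinks sigmoidally as arousal a grows, is:

% Hypothetical parameterization, assumed for illustration only:
\[
  P(\mathrm{select} \mid v, a)
    = \frac{1}{1 + e^{-\beta(a)\,(v - v_0)}},
  \qquad
  \beta(a) = \frac{\beta_{\max}}{1 + e^{\,\gamma\,(a - a_0)}} .
\]

Here v_0 (the valence at which selection is at chance, i.e. perceived neutrality), \beta_{\max}, \gamma and a_0 are hypothetical free parameters; under this reading, the reported mood-congruent effect would correspond to a shift of v_0 between the comfortable and uncomfortable reaching blocks.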