Sepehr Jalali, Kielan Yarrow, Joshua Solomon; Predicting the outcome of an opponent’s tennis stroke: Insights from a classification-sequence analysis. Journal of Vision 2015;15(12):745. doi: 10.1167/15.12.745.
Experts are able to predict the outcome of their opponent’s next action (e.g., a tennis stroke) based on kinematic cues that are “read” from preparatory body movements. Traditionally, this ability has been investigated by manipulating a video of the opponent, but this can reveal only the information sources that have been anticipated by the experimenter. Here, we instead use classification-image techniques to discover how participants discriminate sporting scenarios as they unfold. Videos were taken of three competent tennis players making serves and forehand shots, each with two possible directions. The videos were presented to novices and club-level amateur participants for a period from 800 ms before to 200 ms after racquet-ball contact. Participants stepped off force plates in a tennis-appropriate manner to report shot direction. We established a time limit for responses that was consistent with 90% accuracy in a training phase. Participants then viewed videos through randomly placed temporal Gaussian windows (“Bubbles”). The number of windows was varied to maintain ~75% accuracy. A comparison of Bubbles from correct and incorrect trials allowed us to estimate the relative contribution of each cluster of video frames toward a correct response. Two clusters had a significant impact on accuracy. One extended from ~50 ms before ball contact to more than 100 ms afterwards. Interestingly, a second cluster suggested that for forehands, information was also accrued from around the time of swing initiation, ~300 ms before ball contact. Clusters were derived from the data of all participants, as an amateur-minus-novice contrast was not significant. Although still under development, our technique has the potential to help players improve in two ways: by showing them 1) when/where they read information from an opponent, and 2) their own “gives” when making a shot.
Ongoing experiments will generate classification images to complement our current classification sequences with spatial information.
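The temporal Bubbles analysis described above can be illustrated with a minimal simulation. The sketch below is an assumption about the general form of such an analysis, not the authors' actual pipeline: frame counts, bubble width, number of bubbles per trial, and the simulated observer (who responds correctly more often when hypothetical "informative" frames are visible) are all invented for illustration. The classification sequence is computed as the mean bubble mask on correct trials minus the mean mask on incorrect trials, so frames that drive correct responses stand out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100        # frames per video clip (assumed)
n_trials = 2000       # simulated trials (assumed)
sigma = 3.0           # temporal width of each Gaussian bubble, in frames (assumed)
n_bubbles = 5         # bubbles per trial (assumed)

frames = np.arange(n_frames)

def bubble_mask(centres):
    """Sum of temporal Gaussian windows, clipped to [0, 1] visibility."""
    m = np.zeros(n_frames)
    for c in centres:
        m += np.exp(-0.5 * ((frames - c) / sigma) ** 2)
    return np.clip(m, 0.0, 1.0)

# Hypothetical diagnostic moments (e.g., swing initiation and ball contact).
informative = np.array([30, 85])

masks = np.empty((n_trials, n_frames))
correct = np.empty(n_trials, dtype=bool)
for t in range(n_trials):
    centres = rng.uniform(0, n_frames, n_bubbles)
    masks[t] = bubble_mask(centres)
    # Simulated observer: accuracy rises with visibility of informative frames,
    # from 50% (chance for two shot directions) toward 100%.
    visibility = masks[t, informative].mean()
    correct[t] = rng.random() < 0.5 + 0.5 * visibility

# Classification sequence: frames seen more often on correct than incorrect
# trials carry diagnostic information.
cs = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
```

In a real experiment, `cs` would be smoothed and thresholded against a permutation null distribution to identify significant clusters of frames, analogous to the two clusters reported in the abstract.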
Meeting abstract presented at VSS 2015