Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping
Journal of Vision, April 2013, Vol. 13(5), 15. doi: https://doi.org/10.1167/13.5.15
Citation: Cristiana Cavina-Pratesi, Constanze Hesse; Why do the eyes prefer the index finger? Simultaneous recording of eye and hand movements during precision grasping. Journal of Vision 2013;13(5):15. https://doi.org/10.1167/13.5.15.

Abstract
Previous research investigating eye movements when grasping objects with precision grip has shown that we tend to fixate close to the contact position of the index finger on the object. It has been hypothesized that this behavior is related to the fact that the index finger usually describes a more variable trajectory than the thumb and therefore requires a higher amount of visual monitoring. We wished to directly test this prediction by creating a grasping task in which either the index finger or the thumb described a more variable trajectory. Experiment 1 showed that the trajectory variability of the digits can be manipulated by altering the direction from which the hand approaches the object. If the start position is located in front of the object (hand-before), the index finger produces a more variable trajectory. In contrast, when the hand approaches the object from a starting position located behind it (hand-behind), the thumb produces a more variable movement path. In Experiment 2, we tested whether the fixation pattern during grasping is altered in conditions in which the trajectory variability of the two digits is reversed. Results suggest that regardless of the trajectory variability, the gaze was always directed toward the contact position of the index finger. Notably, we observed that regardless of our starting position manipulation, the index finger was the first digit to make contact with the object. Hence, we argue that time to contact (and not movement variability) is the crucial parameter which determines where we look during grasping.

Introduction
In order to plan and perform a skilful action successfully, the visual system and the motor system need to interact in a complex manner. During the last few decades, a great deal of research has been devoted to investigating how eye and hand movements are coordinated in complex natural tasks such as tea or sandwich making (Hayhoe, Shrivastava, Mruczek, & Pelz, 2003; for review, see Land, 2009; Land & Hayhoe, 2001; Land, Mennie, & Rusted, 1999). Consistent with the idea that object manipulation requires information from the visual system, it was observed that in natural everyday tasks, eye movements are made to the action relevant positions prior to movement initiation (e.g., Land et al., 1999). Furthermore, the eyes tend to move toward the next relevant object in the action sequence shortly before the current task is completed, a movement suggesting that visual information is stored over short time intervals until it is needed by the motor system. To put it simply, during hand-object interaction, the primary role of vision is to guide, supervise, correct, and terminate action sequences (Land, 2009). 
The link between gaze- and goal-directed hand movements has also been investigated in more standardized experimental situations by focusing on simple motor actions such as pointing. As in everyday hand actions, pointing errors increase if participants do not make eye and/or head movements toward the target, relatively independently of whether the hand is visible or not during movement execution (Biguer, Jeannerod, & Prablanc, 1982; Biguer, Prablanc, & Jeannerod, 1984; Vercher, Magenes, Prablanc, & Gauthier, 1994). Further confirming the tight coupling between both systems, it was observed that there are strong correlations between the direction and the variability of eye movements and hand-movement errors, especially after delay (Admiraal, Keijsers, & Gielen, 2003; Frens & Erkelens, 1991). Interestingly, pointing studies in the lab and studies of natural actions also lead to some discrepant results. For example, it was reported that in pointing tasks in which participants were asked to sequentially point to two targets, they were unable to make an eye movement away from the first pointing target (toward the second target) until the movement toward the first target was completed (Neggers & Bekkering, 2000, 2001). This finding suggests that eye movements during pointing are actively inhibited to stabilize the gaze at the target location. Even though experiments have shown that it is possible to dissociate the hand position and the fixation location under specific circumstances (Bernardis, Knox, & Bruno, 2005), notably when visual illusions are used as target stimuli (cf. Binsted, Chua, Helsen, & Elliott, 2001; de Grave, Franz, & Gegenfurtner, 2006), humans naturally seem to prefer to look at the location at which a pointing movement will be directed. 
Whereas it seems quite self-evident that looking at the target location of the finger during pointing helps to improve performance (less variable and more accurate movements), the question of where we look when we grasp objects seems less straightforward as grasping requires the simultaneous placement of several digits at different object locations. Interestingly, there is comparatively little research investigating eye movements during grasping. One of the pioneering studies addressing the question of where we look when we grasp an object and perform a simple manipulation task (i.e., grasping a bar and moving it to press a target switch whilst avoiding obstacles) was published by Johansson and colleagues (2001). This study showed that during grasping, participants mainly fixate at landmarks that are critical to control the upcoming task. Surprisingly, fixations on the moving hand or on the moving target object were avoided. Similar to pointing, it was found that at the moment the target object was grasped, the gaze was primarily directed at the position at which the finger made contact with the object. However, the study of Johansson, Westling, Backstrom, and Flanagan (2001) did not provide any useful insights into the relationship between finger positioning and gaze when several digits are used, as only one of the grasping contact locations was visible. When participants grasped the target bar, only the contact position of the thumb was visible while the contact location of the index finger was occluded from view (at the back of the object). 
Employing an elegant experimental design in which both contact positions of the digits were visible, Brouwer et al. (2009) were the first to cast light on where humans look when performing a precision grasp with index finger and thumb. In this study, participants were asked to grasp objects of different shapes that were presented in front of a computer screen whilst their eye movements were monitored. Data was compared to a condition in which participants were instructed to simply look at the same objects without performing a concurrent grasping movement. It was observed that during grasping, the gaze was primarily directed at the location on the object at which the index finger was making contact; remarkably, this behavior was barely influenced by the object's shape and its center of gravity. In contrast, during the viewing only condition, the gaze was directed close to the center of gravity of the object fixated. Surprisingly, the tendency to fixate in the direction of the index finger during grasping was found to persist also when the contact position of the index finger was actually occluded from view (de Grave, Hesse, Brouwer, & Franz, 2008), suggesting that the occlusion of object parts has no fundamental effect on the fixation strategy. Brouwer et al. (2009) suggested that the attraction of the gaze to the contact location of the index finger might be attributed to the fact that the index finger—describing a more variable and curved movement trajectory during grasping (Galea, Castiello, & Dalwood, 2001; Haggard & Wing, 1997; Wing & Fraser, 1983)—needs a higher amount of visual monitoring. The difference in trajectory shape and trajectory variability between the index finger and thumb is associated with the different roles the fingers play during grasping. While the thumb is assumed to guide the hand straight to the object's location (i.e., leading the transport component of the movement), the primary role of the index finger is to close the grip around the object and ensure a safe grasp (Haggard & Wing, 1997). 
If this interpretation is correct, we would predict that an inversion of the variability pattern generated by the index finger and the thumb would also lead to an inversion of the pattern of eye movements. In other words, if the roles of the index finger and the thumb are reversed, with the index finger guiding the hand to the contact position and the thumb describing a more variable trajectory around the object, then the eye should follow the thumb instead of the index finger. If the pattern of eye movements does not change, then the visual feedback interpretation should be revised. 
We designed two experiments to test this prediction. 
Experiment 1 is a pilot experiment in which we aimed to identify movement conditions in which either the index finger or the thumb generates a more variable trajectory. We tried to achieve this by varying the starting position of the hand prior to movement initiation such that either the index finger or the thumb was the guiding digit during the movement. The hand could either be placed in front of the object, thus requiring a movement away from the body to reach the target; or behind the target object, thus requiring a movement toward the body to reach the target (for a similar paradigm, see Cavina-Pratesi, Ietswaart, Humphreys, Lestou, & Milner, 2010; Cavina-Pratesi, Monaco, et al., 2010). We hypothesized that the finger that has to be moved around the object in order to reach its contact position will show the longer and thus more variable movement path. Specifically, our prediction is that when a movement from a starting position located in front of the object is required (hereafter called “hand-before”), the thumb would lead the hand straight toward the object while the trajectory of the index finger needs to be adjusted in order to surmount the object and be positioned opposite to the thumb (standard grasping task; see Hesse & Deubel, 2009). Conversely, when the grasping movement is performed by starting from a position located behind the object (hereafter called “hand-behind”), we predict that the index finger would guide the hand toward the object while the trajectory of the thumb would need adapting in order to surmount the object and to be positioned opposite to the index finger. The results of the pilot Experiment 1 confirmed that the trajectory variability produced by each finger can be altered by changing the direction from which the hand approaches the target object. In Experiment 2, we used this manipulation in an adapted design and recorded eye and hand movements simultaneously. This method allowed us to test whether fixation locations during grasping differ in conditions in which the thumb is the digit with the more variable trajectory. 
Experiment 1
Methods
Participants, apparatus and stimuli, and data analysis
Eight right-handed participants from Durham University (Edinburgh laterality quotient: 95.25, 5 female, mean age: 27 years, age range: 21–39) were tested. Informed consent was given prior to the experiments in accordance with the Durham University Review Ethics Board. Experiments were approved by the local ethics committee and were conducted in accordance with the principles of the Declaration of Helsinki. 
Participants sat comfortably in front of a 60 × 60 cm board laid horizontally on a table. Prior to each trial, participants were asked to place their pinched index finger and thumb at the starting position marked by a small disc. The digits were positioned such that the thumb would always face the body and the index finger would face straight ahead. When the hand's starting position was located near to the body, the movements would be directed away from the body (hand-before condition). Conversely, when the starting position was located behind the object and thus further away from the body, the reach would be directed towards the body (hand-behind condition). 
Target objects were placed at a midlocation equidistant (25 cm) from both starting positions. Three different objects were used in order to prevent participants from adopting an automated grasping strategy and were presented in a random order. Each object was made out of Lego pieces and had a different shape and size, ranging between 3 × 1 × 2 cm and 3 × 3 × 1 cm in length, depth, and height, respectively. Liquid crystal shutter glasses (Plato System, Translucent Technologies, Toronto, Canada) were used in order to control viewing time. At the beginning of each trial, the shutter glasses opened and after a preview period of 2 s, a tone instructed the participant to pick up the object and place it in the space to the left. Movements were performed with full vision of the target and the hand. No instructions were given about the speed of the movement. Movements were recorded by sampling the position of three markers attached to the nail of the thumb, the nail of the index finger, and the wrist at a frequency of 86 Hz, using an electromagnetic motion analysis system (Minibird, Ascension Technology Ltd). Hand-before and hand-behind movements were manipulated in a blocked fashion (ABBA order), and each block was composed of 30 trials (10 for each object). 
We analyzed the following variables: Trajectory Variability (TV), Reaction Time (RT), Movement Time (MT), Time to Contact (TTC), Peak Velocity (PV), and Time to PV (TPV). 
TV was calculated using a procedure introduced by Paulignan and coworkers (Paulignan, MacKenzie, Marteniuk, & Jeannerod, 1991). Movement trajectories were normalized from start to end in 100 time frames. For each frame, the standard deviation of the X and Y position was computed for each digit, each starting location, and each participant separately. The surface area of each ellipse formed by the standard deviations was then calculated as A = π × a × b (with a = SD in x and b = SD in y). All values were then summed across the 100 time frames and are reported in mm². RT (ms) was recorded as the time of movement onset after the tone cued the participant to start. Movement onset was defined separately for the index finger and thumb markers as the point at which the velocity of the respective marker exceeded 25 mm/s. MT (ms) was computed as the duration of the transport of each finger between movement onset and movement offset. Movement offset was defined by the velocity dropping below 50 mm/s. The sum of MT and RT was calculated for both digits and indicated the time until the target object was reached after the start beep. We refer to this variable as “time to contact” (TTC, in ms). PV (mm/s) was defined as the highest velocity recorded during MT. TPV (ms) was defined as the time at which PV was reached after movement onset. TV, MT, TTC, PV, and TPV were chosen as key variables to test whether changing the relative path length of the digits alters the movement kinematics between the fingers (i.e., index finger and thumb) involved in the movement. 
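To make this computation concrete, a minimal numerical sketch is given below. It is written in Python rather than the MATLAB environment used for the experiments, and the data layout and names (e.g., trials) are illustrative assumptions rather than the authors' analysis code; it resamples each trial to 100 time frames and sums the per-frame SD-ellipse areas A = π × a × b.

```python
import numpy as np

def resample_trajectory(xy, n_frames=100):
    """Linearly resample one trial's 2-D trajectory (T x 2 array, in mm) to n_frames."""
    t_old = np.linspace(0.0, 1.0, len(xy))
    t_new = np.linspace(0.0, 1.0, n_frames)
    return np.column_stack([np.interp(t_new, t_old, xy[:, 0]),
                            np.interp(t_new, t_old, xy[:, 1])])

def trajectory_variability(trials, n_frames=100):
    """Trajectory variability (TV) after Paulignan et al. (1991):
    per normalized time frame, take the SD of x and y across trials,
    form the ellipse area pi * SD_x * SD_y, and sum over frames (mm^2)."""
    resampled = np.stack([resample_trajectory(xy, n_frames) for xy in trials])
    sd_x = resampled[:, :, 0].std(axis=0, ddof=1)
    sd_y = resampled[:, :, 1].std(axis=0, ddof=1)
    return float(np.sum(np.pi * sd_x * sd_y))

# Example with simulated trials of one digit (10 repetitions, 2 mm positional noise).
rng = np.random.default_rng(0)
trials = [np.column_stack([np.linspace(0, 250, 80),
                           20 * np.sin(np.linspace(0, np.pi, 80))])
          + rng.normal(0, 2, (80, 2)) for _ in range(10)]
print(f"TV = {trajectory_variability(trials):.1f} mm^2")
```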
Statistical analyses were performed using 2 × 2 repeated-measures ANOVAs with the factors digit (index finger, thumb) and movement type (hand-behind, hand-before). A significance level of α = 0.05 was used for all statistical analyses. All values are presented as means ± standard errors of the mean. 
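As a rough illustration of this design (not a reproduction of the reported analysis), the sketch below runs a 2 × 2 repeated-measures ANOVA on simulated per-participant cell means using statsmodels; all numbers, column names, and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: 8 participants x 2 digits x 2 movement types,
# one trajectory-variability value (mm^2) per cell; values are illustrative only.
rng = np.random.default_rng(1)
rows = []
for participant in range(1, 9):
    for digit in ("index", "thumb"):
        for movement in ("hand_before", "hand_behind"):
            # Build in a crossover: the digit that passes over the object is more variable.
            crosses_object = (digit == "index") == (movement == "hand_before")
            base = 350.0 if crosses_object else 230.0
            rows.append({"participant": participant, "digit": digit,
                         "movement": movement, "tv": base + rng.normal(0.0, 30.0)})
df = pd.DataFrame(rows)

# 2 (digit) x 2 (movement type) repeated-measures ANOVA.
result = AnovaRM(df, depvar="tv", subject="participant",
                 within=["digit", "movement"]).fit()
print(result)  # main effects and the digit x movement interaction
```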
Results
If, as expected, spatial variability, movement time, and peak velocity scale with the path length of each digit, we would predict that the manipulation of the movement type changes the relative values for these measures between index finger and thumb (indicated by a significant interaction effect between digit and movement type). For conciseness, we will report significant results only. 
Trajectory variability (TV)
As expected, we found a significant interaction between digits and movement type, F(1, 7) = 52.42, p < 0.001. As shown in Figure 1a and confirmed by post-hoc tests, the TV of the index finger was higher for hand-before (369 mm² ± 28 mm²) than for hand-behind movements (207 mm² ± 25 mm²), while the thumb showed the opposite pattern with higher variability for hand-behind (318 mm² ± 28 mm²) than for hand-before (232 mm² ± 34 mm²) movements. There was also a significant effect of movement type, F(1, 7) = 5.82, p = 0.047, with hand-before movements (301 mm² ± 28 mm²) generating higher variability than hand-behind movements (262 mm² ± 23 mm²). 
Figure 1. Experiment 1: Trajectory variability (A), time to contact (B), and peak velocity (C) as a function of digit (index/thumb) and movement type (hand-before/hand-behind). Error bars indicate ±1 SEM (between subjects).
Reaction time (RT), movement time (MT), and time to contact (TTC)
Reaction times were not influenced by our manipulation (all p > 0.15). On average, participants started their movements after about 457 ms ± 31 ms. Regarding MT, we found that regardless of the movement type, the index finger (581 ms ± 26 ms) was significantly faster than the thumb (702 ms ± 31 ms), F(1, 7) = 33, p = 0.001. Similarly, we also observed a significant main effect of the factor digits on the absolute time to contact (TTC), F(1, 7) = 21.3, p = 0.002. After the go signal, the index finger reached the object first (index finger = 1038 ms ± 37 ms; thumb = 1159 ms ± 47 ms) and on average around 121 ms ± 20 ms before the thumb (Figure 1b). There were no significant interaction effects for MT and TTC (both p > 0.33). 
Peak velocity (PV) and time to peak velocity (TPV)
For PV, the interaction between digits and movement type was significant, F(1, 7) = 145.4, p < 0.001. The PV was higher for the index finger when the start position was located in front of the object (hand-before: 977 mm/s ± 124 mm/s vs. hand-behind: 817 mm/s ± 77 mm/s). The thumb showed the opposite pattern with higher PV for hand-behind (905 mm/s ± 75 mm/s) than for hand-before (721 mm/s ± 97 mm/s) movements (see also Figure 1c). The main factor of digits also reached significance, F(1, 7) = 27, p = 0.001, with the index finger generating higher PV (897 mm/s ± 98 mm/s) than the thumb (813 mm/s ± 84 mm/s). TPV was not affected by our manipulations (all p > 0.24). 
Discussion
Our overall aim is to test the hypothesis that the eyes invariably fixate at the contact location of the index finger during grasping. This behavior might be related to the observation that the index finger's trajectory is more variable than the thumb's trajectory when approaching the target. It has thus been assumed that the digit with the higher variability might require a higher amount of online monitoring during movement execution (Brouwer et al., 2009). 
Experiment 1, which focused only on the kinematic aspects of the grasping movement, was designed to test whether performing a hand-behind as compared to a hand-before type of movement would invert the dynamics between index finger and thumb during reach-to-grasp actions. Previous experiments have shown that during standard grasps (hand-before), the index finger shows higher trajectory variability than the thumb (Galea et al., 2001). Here we asked participants to perform reach-to-grasp actions in conditions that required moving the hand toward the object from a starting position located either in front of the object (hand-before) or behind the object (hand-behind). 
In accordance with previous results, we found that trajectory variability was higher for the index finger than for the thumb during movements that started with the hand located in front of the object (Galea et al., 2001; Paulignan, Jeannerod, MacKenzie, & Marteniuk, 1991). For the same type of grasp, we also observed that PV was higher for the index finger than for the thumb. Importantly, however, this pattern of results reversed when a hand-behind movement was required. That is, when the object was grasped with the hand located behind it, TV and PV were higher for the thumb than for the index finger. Our hypothesis is that during hand-before and hand-behind actions, the index finger and thumb switch their role as to which finger has to move around the target object to close the grasp safely. Notably, the time to contact (TTC) was the only kinematic parameter that was not affected by our primary manipulation of the hand's starting position. Critically, regardless of whether participants performed a hand-before or a hand-behind movement, TTC was found to be shorter for the index finger than for the thumb. One might be puzzled as to why the two fingers are desynchronized. To gain further insight into why the contact time of the thumb is delayed relative to the index finger, we additionally calculated the TTC of the hand transport component (determined via the wrist marker, applying the same velocity thresholds as used for the digits). This analysis revealed that the hand transport ended after about 641 ms ± 22 ms (i.e., 66 ms after TTC was determined for the index finger, and 55 ms before TTC was determined for the thumb). These findings indicate that the delayed contact of the thumb—relative to the index finger—with the object is partly due to the thumb still approaching the object after the end of the hand transport. Regarding the question of why the index finger is the first finger to make contact with the object, we can only speculate. One conceivable explanation could be that it might be easier to guide (and, if necessary, to adjust) the index finger to a stable contact position, as it has more degrees of freedom and is longer than the thumb. 
Experiment 2
In Experiment 2, we built on and extended the observation that the variability of the finger trajectory varies with the direction from which the hand approaches the target, in order to test the hypothesis that the eyes monitor the finger that exhibits the higher variability. Therefore, we recorded eye and hand movements simultaneously in conditions requiring either a hand-behind or a hand-before action. In order to record eye and hand movements within the same frame of reference, we had to adapt our paradigm slightly (see methods section of Experiment 2 and Figure 2a for further information). The hypotheses are straightforward: If the eyes fixate at the contact position of the index finger during hand-before movements and at the contact position of the thumb during hand-behind movements, then the idea that the eyes monitor the digit that shows the more variable movement path when approaching the target would be supported. Conversely, if the eyes continue to follow the index finger, regardless of the starting position of the hand, then the visual feedback interpretation must be revised. 
Figure 2. Experiment 2: Schematic drawing of the experimental apparatus. A: Grasping movements were made in front of a TFT monitor and eye movements were recorded simultaneously. B: Target objects.
Moreover, one could also hypothesize that fixations are biased toward locations at which the digit needs to be placed more precisely (for example, small contact positions might increase the difficulty of placing the digit). In order to investigate whether eye movements during grasping vary depending on the required contact accuracy, we chose a triangle as one of the grasping objects. For a triangle, much more accuracy is required to make contact with its point than with its base. Additionally, it was shown that eye movements when viewing objects are preferentially directed toward the center of gravity (COG) of a shape (Brouwer et al., 2009). In order to test whether the COG also affects the fixation behavior during grasping, we included the asymmetric t-shape as a target object. Finally, a symmetric cross served as a baseline object as it is a shape with no special salient features and its COG is located in the center of the shape. 
Methods
Subjects
Ten right-handed participants (by self-report) from Durham University participated in the experiment (5 female, mean age: 31, age range: 21–53). All participants were in good health with normal or corrected-to-normal visual acuity. All participants gave informed consent prior to the experiment, and the study was approved by the local ethics committee. 
Apparatus and stimuli
Target objects were three different white wooden shapes: t-shape, triangle, and cross. Both the triangular shape and the t-shaped object could be presented in two different orientations, i.e., upward or downward (see Figure 2b), resulting in a total of five different shapes grasped by the participants. All objects were 6 cm along their grasp axis and 1 cm in depth. All objects had a small pin on the back of them so that they could be attached to the setup. 
The setup consisted of a HANNS.G HX191D 19″ LCD monitor (1280 × 1024 pixels, 75 Hz) and a Plexiglas frame which was mounted in front of the monitor. The Plexiglas frame was held in place by a custom-built outer frame attached to the monitor (cf. Figure 2a). The Plexiglas frame had a small hole in the middle which allowed the objects to be pinned centrally in front of the screen. Objects were always presented at the same (central) location. 
Participants sat comfortably in front of the monitor with a viewing distance of 50 cm. A chin rest was used to maintain a constant head position throughout the experiment. Before each trial, participants had to rest their hand on the starting position which could either be below the object and toward the left on the screen (hand-before) or above the object and toward the right (hand-behind). These starting positions were chosen in order to make sure that the hand and the arm were not occluding the view of the object during the preview period as well as when approaching the target. The best starting positions were determined in a pilot study (N = 5). An armrest was used in order to ensure that participants could rest their arm comfortably at both start-positions on the screen. The distance between the start position and the center of the target was 17 cm in all conditions. 
Eye movements were recorded using a head-mounted Eyelink II system (SR Research, Ontario, Canada). The pupil location of the right eye was sampled at a rate of 250 Hz. The spatial resolution of the system was 0.2°. The trajectories of the finger movements were recorded by an Optotrak 3020 system (Northern Digital Inc., Waterloo, Ontario, Canada) at a sampling rate of 250 Hz. Two infrared light-emitting diodes (IREDs) were attached to the right hand, one to the nail of the index finger and one to the nail of the thumb, in order to measure the grasp component of the movement. An additional IRED was placed on the back of the hand in order to measure the transport component of the grasp. Prior to the experiment, a calibration procedure was used to align the Cartesian coordinate system of the Optotrak system to the plane of the monitor. The experiment was programmed in MATLAB using the Psychophysics Toolbox (Brainard, 1997; Kleiner, 2010), the Eyelink Toolbox (Cornelissen, Peters, & Palmer, 2002), and the custom-built Optotrak Toolbox (Franz, 2004). 
Procedure
Participants sat comfortably in an adjustable chair in a well-lit room. They placed their head in the chin rest and looked straight ahead at the monitor to which the objects were attached. At the beginning of each trial, participants placed their hand at the start position and closed their eyes. Participants were instructed to “look” in the direction of the start location of the hand whilst having their eyes closed. Then the experimenter attached the target object to the Plexiglas in front of the monitor (displaying a black screen). Subsequently, the experimenter initiated the trial manually by pressing a key which started the synchronous recording of the Optotrak and the Eyelink system. An auditory signal (100 ms, 500 Hz) signaled the participants to open their eyes (and initially look in the direction of the start position of the hand). After a preview period of 1 s (during which participants were allowed to move their eyes), an auditory go signal (100 ms, 1000 Hz) was presented in response to which participants were instructed to grasp the object and to lift it upward (about 5 mm) within its attached fixture. We introduced a preview period as we wanted to create a natural grasping situation. That is, before a goal-directed movement is initiated, the gaze is directed from some different position in the workspace toward the targeted object and subsequently a grasp is initiated. We instructed the participants to grasp all objects vertically with the index finger on the upper part and the thumb at the lower part of the shape. No instructions were given about the speed of the movement. Participants were allowed 3 s to execute the movement. Three seconds after the go signal, the Eyelink and the Optotrak stopped measuring and an auditory signal (100 ms, 500 Hz) signaled the participants to close their eyes again. The experimenter removed the object and attached a new (or the same) object on the Plexiglas frame. 
We used two different conditions: hand-before and hand-behind movements. Similarly to Experiment 1, in the hand-before condition, participants placed their hand below and to the left of the target object and in the hand-behind condition they placed their hand above and to the right of the target object at the beginning of each trial. Both conditions were blocked with half of the participants starting with the hand-before grasping condition. Each block of trials was preceded by an individual adjustment of the armrest and a calibration of the Eyelink system. The five different objects were presented randomly within each block with each object being presented ten times per block resulting in a total of 50 trials per block. A drift correction was applied every 5 trials. Before each block of trials participants were allowed five practice trials in order to familiarize themselves with the task. 
Data analysis
Eye movement and hand movement data was plotted online after each trial to check for missing data points. Trials in which the markers were occluded were discarded and repeated later in the experiment at a random position (that happened in less than 5% of all trials). All eye and hand movement data was stored for offline analysis. 
The hand movement data was analyzed identically to Experiment 1 with the variables of interest being Trajectory Variability (TV), Reaction Time (RT), Movement Time (MT), Time to Contact (TTC), Peak Velocity (PV), and Time to PV (TPV). Additionally, we determined the time at which the maximum grip aperture (computed as the maximum distance in 3D space between index finger and thumb markers during the MT) was reached (TMGA). This parameter was used in order to determine the position of fixation during grasping (see below). 
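The grip-aperture computation can be illustrated with a short sketch (again Python, with hypothetical marker arrays; note that markers on the nails only approximate the true fingertip aperture):

```python
import numpy as np

def max_grip_aperture(index_xyz, thumb_xyz, t_ms, onset_ms, offset_ms):
    """Return (MGA in mm, TMGA in ms).

    index_xyz, thumb_xyz : (T, 3) marker positions in mm
    t_ms                 : (T,) sample times in ms
    onset_ms, offset_ms  : movement onset/offset (the MT window) in ms
    """
    in_movement = (t_ms >= onset_ms) & (t_ms <= offset_ms)
    aperture = np.linalg.norm(index_xyz - thumb_xyz, axis=1)  # 3-D inter-digit distance
    i_max = np.argmax(aperture[in_movement])
    return aperture[in_movement][i_max], t_ms[in_movement][i_max]

# Example with synthetic markers sampled at 250 Hz for 1 s: the grip opens and closes.
t = np.arange(0.0, 1000.0, 4.0)
opening = 40.0 * np.sin(np.pi * t / 1000.0)
index_xyz = np.column_stack([0.2 * t, np.zeros_like(t),  opening])
thumb_xyz = np.column_stack([0.2 * t, np.zeros_like(t), -opening])
print(max_grip_aperture(index_xyz, thumb_xyz, t, onset_ms=100, offset_ms=900))
```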
For the eye movement data, we analyzed the fixations made until the end of the grasping movement was registered. An eye movement was considered a saccade when the velocity of the eye was at least 30°/s with an acceleration of 8000°/s². An amplitude threshold of 1° was applied to remove small corrective saccades. A fixation was defined as the interval between saccades. The main variables of interest were the number of fixations during grasping, the position of the fixation at the moment maximum grip aperture (MGA) was reached, the duration of the fixation that included the occurrence of MGA, and the time at which this fixation started relative to the occurrence of MGA. 
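A simplified saccade detector in the spirit of these thresholds might look as follows; this is our own approximation for illustration, not the EyeLink parser, and the gaze array and parameter names are assumptions:

```python
import numpy as np

def detect_saccades(gaze_deg, fs=250.0, vel_thresh=30.0, acc_thresh=8000.0, min_amp=1.0):
    """Return a list of (start_idx, end_idx) saccade intervals.

    gaze_deg : (T, 2) gaze position in degrees of visual angle
    fs       : sampling rate in Hz
    A run of samples is a candidate saccade while speed >= vel_thresh (deg/s);
    it is kept if its peak acceleration exceeds acc_thresh (deg/s^2) and its
    amplitude exceeds min_amp (deg). Fixations are the intervals in between.
    """
    vel = np.gradient(gaze_deg, axis=0) * fs       # deg/s, per axis
    speed = np.linalg.norm(vel, axis=1)
    acc = np.abs(np.gradient(speed)) * fs          # deg/s^2 (magnitude)

    saccades, start = [], None
    for i, fast in enumerate(speed >= vel_thresh):
        if fast and start is None:
            start = i
        elif not fast and start is not None:
            amplitude = np.linalg.norm(gaze_deg[i - 1] - gaze_deg[start])
            if amplitude >= min_amp and acc[start:i].max() >= acc_thresh:
                saccades.append((start, i - 1))
            start = None
    return saccades
```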
All data was analyzed using repeated-measures ANOVAs. If a factor had more than two levels (e.g., object shape), a Greenhouse-Geisser correction was applied (Greenhouse & Geisser, 1959), resulting in a more conservative testing procedure. A significance level of α = 0.05 was used for all statistical analyses. All values are presented as means ± standard errors of the mean. 
Results
Grasping kinematics
Pre-analysis of the effects of object shape on grasp kinematics
In the experiment, participants had to grasp five different objects. The variation of object shape was mainly introduced in order to test whether eye movements are affected by the difficulty of grasping a given object depending on the size of the contact location. To test whether object shape affected the movement kinematics, we applied a 2 (digit: index finger/thumb) × 2 (movement type: hand-before/hand-behind) × 5 (object shape) repeated-measures ANOVA to our data. This analysis revealed that there were no effects of object shape on MT (with main effect of shape: F(4, 36) = 0.49, p = 0.68, and p > 0.09 for all interaction effects), TTC (with main effect of shape: F(4, 36) = 1.81, p = 0.17, and p > 0.09 for all interaction effects), PV (with main effect of shape: F(4, 36) = 0.52, p = 0.65, and p > 0.55 for all interaction effects), and TPV (with main effect of shape: F(4, 36) = 1.43, p = 0.25, and p > 0.17 for all interaction effects). However, there was a significant main effect of object shape on RT, F(4, 36) = 3.12, p = 0.037. The interaction effects on RT were not significant (all p > 0.18). 
Data was merged across all object shapes for further analysis of the movement data. 
Trajectory variability (TV)
The 2 (digit: index finger/thumb) × 2 (movement type: hand-before/hand-behind) repeated-measures ANOVA revealed no significant main effects of digit, F(1, 9) = 0.08, p = 0.78, or movement type, F(1, 9) = 3.30, p = 0.10, on the variability of the trajectory. Importantly, we replicated the finding from Experiment 1 of a significant interaction effect between movement type and digit on movement variability, F(1, 9) = 5.59, p = 0.042 (Figure 3a). The movement variability was higher for the index finger when a hand-before movement was required (hand-behind: 209.3 mm² ± 31.1 mm² vs. hand-before: 373.3 mm² ± 64.2 mm²) and higher for the thumb when a hand-behind movement was required (hand-behind: 293.4 mm² ± 23.8 mm² vs. hand-before: 269.7 mm² ± 34.5 mm²). This finding confirms the hypothesis that the finger which has to pass over the object in order to reach its contact position shows the more variable movement path. 
Figure 3. Experiment 2: Trajectory variability (A), time to contact (B), and peak velocity (C) as a function of digit (index/thumb) and movement type (hand-before/hand-behind). Error bars indicate ±1 SEM (between subjects).
RT, MT, and TTC
For RT, the 2 (digit: thumb/index finger) × 2 (movement type: hand-before/hand-behind) repeated-measures ANOVA revealed a significant main effect of the digit, F(1, 9) = 12.16, p = 0.007. There was no main effect of movement type, F(1, 9) = 0.09, p = 0.77. On average, movement onset for the thumb was found to be 290 ms ± 19 ms after the presentation of the go signal whereas the onset of the index finger occurred slightly later, after about 303 ms ± 17 ms. There was also a significant interaction effect, F(1, 9) = 6.63, p = 0.03, indicating that the thumb started to move earlier when a hand-before movement was required. 
Regarding MT, the 2 (digit: thumb/index finger) × 2 (movement type: hand-before/hand-behind) repeated-measures ANOVA replicated our finding from Experiment 1 that MT was significantly shorter for the index finger than for the thumb, F(1, 9) = 34.91, p < 0.001. On average, MT was 528 ms ± 20 ms for the index finger and 596 ms ± 18 ms for the thumb. Additionally, we also observed a significant effect of movement type on MT, F(1, 9) = 9.65, p = 0.013. Movements were significantly faster when the hand started from behind the object (536 ms ± 15 ms) than when the hand started from before the object (588 ms ± 24 ms). There was no significant interaction effect, F(1, 9) = 0.54, p = 0.48. 
As in Experiment 1, we analyzed the total time it took each finger after the go signal to reach the target object (TTC). The 2 (digit: thumb/index finger) × 2 (movement type: hand-before/hand-behind) repeated-measures ANOVA revealed a significant main effect of digit, F(1, 9) = 37.94, p < 0.001. Again, the index finger reached the object first, about 831 ms ± 28 ms after the go signal, independent of the movement type. The thumb reached the object approximately 55 ms later, after 886 ms ± 26 ms (Figure 3b). There was also a significant interaction effect, F(1, 9) = 6.16, p = 0.035, indicating that the time by which the index finger arrived before the thumb was larger in the hand-before conditions (index finger 58 ms earlier) than in the hand-behind conditions (index finger 39 ms earlier). There was no significant main effect of movement type, F(1, 9) = 4.20, p = 0.07. As in Experiment 1, we calculated the TTC for the wrist marker and its relationship with the TTC of the index finger and thumb. As in Experiment 1, the hand transport ended before TTC was determined for the thumb and after TTC was determined for the index finger (wrist: 846 ms ± 25 ms; thumb: 886 ms ± 26 ms; index finger: 831 ms ± 28 ms). Post-hoc tests confirmed that the differences between the effectors (index finger, thumb, wrist) were significant (all p < 0.03). 
PV and TPV
Two separate 2 × 2 repeated-measures ANOVAs with the factors digit (index finger, thumb) and movement type (hand-before/hand-behind) were applied to the data for PV and TPV. We replicated our finding from Experiment 1 that there was a significant interaction effect between the digit involved and movement type, F(1, 9) = 149.16, p < 0.001 (Figure 3c). Post-hoc tests confirmed that the index finger showed the higher PV when a hand-before movement was required (hand-before: 813 mm/s ± 48 mm/s vs. hand-behind: 697 mm/s ± 28 mm/s) while the thumb showed the opposite pattern (hand-before: 629 mm/s ± 36 mm/s vs. hand-behind: 844 mm/s ± 37 mm/s). In contrast to Experiment 1, the main effect of digit was not significant, F(1, 9) = 3.08, p = 0.11. There was also no significant main effect of movement type on PV, F(1, 9) = 2.18, p = 0.17. 
Regarding the TPV, there was a significant main effect of digit, F(1, 9) = 8.49, p = 0.017. The index finger on average reached peak velocity earlier than the thumb (index finger: 207 ms ± 10 ms; thumb: 225 ms ± 12 ms). There was no main effect of movement type, F(1, 9) = 1.76, p = 0.22, and no significant interaction effect, F(1, 9) = 2.43, p = 0.15. 
Eye movement data
This experiment was designed to test whether an inversion of the dynamics between index finger and thumb during reach-to-grasp actions performed from a start position either in front of or behind the target object would affect the pattern of eye fixations. 
Number of fixations during grasping
We determined the number of fixations that occurred during the grasping movement. A 2 × 5 repeated-measures ANOVA with the factors movement type (hand-before/hand-behind) and object shape (cross, upright triangle, upside-down triangle, upright “T” shape, and upside-down “T” shape) revealed no significant main effects and no interactions (all p > 0.39). The average number of fixations occurring between the go signal and the end of the movement was 2.52 ± 0.12. Interestingly, after movement onset, participants tended to keep fixation at the same location until they reached the target object. On average, participants made 1.65 ± 0.09 fixations between movement onset and movement offset. 
Fixation locations
We calculated the fixation position at the moment of MGA. In 96.9% ± 0.5% of the trials, the fixation at MGA corresponded to the location of fixation recorded at the end of the movement. Given that occasionally, though quite rarely, participants had already looked away (toward the direction of the start position of the hand) at the moment in which the hand made contact with the object, we decided that fixation at MGA was the most reliable measure to determine where participants were looking during grasping. This rationale is also supported by the fact that the average fixation duration at MGA was 1106 ms ± 72 ms, which is considerably longer than the duration of the grasping movement (MT). That means that in many trials, participants started their fixation well before the MGA was reached in grasping, and also often kept fixation on the shape after they had finished their movement. The comparatively long fixation times might also be partly due to the fact that participants in our study were not instructed to remove the target object from the monitor but to simply lift it slightly within its attachment. A 2 (movement type) × 5 (object shape) repeated-measures ANOVA revealed that fixation duration at MGA was neither influenced by object shape, F(4, 36) = 1.16, p = 0.34, nor by movement type, F(1, 9) = 4.48, p = 0.063. There was no interaction effect, F(4, 36) = 1.05, p = 0.38. 
As participants, when opening their eyes at the beginning of the trial, were looking in the direction of the start position of the hand, most fixations made during the preview period fell in between the start location of the hand and the target object. In order to find out whether participants already reached their final fixation location during the preview period, we calculated in how many trials the last fixation during the preview period was the same as the fixation location at MGA. Analysis showed that both locations were identical in only 10.5% ± 4.5% of all trials. The position fixated at MGA was on average reached 497 ms ± 38 ms before the MGA was reached in grasping. As the MGA was reached on average 498 ms after movement onset, this finding seems to suggest that participants often reached the final fixation location around the time of movement initiation (further supported by the fact that we observed only very few fixations between movement onset and movement offset). Therefore, we also computed the number of trials in which the location fixated at movement onset was identical to the location fixated at MGA. Our results show that in 50.5% ± 4.6% of all trials these two locations were identical, confirming that in about half of all trials, participants did not make any eye movements between movement onset and the time of MGA. 
Figure 4 shows the average fixation location of all participants on the five different objects during MGA, separated for movements started from before and behind the target object. Generally, we replicated previous findings, observing that during grasping fixation is mainly kept at the position of the index finger (Brouwer et al., 2009; de Grave et al., 2008). Note that this seemed to be the case for all object shapes and movement types (hand-before/hand-behind). The 2 (movement type) × 5 (object shape) repeated-measures ANOVA on the vertical position of the fixation during MGA revealed no significant effect of movement type, F(1, 9) = 0.96, p = 0.35. However, the vertical fixation position was significantly influenced by object shape, F(4, 36) = 5.65, p = 0.004. On average, the fixation was directed slightly more toward the upper end of the shape (direction of the index finger) when the downward pointing triangle and the upright t-shape were grasped. There was no significant interaction effect, F(4, 36) = 1.43, p = 0.26. 
Figure 4. Average fixation position for all object shapes at the time maximum grip aperture was reached during grasping as a function of movement type. The fixation position during hand-before movements is indicated by the small black upward pointing triangles, and the average fixation position during hand-behind movements is indicated by the small gray downward pointing triangles (error bars reflect standard errors between subjects in the horizontal and vertical directions).
Regarding the horizontal fixation position, Figure 4 seems to suggest that the fixation position is slightly biased in the direction from which the hand is approaching the target object (i.e., slightly to the right of the object's center for the movements starting from the upper right and slightly leftward of the object's center for movements starting from the lower left side of the object). However, this effect did not reach significance, F(1, 9) = 4.50, p = 0.063. There was also no significant effect of object shape on the horizontal fixation position, F(4, 36) = 2.89, p = 0.064, and there was no interaction effect, F(4, 36) = 1.05, p = 0.40. 
Discussion
The aim of this second experiment was to examine how the variability of the trajectory affects the eye movement pattern during grasping. Previous research has reported that eye movements during reach-to-grasp actions are biased toward the index finger (Brouwer et al., 2009; de Grave et al., 2008). These authors have suggested that this bias occurs because the index finger needs to be monitored more carefully when approaching the target as it shows the greater variability in its trajectory (Galea et al., 2001; Haggard & Wing, 1997). In Experiment 1, we have shown that it is possible to manipulate the variability of the trajectory of the index finger and the thumb by altering the direction from which the hand approaches the target object. The results provided support for the idea that higher trajectory variability is not unique to the index finger. In fact, we showed that trajectory variability of the digits during grasping is likely to depend on which digit has to pass over the object in order to reach its contact position. Specifically, we found that the index finger has a higher trajectory variability and peak velocity when it has to pass over the object during the hand-before movements, whereas the thumb has higher trajectory variability and peak velocity when it has to pass over the object during the hand-behind movements. In Experiment 2, we used this knowledge to test whether fixation locations during grasping vary depending on the variability and peak velocity of the digits. Specifically, we were interested in whether fixations are made toward the contact position of the index finger during hand-before movements and the contact position of the thumb during hand-behind movements. 
In order to be able to measure hand and eye movements simultaneously and within the same frame of reference, we had to adapt our setup to allow participants to perform grasping movements toward real objects in front of a computer monitor. Critically, we were able to replicate the main findings from Experiment 1. If a hand-before movement was required, the peak velocity and the movement variability were higher for the index finger than for the thumb. Conversely, if a hand-behind movement was required, the thumb showed the higher peak velocity as well as trajectory variability. However, the above variation had no effect on the concurrent pattern of fixation. Independent of the direction from which the hand approached the target object, fixations were biased in the direction of the index finger. Given that we also found that the index finger was the first digit to reach the target object and that this occurred independent of movement type (on average more than 50 ms before the thumb), we suggest an alternative to the visual feedback interpretation, i.e., that fixations mainly occur at the position at which the hand makes first contact with the object (first-to-contact interpretation). Consistent with our findings, Brouwer et al. (2009) also reported that the index finger reached the object about 80 ms earlier than the thumb in all conditions. 
Overall, the differences in movement kinematics between hand-before and hand-behind movements were less pronounced than in Experiment 1. This difference can most likely be attributed to the differences between the setups used in both experiments. In Experiment 1, the grasping points were more or less aligned with the direction of the movement (i.e., the grasping target was placed directly in front of or behind the start position of the hand) whereas this was not the case in Experiment 2 (i.e., the hand approached the target from the side). Hence, the relative difference in path length between the digits was larger in Experiment 1 than in Experiment 2. As we assume that PV and TV of the digits vary depending on the distance each finger has to cover to reach the target object, it is not surprising that differences are less pronounced when the difference in path length between the fingers is reduced. However, even under these relatively restricted conditions (grasping in the vertical plane and in front of a computer monitor), we observed a reliable inversion in trajectory variability and peak velocity between hand-before and hand-behind movements for index finger and thumb. Our finding that movement times were shorter for hand-behind movements possibly relates to the way the setup was constructed. Given that grasping actions were performed in the vertical plane, it is possible that movements executed against gravity (hand-before movements performed upwards from the lower left of the screen) might have been delayed as compared to movements executed toward gravity (hand-behind movements performed downward from the upper right of the screen). 
As discussed above, the observed alterations in movement kinematics associated with a variation of the start position of the hand relative to the target occur very consistently and reliably (i.e., when grasping different objects in different grasping setups). However, as TTC was always determined via the same velocity threshold (50 mm/s), the question arises as to what extent our observations might depend on the specific threshold chosen to determine the end of the movement. In order to rule out the possibility that our findings are a result of the selected velocity threshold, we also calculated the TTC using velocity thresholds of 2.5 cm/s (half the original threshold) and 10 cm/s (double the initial threshold). For both calculations we replicated the finding that the index finger reached the object first (highly significant for both thresholds). Interestingly, the difference between the digits increased the lower the selected velocity threshold was (2.5 cm/s: index finger on average 89 ms earlier; 5 cm/s: index finger 55 ms earlier; 10 cm/s: index finger 31 ms earlier). These results indicate that the desynchronization between the digits gets larger the closer they move toward the target object. 
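To illustrate this threshold dependence, the sketch below recomputes movement offset (and hence the end point of TTC) for a synthetic velocity profile under several thresholds; it is an illustrative reconstruction with invented data, not the authors' analysis code.

```python
import numpy as np

def offset_time(speed_mm_s, t_ms, threshold_mm_s):
    """Time (ms) at which the digit's speed last drops below the threshold.

    speed_mm_s : (T,) tangential speed of a digit marker in mm/s
    t_ms       : (T,) sample times in ms
    """
    below = speed_mm_s < threshold_mm_s
    # Last transition from >= threshold to < threshold defines movement offset.
    transitions = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    return t_ms[transitions[-1]] if transitions.size else t_ms[-1]

# Synthetic bell-shaped speed profile: lower thresholds give later offsets,
# so differences measured closer to the object can grow larger.
t = np.arange(0.0, 1200.0, 4.0)
speed = 900.0 * np.exp(-((t - 500.0) / 180.0) ** 2)
for thr in (25.0, 50.0, 100.0):   # 2.5, 5, and 10 cm/s
    print(f"threshold {thr:5.1f} mm/s -> offset at {offset_time(speed, t, thr):.0f} ms")
```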
Furthermore, we also hypothesized that the properties of the contact position, such as its size (and the resulting difficulty of placing the digit), could affect the fixation location during grasping. We observed that fixations were directed further towards the upper end of the shape when grasping the downward pointing triangle and the upright t-shape as compared to the other shapes. These observations are contrary to what one would expect if the gaze were to be attracted to the position at the target at which the finger needs to be placed most accurately during grasping. Rather, this finding suggests that fixations during grasping are shifted in the direction of the COG of the object. Alternatively, one could hypothesize that it might be more difficult to find a suitable/stable contact position on the larger surface area as the contact position is less clearly defined than the contact point on the smaller surface area. Our finding is, however, different from that of Brouwer et al. (2009), who observed that fixations during grasping are drawn toward the smaller contact position. This difference can possibly be attributed to the fact that we used smaller objects in our experiment (the distance between contact positions was only about 6.9° of visual angle in our study compared with 11.5° of visual angle in Brouwer et al., 2009). Hence, one might argue that even when participants fixated at the contact position of the index finger, they were still well able to view the opposite contact position without making additional eye movements. 
In summary, Experiment 2 replicated our initial observation that fixations during reach-to-grasp movements are biased toward the contact position of the index finger. However, our study clearly shows that this fixation behavior cannot be attributed to the fact that the index finger follows the more variable trajectory, therefore requiring additional visual guidance. In situations in which the thumb described the more variable trajectory, fixations were nevertheless directed toward the contact position of the index finger. As the index finger is always the first finger to arrive at the object (consistently so, both within this study and across other studies), regardless of movement direction and object shape, we suggest that the gaze is directed at the location at which the first object contact is expected (first-to-contact interpretation). 
General discussion
Eye movements during reaching and grasping serve at least two primary purposes. Firstly, eye movements monitor the ongoing action to secure accuracy. Visual information about the approaching hand is used to adjust the movement online (Binsted et al., 2001; Paillard, 1996; Riek, Tresilian, Mon-Williams, Coppard, & Carson, 2003; Woodworth, 1899). Secondly, fixating at the target helps the motor system to specify the spatial location of the target object, i.e., providing information on where to direct the effector (Hayhoe et al., 2003; Johansson et al., 2001; Soechting, Engel, & Flanders, 2001). Early studies which focused on the investigation of movement kinematics during grasping suggested that the position of the thumb is the key variable that the motor system controls when transporting the hand to the target location. This suggestion was primarily based on the finding that the thumb trajectory is typically straighter than the trajectory of the index finger (Wing & Fraser, 1983) and less variable (Galea et al., 2001; Haggard & Wing, 1998; Paulignan, Jeannerod, et al., 1991). Indeed, Wing and Fraser (1983) originally hypothesized that the motor system might stabilize the thumb in order to increase the accuracy of visual judgments about positioning errors. By ensuring a straight movement path of the thumb, the visual feedback about the relative position between target object and the grasping hand becomes less variable and thus provides a reliable predictor of the upcoming hand contact location. More recent studies which have measured eye and hand movements simultaneously (Brouwer et al., 2009; de Grave et al., 2008) have reported that fixations are directed toward the location of the index finger and not the thumb—thus challenging Wing and Fraser's hypothesis that fixations during grasping are directed toward the digit with the steadier movement path. It was therefore concluded that fixations toward the more variable index finger are preferred as this finger needs a higher amount of visual monitoring as corrections to the movement path are more likely to happen. The present data contests this hypothesis, as we show that fixations toward the index finger persist in situations in which the thumb is the digit with the more variable movement path. Alternatively, we suggest that the eyes use a very parsimonious rule by following the faster finger. As noted, we report that fixations are made toward the object contact position that is touched first. 
Brouwer et al. (2009) also considered the possibility that the index finger might be fixated because it is the finger that makes first contact with the object. However, these authors argued that this interpretation is unlikely, as there were no significant correlations between the extent of the saccade in the direction of the contact position of the index finger and the extent to which the index finger arrived before the thumb. In our opinion, however, the absence of such a correlation does not necessarily rule out the hypothesis that time to contact determines the fixation location during grasping. As the saccade is initiated some time before the hand reaches its contact position, there is no reason to assume that the end position of the saccade would change depending on how much earlier the index finger reaches the object. A saccade toward the contact position of the index finger simply reflects the fact that this finger is due to arrive at the object first. We do, however, have to acknowledge that our conclusion is primarily reached by a process of elimination. That is, by altering the direction of the movement, we were able to invert most of the kinematic variables between the index finger and the thumb (namely peak velocity and trajectory variability), variables which were previously assumed to affect the fixation behavior during grasping. As the only kinematic variable which remained unchanged by the variations of the grasping task was time to contact, we suggest that this might be the crucial parameter which determines where we look during grasping. 
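To make the kinematic variables discussed here concrete, the sketch below illustrates how peak velocity and the moment of object contact might be read off a single digit's recorded position samples. The sampling rate and the 2 mm contact criterion are assumptions made purely for illustration and are not the criteria used in our experiments.

import numpy as np

def digit_kinematics(positions_mm, contact_point_mm, fs_hz=200.0, contact_thresh_mm=2.0):
    """Peak velocity (mm/s) and time to contact (s) for one digit on one trial.

    positions_mm: (n_samples, 3) array of digit positions; contact_point_mm:
    the digit's contact position on the object. Contact is taken as the first
    sample at which the digit comes within contact_thresh_mm of that point.
    """
    speed = np.linalg.norm(np.diff(positions_mm, axis=0), axis=1) * fs_hz
    peak_velocity = speed.max()
    distance = np.linalg.norm(positions_mm - contact_point_mm, axis=1)
    in_contact = distance < contact_thresh_mm
    if not in_contact.any():
        raise ValueError("Digit never reached the assumed contact criterion.")
    time_to_contact = np.argmax(in_contact) / fs_hz   # seconds from first sample
    return peak_velocity, time_to_contact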
A more direct test of this hypothesis would involve designing a grasping task in which the thumb reaches the object first. At first blush, this appears to be a straightforward procedure, yet there are problems with this approach. First, grasping setups in which the thumb is likely to reach the object first normally involve moving the index finger behind the object (e.g., grasping a mug on the table), thus occluding the contact position of the index finger from view. In such a setup, participants have no choice but to fixate the contact position of the thumb. (Indeed, this is what was reported by previous studies applying such grasping tasks, cf. Johansson et al., 2001.) Second, one could instruct participants to ensure that the thumb contacts the target object first. This would, however, result in unnatural grasping behavior and would therefore most likely change the accompanying fixation strategy for reasons other than the velocity of the thumb. Hence, it would be relatively unsurprising if, in such a task, eye movements were made toward the contact position of the thumb, as this is the position participants are instructed to pay attention to (Posner, 1980). 
Moreover, the notion that the location of first finger contact is fixated fits well with the observation that in bimanual reaching tasks, participants prefer to perform the tasks sequentially, accompanied by a serial fixation of both target locations (Hesse, Nakagawa, & Deubel, 2010). That is, fixations are made toward the position that is reached first by one of the hands before eye movements are made toward the second target location and the movement is finished (Hesse et al., 2010; Riek et al., 2003). Thus, one could hypothesize that during grasping, too, fixations are directed at the location of initial contact, even though there is no subsequent fixation of the second contact location. The finding that, in grasping, participants do not make a second saccade toward the contact location of the thumb might be attributed to two factors. First, compared to bimanual movements, the two contact locations are spatially close to one another. This proximity means that participants are well able to see the contact position of the thumb whilst fixating the contact position of the index finger (in the current study, both positions were separated by less than 7° of visual angle). Second, the contact position of the thumb is further specified by the tactile feedback that is received when the index finger touches the object, especially when symmetric target objects are grasped. 
Conclusion
In summary, our study revealed two primary findings. First, we observed that the higher variability in the trajectory of the index finger is likely owed to the fact that grasping is usually investigated in a standardized task requiring hand-before movements. In such a task, participants are instructed to initiate their grasp with both digits at a starting position that is either aligned with the body midline or located slightly to the right of it, and then to grasp a target placed in front of them on the table. Here we show that the role attributed to each finger in grasping (i.e., which finger guides the movement and which finger is more variable) can be altered when the grasping setup is varied (i.e., when movements have to be made from behind the object). This finding strengthens the view that the description of grasping as a stereotypical movement pattern is mainly owing to the stereotypical conditions used to investigate grasping movements (for discussion, see also Hesse & Deubel, 2009). Second, we show that the fixation pattern during grasping is unaffected by the variability of the movement paths of the digits. This finding challenges the view that the digit with the higher variability requires a greater amount of visual monitoring. Rather, our findings support the hypothesis that fixations are directed toward the location at which the hand makes first contact with the target object (first-to-contact hypothesis). 
Acknowledgments
The authors would like to thank Mr. David Knight and Mr. Andrew Long for their technical support. Constanze Hesse held a post-doctoral research fellowship of the German Research Foundation (DFG; HE 6011/1-1) at the time the experiments were conducted. 
Commercial relationships: none. 
Corresponding author: Constanze Hesse. 
Email: c.hesse@abdn.ac.uk. 
Address: School of Psychology, King's College, University of Aberdeen, Aberdeen, U.K. 
References
Admiraal M. A. Keijsers N. L. W. Gielen C. (2003). Interaction between gaze and pointing toward remembered visual targets. Journal of Neurophysiology, 90 (4), 2136–2148, doi:10.1152/jn.00429.2003.
Bernardis P. Knox P. Bruno N. (2005). How does action resist visual illusion? Uncorrected oculomotor information does not account for accurate pointing in peripersonal space. Experimental Brain Research, 162, 133–144.
Biguer B. Jeannerod M. Prablanc C. (1982). The coordination of eye, head, and arm movements during reaching at a single visual target. Experimental Brain Research, 46, 301–304.
Biguer B. Prablanc C. Jeannerod M. (1984). The contribution of coordinated eye and head movements in hand pointing accuracy. Experimental Brain Research, 55, 462–469.
Binsted G. Chua R. Helsen W. Elliott D. (2001). Eye-hand coordination in goal-directed aiming. Human Movement Science, 20 (4–5), 563–585, doi:10.1016/s0167-9457(01)00068-9.
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10 (4), 433–436.
Brouwer A. M. Franz V. H. Gegenfurtner K. R. (2009). Differences in fixations between grasping and viewing objects. Journal of Vision, 9 (1): 18, 1–24, doi:10.1167/9.1.18.
Cavina-Pratesi C. Ietswaart M. Humphreys G. W. Lestou V. Milner A. D. (2010). Impaired grasping in a patient with optic ataxia: Primary visuomotor deficit or secondary consequence of misreaching? Neuropsychologia, 48 (1), 226–234, doi:10.1016/j.neuropsychologia.2009.09.008.
Cavina-Pratesi C. Monaco S. Fattori P. Galletti C. McAdam T. D. Quinlan D. J. (2010). Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. Journal of Neuroscience, 30 (31), 10306–10323, doi:10.1523/jneurosci.2023-10.2010.
Cornelissen F. W. Peters E. M. Palmer J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the psychophysics toolbox. Behavior Research Methods, Instruments, & Computers, 34 (4), 613–617.
de Grave D. D. J. Franz V. H. Gegenfurtner K. R. (2006). The influence of the Brentano illusion on eye and hand movements. Journal of Vision, 6 (7): 5, 727–738, http://www.journalofvision.org/content/6/7/5.full, doi:10.1167/6.7.5.
de Grave D. D. J. Hesse C. Brouwer A. M. Franz V. H. (2008). Fixation locations when grasping partly occluded objects. Journal of Vision, 8 (7), 1–11, doi:10.1167/8.7.5.
Franz V. H. (2004). The Optotrak Toolbox. Accessed April 15, 2010 from http://www.allpsych.unigiessen.de/vf/OptotrakToolbox/
Frens M. A. Erkelens C. J. (1991). Coordination of hand movements and saccades: Evidence for a common and a separate pathway. Experimental Brain Research, 85, 682–690.
Galea M. P. Castiello U. Dalwood N. (2001). Thumb invariance during prehension movement: Effects of object orientation. Neuroreport, 12 (10), 2185–2187, doi:10.1097/00001756-200107200-00028.
Greenhouse S. W. Geisser S. (1959). On methods in the analysis of profile data. Psychometrika, 24 (2), 95–112.
Haggard P. Wing A. (1997). On the hand transport component of prehensile movements. Journal of Motor Behavior, 29 (3), 282–287.
Haggard P. Wing A. (1998). Coordination of hand aperture with the spatial path of hand transport. Experimental Brain Research, 118, 286–292.
Hayhoe M. M. Shrivastava A. Mruczek R. Pelz J. B. (2003). Visual memory and motor planning in a natural task. Journal of Vision, 3 (1): 6, 49–63, http://www.journalofvision.org/content/3/1/6.full, doi:10.1167/3.1.6.
Hesse C. Deubel H. (2009). Changes in grasping kinematics due to different start postures of the hand. Human Movement Science, 28 (4), 415–436, doi:10.1016/j.humov.2009.03.001.
Hesse C. Nakagawa T. T. Deubel H. (2010). Bimanual movement control is moderated by fixation strategies. Experimental Brain Research, 202 (4), 837–850, doi:10.1007/s00221-010-2189-3.
Johansson R. S. Westling G. R. Backstrom A. Flanagan J. R. (2001). Eye-hand coordination in object manipulation. Journal of Neuroscience, 21, 6917–6932.
Kleiner M. (2010). Visual stimulus timing precision in Psychtoolbox-3: Tests, pitfalls and solutions [Abstract]. Perception, 39, 189.
Land M. F. (2009). Vision, eye movements, and natural behavior. Visual Neuroscience, 26 (1), 51–62, doi:10.1017/s0952523808080899.
Land M. F. Hayhoe M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41 (25–26), 3559–3565, doi:10.1016/s0042-6989(01)00102-x.
Land M. F. Mennie N. Rusted J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28 (11), 1311–1328, doi:10.1068/p2935.
Neggers S. F. W. Bekkering H. (2000). Ocular gaze is anchored to the target of an ongoing pointing movement. Journal of Neurophysiology, 83, 639–651.
Neggers S. F. W. Bekkering H. (2001). Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. Journal of Neurophysiology, 86, 961–970.
Paillard J. (1996). Fast and slow feedback loops for the visual correction of spatial errors in a pointing task: A reappraisal. Canadian Journal of Physiology and Pharmacology, 74 (4), 401–417.
Paulignan Y. Jeannerod M. MacKenzie C. Marteniuk R. (1991). Selective perturbation of visual input during prehension movements: 2. The effects of changing object size. Experimental Brain Research, 87, 407–420.
Paulignan Y. MacKenzie C. Marteniuk R. Jeannerod M. (1991). Selective perturbation of visual input during prehension movements: 1. The effects of changing object position. Experimental Brain Research, 83, 502–512.
Posner M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32 (1), 3–25.
Riek S. Tresilian J. R. Mon-Williams M. Coppard V. L. Carson R. G. (2003). Bimanual aiming and overt attention: One law for two hands. Experimental Brain Research, 153 (1), 59–75, doi:10.1007/s00221-003-1581-7.
Soechting J. F. Engel K. C. Flanders M. (2001). The Duncker illusion and eye-hand coordination. Journal of Neurophysiology, 85 (2), 843–854.
Vercher J. L. Magenes G. Prablanc C. Gauthier G. M. (1994). Eye-head-hand coordination in pointing at visual targets: Spatial and temporal analysis. Experimental Brain Research, 99, 507–523.
Wing A. M. Fraser C. (1983). The contribution of the thumb to reaching movements. Quarterly Journal of Experimental Psychology, 35 (A), 297–309.
Woodworth R. S. (1899). The accuracy of voluntary movement. Psychological Review Monograph, 3 (2), 1–114.
Figure 1
 
Experiment 1: Trajectory variability (A), time to contact (B), and peak velocity (C) as a function of digit (index/thumb) and movement type (hand-before/hand-behind). Error bars indicate ±1 SEM (between subjects).
Figure 2
 
Experiment 2: Schematic drawing of the experimental apparatus. A: Grasping movements were made in front of a TFT monitor and eye movements were recorded simultaneously. B: Target objects.
Figure 3
 
Experiment 2: Trajectory variability (A), time to contact (B), and peak velocity (C) as a function of digit (index/thumb) and movement type (hand-before/hand-behind). Error bars indicate ±1 SEM (between subjects).
Figure 4
 
Average fixation position for all object shapes at the time maximum grip aperture was reached during grasping as a function of movement type. The fixation position during hand-before movements is indicated by the small black upward pointing triangles, and the average fixation position during hand-behind movements is indicated by the small gray downward pointing triangles (error bars reflect standard errors between subjects in the horizontal and vertical directions).