Visual perception can be manipulated with a gaze-contingent display (GCD), a technique that updates the display in response to the observer's gaze, i.e., head and eye movements (Duchowski, Cournia, & Murphy,
2004; Reder,
1973). A GCD first detects the gaze direction with an eye tracker and then manipulates the displayed image synchronously according to that direction (Aguilar & Castet,
2011; Han, Saunders, Woods, & Luo,
2013; Santini, Redner, Iovin, & Rucci,
2007); a minimal sketch of this detect-and-update loop is given below. GCD paradigms have been used in a variety of applications, including vision science research (Loschky & McConkie,
2002; Pidcoe & Wetzel,
2006; Rayner,
2014; Zang, Jia, Müller, & Shi,
2015), virtual reality (Sheldon, Abegg, Sekunova, & Barton,
2012; Wade et al.,
2016), video transmission (Duchowski et al.,
2004), and driving simulators (Reingold, Loschky, McConkie, & Stampe,
2003).
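
As an illustration only, the following Python sketch shows the two steps of the GCD loop described above: gaze detection followed by a synchronous display update, paced here to an assumed 60 Hz refresh rate. The functions get_gaze_direction() and render_stimulus() are hypothetical placeholders standing in for a vendor eye-tracker sampling call and a stimulus-drawing routine; a real system would substitute the tracker's SDK and a graphics or stimulus library.

```python
import time

# Minimal sketch of a gaze-contingent display (GCD) loop.
# Assumption: the display refreshes at 60 Hz.
REFRESH_HZ = 60
FRAME_DURATION = 1.0 / REFRESH_HZ


def get_gaze_direction():
    # Hypothetical placeholder: a real implementation would poll the eye
    # tracker for the latest gaze sample (screen coordinates).
    return (0.0, 0.0)


def render_stimulus(gaze_xy):
    # Hypothetical placeholder: a real implementation would redraw the image
    # conditioned on the current gaze, e.g., moving a window, mask, or
    # simulated scotoma so that it follows the point of regard.
    pass


def run_gcd(duration_s=5.0):
    """Detect the gaze, then update the display synchronously, once per frame."""
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        gaze = get_gaze_direction()    # step 1: detect the gaze direction
        render_stimulus(gaze)          # step 2: manipulate the displayed image
        time.sleep(FRAME_DURATION)     # pace the loop to the display refresh


if __name__ == "__main__":
    run_gcd(1.0)
```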