August 2024
Volume 24, Issue 8
Open Access
Consistent metacognitive efficiency and variable response biases in peripheral vision
Journal of Vision, August 2024, Vol. 24(8), 4. https://doi.org/10.1167/jov.24.8.4
      Joseph Pruitt, J. D. Knotts, Brian Odegaard; Consistent metacognitive efficiency and variable response biases in peripheral vision. Journal of Vision 2024;24(8):4. https://doi.org/10.1167/jov.24.8.4.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Across the visual periphery, perceptual and metacognitive abilities differ depending on the locus of visual attention, the location of peripheral stimulus presentation, the task design, and many other factors. In this investigation, we aimed to illuminate the relationship between attention and eccentricity in the visual periphery by estimating perceptual sensitivity, metacognitive sensitivity, and response biases across the visual field. In a 2AFC detection task, participants were asked to determine whether a signal was present or absent at one of eight peripheral locations (±10°, 20°, 30°, and 40°), using either a valid or invalid attentional cue. As expected, results revealed that perceptual sensitivity declined with eccentricity and was modulated by attention, with higher sensitivity on validly cued trials. Furthermore, a significant main effect of eccentricity on response bias emerged, with variable (but relatively unbiased) c′a values from 10° to 30°, and conservative c′a values at 40°. Regarding metacognitive sensitivity, significant main effects of attention and eccentricity were found, with metacognitive sensitivity decreasing with eccentricity, and decreasing in the invalid cue condition. Interestingly, metacognitive efficiency, as measured by the ratio of meta-d′a/d′a, was not modulated by attention or eccentricity. Overall, these findings demonstrate (1) that in some circumstances, observers have surprisingly robust metacognitive insights into how performance changes across the visual field and (2) that the periphery may be subject to variable detection biases that are contingent on the exact location in peripheral space.

Introduction
What are the characteristics of perceptual decision-making and metacognition across the visual periphery? It is well established that peripheral vision is subject to phenomena which limit the resolution of perceptual details (Rosenholtz, 2016) and influence perceptual decisions. Physical limitations such as decreased cone density (Curcio, Sloan, Kalina, & Hendrickson, 1990) contribute to decreased contrast and color sensitivity as distance from the fovea increases (Daitch & Green, 1969; Gutwin, Cockburn, & Coveney, 2017; Hansen, Pracejus, & Gegenfurtner, 2009; Thibos, Still, & Bradley, 1996). Beyond basic sensitivity deficits, crowding, the difficulty of perceiving and recognizing objects in the visual periphery that are surrounded by similar items (Keshvari & Rosenholtz, 2016; Whitney & Levi, 2011), pervades the periphery and places constraints on the details observers are aware of. Crowding itself can influence perception of attributes like size, saturation, hue, orientation, and other features (Andriessen & Bouma, 1976; Parkes, Lund, Angelucci, Solomon, & Morgan, 2001; van den Berg, Roerdink, & Cornelissen, 2007; Wilkinson, Wilson, & Ellemberg, 1997). In addition to crowding, it has been shown that some averaging or summarizing of low-level visual features occurs in the periphery, resulting in summary statistic or ensemble representations that limit conscious perception of orientation, motion, position, size, and other fine details in the periphery (Alvarez & Oliva, 2008; Ariely, 2001; Cohen, Dennett, & Kanwisher, 2016; Dakin, 2001; Haberman & Whitney, 2007; Parkes et al., 2001; Watamaniuk, Sekuler, & Williams, 1989; Watamaniuk & Duchon, 1992; Williams & Sekuler, 1984). 
Recent literature in this area has provided conflicting anecdotes and results regarding decision-making in the periphery across an array of paradigms. For instance, according to one influential account, participants are often surprised at the mismatch between the visual eccentricity where they think they will be able to make accurate perceptual judgments (such as identifying a playing card) and how close to the fovea objects must actually be before judgments are correct (Cohen et al., 2016). Empirical support for overconfidence in the periphery has been demonstrated in one task which showed that on incorrect trials, participants were more confident in judgments of crowded visual stimuli compared to single visual stimuli, despite performance being lower overall in the crowded condition (Odegaard, Chang, Lau, & Cheung, 2018). Further, research on redundancy masking found overconfidence in perception of lines 10° into the visual periphery (Yildirim & Sayim, 2022). These findings of overestimations of visual experiences have been extended to color perception. In a phenomenon known as the “pan-field color illusion,” observers tend to perceive images with a chromatic center and an achromatic periphery (chimera images) as fully colored (Balas & Sinha, 2007). This overestimation of color perception was also observed in dynamic real-world scenery; around a third of participants failed to notice when researchers desaturated up to 95% of a 360° real-world scene presented via a VR headset (Cohen, Botch, & Robertson, 2020). Moreover, the pan-field color illusion has been shown to change as a function of eccentricity. Okubo and Yokosawa (2023) found that the tendency to perceive chimera images as fully colored increased as the monochromatic aspect was confined further to the periphery; this finding was modulated by the attentional load of the main task (how fast stimuli were presented) for some eccentricities and not for others. 
However, being too confident in our judgments in the periphery does not extend across all paradigms: other recent work has demonstrated underconfidence in the periphery (Toscani, Mamassian, & Valsecchi, 2021) or even shown that metacognitive sensitivity can track task performance reasonably well, at least in the domain of color diversity judgments (Hawkins et al., 2022). Research has also shown that when peripheral stimuli are magnified to match the increased V2 receptive fields for central vision (and thereby match sensitivity), participants rate the color of two identical-color stimuli more similarly (Zeleznikow-Johnston, Aizawa, Yamada, & Tsuchiya, 2023). Thus, to date, the literature is characterized by a lack of consensus regarding how well observers can perform visual tasks in the periphery, the degree to which confidence may (or may not) track task accuracy, and whether human observers display optimal or suboptimal decision criteria (Rahnev & Denison, 2018) for perceptual decisions in this region of space. 
Some have posited that factors such as decision complexity interact with crowding and peripheral encoding and at least partially explain perceptual decision making in the periphery (Rosenholtz, 2020). Specifically, the complexity of the question that is asked (e.g., detection, discrimination, or identification) may influence decision making in eccentric regions of visual space. This approach is based on the idea that all vision inherently involves performing a task and paying attention; thus, it follows that in order to understand peripheral perception, we need to better characterize how task-based decision-making (Vater, Wolfe, & Rosenholtz, 2022) and the gradation of attention (Shulman, Remington, & McLean, 1979) influence perception across the visual field. The importance of understanding task-related requirements and metacognition for understanding the visual periphery has been previously argued (Odegaard & Lau, 2016), and we agree that understanding both decisional and confidence-based profiles can be useful in revealing how well observers can perform tasks across the visual field, and how certain they may be in those judgments. Here, to make progress in characterizing perceptual decision-making outside the fovea, we start with a simple level of complexity: we conduct a one-dimensional yes/no visual detection task across an array of peripheral locations, ranging from −40° to 40°. 
On each trial, participants reported whether a Gabor grating or noise patch was presented at a specific location and rated their confidence in that judgment on a 1 (low confidence) to 4 (high confidence) scale. To investigate how attention influences decision-making, we utilized pre-stimulus cues: on a large percentage of trials, the location participants were asked about matched a pre-stimulus cue (valid trials), but on a small percentage of trials, participants were asked about the location opposite to the pre-cue (invalid trials) on the other side of the midline. With these data, four metrics could be estimated at each eccentricity for both the valid and invalid attention conditions: perceptual sensitivity (d′a), perceptual criterion (c′a), metacognitive sensitivity (meta-d′a) (Maniscalco & Lau, 2012; Maniscalco & Lau, 2014), and metacognitive efficiency (M-ratio, or meta-d′a/d′a). 
The measure d′a estimates the underlying ability of a subject to discriminate signals (i.e., the Gabor grating) from noise (i.e., the noise patch), separate from any response bias. The better the participant performs on the detection task, the higher d′a will be. Our measure of response bias, c′a (i.e., the perceptual criterion), indicates the amount of sensory evidence a participant needs to respond that a signal is present (Heeger & Landy, 1997). The perceptual criterion may also be conceptualized as the decisional boundary between subjects giving “signal absent” and “signal present” responses. Thus, two participants who perform a detection task equally well (i.e., matched d′a) can have opposing c′a values if one is biased towards choosing “target present” (c′a < 0) whereas the other is biased toward choosing “target absent” (c′a > 0). Our measure of metacognitive sensitivity (meta-d′a) gives an estimate of how effectively confidence judgments discriminate between correct and incorrect responses. Participants who endorse more correct answers with high confidence and more incorrect answers with low confidence will have higher meta-d′a scores. Our measure of metacognitive efficiency (M-ratio, or meta-d′a/d′a) gives an indication of a subject's level of metacognitive sensitivity, given a specific task performance level (Fleming & Lau, 2014; Rahnev, 2023). This normalized measure of metacognition adjusts for the fact that participants who perform better on a task (higher d′a) have a higher ceiling on potential meta-d′a scores. 
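To make these quantities concrete, the standard equal-variance versions of sensitivity, criterion, and M-ratio can be sketched in a few lines. This is an illustrative simplification (our analyses use the unequal-variance variants d′a and c′a described in the Results); the function name and the count correction are our own choices, not the study's code.

```python
from statistics import NormalDist

def sdt_metrics(hits, misses, fas, crs, meta_d=None):
    """Equal-variance SDT sketch: d' (sensitivity) and c (criterion)
    from raw trial counts. A log-linear correction (+0.5 per cell)
    prevents infinite z-scores when a rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    d_prime = z(hit_rate) - z(fa_rate)        # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))     # c < 0: liberal, c > 0: conservative
    m_ratio = (meta_d / d_prime) if meta_d is not None else None
    return d_prime, c, m_ratio
```

For example, an observer with 40 hits, 10 misses, 10 false alarms, and 40 correct rejections is unbiased (c ≈ 0) with d′ around 1.6; an M-ratio of 1 would indicate that confidence ratings carry all the evidence available to the detection decision.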
Given previous findings, we hypothesized that perceptual sensitivity (d′a) and metacognitive sensitivity (meta-d′a) would decline in the periphery, and that observers may exhibit liberal perceptual criteria (c′a) and suboptimal metacognitive efficiency (meta-d′a/d′a) in the periphery. The decreases in perceptual sensitivity (and by extension, metacognitive sensitivity) would replicate previous research showing declining visual acuity into the periphery (Rosenholtz, 2016). In contrast, decreases in metacognitive efficiency would show an increasing loss of sensory evidence from the initial detection judgment (“Signal Present” or “Signal Absent”) to the subsequent metacognitive assessment (confidence on a scale of 1–4). Further, we hypothesized that perceptual sensitivity, metacognitive sensitivity, metacognitive efficiency, and the perceptual criterion would be significantly lower for invalid cue trials than for valid cue trials. Since both eccentricity (Solovey, Graney, & Lau, 2015) and attention (Rahnev et al., 2011) have been found to impact these measures of sensitivity, bias, and metacognition, we expected them to interact such that invalid trials would have lower values of d′a, meta-d′a, M-ratio, and c′a than valid trials at each eccentricity. Such a finding would implicate attention in moderating decision-making behaviors, biases, and metacognitive insight. We test these hypotheses here to establish an initial taxonomy of how attention and eccentricity impact perception and metacognition for the relatively simple decision-making task of visual detection. 
As anticipated, valid trials exhibited higher perceptual and metacognitive sensitivity than invalid trials, and both perceptual and metacognitive sensitivity declined as eccentricity increased. Further, participants were more prone to false alarms on invalid trials. However, unexpectedly, participants exhibited variable decision criteria across different eccentricities in our task, and markedly stable metacognitive efficiency across different locations in the periphery. Together, these results demonstrate that observers may have more insight into how performance capacity changes across the periphery than previously supposed. 
Methods
Participants
Our power analysis used MorePower 6.0.4 software (Campbell & Thompson, 2012). For the fully factorial 2 (attentional cue) × 4 (eccentricity) within-subject design, the sample-size calculation assumed a two-way repeated-measures analysis of variance (ANOVA) with the interaction between the two factors as the effect of interest. Power (1 − β) was set to 0.8 and alpha (α) to 0.05, assuming a large effect size (partial eta-squared, ηp2 = 0.14) (Field, 2013; Richardson, 2011). An effect was considered significant if its p value fell below alpha. Using these parameters, MorePower yielded a sample size of 24. Thus, the study aimed to collect 24 total participants, excluding any data that met the exclusion criteria described below. 
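For readers who use other power calculators, the assumed partial eta-squared can be converted to Cohen's f, the effect-size input most such tools expect. This short sketch shows only that conversion; it does not reproduce MorePower's internal sample-size computation.

```python
import math

def eta_squared_to_cohens_f(eta_sq):
    """Convert (partial) eta-squared to Cohen's f:
    f = sqrt(eta^2 / (1 - eta^2))."""
    return math.sqrt(eta_sq / (1 - eta_sq))
```

With the assumed ηp2 = 0.14, this gives f ≈ 0.40, which sits at Cohen's conventional cutoff for a large effect.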
All experiment procedures were approved by the University of Florida's Institutional Review Board (IRB no. 202000030), and participants were recruited via the University of Florida's SONA participant recruitment system. Participants filled out an informed consent form prior to participation. The experiment lasted up to 120 minutes, and participants were compensated 5 SONA credits for their time. Participants were required to be 18 years of age or older, have normal or corrected-to-normal vision (without the use of glasses), and have no history of seizures. Data were collected from 29 total participants. Five participants were excluded from analyses due to negative d′ values at 10°, which we used as an indicator of task negligence. Thus the final sample size (N = 24) reached the goal yielded by our power analysis. 
Apparatus and stimuli
To display the visual stimuli at extreme visual angles, this study used a 49″ Samsung Odyssey G9 ultrawide curved monitor (1000R curvature). Monitor contrast was linearized using a Konica Minolta LS-150 photometer (Konica Minolta, Tokyo, Japan). Furthermore, this study used a Lenovo ThinkCentre M90t computer (Lenovo, Morrisville, NC, USA) with an NVIDIA GeForce GTX 1660 Super graphics card (NVIDIA Corp., Santa Clara, CA, USA) to control the EyeLink 1000+ eye tracker, which was used to enforce fixation on a central cross throughout the main task. Last, a Dell Precision 3640 Tower PC (Dell, Inc., Round Rock, TX, USA) with a 2.9 GHz Intel processor was used to run the experiment. 
Experiments were programmed using the PsychoPy3 package. Each experiment was displayed in fullscreen mode (5120 × 1440 resolution) on a middle gray background (i.e., RGB set at [0,0,0] in PsychoPy). Two sets of stimuli were created for this study: a noisy Gabor patch and a noise patch. Both patches had a diameter of 2.6° of visual angle (DVA), used a Gaussian mask, and were presented as circular apertures with pixel-based binary noise. In each trial, the Gabor grating's orientation was randomly set to either −45° or 45° to limit a potential confound of cardinal line effects. The noisy Gabor patch was created by overlaying a Gabor grating on top of a noise patch; the opacity of the Gabor grating was manipulated in a staircase procedure (described below) to change the signal-to-noise ratio of the patch, while the noise patch opacity remained constant (with a 1.0 opacity value). In PsychoPy, manipulation of stimulus opacity changes the stimulus's Michelson contrast (Peirce, 2019), such that the opacity of the stimulus represents a weighted average of the stimulus (the Gabor patch) and its background (the noise patch). Four visual angle conditions of 10°, 20°, 30°, and 40° were chosen to encompass near-peripheral vision (8°–30°) and mid-peripheral vision (30°–60°) as described in Gutwin et al. (2017). The stimuli were presented at each of these visual angles along a horizontal axis to the left and right of fixation. Unless otherwise stated, we combined data from corresponding left and right locations to obtain enough trials to accurately estimate d′a and meta-d′a in the invalid condition in this task. 
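The stimulus construction described above can be sketched as follows, assuming (per the description) that opacity acts as a weighted average of the Gabor grating and a binary-noise background inside a Gaussian aperture. All parameter values and names here are illustrative, not the experiment's actual settings, and this is plain Python rather than the PsychoPy rendering pipeline.

```python
import math, random

def noisy_gabor(size=64, freq=0.1, theta_deg=45, opacity=0.3, seed=0):
    """Sketch of a noisy Gabor: an opacity-weighted mix of an oriented
    sinusoidal grating and binary pixel noise, under a Gaussian mask.
    Returns a size x size list of values in [-1, 1]."""
    rng = random.Random(seed)
    th = math.radians(theta_deg)
    c = size / 2
    sigma = size / 6                                  # aperture width (assumed)
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            # coordinate rotated into the grating's orientation
            xr = (x - c) * math.cos(th) + (y - c) * math.sin(th)
            grating = math.sin(2 * math.pi * freq * xr)        # Gabor carrier
            noise = rng.choice([-1.0, 1.0])                    # binary pixel noise
            blend = opacity * grating + (1 - opacity) * noise  # opacity-weighted mix
            mask = math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
            row.append(blend * mask)
        img.append(row)
    return img
```

Setting `opacity=0` yields a pure noise patch (the signal-absent stimulus), while raising `opacity` increases the grating's contribution and hence the signal-to-noise ratio, mirroring the staircase manipulation.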
Procedure
Before the experimental task, participants underwent a nine-point calibration of the eye tracker. Following calibration, participants completed a QUEST staircase procedure (Watson & Pelli, 1983). The goal of the staircase was to find a single stimulus value for each participant that elicited performance between ceiling and floor at all visual angles, since the same signal-to-noise ratio was used for the stimulus at every location. Piloting with lab participants made clear that staircasing performance at 15° often produced below-ceiling performance at 10° and above-floor performance at 40°. We therefore adjusted stimulus opacity to target approximately 82% correct performance at 15°, and used the average of the last 10 stimulus opacity values from the staircase across the entire visual field in the experiment. The pre-task staircase included 225 total trials, with 150 valid trials and 75 invalid trials (to mimic the structure of the experiment itself), but only the valid trials were used in the final stimulus opacity calculation. 
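The staircase logic can be illustrated with a simplified QUEST-style simulation: maintain a posterior over the opacity threshold, place each trial at the posterior mean, and estimate the threshold from the last 10 tested levels, as above. This is our sketch of the general idea under an assumed logistic psychometric function and a simulated observer, not the PsychoPy QUEST implementation actually used.

```python
import math, random

def staircase_sketch(true_thresh=0.35, n_trials=225, seed=1):
    """Minimal QUEST-style Bayesian staircase (simplification of
    Watson & Pelli, 1983). Returns the average of the last 10 levels."""
    rng = random.Random(seed)
    grid = [i / 200 for i in range(1, 200)]   # candidate opacity thresholds
    post = [1.0] * len(grid)                  # flat prior over the grid

    def p_correct(level, thresh):
        # assumed logistic psychometric function: 50% guess rate, 2% lapse
        g = 1.0 / (1.0 + math.exp(-20.0 * (level - thresh)))
        return 0.5 + 0.48 * g

    levels = []
    for _ in range(n_trials):
        total = sum(post)
        level = sum(t * p for t, p in zip(grid, post)) / total  # posterior mean
        levels.append(level)
        correct = rng.random() < p_correct(level, true_thresh)  # simulated observer
        # Bayesian update of the (unnormalized) posterior
        post = [p * (p_correct(level, t) if correct else 1.0 - p_correct(level, t))
                for t, p in zip(grid, post)]
    return sum(levels[-10:]) / 10.0
```

Over a couple hundred trials the tested level homes in on the simulated observer's threshold, which is why averaging the last 10 staircase values gives a stable stimulus opacity for the main task.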
After completing the eye-tracking calibration and reading the instructions, participants began the main experimental task. Each trial in the experiment proceeded as follows (Figure 1): participants were first required to look at a fixation cross in the middle of the monitor. If subjects did not look at the fixation cross, the trial ended and a screen reminded them to look at the fixation cross before the next trial began. Trials that were terminated early due to fixation error were repeated; in repeated trials, the location and validity of the cue remained the same, but the signal's presence or absence was randomly selected. After a 500 ms interstimulus interval, two white arrows (the “pre-cue”) cued participants to attend to one of eight possible display locations to the left or right of fixation (±10°, 20°, 30°, and 40°). After one second, the arrows disappeared and a noisy Gabor patch or a noise patch was presented for 300 ms at the same DVA on both sides of the screen. After the stimulus presentation, a second set of arrows (the “post-cue”) revealed which location participants should respond to. At the post-cue location, half of the trials contained a noisy Gabor, and half contained pure noise. The location of the secondary stimulus (i.e., the same DVA on the opposite side) always contained pure noise. 
Figure 1.
 
Task design. Each trial began with a 500 ms presentation of the fixation cross. Following fixation, a pre-cue appeared for 1000 ms, indicating where participants should attend. Following the pre-cue, two stimuli were presented, one at the same visual angle on each side of the midline. If subjects violated fixation at any time during the last 200 ms of the pre-cue screen or the 300 ms stimulus presentation, an error message appeared and the trial was excluded from further analysis. Next, two “post-cue” arrows appeared, prompting participants to respond about the stimulus at the post-cued location, which was either the same location as the pre-cue (valid trials) or the opposite location (invalid trials). Following their response, participants rated confidence in their judgment on a scale of 1 (not at all confident), 2 (somewhat confident), 3 (quite confident), or 4 (extremely confident). These labels were shown next to each number in the real task; in this figure, only the numbers 1–4 are shown.
To incentivize the allocation of attention to the pre-cue arrows’ location, on most trials, the pre-cue and post-cue arrows were in the same location. These trials were labeled as “valid trials,” and comprised 82% of all trials in the task. Invalid trials were characterized by the pre-cue arrows and response-cue arrows appearing at the same visual angle, but on opposite sides of the midline (e.g., the pre-cue could appear at 20° and the post-cue at −20°). 
Following the post-cue, an answer screen appeared, which prompted the participant to decide whether the signal was present (by pressing the “1” key) or absent (by pressing the “2” key) in the patch of interest. After providing this judgment, participants used the mouse to specify how confident they were in their decision on a scale from 1 to 4 (“1: not at all confident,” “2: somewhat confident,” “3: quite confident,” “4: extremely confident”). 
To ensure fixation throughout the trial, if the participant looked outside of a rectangle with a length of 4.1 DVAs centered on the fixation cross for more than 200 ms, the trial ended and instructions on the screen alerted participants to keep their eyes on the fixation cross. The 200 ms requirement was determined via pilot testing; it was chosen as the time in which all true saccades towards the stimuli would trigger the fixation error message while minimizing spurious fixation error triggers. 
Results
Performance metrics for each of the 24 participants were calculated using the MATLAB meta-d′ toolbox (Maniscalco & Lau, 2012; Maniscalco & Lau, 2014). Furthermore, because the toolbox provides an estimate of the ratio of the standard deviations of the signal and noise distributions in SDT (“s”), we incorporated this estimate into the meta-d′a estimate, rather than using a fixed ratio of 1. Average s values for all conditions fell below the assumed “equal-variance” value of 1. For valid trials, the average s values were 0.84 at 10°, 0.81 at 20°, 0.77 at 30°, and 0.96 at 40°. The average s values for invalid trials were even lower: 0.77 at 10°, 0.78 at 20°, 0.74 at 30°, and 0.93 at 40°. We used JASP to compare metrics across conditions with 2 × 4 (attention × eccentricity) repeated-measures ANOVAs, with post-hoc pairwise comparisons using a Holm-Bonferroni correction. Any violations of the assumption of sphericity were accounted for using the Greenhouse-Geisser correction. Python was used to create manuscript figures. 
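The role of s can be sketched by estimating it as the slope of the zROC built from confidence-rating data and plugging it into the unequal-variance sensitivity formula d_a (Macmillan & Creelman, 2005). This is a least-squares illustration of the relationship, not the toolbox's maximum-likelihood fitting routine; the function name and count correction are our own.

```python
from statistics import NormalDist

def unequal_variance_da(signal_counts, noise_counts):
    """Sketch of an unequal-variance sensitivity estimate.
    Rating counts are ordered from most confident "present" to most
    confident "absent"; cumulating them gives several (z(F), z(H))
    points, whose fitted slope is "s", and
    d_a = sqrt(2 / (1 + s^2)) * intercept."""
    z = NormalDist().inv_cdf
    n_s, n_n = sum(signal_counts), sum(noise_counts)
    zH, zF, cum_s, cum_n = [], [], 0, 0
    for cs, cn in zip(signal_counts[:-1], noise_counts[:-1]):
        cum_s += cs
        cum_n += cn
        zH.append(z((cum_s + 0.5) / (n_s + 1)))   # z(hit rate) at this criterion
        zF.append(z((cum_n + 0.5) / (n_n + 1)))   # z(false alarm rate)
    # least-squares fit of z(H) = intercept + s * z(F)
    mF, mH = sum(zF) / len(zF), sum(zH) / len(zH)
    s = sum((f - mF) * (h - mH) for f, h in zip(zF, zH)) / \
        sum((f - mF) ** 2 for f in zF)
    intercept = mH - s * mF
    d_a = (2.0 / (1.0 + s ** 2)) ** 0.5 * intercept
    return d_a, s
```

When the signal distribution is wider than the noise distribution (as in our data), s falls below 1 and d_a discounts sensitivity accordingly relative to the equal-variance d′.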
To estimate perceptual sensitivity, we used the metric d′a (which takes into account our estimates of s) instead of d′, because detection tasks often violate the equal-variance assumption in Signal Detection Theory (Macmillan & Creelman, 2005). Our 2 × 4 repeated measures ANOVA on d′a values indicated significant main effects of attention [F(1, 23) = 16.541, p < 0.001], eccentricity [F(3, 69) = 56.04, p < 0.001], and their interaction [F(3, 69) = 4.704, p = 0.005]. Decomposing the interaction, post-hoc pairwise comparisons revealed significant differences in d′a (pholm < 0.05) between attention conditions at 10°, 20°, and 30°. As shown in Figure 2A, lower d′a was found for the 10° invalid (M = 1.78, SD = 1.45) condition compared to the 10° valid (M = 2.33, SD = 1.23) condition, the 20° invalid condition (M = 1.82, SD = 0.89) compared to the 20° valid condition (M = 2.19, SD = 1.08), and the 30° invalid condition (M = 1.32, SD = 0.99) compared to the 30° valid condition (M = 1.66, SD = 1.18). 
Our measure of response bias was c′a; with this metric, negative values indicate a tendency to report target presence, and positive values indicate a tendency to report target absence. To assess response bias, our 2 × 4 repeated measures ANOVA on c′a values (Figure 2B) indicated a significant main effect of eccentricity [F(2, 44) = 29.00, p < 0.05] and a significant interaction between attention and eccentricity [F(3, 69) = 3.91, p < 0.05]. However, no main effect of attention was found [F(1, 23) = 0.06, p > 0.05]. Post-hoc pairwise comparisons revealed nonsignificant differences (pholm > 0.05) between attention conditions for c′a at each of the four eccentricity conditions (see Figure 2B). One-sided t-tests were performed on c′a values to assess whether c′a differed significantly from 0 at each eccentricity; the results revealed that only the 40° condition had c′a values significantly greater than 0 (i.e., more conservative) [t(47) = 9.62, p < 0.001]. 
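For reference, the t statistic behind those one-sample tests against 0 can be sketched in a few lines (this is the textbook formula, not our JASP analysis):

```python
import math
from statistics import mean, stdev

def one_sample_t(values, mu0=0.0):
    """One-sample t statistic, as used to ask whether c'_a differs
    from 0 at an eccentricity: t = (mean - mu0) / (sd / sqrt(n)),
    with df = n - 1."""
    n = len(values)
    t = (mean(values) - mu0) / (stdev(values) / math.sqrt(n))
    return t, n - 1
```

A strongly positive t on the 40° c′a values corresponds to the conservative bias reported above.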
We also compared hit rates and false alarm rates using 2 × 4 repeated measures ANOVAs. For hit rates (Figure 3A), the results indicated a significant main effect of eccentricity [F(3, 69) = 49.91, p < 0.001], a nonsignificant main effect of attention [F(1, 23) = 0.34, p = 0.57], and a nonsignificant interaction effect [F(3, 69) = 0.42, p = 0.74]. For false alarm rates (Figure 3B), in contrast, results indicated significant main effects of attention [F(1, 23) = 18.84, p < 0.001] and eccentricity [F(3, 69) = 11.03, p < 0.001]. The interaction effect between attention and eccentricity was marginally significant [F(3, 69) = 2.62, p = 0.057]. Decomposing this interaction effect, post-hoc pairwise comparisons revealed significant differences in false alarm rates (pholm < 0.05) between attention conditions at 10° and 20°. Specifically, significantly higher false alarm rates were found in the 10° invalid condition (M = 0.13, SD = 0.17) compared to the 10° valid condition (M = 0.07, SD = 0.10), and in the 20° invalid condition (M = 0.21, SD = 0.15) compared to the 20° valid condition (M = 0.13, SD = 0.10). 
Concerning metacognitive sensitivity (Figure 4A), our 2 × 4 repeated measures ANOVA on meta-d′a values indicated significant main effects of attention [F(1, 23) = 5.17, p < 0.05] and eccentricity [F(3, 69) = 27.29, p < 0.05]. The interaction effect between attention and eccentricity on meta-d′a did not reach significance [F(3, 51) = 2.34, p = 0.10]. Post-hoc pairwise comparisons exploring this trend revealed a marginally significant difference in meta-d′a (pholm = 0.06) between attention conditions at 10°, with lower meta-d′a for the 10° invalid condition (M = 1.35, SD = 1.31) than the 10° valid condition (M = 1.89, SD = 1.27). Furthermore, meta-d′a was significantly lower (pholm < 0.05) for the 30° invalid condition (M = 0.84, SD = 1.04) than the 30° valid condition (M = 1.42, SD = 1.20). When looking at metacognitive efficiency (the M-ratio, or meta-d′a/d′a, in Figure 4B), our 2 × 4 repeated measures ANOVA on M-ratio values revealed nonsignificant main effects of attention [F(1, 23) = 0.242, p > 0.05], eccentricity [F(3, 69) = 0.30, p > 0.05], and their interaction [F(3, 69) = 0.74, p > 0.05]. 
To account for the relationship observed between eccentricity, attention, and perceptual sensitivity (as shown in Figure 2A), average confidence (computed for each subject) was separated into correct and incorrect trials (Figure 5). Our 2 × 4 repeated-measures ANOVA on correct confidence revealed significant main effects of attention [F(1, 23) = 15.27, p < 0.001] and eccentricity [F(1, 34) = 46.24, p < 0.001]. The interaction effect between attention and eccentricity was nonsignificant [F(3, 69) = 1.43, p > 0.05]. Post-hoc analyses of the main effect of eccentricity revealed a significant decrease (pholm < 0.05) in correct confidence from the near to far peripheral locations: 10° had the highest correct confidence (M = 3.12, SD = 0.50), followed by 20° (M = 2.91, SD = 0.50), 30° (M = 2.72, SD = 0.57), and 40° (M = 2.29, SD = 0.61). Further, the post-hoc analyses revealed significantly lower (pholm < 0.05) correct confidence in invalid conditions (M = 2.71, SD = 0.63) than in valid conditions (M = 2.81, SD = 0.62). In contrast, our 2 × 4 repeated-measures ANOVA on incorrect confidence revealed nonsignificant main effects of attention [F(1, 23) = 0.89, p > 0.05] and eccentricity [F(3, 69) = 2.51, p = 0.067], and a nonsignificant interaction [F(3, 69) = 0.43, p > 0.05]. Incorrect confidence dropped only 0.24 points on average from 10° (M = 2.40, SD = 0.66) to 40° (M = 2.16, SD = 0.61), averaged across validity conditions. Thus, although there were significant decreases in correct confidence in the periphery, incorrect confidence remained relatively unchanged: none of the post-hoc comparisons between incorrect confidence (for valid or invalid trials) at different eccentricities were significant. 
Figure 2.
 
Valid and invalid d′a (A) and c′a (B) across eccentricities. There is a trend of decreasing d′a across eccentric conditions, with significantly lower d′a in the invalid condition than in the valid condition at 10°, 20°, and 30°. For c′a, there are noisy differences (centered around 0) in c′a values from 10° to 30° and a large increase at 40°. No difference in c′a appears between valid and invalid cueing. Error bars represent one standard error of the mean (SEM). Asterisks indicate significant pairwise comparisons (pholm < 0.05) between validity conditions at a given eccentricity.
Discussion
In this investigation, we probed how perceptual sensitivity, metacognitive sensitivity, the perceptual criterion, and metacognitive efficiency scale with eccentricity. Using a simple task which required subjects to distinguish between a Gabor patch and noise, we evaluated these metrics under conditions where participants received a pre-cue that either matched the location they had to respond to (valid trials) or did not (invalid trials). As expected, perceptual sensitivity declined with eccentricity and was higher for valid trials compared to invalid trials. Metacognitive sensitivity also declined with eccentricity, with slight (and mostly nonsignificant) differences between valid and invalid trials. Interestingly, the perceptual criterion indicated that participants were not significantly biased in their detection judgments until the stimulus eccentricity reached 40°, at which point they showed a strong conservative bias. Attention had minimal impact on the perceptual criterion, which did not differ between valid and invalid trials. Finally, our measure of metacognitive efficiency (M-ratio) showed no significant effects of attention or eccentricity, and no significant interaction. 
These results can inform important debates about perception and metacognition in the visual periphery (Knotts, Odegaard, Lau, & Rosenthal, 2019; Knotts, Michel, & Odegaard, 2020; Odegaard et al., 2018; Toscani et al., 2021). For example, one recent theory about peripheral vision, subjective inflation (Odegaard et al., 2018), highlights decision-making biases that have been reported in peripheral vision. Specifically, observers exhibit liberal detection criteria and make a high proportion of false alarms in detection-based tasks at peripheral and unattended locations in visual space (Li, Lau, & Odegaard, 2018; Rahnev et al., 2011; Solovey et al., 2015). Previous studies on subjective inflation have exploited performance-matching across central/peripheral (Solovey et al., 2015) and attended/unattended (Rahnev et al., 2011) conditions to reveal a liberal perceptual criterion in peripheral and unattended parts of visual space. Although we did not match performance across locations in our study, we think the insights gained from this paradigm can (at least partially) inform this debate. We used the same signal-to-noise values in our stimulus across visual locations to probe how the perceptual criterion changes in this “stimulus-matched” paradigm. Our data indicate that the periphery is likely resistant to any simple story about perceptual decision-making: the criterion bias is variable across eccentric locations and may not always be influenced by attentional allocation. We do note that at 20°, there is a trend toward liberal response biases, so inflation-like effects may be contingent on the exact location in the periphery. Further, in our paradigm, observers make more false alarms in invalid trials, but this is likely due to the decrease in perceptual sensitivity in the invalid condition and not a shift in the perceptual criterion itself. 
Overall, we think the current data motivate the need to match performance at various performance levels and eccentricities to reveal if and how inflation-like effects extend across the periphery. 
Of additional interest is how attention made the largest impact on false alarm rates at the 10° and 20° locations, with lower false alarm rates in the valid (compared to invalid) condition (Figure 3B). An analogous finding was not revealed in the hit rates, as hit rates did not differ significantly between the valid and invalid attention conditions at any of the eccentric locations we probed. The differential impact on these metrics indicates that the equal variance assumption in Signal Detection Theory is likely violated (Macmillan & Creelman, 2005), because one would normally expect both the false alarm rate and the hit rate to change as c (the decision boundary between signal and noise responses) changes. Indeed, when we modeled the variances of the signal + noise and noise distributions using the meta-d′ toolbox (Maniscalco & Lau, 2012; Maniscalco & Lau, 2014), we saw that the ratio of the standard deviations of these two distributions was frequently different from 1, indicating the equal variance assumption had been violated. 
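The reasoning above can be made concrete with a minimal equal-variance SDT sketch (the formulas are the standard ones; the hit and false alarm rates below are hypothetical illustrations, not our data):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse cumulative normal (the z-transform)

def sdt_metrics(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' and criterion c computed
    from a hit rate and a false alarm rate."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Under equal variance, a pure criterion shift moves hits and false
# alarms together. A change in false alarms with an unchanged hit
# rate (as in the valid vs. invalid comparison) instead shows up as a
# change in both d' and c, hinting that the equal-variance model does
# not fit the data.
d_valid, c_valid = sdt_metrics(0.70, 0.20)      # hypothetical valid-cue rates
d_invalid, c_invalid = sdt_metrics(0.70, 0.35)  # same hits, more false alarms
```

In this toy example, the extra false alarms in the invalid condition lower d′ and make c more liberal even though the hit rate is identical, mirroring the qualitative pattern in Figure 3.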
Figure 3.
 
Valid and invalid hit rates and false alarm rates across eccentricities. (A) Hit rates were similar between valid and invalid cueing, and relatively stable across eccentricities except for a large decline at 40°. (B) False alarm rates were significantly different between valid and invalid cueing conditions at 10° and 20° (i.e., higher in the invalid conditions), and increased across eccentricities. Error bars represent one standard error of the mean (SEM). Asterisks indicate significant pairwise comparisons (pholm < .05) between validity conditions at a given degree.
Figure 4.
 
Valid and invalid (A) meta-d'a and (B) M-ratio across eccentricities. On average, meta-d'a decreased across eccentricities with lower meta-d'a observed for the invalid cue conditions. The 30° valid condition had significantly higher meta-d'a than the 30° invalid condition. Further, there was a marginally significant difference between the 10° valid and 10° invalid conditions (i.e., lower meta-d'a in the 10° invalid condition). The M-ratio had no discernible trends across conditions. Error bars represent one standard error of the mean (SEM). Asterisks indicate significant pairwise comparisons (pholm < .05) between validity conditions at a given degree.
Figure 5.
 
Confidence by attention, eccentricity, and accuracy. Correct trial confidence decreases across eccentricities and is lower for invalid conditions than for valid. In contrast, incorrect trial confidence remains relatively constant across attention and eccentricity conditions. Error bars represent one standard error of the mean (SEM).
Importantly, our results challenge accounts of a mismatch between confidence in perceptual abilities and how well we can perform perceptual tasks (Cohen et al., 2016; Odegaard et al., 2018; Yildirim & Sayim, 2022). Specifically, our data do not support a simple account of overconfidence in the periphery; while perceptual sensitivity and metacognitive sensitivity decline with eccentricity, metacognitive efficiency does not. Stable metacognitive efficiency across conditions indicates that metacognitive and perceptual sensitivity decrease at equivalent rates, and that there is no true divergence between metacognition and task performance as a function of eccentricity or attention. Thus, as observers become increasingly unable to discriminate between signals and noise, their confidence judgments become similarly unable to discriminate between their correct and incorrect responses. There are a few possible explanations for why this may be the case: perhaps many observers assume by default that they are overconfident about how well they can perform tasks in the periphery, but when faced with hundreds of trials in a psychophysics experiment (with fixation enforced at a central location), they quickly learn from firsthand experience how difficult judgments at peripheral locations truly are. Additionally, observers may be more overconfident about some domains in the periphery than others. It has recently been shown that observers have a liberal detection bias for color in the periphery (Okubo & Yokosawa, 2023), but based on our data, perhaps this liberal perceptual criterion (and potential for overconfidence in judgments) does not extend to all perceptual tasks and stimuli, such as the simple Gabor patches used here. 
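The stability of metacognitive efficiency can be stated compactly. A toy sketch (the values below are hypothetical, not fitted estimates from our data) shows why a flat M-ratio is compatible with both sensitivities declining:

```python
def m_ratio(meta_d_prime, d_prime):
    """Metacognitive efficiency: meta-d' expressed as a fraction of d'.
    A value near 1 means confidence tracks performance near-optimally."""
    return meta_d_prime / d_prime

# If meta-d' and d' fall at the same rate with eccentricity, the
# M-ratio stays flat even though both sensitivities decline sharply.
ratio_10deg = m_ratio(1.8, 2.0)    # hypothetical values near 10 degrees
ratio_40deg = m_ratio(0.45, 0.5)   # hypothetical values near 40 degrees
```

Both ratios come out equal here, so an analysis of the M-ratio alone would (correctly) report no effect of eccentricity despite the fourfold drop in d′.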
It is also possible that inflation-like effects in perceptual decision-making occur solely between fovea and periphery, such that these effects do not scale as a function of eccentricity. Perhaps, in comparison to a foveal condition, the c′a values at 10°–30° would be significantly more liberal or the M-ratio significantly less optimal. Our study did not include a foveal condition because our task design required presenting two stimuli on the screen with valid and invalid cues, neither of which is possible at 0°. Further, we were primarily interested in how decision-making changes across the attended and unattended periphery, rather than between fovea and periphery. Nevertheless, these results cannot speak to any divergence in decision-making behaviors and metacognition between fovea and periphery. Future research with a different task design, including a foveal eccentricity condition, should be conducted to more explicitly probe these differences. 
Another limitation of this study is an information leak in the task design: in invalid conditions, participants are always cued to attend to a noise patch. Given this constant noise patch, we anticipate two possible strategies participants may have used in ambiguous situations: (1) participants may be biased toward responding "noise" in invalid conditions because (by definition) they are always attending to a noise patch during the trial, or (2) participants may infer that attending to noise on one side means the response patch should contain signal. Strategy 1 would yield more conservative c′a values than we would have found otherwise, because it would increase misses and reduce hits. With nearly equivalent hit rates between attention conditions, this concern is reduced. If participants used Strategy 2, it could inflate false alarm rates in invalid conditions, leading to more liberal c′a values. However, we think it is unlikely they fully adopted the second strategy, because higher false alarms in the invalid case were not evident at the 30° and 40° eccentricities and participants still made correct rejections in the invalid condition. Still, further research is needed on how this lack of independence between signal and noise may have altered decision making. 
Our results spur more questions than they currently answer. For example, does the variable profile of the perceptual criterion extend to more naturalistic tasks in the periphery (Vater et al., 2022)? In some circumstances, naturalistic tasks have provided further evidence for certain perceptual phenomena, like a tendency toward false alarms in peripheral and unattended locations (Li et al., 2018). Here, we demonstrate that a tendency toward false alarms with inattention may not be present across all eccentricities, and the perceptual criterion itself may not be as influenced by attention as previously thought. Future work will be needed to address this question and determine whether the criterion effects we show here extend across different tasks and paradigms. Furthermore, whether our metacognitive results extend to more naturalistic tasks remains to be determined. Additionally, it remains unclear whether the largely conservative response bias at 40° would decrease if sensitivity at this location were increased. 
Finally, can these findings inform the debate about the degree to which we consciously perceive fine details in the visual periphery? Previously, this question has been framed as the “rich vs. sparse” debate in vision science (Gross, 2018) and has focused on whether perceptual experience overflows cognitive access (Block, 2011; Fazekas & Overgaard, 2018; Lamme, 2010). Empirical work has been cited to justify many positions, including the claim that we perceive many details but forget them quickly (Sligte, Vandenbroucke, Scholte, & Lamme, 2010; Sperling, 1960), the claim that we perceive details across the visual field worse than we might expect (Balas & Sinha, 2007; Cohen et al., 2020; Rensink, O'Regan, & Clark, 1997; Simons & Chabris, 1999; Simons & Rensink, 2005), and the claim that we have partial awareness of peripheral details that is influenced by low-level filling-in mechanisms (Komatsu, 2006) or expectations, prior knowledge, and other higher order biases (Knotts et al., 2019; Kouider, de Gardelle, Sackur, & Dupoux, 2010; Odegaard et al., 2018) even in unattended regions of space. Attempts to characterize the precision of peripheral detail have studied how much we perceive in brief glimpses (Cohen & Rubenstein, 2020), but a complete account of peripheral richness has remained elusive. Here, we show that observers have a surprising amount of metacognitive insight into how performance declines in the periphery. We think that a complete account of peripheral phenomenology must continue to document how task, stimulus, and decisional complexity may shape the conclusions we reach about the degree of detail we perceive, and whether confidence tracks performance outside the fovea. Based on our results here, observers might have better knowledge of performance impairments in the periphery than many currently presume. 
Acknowledgments
Supported by an Office of Naval Research Young Investigator Award to B.A.O. (N00014-22-1-2534). 
Commercial relationships: none. 
Corresponding author: Joseph Pruitt. 
Address: University of Florida Psychology Department, Gainesville, FL 32603, USA. 
References
Alvarez, G. A., & Oliva, A. (2008). The representation of simple ensemble visual features outside the focus of attention. Psychological Science, 19(4), 392–398, https://doi.org/10.1111/j.1467-9280.2008.02098.x. [CrossRef] [PubMed]
Andriessen, J. J., & Bouma, H. (1976). Eccentric vision: Adverse interactions between line segments. Vision Research, 16(1), 71–78, https://doi.org/10.1016/0042-6989(76)90078-x. [CrossRef] [PubMed]
Ariely, D. (2001). Seeing sets: Representation by statistical properties. Psychological Science, 12(2), 157–162, https://doi.org/10.1111/1467-9280.00327. [CrossRef] [PubMed]
Balas, B., & Sinha, P. (2007). “Filling-in” colour in natural scenes. Visual Cognition, 15(7), 765–778, http://www.ingentaconnect.com/content/routledg/pvis/2007/00000015/00000007/art00001. [CrossRef]
Block, N. (2011). Perceptual consciousness overflows cognitive access. Trends in Cognitive Sciences, 15(12), 567–575, https://doi.org/10.1016/j.tics.2011.11.001. [CrossRef] [PubMed]
Campbell, J. I. D., & Thompson, V. A. (2012). MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis. Behavior Research Methods, 44(4), 1255–1265, https://doi.org/10.3758/s13428-012-0186-0. [CrossRef] [PubMed]
Cohen, M. A., Botch, T. L., & Robertson, C. E. (2020). The limits of color awareness during active, real-world vision. Proceedings of the National Academy of Sciences of the United States of America, https://doi.org/10.1073/pnas.1922294117.
Cohen, M. A., Dennett, D. C., & Kanwisher, N. (2016). What is the bandwidth of perceptual experience? Trends in Cognitive Sciences, 20(5), 324–335, https://doi.org/10.1016/j.tics.2016.03.006. [CrossRef] [PubMed]
Cohen, M. A., & Rubenstein, J. (2020). How much color do we see in the blink of an eye? Cognition, 200, 104268, https://doi.org/10.1016/j.cognition.2020.104268. [CrossRef] [PubMed]
Curcio, C. A., Sloan, K. R., Kalina, R. E., & Hendrickson, A. E. (1990). Human photoreceptor topography. The Journal of Comparative Neurology, 292(4), 497–523, https://doi.org/10.1002/cne.902920402. [CrossRef] [PubMed]
Daitch, J. M., & Green, D. G. (1969). Contrast sensitivity of the human peripheral retina. Vision Research, 9(8), 947–952, https://doi.org/10.1016/0042-6989(69)90100-x. [CrossRef] [PubMed]
Dakin, S. C. (2001). Information limit on the spatial integration of local orientation signals. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 18(5), 1016–1026, https://doi.org/10.1364/JOSAA.18.001016. [CrossRef] [PubMed]
Fazekas, P., & Overgaard, M. (2018). Perceptual consciousness and cognitive access: An introduction. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373(1755), 20170340, https://doi.org/10.1098/rstb.2017.0340. [CrossRef] [PubMed]
Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics. Thousand Oaks, CA: SAGE, https://play.google.com/store/books/details?id=c0Wk9IuBmAoC.
Fleming, S. M., & Lau, H. C. (2014). How to measure metacognition. Frontiers in Human Neuroscience, 8, 443, https://doi.org/10.3389/fnhum.2014.00443. [CrossRef] [PubMed]
Gross, S. (2018). Perceptual consciousness and cognitive access from the perspective of capacity-unlimited working memory. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373(1755), 20170343, https://doi.org/10.1098/rstb.2017.0343. [CrossRef] [PubMed]
Gutwin, C., Cockburn, A., & Coveney, A. (2017). Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 208–219, https://doi.org/10.1145/3025453.3025984.
Haberman, J., & Whitney, D. (2007). Rapid extraction of mean emotion and gender from sets of faces. Current Biology: CB, 17(17), R751–R753, https://doi.org/10.1016/j.cub.2007.06.039. [CrossRef] [PubMed]
Hansen, T., Pracejus, L., & Gegenfurtner, K. R. (2009). Color perception in the intermediate periphery of the visual field. Journal of Vision, 9(4), 26.1–26.12, https://doi.org/10.1167/9.4.26. [CrossRef] [PubMed]
Hawkins, B., Evans, D., Preston, A., Westmoreland, K., Mims, C. E., Lolo, K., ... Odegaard, B. (2022). Color diversity judgments in peripheral vision: Evidence against “cost-free” representations. PloS One, 17(12), e0279686, https://doi.org/10.1371/journal.pone.0279686. [CrossRef] [PubMed]
Heeger, D., & Landy, M. (1997). Signal detection theory. Department of Psychology, Stanford University. Stanford, CA: Teaching Handout.
Keshvari, S., & Rosenholtz, R. (2016). Pooling of continuous features provides a unifying account of crowding. Journal of Vision, 16(3), 39, https://doi.org/10.1167/16.3.39. [CrossRef] [PubMed]
Knotts, J. D., Michel, M., & Odegaard, B. (2020). Defending subjective inflation: An inference to the best explanation. Neuroscience of Consciousness, 2020(1), niaa025, https://doi.org/10.1093/nc/niaa025. [CrossRef] [PubMed]
Knotts, J. D., Odegaard, B., Lau, H., & Rosenthal, D. (2019). Subjective inflation: Phenomenology's get-rich-quick scheme. Current Opinion in Psychology, 29, 49–55, https://doi.org/10.1016/j.copsyc.2018.11.006. [CrossRef] [PubMed]
Komatsu, H. (2006). The neural mechanisms of perceptual filling-in. Nature Reviews. Neuroscience, 7(3), 220–231, https://doi.org/10.1038/nrn1869. [PubMed]
Kouider, S., de Gardelle, V., Sackur, J., & Dupoux, E. (2010). How rich is consciousness? The partial awareness hypothesis. Trends in Cognitive Sciences, 14(7), 301–307, https://doi.org/10.1016/j.tics.2010.04.006. [PubMed]
Lamme, V. A. F. (2010). How neuroscience will change our view on consciousness. Cognitive Neuroscience, 1(3), 204–220, https://doi.org/10.1080/17588921003731586. [PubMed]
Li, M. K., Lau, H., & Odegaard, B. (2018). An investigation of detection biases in the unattended periphery during simulated driving. Attention, Perception & Psychophysics, 80, 1325–1332, https://doi.org/10.3758/s13414-018-1554-3. [PubMed]
Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide, 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates Publishers, http://psycnet.apa.org/psycinfo/2004-19022-000.
Maniscalco, B., & Lau, H. (2012). A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430, https://doi.org/10.1016/j.concog.2011.09.021. [PubMed]
Maniscalco, B., & Lau, H. (2014). Signal detection theory analysis of type 1 and type 2 data: Meta-d′, response-specific meta-d′, and the unequal variance SDT model. In The Cognitive Neuroscience of Metacognition (pp. 25–66). Berlin, Heidelberg: Springer, https://doi.org/10.1007/978-3-642-45190-4_3.
Odegaard, B., Chang, M. Y., Lau, H., & Cheung, S.-H. (2018). Inflation versus filling-in: Why we feel we see more than we actually do in peripheral vision. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 373(1755), 20170345, https://doi.org/10.1098/rstb.2017.0345. [PubMed]
Odegaard, B., & Lau, H. (2016). Methodological Considerations to Strengthen Studies of Peripheral Vision. Trends in Cognitive Sciences, https://doi.org/10.1016/j.tics.2016.06.005.
Okubo, L., & Yokosawa, K. (2023). Attentional allocation and the pan-field color illusion. Journal of Vision, 23(3), 13, https://doi.org/10.1167/jov.23.3.13. [PubMed]
Parkes, L., Lund, J., Angelucci, A., Solomon, J. A., & Morgan, M. (2001). Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4(7), 739–744, https://doi.org/10.1038/89532. [PubMed]
Peirce, J. (2019). In most cases, yes, you can treat opacity as a way to set Michelson contrast but two caveats. [Comment on the online forum post GratingStim.opacity same as Michelson contrast?]. Psychopy, https://discourse.psychopy.org/t/gratingstim-opacity-same-as-michelson-contrast/3849/6.
Rahnev, D. (2023). Measuring metacognition: A comprehensive assessment of current methods. Psyarxiv.com, https://psyarxiv.com/waz9h/download?format=pdf.
Rahnev, D., & Denison, R. N. (2018). Suboptimality in perceptual decision making. The Behavioral and Brain Sciences, 41, e223, https://doi.org/10.1017/S0140525X18000936. [PubMed]
Rahnev, D., Maniscalco, B., Graves, T., Huang, E., de Lange, F. P., & Lau, H. (2011). Attention induces conservative subjective biases in visual perception. Nature Neuroscience, 14(12), 1513–1515, https://doi.org/10.1038/nn.2948. [PubMed]
Rensink, R. A., O'Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8(5), 368–373, https://doi.org/10.1111/j.1467-9280.1997.tb00427.x.
Richardson, J. T. E. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6(2), 135–147, https://doi.org/10.1016/j.edurev.2010.12.001.
Rosenholtz, R. (2016). Capabilities and Limitations of Peripheral Vision. Annual Review of Vision Science, 2, 437–457, https://doi.org/10.1146/annurev-vision-082114-035733. [PubMed]
Rosenholtz, R. (2020). Demystifying visual awareness: Peripheral encoding plus limited decision complexity resolve the paradox of rich visual experience and curious perceptual failures. Attention, Perception & Psychophysics, 83(3), 901–925, https://doi.org/10.3758/s13414-019-01968-1.
Shulman, G. L., Remington, R. W., & McLean, J. P. (1979). Moving attention through visual space. Journal of Experimental Psychology. Human Perception and Performance, 5(3), 522–526, https://doi.org/10.1037//0096-1523.5.3.522. [PubMed]
Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059–1074, https://doi.org/10.1068/p281059. [PubMed]
Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in Cognitive Sciences, 9(1), 16–20, https://doi.org/10.1016/j.tics.2004.11.006. [PubMed]
Sligte, I. G., Vandenbroucke, A. R. E., Scholte, H. S., & Lamme, V. A. F. (2010). Detailed sensory memory, sloppy working memory. Frontiers in Psychology, 1, 175, https://doi.org/10.3389/fpsyg.2010.00175. [PubMed]
Solovey, G., Graney, G. G., & Lau, H. (2015). A decisional account of subjective inflation of visual perception at the periphery. Attention, Perception & Psychophysics, 77(1), 258–271, https://doi.org/10.3758/s13414-014-0769-1.
Sperling, G. (1960). The Information Available in Brief Visual Presentations. Psychological Monographs: General and Applied, 74(11), 1–29, https://doi.org/10.1037/h0093759.
Thibos, L. N., Still, D. L., & Bradley, A. (1996). Characterization of spatial aliasing and contrast sensitivity in peripheral vision. Vision Research, 36(2), 249–258, https://doi.org/10.1016/0042-6989(95)00109-d. [PubMed]
Toscani, M., Mamassian, P., & Valsecchi, M. (2021). Underconfidence in peripheral vision. Journal of Vision, 21(6), 2, https://doi.org/10.1167/jov.21.6.2. [PubMed]
van den Berg, R., Roerdink, J. B. T. M., & Cornelissen, F. W. (2007). On the generality of crowding: Visual crowding in size, saturation, and hue compared to orientation. Journal of Vision, 7(2), 14, https://jov.arvojournals.org/article.aspx?articleid=2121971.
Vater, C., Wolfe, B., & Rosenholtz, R. (2022). Peripheral vision in real-world tasks: A systematic review. Psychonomic Bulletin & Review, 29(5), 1531–1557, https://doi.org/10.3758/s13423-022-02117-w. [PubMed]
Watamaniuk, S. N., & Duchon, A. (1992). The human visual system averages speed information. Vision Research, 32(5), 931–941, https://doi.org/10.1016/0042-6989(92)90036-i. [PubMed]
Watamaniuk, S. N., Sekuler, R., & Williams, D. W. (1989). Direction perception in complex dynamic displays: The integration of direction information. Vision Research, 29(1), 47–59, https://doi.org/10.1016/0042-6989(89)90173-9. [PubMed]
Whitney, D., & Levi, D. M. (2011). Visual crowding: A fundamental limit on conscious perception and object recognition. Trends in Cognitive Sciences, 15(4), 160–168, https://doi.org/10.1016/j.tics.2011.02.005. [PubMed]
Watson, A. B., & Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33(2), 113–120. [PubMed]
Wilkinson, F., Wilson, H. R., & Ellemberg, D. (1997). Lateral interactions in peripherally viewed texture arrays. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 14(9), 2057–2068, https://doi.org/10.1364/josaa.14.002057. [PubMed]
Williams, D. W., & Sekuler, R. (1984). Coherent global motion percepts from stochastic local motions. Vision Research, 24(1), 55–62, https://www.ncbi.nlm.nih.gov/pubmed/6695508. [PubMed]
Yildirim, F. Z., & Sayim, B. (2022). High confidence and low accuracy in redundancy masking. Consciousness and cognition, 102, 103349. [PubMed]
Zeleznikow-Johnston, A., Aizawa, Y., Yamada, M., & Tsuchiya, N. (2023). Are color experiences the same across the visual field? Journal of Cognitive Neuroscience, 35(4), 509–542. [PubMed]
Figure 1.
 
Task design. Each trial began with a 500 ms presentation of the fixation cross. Following fixation, a pre-cue appeared for 1000 ms, indicating where participants should attend. Following the pre-cue, two stimuli were presented, with both items placed at the opposite visual angle on each side of the midline. If subjects violated fixation at any time during the last 200 ms of the pre-cue screen or the 300 ms stimulus presentation, an error message appeared and the trial was not used in further analysis. Next, two “post-cue” arrows appeared, which prompted participants to respond about the stimulus at the post-cued location, which was either the same location as the pre-cue (valid trials) or the opposite location (invalid trials). Following their response, participants were also asked to rate confidence in their judgment on a scale using 1 (not at all confident), 2 (somewhat confident), 3 (quite confident), or 4 (extremely confident). These words were shown next to each number in the real task; in this figure, we only show the numbers from 1–4.