**The continuous flash suppression (CFS) task can be used to investigate what limits our capacity to become aware of visual stimuli. In this task, a stream of rapidly changing mask images presented to one eye initially suppresses awareness of a static target image presented to the other eye. Several factors may determine the time it takes a target to break through this suppression, one of which is the overlap between the representations of the target and mask categories in higher visual cortex. This hypothesis is based on findings that certain object categories (e.g., faces) are more effective at blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs), and previous work found mask effectiveness to be correlated with the high-level representational similarity of the category pair. Because the cortical representations of hands and tools overlap, these categories are ideal for testing this hypothesis further and for examining alternative explanations. For our CFS experiments, we predicted longer breakthrough times for hand/tool pairs than for other pairs due to the reported cortical overlap. In contrast, across three experiments, participants were generally faster at detecting targets masked by hands or tools than targets masked by other categories. Exploring low-level explanations, we found that the category average for edge content (e.g., hands contain less detail than cars) was the best predictor of the data. This low-level bottleneck could not completely account for the specific category patterns and the hand/tool effects, suggesting that object category-specific limits arise at several levels. Given these findings, it is important that low-level bottlenecks for visual awareness are considered when testing higher-level hypotheses.**

*first rank* is the fastest breakthrough times for hand–large object and tool–large object pairs, and

*second rank* is somewhat slower breakthrough times for hand–small object, tool–small object, and small object–large object pairs because these representations are closer together.

*Third rank* is even slower breakthrough times for hand–tool pairs due to the overlap, and

*fourth rank* is the slowest breakthrough times for within-category pairs because the high-level neural architecture model predicts the most overlap for stimuli from the same category.
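As a minimal sketch (not the authors' code), the four-rank predictor described above can be written as a mapping from category pairs to tied ranks; the pair labels here are illustrative shorthand:

```python
# Illustrative sketch of the four-rank predictor described above.
# Lower rank = faster predicted breakthrough time; ties are intentional,
# which is why model comparison uses Kendall's tau-a (see Methods).
RANK_MODEL = {
    ("hand", "large object"): 1, ("tool", "large object"): 1,
    ("hand", "small object"): 2, ("tool", "small object"): 2,
    ("small object", "large object"): 2,
    ("hand", "tool"): 3,
    ("hand", "hand"): 4, ("tool", "tool"): 4,
    ("small object", "small object"): 4,
    ("large object", "large object"): 4,
}

def model_vector(pairs):
    """Return predicted ranks for a given ordering of category pairs."""
    return [RANK_MODEL[p] for p in pairs]
```

Because several pairs share a rank, the resulting predictor vector contains ties by design, and any rank correlation used against it must handle tied predictions gracefully.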

*M* = 21.05 years, *SD* = 2.65, range 18–27 years, two left-handed according to self-report, 12 right-eye dominant). All participants had normal or corrected-to-normal vision (contact lenses only) and received $20 for their participation. The data of one participant had to be replaced due to technical difficulties that resulted in an incomplete data set. This research was conducted in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and was approved by the Macquarie University ethics review committee (human research). Written informed consent was obtained from all participants prior to the start of the experiment.

*SD* = 1.07) correct across all trials. Finally, trials with response times < 300 ms or > 3 *SD* from the participant's mean across all trials were also excluded, leading on average to the removal of a further 1.64% of the trials.
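The trimming rule (drop trials faster than 300 ms or more than 3 *SD* from the participant's mean) can be sketched as follows; this is an illustrative re-implementation in Python, not the authors' analysis code:

```python
import numpy as np

def trim_rts(rts, lower_ms=300.0, n_sd=3.0):
    """Drop trials with RT < 300 ms or more than 3 SD from the
    participant's mean across all trials, per the exclusion rule above."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)  # sample SD across all trials
    keep = (rts >= lower_ms) & (np.abs(rts - mean) <= n_sd * sd)
    return rts[keep]
```

Note that the mean and *SD* are computed before any exclusion; iterating the rule until convergence would be a different (stricter) trimming scheme.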

_{A} (tau-a), which is the proportion of pairs of values that are consistently ordered in both variables and is suitable for comparisons of models that predict tied ranks (Nili et al., 2014). We programmed and ran these analyses in MATLAB. In addition to the paired data–model correlation tests, we also ran partial correlation analyses to test data–model correlations while controlling for other variables. To run partial nonparametric (Kendall's rank) correlation analyses, we used the R package ppcor (S. Kim, 2015). In addition, as a sanity check, we also ran these correlation and partial-correlation analyses using the available MATLAB functions for nonparametric (Spearman's rank) correlations and found somewhat higher correlation coefficients but overall a consistent pattern of results.
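For illustration, Kendall's tau-a, the variant that keeps all pairs (including ties) in the denominator, can be sketched in a few lines of Python; this is a toy re-implementation, not the MATLAB/R code the authors used:

```python
def _sign(a, b):
    return (a > b) - (a < b)

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs divided by ALL
    n*(n-1)/2 pairs. Tied pairs contribute 0 to the numerator but still
    count in the denominator, which is why tau-a is suitable for models
    that predict tied ranks (Nili et al., 2014)."""
    n = len(x)
    num = sum(_sign(x[i], x[j]) * _sign(y[i], y[j])
              for i in range(n) for j in range(i + 1, n))
    return num / (n * (n - 1) / 2)
```

A model with many tied ranks can therefore never reach tau-a = 1 against untied data, which penalizes uninformative (heavily tied) predictors rather than rewarding them, unlike tau-b.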

*p* values as the proportion of absolute sampled correlation coefficients that were greater than or equal to the absolute observed correlation coefficient. Exact *p* values were calculated with the R function permp (statmod package; Phipson & Smyth, 2010). We corrected our analyses for multiple comparisons using Bonferroni-adjusted alpha significance levels.
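The permutation logic, including the correction of Phipson and Smyth (2010) that avoids reporting an impossible *p* = 0, can be sketched as follows; this is a simplified illustration in which `stat_fn` and the shuffling scheme are stand-ins, not the authors' exact procedure or the permp function itself:

```python
import random

def permutation_p(data, model, stat_fn, n_perm=999, seed=1):
    """Two-sided permutation p-value: the proportion of shuffles whose
    absolute statistic is >= the absolute observed statistic. Returning
    (b + 1) / (n_perm + 1) follows Phipson & Smyth (2010): the observed
    ordering counts as one permutation, so p can never be exactly zero."""
    rng = random.Random(seed)
    observed = abs(stat_fn(data, model))
    perm = list(data)
    b = 0  # number of permutations at least as extreme as observed
    for _ in range(n_perm):
        rng.shuffle(perm)
        if abs(stat_fn(perm, model)) >= observed:
            b += 1
    return (b + 1) / (n_perm + 1)
```

In the paper's analyses `stat_fn` would be the tau-a data–model correlation (or a difference of two such correlations for model comparisons).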

*p* = 0.93).

*SD* = 434 ms; tools mean RT = 1,526 ms, *SD* = 508 ms; hammers mean RT = 1,475 ms, *SD* = 488 ms; small objects mean RT = 1,712 ms, *SD* = 517 ms; phones mean RT = 1,694 ms, *SD* = 450 ms; large objects mean RT = 1,759 ms, *SD* = 782 ms; cars mean RT = 1,695 ms, *SD* = 510 ms). This suggests that hands and tools are both relatively inefficient masks. This is not due simply to the category level (basic vs. superordinate), as the effect is present at both levels (e.g., for the tool category as well as the hammer category) compared to all other categories. If we test a post hoc model that has hands and tools as relatively inefficient masks (fastest breakthrough times and first rank), there is a significant data–model correlation (exploratory analysis; Bonferroni-corrected significance threshold of *p* = 0.025, tau-a mean = 0.191, *p* < 0.001). These correlations were significantly higher than the correlations for the high-level representational architecture model (permutation test *p* values for the difference: *p* < 0.001).

*M* = 20.25 years, *SD* = 1.48, range 18–24 years, two left-handed according to self-report, 13 right-eye dominant). All participants had normal or corrected-to-normal vision (contact lenses only) and received $20 for their participation. One participant had to be replaced due to technical difficulties.

*SD* = 3.06) correct across trials; in 0.63% of trials, there was no response, and data trimming as described for Experiment 1 led to the removal of a further 1.56% of the trials.

*SD* = 632 ms). We also found relatively fast RTs again for the tool masks (tools mean RT = 1,863 ms, *SD* = 855 ms; hammers mean RT = 1,788 ms, *SD* = 781 ms). In this experiment, the RTs in the large object conditions were also similar to the tool conditions (large objects mean RT = 1,813 ms, *SD* = 598 ms; small objects mean RT = 1,919 ms, *SD* = 606 ms; phones mean RT = 2,054 ms, *SD* = 782 ms; cars mean RT = 2,119 ms, *SD* = 763 ms). Nevertheless, we found significant data–model correlations for the hands/tools inefficient mask model (Bonferroni-corrected significance threshold of *p* = 0.025, tau-a mean = 0.212, *p* < 0.001). These correlations were significantly higher than the high-level representational architecture model correlations (permutation test *p* values for the difference: *p* < 0.001), for which we did not find significant data–model correlations (Bonferroni-corrected significance threshold of *p* = 0.025, tau-a mean = −0.0003, *p* = 0.99). Overall, even with a nonmanual response modality, we found the fastest breakthrough times for the hand masks, ruling out a simple manual response facilitation explanation. Furthermore, we found no significant correlations for the high-level representational architecture model and found that the hand/tool inefficient mask model yielded better data–model correlations.

*Canny* method with low and high thresholds of 0.1 and 0.2 (Canny, 1986). We also analyzed the edges employing the *Prewitt* method (Prewitt, 1970) and found comparable results, suggesting that the pattern of results was not dependent on the particular choice of edge detector. We calculated the average percentage of edges across the target area and also across the entire image for each mask category. We used these edge-content mean values to create edge-content models as predictors for breakthrough times (Figure 3).
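As an illustration of an edge-content measure of this kind, a Prewitt-style edge fraction can be computed as below. This is a simplified stand-in written in Python/NumPy; the threshold and the max-normalization are illustrative assumptions, and the authors' actual analysis used MATLAB's edge-detection routines:

```python
import numpy as np

def prewitt_edge_fraction(img, threshold=0.2):
    """Fraction of pixels classified as edges: Prewitt gradient magnitude
    above a fraction of its maximum. Parameters are illustrative, not the
    authors' exact settings."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[1.0, 0.0, -1.0]] * 3)  # horizontal Prewitt kernel
    ky = kx.T                              # vertical Prewitt kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):                 # 'valid' 3x3 sliding window
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)                 # gradient magnitude
    edges = mag > threshold * mag.max()
    return edges.mean()
```

Averaging this fraction over all exemplar images of a mask category would yield one edge-content value per category, which is the kind of per-category mean used as a model predictor here.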

*p* = 0.01; Experiment 1: percentage of object entire image tau-a mean = 0.085, *p* < 0.001, percentage of object target area tau-a mean = 0.148, *p* < 0.001; Experiment 2: percentage of object entire image tau-a mean = 0.065, *p* = 0.003, percentage of object target area tau-a mean = 0.178, *p* < 0.001). However, our analysis also revealed that the data–model correlations for the object coverage of the entire image were significantly smaller than for the hand/tool inefficient mask model (Experiment 1: tau-a mean = 0.191; Experiment 2: tau-a mean = 0.212; permutation test *p* values for the difference, considering all four paired comparisons for object-coverage and edge-content models, Bonferroni-corrected significance threshold of *p* = 0.0125: Experiment 1: percentage of object entire image *p* < 0.001, percentage of object target area *p* = 0.023; Experiment 2: percentage of object entire image *p* < 0.001, percentage of object target area *p* = 0.076).

*p* = 0.01; Experiment 1: percentage of edge entire image tau-a mean = 0.231, *p* < 0.001, percentage of edge target area tau-a mean = 0.217, *p* < 0.001; Experiment 2: percentage of edge entire image tau-a mean = 0.303, *p* < 0.001, percentage of edge target area tau-a mean = 0.292, *p* < 0.001). The edge-content model correlation coefficients (except for the edge target area model in Experiment 1) were also significantly larger than for the hands/tools inefficient mask model (Experiment 1: tau-a mean = 0.191; Experiment 2: tau-a mean = 0.212; permutation test *p* values for the difference, considering all four paired comparisons for object-coverage and edge-content models, Bonferroni-corrected significance threshold of *p* = 0.0125: Experiment 1: percentage of edge entire image *p* < 0.001, percentage of edge target area *p* = 0.088; Experiment 2: percentage of edge entire image *p* < 0.001, percentage of edge target area *p* < 0.001). Thus, this exploratory analysis suggests that the category-specific edge content provides the best predictor for the data and could underlie the hand- and tool-specific effects.

*p* < 0.001; Experiment 2: tau-a mean = 0.120, *p* < 0.001) but still significant when taking all four variables (object coverage entire image and target area, edge content entire image and target area) into account. This was also the case when we ran partial-correlation analyses for all four variables separately (Bonferroni-corrected significance threshold of *p* = 0.0125; Experiment 1: percentage of object entire image tau-a mean = 0.264, *p* < 0.001; percentage of object target area tau-a mean = 0.237, *p* < 0.001; percentage of edge entire image tau-a mean = 0.141, *p* < 0.001; percentage of edge target area tau-a mean = 0.169, *p* < 0.001; Experiment 2: percentage of object entire image tau-a mean = 0.305, *p* < 0.001; percentage of object target area tau-a mean = 0.248, *p* < 0.001; percentage of edge entire image tau-a mean = 0.102, *p* < 0.001; percentage of edge target area tau-a mean = 0.148, *p* < 0.001). This analysis suggests that category-specific image characteristics, such as edge content, cannot fully explain the hand- and tool-specific effects.

*M* = 23.55 years, *SD* = 4.57, range 18–36 years, all right-handed according to self-report, 10 right-eye dominant). All participants had normal or corrected-to-normal vision (contact lenses only) and received €7.50 for their participation. This research was conducted in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and was approved by the ethics committee of the Faculty for Social and Behavioral Sciences, Friedrich Schiller University Jena. Written informed consent was obtained from all participants prior to the start of the experiment.

*SD* = 0.91) correct across trials; in 0.20% of trials, there was no response, and data trimming as described above led to the removal of 1.78% of the trials.

*p* = 0.019).

*p* = 0.01), we found significant correlations between our data and the Cohen et al. (2015) order model (tau-a mean = 0.200, *p* < 0.001). We also found significant correlations for the edge-content models (for both, tau-a mean = 0.200, *p* < 0.001) but not for the object-coverage models (percentage of object entire image tau-a mean = −0.044, *p* = 0.435; percentage of object target area tau-a mean = −0.053, *p* = 0.350). The edge-content model correlation coefficients were neither numerically nor statistically different from the Cohen et al. (2015) order model correlations (permutation test *p* values for the model differences, Bonferroni-corrected significance threshold of *p* = 0.025: Cohen et al., 2015 vs. percentage of edge entire image *p* = 0.9774; Cohen et al., 2015 vs. percentage of edge target area *p* = 0.9774). This suggests that both the specific category order (related to category neural similarity) and the edge content are able to account for the data pattern in Experiment 3.

*p* = 0.002) but still significant when taking all four variables (object coverage entire image and target area, edge content entire image and target area) into account. This was also the case when we ran partial-correlation analyses for all four variables separately (Bonferroni-corrected significance threshold of *p* = 0.0125; percentage of object entire image tau-a mean = 0.210, *p* < 0.001; percentage of object target area tau-a mean = 0.204, *p* < 0.001; percentage of edge entire image tau-a mean = 0.180, *p* < 0.001; percentage of edge target area tau-a mean = 0.180, *p* < 0.001). This analysis suggests that category-specific image characteristics, such as edge content, cannot fully explain the category-specific effects. Thus, it remains plausible that additional high-level representational or other factors influence visual awareness in CFS.

*SD* = 473 ms; mean mask breakthrough times for other categories: faces mean RT = 1,831 ms, *SD* = 574 ms; bodies mean RT = 1,798 ms, *SD* = 575 ms; buildings mean RT = 1,866 ms, *SD* = 531 ms; cars mean RT = 2,086 ms, *SD* = 746 ms; chairs mean RT = 1,815 ms, *SD* = 508 ms) and numerically had the lowest edge content among the object categories employed by Cohen et al. (2015). In our analysis (Figure 5), we compared the full data set (including hands) to hands-inefficient, edge-content, and object-coverage models. We found the largest data–model correlations for the edge-content models (considering comparisons for five models, Bonferroni-corrected significance threshold of *p* = 0.01: percentage of edge target area tau-a mean = 0.240, *p* < 0.001; percentage of edge entire image tau-a mean = 0.226, *p* < 0.001; hands-inefficient mask model tau-a mean = 0.148, *p* < 0.001; percentage of object entire image tau-a mean = −0.002, *p* = 0.945; percentage of object target area tau-a mean = 0.079, *p* = 0.002). These were significantly larger than the correlations for the hands-inefficient mask model (permutation test *p* values for the model differences, Bonferroni-corrected significance threshold of *p* = 0.025: percentage of edge entire image *p* < 0.001, percentage of edge target area *p* < 0.001). Thus, in line with the previous experiments, edge-content models were the best predictor for the mask category RT order.

*p* < 0.001) when taking all four image-specific variables (object coverage entire image and target area, edge content entire image and target area) into account. This was also the case when we ran partial-correlation analyses for all four variables separately (Bonferroni-corrected significance threshold of *p* = 0.0125; percentage of object entire image tau-a mean = 0.294, *p* < 0.001; percentage of object target area tau-a mean = 0.282, *p* < 0.001; percentage of edge entire image tau-a mean = 0.176, *p* < 0.001; percentage of edge target area tau-a mean = 0.165, *p* < 0.001). In line with the previous analyses, this suggests that category-specific image characteristics cannot fully explain the hand-specific effects.

*Proceedings of the National Academy of Sciences, USA*, 105 (39), 15214–15218.

*Journal of Neurophysiology*, 107 (5), 1443–1456, https://doi.org/10.1152/jn.00619.2011.

*Spatial Vision*, 10 (4), 433–436, https://doi.org/10.1163/156856897X00357.

*PLoS One*, 5 (5), e10773, https://doi.org/10.1371/journal.pone.0010773.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 8, 679–698.

*Journal of Neurophysiology*, 117 (1), 388–402, https://doi.org/10.1152/jn.00569.2016.

*Trends in Cognitive Sciences*, 20 (5), 324–335, https://doi.org/10.1016/j.tics.2016.03.006.

*Proceedings of the National Academy of Sciences, USA*, 111 (24), 8955–8960, https://doi.org/10.1073/pnas.1317860111.

*Journal of Cognitive Neuroscience*, 27 (11), 2240–2252, https://doi.org/10.1162/jocn_a_00855.

*Annual Review of Neuroscience*, 18, 193–222.

*Journal of Vision*, 18 (1): 12, 1–15, https://doi.org/10.1167/18.1.12. [PubMed] [Article]

*Journal of Motor Behavior*, 33 (1), 16–26, https://doi.org/10.1080/00222890109601899.

*Journal of Vision*, 16 (3): 17, 1–17, https://doi.org/10.1167/16.3.17. [PubMed] [Article]

*Journal of Physiology*, 160, 106–154.

*Trends in Cognitive Sciences*, 9 (8), 381–388, https://doi.org/10.1016/j.tics.2005.06.012.

*Communications for Statistical Applications and Methods*, 22 (6), 665–674, https://doi.org/10.5351/CSAM.2015.22.6.665.

*Neuron*, 74 (6), 1114–1124, https://doi.org/10.1016/j.neuron.2012.04.036.

*Neuron*, 60 (6), 1126–1141, https://doi.org/10.1016/j.neuron.2008.10.043.

*Perception*, 38 (1), 69–78.

*Human Brain Mapping*, 36 (1), 137–149, https://doi.org/10.1002/hbm.22618.

*Journal of General Psychology*, 3, 412–430.

*PLoS Computational Biology*, 10 (4), e1003553, https://doi.org/10.1371/journal.pcbi.1003553.

*Spatial Vision*, 10 (4), 437–442.

*Statistical Applications in Genetics and Molecular Biology*, 9 (1), 39, https://doi.org/10.2202/1544-6115.1585.

*Picture Processing and Psychopictorics* (pp. 75–149). New York: Academic Press.

*Cognitive Psychology*, 8, 382–439.

*Cognition*, 125 (1), 64–79, https://doi.org/10.1016/j.cognition.2012.06.005.

*Trends in Cognitive Sciences*, 10 (11), 502–511, https://doi.org/10.1016/j.tics.2006.09.003.

*Cognitive Psychology*, 12, 97–136.

*Nature Neuroscience*, 8 (8), 1096–1101, https://doi.org/10.1038/nn1500.

*Journal of Vision*, 6 (10): 6, 1068–1078, https://doi.org/10.1167/6.10.6. [PubMed] [Article]

*Behavior Research Methods*, 42 (3), 671–684, https://doi.org/10.3758/BRM.42.3.671.

*Journal of Experimental Psychology: Human Perception and Performance*, 15 (3), 419–433.

*PLoS One*, 11 (7), e0159206, https://doi.org/10.1371/journal.pone.0159206.