Following Luck and Vogel (1997), investigations into capacity constraints have often examined how memory load, operationalized as the number of visual objects presented, affects overall performance. Many of these studies have employed a change detection paradigm, which indexes VSTM quality by an observer's ability to detect a change between a memory array and a subsequent probe array (e.g., Luck & Vogel, 1997; Phillips, 1974; Vogel, Woodman, & Luck, 2001). The item-based limitations are clear: Models in which performance depends on the total number of items have provided strong accounts of the decline in overall performance as the number of items increases (Rouder et al., 2008; Sewell, Lilburn, & Smith, 2014; Zhang & Luck, 2008); constraints on retrieval time scale with the number of encoded items (Sewell et al., 2016); successively reporting features of different objects produces larger performance costs than successively reporting features of the same object (Egly, Driver, & Rafal, 1994; Woodman & Vecera, 2011); and performance appears to depend more on the number of items to be stored than on the number of spatial locations (Lee & Chun, 2001; Woodman, Vecera, & Luck, 2003), mirroring similar results in the attentional literature (e.g., Duncan, 1984). Limitations have also been found at the feature level, but the capacity constraints involved have yet to be characterized as simply. Initial findings indicated that storage is contingent on feature complexity (Alvarez & Cavanagh, 2004; Eng, Chen, & Jiang, 2005), but that interitem similarity and the “resolution” of items within memory may also play a role (Awh, Barton, & Vogel, 2007; Barton, Ester, & Awh, 2009). More recent studies have suggested that performance may depend on interitem, interfeature interactions (Brady & Alvarez, 2011; Brady, Konkle, & Alvarez, 2009), leading to hierarchical accounts of item–feature storage (Brady et al., 2011; Orhan & Jacobs, 2013).