Article | May 2013
When is it time to move to the next raspberry bush? Foraging rules in human visual search
Jeremy M. Wolfe
Journal of Vision May 2013, Vol.13, 10. doi:https://doi.org/10.1167/13.3.10
      Jeremy M. Wolfe; When is it time to move to the next raspberry bush? Foraging rules in human visual search. Journal of Vision 2013;13(3):10. https://doi.org/10.1167/13.3.10.

Abstract
Animals, including humans, engage in many forms of foraging behavior in which resources are collected from the world. This paper examines human foraging in a visual search context. A real-world analog would be berry picking. The selection of individual berries is not the most interesting problem in such a task. Of more interest is when does a forager leave one patch or berry bush for the next one? Marginal Value Theorem (MVT; Charnov, 1976) predicts that observers will leave a patch when the instantaneous yield from that patch drops below the average yield from the entire “field.” Experiments 1, 2, 3, and 4 show that MVT gives a good description of human behavior for roughly uniform collections of patches. Experiments 5 and 6 show strong departures from MVT when patch quality varies and when visual information is degraded.

Introduction
Over the past several decades, a very substantial body of research on visual search in humans has been published (Eckstein, 2011; Nakayama & Martini, 2011; J. M. Wolfe, 1998, 2010, 2012c; Wolfe & Reynolds, 2008). Most of this work has concerned search for a single target in displays that either do or do not contain that target. We study search in the lab in order to understand search in the world, and the single target task has an obvious similarity to a large class of real-world search tasks: Where are my keys? Where is the salt? Am I in this photograph? And so forth. Reasonably enough, the analysis and models of single-target search tasks have focused on the speed and accuracy with which those targets are found (Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009; Elazary & Itti, 2010; Mozer & Baldwin, 2008; Verghese, 2001; J. M. Wolfe, 2007, 2012a; Zelinsky, 2008). 
Much less attention has been devoted to other questions that are relevant to search in the world, notably the question of when to end a search. If there is one and only one target, search termination on target-present trials is straightforward. In single-target tasks, target-absent trials have often been treated as an unfortunate by-product of standard experimental designs or as a possibly useful test of models (e.g., of serial vs. parallel models of search, though this effort has never been as definitive as we might like; Thornton & Gilden, 2007; Townsend & Wenger, 2004). 
This question of search termination becomes much more important if the observer does not know how many targets might be present. This is a characteristic of many real-world search tasks. A radiologist might be looking for all signs of cancer. An intelligence analyst might be trying to determine if anything of note has changed in a swath of territory. In search tasks like these, we remain very interested in the discovery of targets (Did the radiologist find the cancer?), but search termination rules are also important (Did the radiologist miss the cancer because he quit too soon? Did the radiologist fall behind in his work because he spent too much time on each case?). There is another related class of search tasks in which search termination becomes the primary concern. Consider the search for blueberries in a field of blueberry bushes. In season, the visual search is quite straightforward. Round objects of a certain size and color are the targets. There are many, many of these, they are not hard to find, and the berry picker is not under an obligation to pick every berry. The question of interest here is when it is time to move from one blueberry bush to the next. Intuition will tell you that you do not pick all of the berries off one bush before moving on. If intuition fails you in this case, Figure 1 suggests that one Massachusetts berry farm regards this as a problem. 
Figure 1
 
A Massachusetts blueberry farm would like you to search exhaustively even if Optimal Foraging Theory predicts otherwise. Reprinted by permission of Turkey Hill Farm, Haverhill, MA 01830.
Berry picking is a “foraging” problem. There is very substantial animal literature on foraging, much of it centered on the question of whether or not animals are “optimal” foragers (Pyke, Pulliam, & Charnov, 1977; Stephens & Krebs, 1986). One of the most influential ideas in Optimal Foraging Theory (OFT), and one with very clear application to our blueberry example, is Charnov's “marginal value theorem” (MVT; Charnov, 1976). The basic idea is intuitively appealing. The animal wants to maximize his intake of food. As he forages in one location, he depletes the resource in that location. At some point, the rate of return from the current location drops below the average rate of return. At that point, MVT asserts that it is time to move. Note that the average return will depend on the rate with which resources can be extracted from patches of resource and the time it will take to get to the next patch. You can't collect resources while you are traveling to that next patch. Thus, if it is going to take a long time to get to the next patch, you should exploit the current patch for longer (Stephens & Krebs, 1986). 
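The MVT rule just described can be sketched in a few lines of code. This is a minimal illustration, not a model from the paper: the exponential depletion function and every parameter value below are assumptions chosen only to make the logic concrete.

```python
import math

# A minimal sketch of Charnov's Marginal Value Theorem (MVT) rule.
# The exponential depletion function and all parameter values are
# illustrative assumptions, not quantities from this paper.

def instantaneous_rate(t, initial_rate=1.0, depletion=0.2):
    """Yield per second after t seconds in the current patch (decays as the patch depletes)."""
    return initial_rate * math.exp(-depletion * t)

def gain(t, initial_rate=1.0, depletion=0.2):
    """Cumulative yield from t seconds of foraging in one patch (integral of the rate)."""
    return (initial_rate / depletion) * (1.0 - math.exp(-depletion * t))

def overall_rate(t, travel_time):
    """Average yield per second across the field; time spent traveling yields nothing."""
    return gain(t) / (t + travel_time)

def mvt_leave_time(travel_time, dt=0.01):
    """Leave the patch when the instantaneous rate drops to the overall average rate."""
    t = dt
    while instantaneous_rate(t) > overall_rate(t, travel_time):
        t += dt
    return t

# Longer travel time -> the forager should stay longer in each patch.
print(mvt_leave_time(travel_time=2.0))
print(mvt_leave_time(travel_time=15.0))
```

Note that the leave condition is self-consistent: the overall rate depends on the residence time being chosen, and the crossing point where the instantaneous rate equals that average is exactly the MVT optimum.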
There are endless complications and variations on basic foraging and MVT, starting with fundamental questions about what it would really mean to forage optimally (see Stephens, Brown, & Ydenberg, 2007, for a fairly recent summary of the challenges to the earlier foraging theories described in Stephens and Krebs, 1986). Beyond sweeping ideas about optimality, basic MVT assumes a uniform set of patches and an animal that knows the instantaneous and average rate. Obviously, an animal must learn those rates. What happens if patches vary in quality? What happens if others are foraging in the vicinity? What happens if something that wants to eat you is loitering near the better patches, and so forth (for reviews of many of the possibilities, see Stephens and Krebs, 1986; Stephens et al., 2007). Nevertheless, MVT is a foundationally important concept in foraging, and in this paper, we will focus on the basic MVT case and some modest variations in order to ask if humans, performing an easy visual search analog of a berry-picking task, behave as predicted by MVT. As we will see, to a first approximation (Experiments 1, 2, 3, and 4), the answer is that they do. This is, perhaps, not too surprising. Many animals in many situations behave as predicted by MVT. There is no obvious reason why we should not. When we move away from a world of roughly identical patches, however, human behavior, though highly rule-governed, begins to depart markedly from the predictions of MVT (Experiments 5 and 6). Other rules, like picking all items up to some particular stimulus value (Experiment 5) or probability matching (Experiment 6) seem to dominate. Human “patch-leaving” behavior is a rich, orderly, and complex domain. This behavior is not explained by a single rule, but our data show that MVT is an important determinant of patch-leaving behavior. 
The rules that govern our behavior in these tasks could have important consequences. The author of the blueberry patch sign in Figure 1 certainly realizes that fact. If MVT behavior is deeply ingrained in us, this could become a problem when we are faced with foraging tasks that demand that we pick all of the “berries.” The earlier examples from radiology and intelligence surveillance illustrate this potential problem. If a radiologist is looking for metastases of a cancer, we want him or her to find all of them. It would be obviously wrong to adopt a strategy of terminating search when the “yield” from the current patient drops below the average yield. Still, there must be rules, implicit or otherwise, that govern when it is time to move to the next patient. If those rules are influenced by deep-seated MVT tendencies, we can imagine that MVT behavior could be a source of search failures. 
Theoretical ideas from the foraging domain have been brought into the study of human cognition in some previous work. Hutchinson, Wilke, and Todd (2008) presented observers with a fishing task in which they had multiple ponds to pull fish from and where the central question was when it was time to move to the next pond. Their goal has been to apply similar rules to “fishing” for items in a more broadly cognitive sense: for example, in memory (Wilke, Hutchinson, Todd, & Czienskowski, 2009). Perhaps the most sweeping claims for the connection of the rules of animal foraging to human cognition come from Hills (2006). His argument is that the “molecular machinery that initially evolved for the control of foraging and goal-directed behavior was co-opted over evolutionary time to modulate the control of goal-directed cognition. What was once foraging in a physical space for tangible resources became, over evolutionary time, foraging in cognitive space for information related to those resources” (Hills, 2006, p. 4). By this argument, there is more than a merely analogical link between foraging for blueberries and, for example, searching memory in order to name all the animals that you can in a fixed period of time (Hills, Jones, & Todd, 2012). If you try the animal-naming task, you will find yourself naming a collection of animals from one “patch,” perhaps farm animals. You will leave the patch, not when you have named every farm animal that you know, but when the yield from the farm patch drops to a point that makes it worth “traveling” to the fish patch or the jungle patch. 
Returning to a more visual domain of search, the ideas of OFT have been effectively applied to “information foraging” on the world wide web (Pirolli, 2007; Pirolli & Card, 1999). How do we decide when to leave this webpage for another and how do we decide where to go next? Pirolli (1997) introduced the useful idea of “information scent” as a way to talk about navigation in foraging. He used “scent” by analogy to an animal sniffing out its food, but for his purposes and ours, these are typically visual cues to the presence of the object(s) of search. In visual search, information scent is similar to visual guidance (Wolfe, Cave, & Franzel, 1989). If you are looking for the letter “T” and you know that it is red in a display of red and black letters, the scent of red will guide your foraging for that target. 
In more traditional visual search studies, tasks are usually arranged around a succession of discrete trials in which observers search for a single target. It is possible to consider each trial as the equivalent of a “patch” in a foraging study, but typical features of foraging theory, such as the depletion of resources with continued foraging in a patch, are not captured in the standard visual search experiment. In the visual search literature, the aspect of foraging that has attracted the most work has been the study of the searcher's paths through the visual display. The question, which occurs in the animal literature as well, is how observers avoid perseverating on rejected or depleted locations in the field. Klein and MacInnes (1999) proposed that “inhibition of return” (IOR) serves as a “foraging facilitator” in visual search. After attention has been directed to an item and then withdrawn from that item, it is subsequently somewhat harder to get attention back to that item (Posner & Cohen, 1984). The inhibition is not absolute, but it might be enough to bias visual foraging toward new territory (Klein & MacInnes, 1999; Thomas et al., 2005), though there is debate on this point (Smith, Hood, & Gilchrist, 2008; Smith & Henderson, 2011). An alternative, suggested in the animal literature, is to move randomly in a neighborhood but then to intermittently take much larger, random jumps that land you in a new neighborhood (so-called “Lévy flights”; Viswanathan, Buldyrev, Havlin, Da Luz, & Stanley, 1999). 
Cain, Vul, Clark, and Mitroff (2011) have applied OFTs directly to multiple target visual search tasks. They use tasks with a modest number of targets to study if human searchers respond optimally to changes in their expectations about the number of targets that might be present. Their observers behaved in a manner qualitatively predicted by foraging theory, though there were systematic deviations from the optimal predictions. As noted earlier, our goal is to examine human search behavior when there are a very large number of targets. Here, the discovery of an individual target is not much of an event. It is the rate of acquisition and the “patch-leaving” decisions that become our focus. 
Experiment 1
Methods
Figure 2 shows a screenshot of the berry patch simulation of Experiment 1. Some modest modifications have been made to the figure for the sake of clarity. At the start of a 10-min episode of picking, the observer was presented with the aerial view of the field, as shown on the right. The “current patch” field would be blank at the start of the foraging episode. The observer's cursor was positioned in the “Start” box. To begin foraging, the observer clicked on a region in the aerial view. The cursor would then “walk” to the clicked location, as though the observer was walking to that patch in the larger field. Walking speed could be fast (3 pixels/ms) or slow (0.3 pixels/ms), a 10× difference. Since one cannot pick berries while in transit, varying the travel time changes the overall rate of picking and should influence patch-leaving time according to MVT (Charnov, 1976). 
Figure 2
 
Stimulus configuration for Experiment 1: Modified screenshot.
Once the observer arrived in the chosen location, that patch appeared in magnified form in the “current patch” field on the left. The observer could then use the mouse to “pick” berries. As shown in Figure 2, there are bluish and reddish berries of different sizes. (For illustrative purposes, the red in Figure 2 is more vivid than the red in the actual stimulus.) In addition, there are green, oval leaves. Observers were instructed that “good” berries were big and reddish. Bad berries were smaller and bluish. Good and bad berries were drawn from overlapping normal distributions of color and size with d' = 2.0. Thus, both color and size were correlated with value, but the correlation was imperfect. Big red berries were likely but not guaranteed to be good. The entire field was composed of an 8 × 8 array of patches. Each patch had either 8, 16, 32, or 64 berries. Of these, an average of 20% were good berries. 
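The stimulus statistics above can be sketched as follows. This is a hypothetical reconstruction, not the authors' stimulus code: the units are arbitrary, and the only constraints taken from the Methods are the d′ = 2.0 separation on color and size, the 20% base rate of good berries, and the patch sizes.

```python
import random

# Sketch (assumed units) of the stimulus statistics: good and bad berries
# draw color and size from unit-variance normal distributions whose means
# differ by d' = 2.0; on average 20% of berries are good.

D_PRIME = 2.0
P_GOOD = 0.20

def make_berry():
    good = random.random() < P_GOOD
    mean = D_PRIME if good else 0.0       # d' separation in SD units
    return {
        "good": good,
        "color": random.gauss(mean, 1.0),  # larger = redder
        "size": random.gauss(mean, 1.0),   # larger = bigger
    }

def make_patch(n_berries):
    """One patch; the experiment used 8, 16, 32, or 64 berries per patch."""
    return [make_berry() for _ in range(n_berries)]

patch = make_patch(64)
```

Because both cues are noisy, a big red berry is likely but not guaranteed to be good, which is exactly the imperfect correlation the text describes.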
Clicking on a berry was the analog of picking it. Feedback as to whether the berry was good or bad was given by two different tones. Observers received one point for a good berry and lost one point for bad berries. They were instructed to collect as many good berries as possible in 10 min. In a block of trials, berries could be easy or hard to pick. In easy picking conditions, berries could be picked by simply moving the cursor over the berry and clicking. In the hard condition, the cursor would not move through the green foliage, requiring a more circuitous route from berry to berry, reducing the rate of acquisition. 
When observers had picked as many berries as desired from the current patch, they clicked in the start box to leave the patch. They then clicked a new location in the larger field, selecting that patch and beginning another bout of berry picking. The time bar at top center of the display registered the time remaining. Observers continued this process of selecting a patch, harvesting its berries, and leaving until the time ran out. Figure 2 shows the moment just after the selection of the fourth patch in this session. 
The combination of “Easy” and “Hard” picking settings and “Fast” and “Slow” traveling settings yields four conditions. Observers were tested for five 10-min sessions in each of the four conditions. Ten observers were tested (five male), mean age 26 (18–44 years old). All observers gave informed consent approved by Brigham and Women's Hospital and consistent with the Declaration of Helsinki. All were paid $10/hr for their time. All had vision corrected to at least 20/25 and passed the Ishihara color vision screen. One observer failed to complete the experiment. 
Results
This paradigm generates a large body of data: >52,000 response times (RTs) from our nine observers. For our purposes, the questions of most interest have to do with when the observer leaves one patch to travel to the next. Accordingly, it is informative to average the data, aligned relative to the patch-leaving click rather than to the first click. Figure 3 shows RTs averaged in this manner. Thus, 1 on the x-axis refers to the final berry picked in the patch, 2 is the preceding berry, and so forth. It is also important to note that, since observers could pick as many berries as they desired in a patch, the number of clicks per patch varies. Every patch must have a final click, but the counts for each preceding click will decline. We will show the data for the final 10 clicks in each patch. On average, the Easy–Fast condition produced 267 final clicks, meaning that, on average, observers visited 267 patches in 50 min of Easy–Fast picking. For Easy–Slow, the average was 213; Hard–Fast was 111, and Hard–Slow was 94. 
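The alignment just described — averaging relative to the patch-leaving click rather than the first click — can be sketched with a small helper. The function and the toy RT values are hypothetical, not the authors' analysis code; note how the count of available patches shrinks at earlier reverse positions, exactly as in the text.

```python
# Each patch is a list of RTs (seconds) in picking order. Reverse position 1
# is the final click in a patch, 2 the penultimate click, and so on. Patches
# shorter than a given reverse position simply drop out of that average.

def reverse_aligned_means(patches, max_position=10):
    """Mean RT at each reverse click position (1 = final click in the patch)."""
    means = {}
    for pos in range(1, max_position + 1):
        rts = [p[-pos] for p in patches if len(p) >= pos]
        if rts:
            means[pos] = sum(rts) / len(rts)
    return means

# Toy data: three patches with different numbers of picks.
patches = [[0.8, 0.9, 1.4], [0.7, 1.1, 1.2, 1.6], [0.9, 1.5]]
print(reverse_aligned_means(patches, max_position=4))
```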
Figure 3
 
RT as a function of the “reverse click order.” Click 1 is the final click in a patch. HF = Hard, Fast condition; HS = Hard, Slow; EF = Easy, Fast; ES = Easy, Slow. Solid lines are averages of nine observers. Error Bars show ±1 SEM.
The average numbers of clicks in Reverse Position 10 (tenth click from the moment of patch leaving) were Easy–Fast: 83; Easy–Slow: 70; Hard–Fast: 46; and Hard–Slow: 45. The minimum number of clicks for any observer in any condition does not fall below 25 at Position 10, meaning that all observer averages are based on no fewer than 25 observations. 
The results show that the Easy–Hard manipulation has a very large effect. The Fast–Slow travel manipulation has no effect in this experiment for reasons we will discuss later. RTs slow toward the end of the time in the patch. We can reject the hypothesis that RTs are constant across clicks, all Fs(1, 8) > 20, all ps < 0.002. Presumably, observers slow down because the resource is becoming depleted, making it harder to find a good berry. 
Increasing difficulty can be seen in Figure 4, plotting the redness of the picked berries as a function of the forward click order. In this graph, Position 1 is the first berry clicked and Position 10 is not necessarily the last. Larger numbers indicate more vividly red berries. Observers begin with the classic “low hanging fruit” and move to progressively less assuredly good berries. They are more cautious when the picking is hard, investing their picking energy only in the redder berries. The declines are reliable, all Fs(1, 8) > 16, all ps < 0.003. Results for the size of berries are smaller but comparable, all Fs(1, 8) > 7.6, all ps < 0.02. Observers pick the biggest, reddest berries first because those are most likely to be good berries. This can be seen if we plot the Positive Predictive Value (PPV) of each click as a function of reverse click order (Figure 5). PPV is defined as good berries/total berries picked at each position. 
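The PPV measure defined above can be computed per reverse click position with the same alignment used for the RTs. This is an illustrative helper with toy outcomes, not the authors' code; each patch is a sequence of pick outcomes in picking order.

```python
# PPV at each reverse click position = good picks / total picks at that
# position, aligned to the patch-leaving click (position 1 = final click).
# Each patch is a list of booleans: True = good berry, False = bad berry.

def ppv_by_reverse_position(patches, max_position=10):
    ppv = {}
    for pos in range(1, max_position + 1):
        picks = [p[-pos] for p in patches if len(p) >= pos]
        if picks:
            ppv[pos] = sum(picks) / len(picks)
    return ppv

# Toy data: observers end each patch on a bad berry after a run of good ones.
patches = [[True, True, False], [True, True, True, False], [True, False]]
print(ppv_by_reverse_position(patches, max_position=3))
```

In this toy example PPV is high at early reverse positions and collapses at the final click, the qualitative pattern Figure 5 shows as the resource depletes.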
Figure 4
 
Picked berry color as a function of the “forward click order.” Here, position 1 is the first click in the patch.
Figure 5
 
Positive predictive value (PPV) of each click, averaged from the final click in a patch. In this case, PPV = “good” berries/berries picked.
The early clicks yield good berries on about 90% of trials. As the resource is depleted, the chance of picking a bad berry increases. Of relevance for current purposes is the difference between Easy and Hard conditions. When the picking is easy, observers are willing to tolerate a lower final PPV. The impact of a bad berry is not as great. The difference between Easy and Hard PPV at the final click is significant for Fast, t(8) = 5.5, p = 0.0006, and Slow, t(8) = 3.7, p < 0.0064, conditions. 
In order to test the utility of the MVT in this setting, we need to compare the instantaneous rate of return with the overall rate. We can obtain the instantaneous rate by dividing PPV by RT. The overall rate is simply the total number of berries picked divided by the 10-min duration of a block. In Figure 6, the solid lines show the instantaneous rates, averaged across observers, as they change over time while the dashed lines show the overall rate, again, averaged across observers, for each of the four conditions, plotted as constant values. 
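The two rates being compared can be written out explicitly. The functions follow the definitions in the text (instantaneous rate = PPV / RT at each position; overall rate = total good berries / session duration, travel included), but every numeric value below is hypothetical, not from the data.

```python
# Sketch of the MVT rate comparison. Instantaneous rate at each reverse
# click position is PPV divided by mean RT at that position; overall rate
# is total good berries over the whole session, travel time included.

def instantaneous_rates(ppv, mean_rt_s):
    """berries/s at each reverse click position (1 = final click in a patch)."""
    return {pos: ppv[pos] / mean_rt_s[pos] for pos in ppv}

def overall_rate(total_good, session_seconds):
    """Average rate across the field; travel time counts but yields nothing."""
    return total_good / session_seconds

ppv = {1: 0.55, 2: 0.75, 3: 0.90}      # hypothetical PPV values
mean_rt_s = {1: 1.6, 2: 1.3, 3: 1.1}   # hypothetical mean RTs (s)
inst = instantaneous_rates(ppv, mean_rt_s)
avg = overall_rate(total_good=210, session_seconds=600)

# MVT predicts that the instantaneous rate reaches the overall rate only
# at the final click: here inst[1] <= avg < inst[2].
```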
Figure 6
 
Instantaneous (solid lines) versus overall (dashed) rates of return for each of the four conditions of Experiment 1. Error bars = ±1 SEM.
MVT predicts that observers should leave the patch when the instantaneous rate of return reaches the overall rate. Clearly, the data shown in Figure 6 are consistent with this hypothesis. We would not want to say that we are proving that the overall rate does not differ from the rate at the final click, because that would be a form of “proving” the null hypothesis. However, it is encouraging that the instantaneous rate is quite strongly correlated with the overall rate, across individuals (Easy–Fast condition: r = 0.77; Easy–Slow: 0.55; Hard–Fast: 0.92; and Hard–Slow: 0.58). Moreover, the overall rate is slower than the rate at all click positions other than the final position, all ts(8) > 2.6, all ps < 0.03, showing that the instantaneous rate does not reach the average rate until the final click. This is true for three of the four conditions. For those conditions, the rate of return on the last click does not differ from the overall rate, t(8) < 0.05, p > 0.05. The final click in the Easy–Fast condition has a lower rate than the overall rate, t(8) = 3.3, p = 0.01. In this case, the overall rate is about the same as the penultimate click. If one were concerned about multiple comparisons in this case, the penultimate click might also be considered to be indistinguishable from the overall rate. 
It may not be immediately obvious that the average rate can be lower than the instantaneous rate at almost all points. This occurs because the average rate includes the travel time between patches. During that time, the instantaneous rate is zero. The instantaneous rate, plotted in Figure 6, does not contain a contribution from the travel time. It represents the rate of return in the patch at this moment. 
Discussion
In Experiment 1, observers performed very much as the MVT predicts they should behave. They search until the instantaneous rate of return reaches the average rate of return and then they switch to a new patch. In principle, the rate of return should have been influenced by the time required to pick individual berries in the patch and the time required to move between patches. In this experiment, travel speed had no influence because we mistook travel speed for travel time. In practice, when travel speed was low, observers moved a short distance. When travel speed was high, they made much longer jumps between patches, seeking the “best” patches. The tradeoff of speed for distance left travel time roughly constant. As it happens, in this task, patches with more berries did not yield a higher rate of return. Observers simply left sparser patches more quickly. As a result, rate of return was essentially the same for fast movement and for slow movement. In Figure 6, this is seen by the small differences between the Fast overall rates (light dashed lines) and the Slow overall rates (dark dashed lines). We will return to this variable in later experiments. 
Experiment 2: Exhaustive instructions
Experiment 1 shows that, when told to pick as many good berries as possible, observers adopt behavior predicted by OFT. As discussed earlier, this could be a problem if observers need to find all of the targets, maximizing hit rate and/or accuracy, rather than rate of acquisition. In Experiment 2, we compared performance in our foraging task under rate maximizing and hit maximizing instructions. 
Methods
The methods for Experiment 2 were essentially the same as for Experiment 1 with the following changes. There was no manipulation of travel time between patches. It was fixed at the fast speed from Experiment 1. Observers were tested in four conditions: The Easy and Hard Fast Foraging conditions replicated the two Fast conditions of Experiment 1. The instructions stressed collecting as many good berries as possible. In the Easy and Hard Exhaustive conditions, observers were told to try to find every good berry in a patch before moving to the next patch and to try to find every good berry in the entire field (imposing some time pressure, akin to the pressure of finishing your clinical case load for the day). Observers received one point for a hit and lost one point for a false alarm in either case. They received feedback about each berry as it was picked. Ten observers were tested (one male), mean age 22 (18–32 years old). Three observers had participated in Experiment 1. 
Results
Observers followed directions, as evidenced by an increase in the hit rate from 68% to 83% of good berries in each patch sampled. A 2-way ANOVA with instructions and Easy versus Hard as factors revealed a significant main effect of instruction, F(1, 9) = 34.0, p = 0.0002, partial η2 = 0.25. Neither the effect of Easy versus Hard nor the interaction was close to significant. Discriminability of good and bad berries was set at a nominal d′ of 2.0 for color and for size. Since these were independent, the maximum performance would be equivalent to a d′ of 2.8. In fact, d′ stayed constant at 2.3. The criterion shift under the more exhaustive instructions was accompanied by an increase in the false alarm rate from 4% to 10%. The criterion value, “c,” shifts from 0.54 in the Easy Foraging case to 0.14 in the Easy Exhaustive case, t(9) = 3.88, p = 0.003, and from 0.62 in the Hard Foraging case to 0.14 in the Hard Exhaustive case, t(9) = 7.46, p < 0.001. This pattern is seen in Figure 7 where z-transformed hits are plotted as a function of z-transformed false alarms for each observer in each of the four conditions. 
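The signal-detection quantities reported here follow the standard equal-variance formulas: d′ = z(H) − z(F) and c = −(z(H) + z(F)) / 2, where z is the inverse normal CDF. A short sketch, using the hit and false alarm rates quoted in the text as example inputs:

```python
from statistics import NormalDist

# Standard equal-variance signal detection measures:
#   d' = z(H) - z(F)         (sensitivity)
#   c  = -(z(H) + z(F)) / 2  (criterion; positive = conservative)
# With two independent cues each at d' = 2.0, optimal combination gives
# sqrt(2.0**2 + 2.0**2) ~ 2.8, the maximum stated in the text.

z = NormalDist().inv_cdf  # the z-transform (probit)

def d_prime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    return -(z(hit_rate) + z(fa_rate)) / 2.0

combined_d_prime = (2.0 ** 2 + 2.0 ** 2) ** 0.5  # two independent d'=2 cues

# Example: the exhaustive-instruction rates from the text (83% hits, 10% FAs)
print(d_prime(0.83, 0.10), criterion(0.83, 0.10))
```

With these inputs d′ comes out near 2.2 and c near 0.16, consistent with the values reported in this Results section.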
Figure 7
 
z(Hit) as a function of z(False Alarm) for each subject in each condition of Experiment 2. Note that this is a portion of the ROC space. The d' = 0 line is shown in the lower right.
As can be seen in the figure, the data fall around a single receiver operating characteristic (ROC) (a straight line, when plotted in z-units). Data points reflect a more conservative criterion (down-left) in the Foraging conditions than in the Exhaustive conditions. 
Figure 8 shows how the instruction to find all the good berries altered patch-leaving behavior. As in Figure 6, this figure shows the rate of good berry acquisition for the last seven clicks in each condition. However, the x-axis is changed. Each point is plotted at the average time of that click's occurrence. Thus, in the Hard Exhaustive condition, the final click (plotted as Reverse Click Position 1 in Figure 6) occurs after an average of 35 s of picking in the patch. The final Hard Foraging click occurs after 30 s in the patch. This way of plotting the data makes it clear that the effect of the instruction to pick all the berries is to induce observers to pick for a longer time in the patch and to pick to a lower rate of return for each click (thicker, darker lines). The dashed lines show the overall rate for each of the four conditions. In the two conditions that replicate the Easy–Fast and Hard–Fast conditions of Experiment 1, observers again tend to leave the patch when the instantaneous rate reaches the overall rate, as predicted by MVT. The slight undershoot seen in the average data is not significant for either Easy or Hard conditions, both ts(9) < 2.0, p > 0.05. The exhaustive conditions show a tendency for observers to search beyond the point of diminishing returns. In the Easy condition, this is significant, t(9) = 3.2, p = 0.011; in the Hard condition, it is not, t(9) = 1.8, p > 0.05. 
Figure 8
 
Rate (berries/s) for the last seven berries picked in each condition. Here those last clicks are plotted against time since the start of picking in that patch. Error bars are ±1 SEM.
Discussion
Not entirely surprisingly, Experiment 2 shows that observers can modify their foraging behavior in response to task demands. There is some circularity in the logic of thinking about these results in terms of OFT. As a result of the instruction to pick more exhaustively, observers stay longer in each patch, picking berries that are less and less likely to be good in the effort to get to the last good berry. As a result, their average rate across the field drops. This, in turn, predicts a later quitting time. However, while the instructions did move observers' criterion, it is worth noting that this movement was not terribly dramatic; at least, not if one is thinking about “exhaustive” search in terms of finding every sign of disease or every threat to security. One might imagine that an “exhaustive” search for an unknown number of targets would always end after picking a bad berry and, perhaps, after a sequence of bad berries. However, in this experiment, the final pick in the exhaustive condition had a 51% chance of being a bad berry, compared to 25% under the foraging instructions. Nor did observers quit after a run of bad berries. The average “run length” was about one berry, as would be predicted if the probability of picking a bad berry was 0.5 on each click. 
Of course, this experiment does not tell us what radiologists or intelligence analysts might do, given instructions to “find everything.” The results do tell us that, while they do respond to instruction, observers do not trivially switch to a truly exhaustive search mode. 
Experiment 3: The effects of travel time
As noted earlier, MVT proposes that observers should be sensitive to the average rate of picking within a patch and the travel time between patches (Charnov, 1976). Travel time decreases the overall rate of return, so if the travel time is longer, a forager should continue to forage in the current patch for a longer period of time, tolerating a lower instantaneous rate of return. This prediction was not borne out in Experiment 1 due to a flaw in the design. Observers were permitted to choose the next patch from the aerial view of the entire field. The speed with which their agent moved to the next patch did, indeed, vary. However, when that speed was slow, observers tended to pick nearby patches. When the speed was fast, they jumped to more remote, but apparently promising, patches. In effect, they traded speed for distance and maintained a roughly constant travel time. Experiment 3 modifies the method in order to correct that problem and to assess the effect of the travel time variable. 
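The speed-for-distance trade can be made concrete with a small worked example. The speeds are the ones given in the Experiment 1 Methods (3 vs. 0.3 pixels/ms); the two distances are hypothetical illustrations of the observed strategy.

```python
# Worked example of the Experiment 1 confound: travel *speed* was
# manipulated, but observers traded speed for distance, so travel *time*
# stayed roughly constant. Speeds are from the Methods; the distances are
# hypothetical illustrations of the strategy observers adopted.

def travel_time_ms(distance_px, speed_px_per_ms):
    return distance_px / speed_px_per_ms

fast_travel = travel_time_ms(distance_px=900, speed_px_per_ms=3.0)  # long jump to a remote patch
slow_travel = travel_time_ms(distance_px=90, speed_px_per_ms=0.3)   # short hop to a nearby patch

# A 10x speed difference, yet both trips take about 300 ms, so travel time
# never actually varied and MVT's travel-time prediction could not show itself.
```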
Method
A sample stimulus field for Experiment 3 is shown in Figure 9. The basic method is very similar to the previous experiment. Observers pick good berries in the left-hand field. Good berries are brighter, more saturated red. Bad berries are less saturated and darker. Colors of good and bad berries are drawn from overlapping normal distributions separated by 2.5 SDs (d′ = 2.5). This method eliminates the size cue. The square berries are all the same size. There were an average of 40 berries in each patch (uniform distribution from 36–44). On average, half the berries in any given patch were good. Rate of picking could be either fast or slow. For fast picking, a 100 ms minimum was imposed between clicks (essentially, no delay). For slow picking, a minimum 750 ms delay was imposed between clicks. Travel time to the next patch, shown on the right, was either fast (2 s) or slow (15 s). In this experiment, there was no choice about the next patch. When observers clicked on the small box at the center, they moved to the next patch at the ordained speed for that block. 
Figure 9
 
Screen shot of stimuli for Experiment 3.
There were four blocks generated by this 2 × 2 design: Fast travel, Fast picking (FF); Slow travel, Fast picking (SF); Fast travel, Slow picking (FS); and Slow travel, Slow picking (SS). Each observer was tested for 15 min in each block. Block order was randomized. Observers were instructed to pick as many good berries as possible in 15 min. They could move from patch to patch at will. Patches were infinite in number. The picking times and travel times were described before each block as fast or slow. 
Ten observers were tested (two male), mean age 24 (19–37 years old). All had vision of at least 20/25 with correction and all passed the Ishihara color vision screening test. All observers gave informed consent. Two observers had participated in Experiments 1 and 2. 
Results
Figure 10 shows the instantaneous rate averaged from the last click in a patch, working backwards. Thus, as in previous figures, this graph shows the rate for each of the last several clicks in a patch for each condition (nine clicks in this case). As in Figure 8, the x-axis shows the average time of occurrence for each of those nine clicks. As can be seen, unlike Experiment 1, the travel time manipulation changes the overall rates, as does the picking speed manipulation. Overall rates are lower when the picking rate is slower and when travel time is slower. Patch-leaving time is clearly related to this overall rate of return. For three of the four conditions, observers leave the current patch on the first click after the instantaneous rate function crosses the average rate for that condition. For the FF condition, observers abandon the patch, on average, two clicks after the instantaneous rate falls below the average rate. 
Figure 10
 
Average (± SEM) instantaneous rate for the final nine clicks for each of the four conditions of Experiment 3. Dashed lines show average rate of return for each condition. Slow travel conditions are shown with outlined symbols and coarsely dashed lines for the average rate.
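The backward alignment used in this figure (and in Figure 8) can be sketched as follows. The per-click record format here is a hypothetical stand-in; the actual data structures are not specified in the text.

```python
def backward_aligned(patch_clicks, n_last=9):
    """Average (time, rate) at each click position counted backwards
    from the final click in each patch.

    patch_clicks: list of patches; each patch is a list of
    (time_in_patch_s, instantaneous_rate) tuples in click order.
    (Hypothetical record format, assumed for illustration.)
    """
    aligned = []
    for k in range(1, n_last + 1):  # k = 1 is the final click
        times = [p[-k][0] for p in patch_clicks if len(p) >= k]
        rates = [p[-k][1] for p in patch_clicks if len(p) >= k]
        if times:
            aligned.append((sum(times) / len(times),
                            sum(rates) / len(rates)))
    return aligned

# Toy data: two patches, three and two clicks respectively.
patches = [[(1.0, 2.0), (2.0, 1.0), (3.0, 0.5)],
           [(1.5, 1.8), (2.5, 0.9)]]
curve = backward_aligned(patches, n_last=3)
```

Each element of `curve` is one point on the backward-aligned function: the first entry averages the final clicks of all patches, the second entry the second-to-last clicks, and so on, which is why later points in the figures are based on fewer (only the longer) patch visits.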
The specific purpose of Experiment 3 was to show that observers left the current patch at a lower instantaneous rate when travel time was long than when travel time was short. This is clearly the case for Fast picking, t(9) = 7.065, p < 0.0001, and Slow picking, t(9) = 6.489, p < 0.0001. Moreover, the time in a patch is longer in Slow travel than in Fast travel conditions, Fast picking: t(9) = 5.811, p = 0.0003; Slow picking: t(9) = 3.986, p = 0.0032. Note that this is the time spent picking. It does not include the time spent traveling, which would make the time comparison trivial. 
Discussion
Experiments 1, 2, and 3 show that these artificial berry-picking tasks produce foraging behavior consistent with MVT. In a task where the probability of picking a good berry declines with time spent in a patch, observers leave the current patch for the next patch when the instantaneous rate of return drops below the average rate of return for the current set of patches. When picking is easy, observers leave the current patch more readily than when picking is hard. Similarly, when travel time is short, they leave the current patch more readily than when travel is slow. 
It is tempting to conclude that observers' patch-leaving behavior is caused by their sensitivity to these variables. That is, one appealing hypothesis is that observers estimate the average rate of return and monitor the current rate of return. A decision to move to the next patch would be triggered when observers note that the current rate has dropped below the average rate. However, this conclusion goes beyond the data. The data show only that the rate at the moment of patch leaving is similar to, and correlated with, the average rate. The rates might be related to each other even if the causal hypothesis is not true. This possibility is hinted at in Experiment 2. When we ask observers to search exhaustively, they search longer in each patch. This reduces the average rate of return and the instantaneous rate at the moment of patch leaving. The values co-vary but the cause is (we presume) the instructions to the observer. In the remaining experiments of this paper, we vary the basic foraging task by changing the numbers of berries in a patch or the ratio of good to bad berries. We also vary the visual distinction between good and bad berries. The results cast further doubt on the hypothesis that our observers are making patch-leaving decisions on the basis of an optimal assessment of the point of diminishing returns, as proposed by MVT. 
Experiment 4: Varying set size
MVT applies to the case where the field is a uniform resource. That assumption can be violated in several ways. Here we simulate the case of patches of different sizes. A berry bush could be big or small. In the computerized version of berry picking, this is implemented as a variation in the set size within each patch. 
Methods
This experiment is essentially identical to Experiment 3 with the following changes. Patches were randomly assigned one of six set sizes (10, 15, 20, 25, 30, or 35). As before, 50% of berries were drawn from the “good” distribution and 50% from the bad and, as before, the brightness and saturation of a berry were a good signal as to its status (d′ = 2.5). Picking was fast in all blocks of the experiment. Travel time was either 1 s or 10 s. Observers were tested for two 15-min blocks with each travel time for a total of four blocks. Block order was either 1-10-10-1 or 10-1-1-10 with half the observers tested with each block order. 
Ten observers were tested (one male), mean age 24 (18–53 years old). All had vision of at least 20/25 with correction and all passed the Ishihara color vision screening test. All observers gave informed consent. Three observers had participated in Experiments 1, 2, or 3. 
Results
The results, averaged over all set sizes, are shown in Figure 11. Looking first at the Fast Travel condition, we see that, as in earlier experiments, when the instantaneous rate reaches the average rate, the observers move to the next patch. There is no significant difference between the average rate (0.75 berries/s) and the rate on the final click in a patch, 0.76, t(9) = 0.4, p = 0.70. In the Slow Travel condition, however, the MVT-predicted behavior is not seen. As in Experiment 3, longer travel time induces observers to stay in the current patch longer, picking down to a lower rate of return. However, observers leave the patch too soon. The average yield is 0.45 berries/s but they leave when the current rate reaches 0.66, significantly higher than that average yield, t(9) = 5.11, p = 0.0006. 
Figure 11
 
Rate of return for the last seven berries picked as a function of the time in the patch for that selection. Data are averaged over all observers and set sizes. Error bars are ±1 SEM. Slower travel (dark red) leads to longer time in each patch than fast travel (light green). In the fast travel condition, observers leave the patch when the rate reaches the average rate (dashed line). However, in the slow travel condition, observers leave the patch well before they reach the average rate.
One might assume that the early departure from the patch in the slow travel condition was related to the wide variation in set size. For example, observers might abandon low set size patches very rapidly. However, Figure 12 suggests that this is not the case. 
Figure 12
 
Rate of return as a function of set size. Averages work backward from the final berry picked in a patch. Data are averaged over all observers with different brightness/color lines showing different set sizes. Error bars are ±1 SEM. The upper panel shows the fast travel condition, and the lower panel shows the slow condition. Dashed lines show the average rate of return for the block.
As can be seen in the figure and as is reasonable, the yield drops faster in patches with fewer berries. It is also clear that all of the functions drop to approximately the same rate on the final click in the patch. Once the observers reach that rate, they leave the current patch for the next one. In the Fast Travel conditions (top panel), the rate at final selection clusters around the average rate for the task. In the Slow condition, observers leave the current patch when the rate drops to a certain level, but that level is consistently higher than would be predicted by MVT. Thus, there is no support for the hypothesis that observers are responding suboptimally to some set sizes. Rather, they are seemingly following the same rule for all set sizes. In the slow travel condition, that rule does not seem to be in line with MVT. One possibility is that observers are not correctly accounting for the effects of time. They are behaving as if the travel time is shorter than it is. It is unclear why that should be the case in Experiment 4 while no similar pattern is seen in the results for Experiment 3. The main difference between the two experiments is that the patches were more similar to each other in Experiment 3 than in Experiment 4, with its set size variation. In Experiment 5, we use a different sort of between-patch variation and we see a similar response to the travel time variable. 
Experiment 5: Varying patch quality
Returning to the berry patch, individual patches might vary in the number of berries present, as in Experiment 4. They might also vary in the quality of the berries available. Perhaps the berries in one patch have become riper earlier than the berries in another. In Experiment 5, we manipulate patch quality by varying the ratio of good berries to bad berries. 
Methods
Methods were the same as in Experiment 3 with the following modifications. The total number of berries in a patch ranged from 20 to 30 (uniform distribution). The average set size was 25 berries. The critical variation in Experiment 5 was the proportion of those berries that were “good.” In a given patch, the probability of a good berry took one of seven values (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, or 0.8). These were uniformly distributed and randomly selected for each new patch. Thus, the average probability that a berry was a good berry was 0.5. As before, the brightness and saturation of the berries indicated “goodness.” The means of good and bad distributions of colors were separated by 2.5 SDs. This means that, rather like a real berry bush, a new patch would give an immediate impression of its quality with a 0.2 patch looking markedly darker and less saturated than a 0.8 patch. 
Ten observers were tested (one male), mean age 25 (18–53 years old). All had vision of at least 20/25 with correction and all passed the Ishihara color vision screening test. All observers gave informed consent. Four observers had participated in Experiments 1, 2, 3, and 4. 
Each observer completed four 15-min long blocks. Two of these had a fast travel time of 1 s while the other two had a slow travel time of 10 s. 
Results
Figure 13 shows the now-familiar graph of the instantaneous rate for the last ten selections as a function of the average time in the patch. Dashed lines show the overall rate for the two blocks with fast (light green) or slow (dark red) travel times. As can be seen, the fast travel time yields average data that correspond to the MVT prediction. The instantaneous rate falls over time and observers abandon the current patch, on average, when the instantaneous rate drops below the average rate. As in Experiment 4, the slow travel time produces somewhat different results. Observers appear to leave the current patch before the rate reaches the average rate. The instantaneous rate on the final selection is higher (0.63) than the average rate (0.50), though that trend is only marginally significant, t(9) = 2.18, p = 0.06. For the fast travel time, there is no significant difference between the final rate (0.81) and the average rate (0.85; t[9] = 0.63, p = 0.55). 
Figure 13
 
Instantaneous rate as a function of time in patch. Data averaged over 10 observers for the last 10 selections. Dashed lines show average rates for the fast (light green) and slow (dark red) blocks.
If the results are plotted as a function of the different patch qualities (Figure 14), a pattern of results appears that strongly suggests that observers are not simply adopting a threshold rate of return as the basis for a decision to leave a patch. The final instantaneous rate is a function of quality of the patch. The main effect of patch quality is significant, F(6, 54) = 15.4, p < 0.0001, ges = 0.26. By contrast, if we perform the same analysis on the final instantaneous rates in Experiment 4, where set size was varied, there are no reliable differences in the final rates as a function of set size, F(5, 45) = 0.44, p = 0.81. 
Figure 14
 
Instantaneous rate as a function of time in patch shown for each level of patch quality (probability that a berry is a good berry). Data averaged over 10 observers relative to the last selection in the patch. Dashed lines show average rates for the fast and slow travel time blocks.
Figure 14 shows that, while the overall behavior is consistent with MVT, at least for the shorter travel time, something other than the instantaneous rate of return is driving the decision to leave a specific patch. Figure 15 shows that the number of berries clicked is a strongly linear function of the quality of the patch (main effect of patch quality on items selected: F[1, 9] = 822, p < 0.0001, ges = 0.95). Thus, the proportion of good berries strongly predicts the proportion of berries that will be selected. In the graph, smaller symbols are the data for each individual observer for a travel time of 1 s. Larger circles are the average of those individual data points with the solid line showing the linear regression. That function has a slope of 0.67 (r2 = 1.0). The individual functions have similar slopes (all r2 > 0.96). The dashed purple line shows the linear regression for travel time of 10 s. Observers spend a little longer in each patch in the 10 s condition (main effect of travel time on time in each patch, F[1, 9] = 6.3, p = 0.033, ges = 0.02). As MVT predicts, observers seem weakly inclined to spend a little more time in a patch if it is going to take more time to get to the next patch. However, that is clearly not the main driver of performance. 
Figure 15
 
Probability of clicking on a “berry” as a function of the probability that a given berry is a target (patch quality). Filled green symbols show individual observer data for travel time of 1 s. Open green circles show the average of those data. Solid black line is the linear regression of those data. Dashed purple line shows the linear regression for travel time of 10 s. The dotted line has a slope of 1.0, representing perfect probability match behavior.
What is driving behavior in this case? Recall that good, ripe berries differ from bad berries in color with a d′ of 2.5. Behaviorally, observers perform at a d′ of about 2.1. This is largely constant over patch quality. There is no effect of patch quality on hit rate, false alarm rate, d′, or criterion, all Fs(6, 54) < 2, all ps > 0.05. 
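For reference, d′ and the criterion c reported here follow from the hit and false alarm rates via the standard equal-variance signal detection formulas. A minimal sketch (the example rates are illustrative, not the observed values):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative rates: symmetric performance around a neutral criterion.
d, c = sdt_measures(0.85, 0.15)  # d' is about 2.07, c is 0 (neutral)
```

A positive c corresponds to a conservative criterion (fewer clicks, fewer false alarms); a negative c, as in the long travel time condition, corresponds to a liberal one.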
Picking a greater percentage of berries would increase both hits and false alarms—the equivalent of a criterion shift. Picking a smaller percentage shifts the criterion in the conservative direction. In either case, d′ will remain stable but the hit and false alarm rates would change. Returning to Figure 15, a Monte Carlo simulation shows that a slope of 0.67 for the P(click) × P(target) function is the slope that holds P(hit) and P(false alarm) constant over patch quality if the underlying d′ is 2.1. The slope value would be different for different d′ values. How do observers manage to produce this behavior? A constant decision criterion would lead observers to pick all berries redder than some color criterion value. The slope of 0.67 also minimizes changes in the redness of the final berry chosen over patch quality. The most plausible scenario, therefore, is that observers are picking all the berries that are redder than some criterion value. 
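A fixed-criterion account makes the linearity easy to see even without a full Monte Carlo. With a constant criterion, P(hit) and P(false alarm) are fixed, so P(click) = q·P(hit) + (1 − q)·P(false alarm) is linear in patch quality q, with slope P(hit) − P(false alarm). The sketch below assumes a neutral criterion, which gives a slope of about 0.71 for d′ = 2.1; the 0.67 reported in the text comes from the paper's own Monte Carlo under its particular assumptions about criterion placement.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d_prime = 2.1
c = 0.0  # neutral criterion, midway between the two color distributions
hit = phi(d_prime / 2 - c)   # P(click | good berry)
fa = phi(-d_prime / 2 - c)   # P(click | bad berry)

# With a fixed color criterion, P(click) is linear in patch quality q:
slope = hit - fa  # about 0.71 for d' = 2.1, c = 0
p_click = {q: q * hit + (1 - q) * fa
           for q in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8)}
```

Because the hit and false alarm rates do not change with q, the predicted P(click) function is exactly linear, which is what Figure 15 shows in the data.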
In fact, there is an effect of patch quality on the redness of the final selected item. Final redness is lower for low quality patches (ANOVA: F[6, 54] = 4.21, p = 0.001, ges = 0.05) and lower for longer travel times (ANOVA: F[1, 9] = 14.8, p = 0.004, ges = 0.10). The redness of the final berry picked in a low quality patch is about 0.2 SDs less than the redness of the final berry picked in the highest quality patch. Nevertheless, the color criterion is roughly constant, and any roughly constant color criterion will produce consistent hit and false alarm rates. In practice, when the travel time was short, observers adopted a neutral criterion (c = 0.04), equalizing the probability of a hit and the probability of a correct rejection, 1 – p(false alarm). When travel time was longer, the criterion was somewhat more liberal, c = −0.13. However, this shift is not statistically reliable, F(1, 9) = 3.7, p = 0.09, ges = 0.07. 
To summarize, in Experiment 5, faced with varying patch quality, observers appear to base their behavior on a decision about the color of the berry. This produces behavior that is, on average, broadly consistent with the MVT. However, on a patch by patch basis, observers do not use the current rate of return to determine quitting time. 
Experiment 6: Eliminating color information
If observers in Experiment 5 were using the color of the berries to drive their decisions, what would they do if the color information were eliminated? Experiment 6 repeats Experiment 5 except that all the berries were the same color. There was nothing to distinguish a good berry from a bad one until it was picked. At that point, feedback was given by means of a tone. Under these circumstances, your chance of picking a good berry is constant over time in a patch. If there are 30% good berries, each pick will have a 30% chance of being good whether it is your first or your tenth pick. When, then, do you leave the patch? 
Methods
Methods were the same as in Experiment 5 with the following modifications. The major change was that all berries were identical in color (red). As before, in a given patch, the probability of a good berry ranged from 0.2 to 0.8 with an average probability over patches of 0.5. There was no travel time manipulation. Each observer completed two 15-min blocks with a travel time of 1 s. 
Ten observers were tested (five male), mean age 24 (18–37 years old). None had been tested in Experiments 1, 2, 3, 4, or 5. All had vision of at least 20/25 with correction and all passed the Ishihara color vision screening test. All observers gave informed consent. 
Results
Figure 16 shows the rate in berries per second as a function of time in patch. The average rate is plotted as a single, horizontal line. As before, these data are averaged from the final selection backwards. That is, if we look at the data for a target probability of 0.4 (pale blue triangles), we see that the final click occurred after an average of 9 s and at an instantaneous rate of about 0.3 berries/s. The preceding click occurred a second earlier, and so forth. Data are plotted for conditions where there was such a click for 75% of patches, across observers. Thus, for the same P(target) = 0.4 data, 75% of patches had at least four clicks. Better patches had many more clicks; hence, more data points are shown. 
Figure 16
 
Rate in berries per second as a function of average time in patch for Experiment 6. Different curves represent different probabilities of a “good” berry. Points are plotted for those conditions where there was such a click for 75% of patches, across observers.
Clearly, the quitting rule for this condition is not a rule based on the average rate. Low quality patches never yield at the average rate. Yield from high quality patches never falls to the average rate. It is interesting that the instantaneous rate does fall over time in the patch. While there is no way to know if a berry is good before picking it, the rate falls as a side-effect of the fact that observers are more likely to quit after picking a bad berry than after picking a good berry. Thus, the final berry is, on average, a bad berry. One might imagine that the final berry is always a bad berry. However, that is not the case. The positive predictive value of the last selected berry is 0.31, greater than the zero it would be if the last berry were always bad. In about one-third of the patches, observers quit immediately after having picked a good berry. In some of these cases, observers clicked on every available berry. If we remove those cases, observers still quit immediately after clicking on a good berry in about one-fourth of the patches. 
What is the patch-leaving rule in this case? It isn't the MVT. It can't be selection based on color information. 
In Figure 17, we see that observers are probability matching, at least on average. That is, if the probability of a target is N% in a given patch, observers will select an average of N% of the items in that patch. Recall that observers have no information about the quality of the patch other than the feedback after picking each berry. If we compare Figures 17 and 15, we see that observers are much less consistent in this experiment than they were in Experiment 5. While the average data are strikingly close to perfect probability matching (the slope of the P(target) × P(click) function is 1.03, intercept = 0.01), individual observers given patches of one quality will produce a range of results. In part, this is mandated by the structure of the experiment. If you are in a patch of quality 0.3, it will take you an appreciable number of selections to estimate this fact, and the probabilistic nature of that sampling task will assure that, in some cases, the initial selections will be misleading. 
Figure 17
 
Probability of selecting an item P(click) as a function of the probability that the item will be a target (patch quality, P[target]). Smaller, paler data points represent individual observers. Large, dark points show average data. The solid line is the best-fit regression line. It is very close to the dashed line, showing the predictions of perfect probability matching.
A reasonable approximation to probability matching could be obtained by quitting after a fixed number of errors. However, the number of false alarms is not constant over patch quality. The average number of false alarms varies between 4.6 and 7.0, as shown in Figure 18. 
Figure 18
 
False alarms (bad berries) per patch as a function of patch quality.
This variation is significant, F(6, 9) = 4.2, p = 0.027. If the proportion of good berries in a patch is N and observers select a proportion N of the items, then the expected number of false alarms is SetSize × N × (1 − N). Those values are plotted as the asterisk symbols in Figure 18. As can be seen, this captures the basic form of the data. The number of false alarms for a given patch quality appears to be the number predicted by perfect probability matching. 
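The predicted values are simply binomial expectations: a proportion 1 − N of the berries are bad, and if a proportion N of all berries is clicked at random, the expected count of bad clicks is SetSize · N · (1 − N). A quick check, assuming the average set size of 25 from Experiments 5 and 6:

```python
set_size = 25  # average set size in Experiments 5 and 6

# Expected false alarms under perfect probability matching.
expected_fa = {q: set_size * q * (1 - q)
               for q in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8)}
# Inverted-U over patch quality: about 4.0 at q = 0.2 or 0.8,
# peaking at 6.25 for q = 0.5, matching the shape in Figure 18.
```

The prediction is symmetric around q = 0.5 because a 0.2 patch and a 0.8 patch swap the roles of good and bad berries.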
While the observers are probability matching on average, they are not doing so on a patch-by-patch basis. The distribution of P(clicked) is bimodal. Observers tend to select almost all items in a patch or very few. Thus, when the target probability is 0.5, 43% of patches received fewer than nine clicks, 34% received more than 16, while only 23% received between 9 and 16 clicks. These proportions change with patch quality, but overall 47% of patches received fewer than nine clicks, 35% received more than 16, and 18% received between 9 and 16 clicks. This can be simulated if we assume that observers have some probability of remaining in the patch at each click. That probability is adjusted in a staircase manner, moving down if the preceding selection was a bad berry and up if the preceding selection was a good berry. In our simulation, the probability is defined as a point on a simple logistic function. 
The staircase can move up 4 and down 0.8 on each step, and P(remain) is constrained to fall between 0.2 and 0.995. These parameters produce probability matching, the bimodality of the data, and a between-observer variance that is similar to what is seen in the data. However, the parameters are neither principled nor unique. Using these specific values, the results are also somewhat more bimodal than the actual data. At best, the simulation suggests that observers could use something like a stepwise adjustment of a quitting probability to govern their behavior. More definitive tests of this hypothesis will require further experimentation. However the behavior is produced, the observers in Experiment 6 seem to divide patches into good and bad patches. They leave the bad ones quickly and tend to pick most or all of the berries in good patches. When the patches are of intermediate quality, observers seem to categorize them somewhat randomly (presumably based on the first few clicks). The resulting aggregate behavior exhibits probability matching. 
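A minimal sketch of such a staircase simulation follows, with several loud assumptions: the logistic is taken to be the standard 1/(1 + e^(−x)), the staircase is assumed to start at x = 0, and only the step sizes and bounds quoted above are from the text; the paper's exact function and starting point are not given.

```python
import math
import random

def logistic(x):
    # Assumed form: the text says only "a simple logistic function."
    return 1.0 / (1.0 + math.exp(-x))

def clicks_in_patch(quality, set_size=25, rng=random,
                    up=4.0, down=0.8, p_min=0.2, p_max=0.995):
    """Simulate one patch: after each click, step the staircase up
    (good berry) or down (bad berry), then quit with probability
    1 - P(remain). Starting position x = 0 is an assumption."""
    x = 0.0
    clicks = 0
    for _ in range(set_size):
        clicks += 1
        if rng.random() < quality:   # picked a good berry
            x += up
        else:                        # picked a bad berry
            x -= down
        p_remain = min(p_max, max(p_min, logistic(x)))
        if rng.random() > p_remain:
            break
    return clicks

rng = random.Random(1)
mean_lo = sum(clicks_in_patch(0.2, rng=rng) for _ in range(2000)) / 2000
mean_hi = sum(clicks_in_patch(0.8, rng=rng) for _ in range(2000)) / 2000
```

Because the upward step is large, a single early good berry pushes P(remain) near ceiling, while runs of bad berries bleed it toward the floor; this asymmetry is what produces the bimodal "pick almost everything or quit early" pattern described above.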
General discussion
As noted in the Introduction to this paper, there has been very little work on foraging within the visual search literature. This is, potentially, a large field of inquiry, and the present set of experiments can only serve as an introduction to the topic. The results show that patch-leaving behavior in human visual search tasks is a strongly rule-governed behavior. When searching through a world of roughly uniform, depletable resources, patch-leaving behavior is consistent with the expectations of the MVT. As observers select items from the current patch, those items become rarer and take longer to pick. As a result, the rate of yield from the patch drops. At some point, the rate drops below the average rate for the task, and at about that point, our observers tend to move to the next patch. 
The behavior is appropriately influenced by the experimental conditions. Observers stay longer and pick to a lower yield when the picking is hard (see, for example, Figures 6, 8, and 10). They stay longer if the travel time between patches is longer (e.g., Figure 10), though they may not perfectly account for the effects of travel time (e.g., Figure 11). Observers modify patch-leaving time in response to changes in instructions. In Experiment 2 (Figure 8), they searched longer when told to search exhaustively. If observers were searching for signs of cancer or security threats, they did not, as we might wish, eliminate false negative (miss) errors, but they did move in the appropriate direction. 
As noted in the discussion of Experiment 2, the response to instructions reveals something of a circularity in using OFT to explain patch-leaving times. If we ask observers to be exhaustive, we are asking them to reduce their average rate of return in the effort to find that last target. Their patch-leaving time will be later, but it does not seem quite right to say that the lower average rate actually caused the later leaving time. In this case, both quantities are modulated by goals and instructions. For a given Rate × Time-in-Patch function, patch-leaving time and overall rate will always be related over a range of leaving times. If the rate of return declines over time and travel time is not zero, there will be an optimal patch-leaving time that yields the highest average rate of return. It will be interesting to see if human foragers are sensitive to the shape of their own rate functions and if they behave “optimally” given the choice. 
As noted at the outset, the whole idea of “optimal” foraging is somewhat problematic. Witness, for example, Pierce and Ollason's (1987) paper, titled “Eight reasons why optimal foraging theory is a complete waste of time” (cited in Stephens et al., 2007). In the animal behavior literature, the issue is part of the debate about the role of evolution in shaping behavior. Did we evolve to be “optimal”? In the context of our very artificial tasks, it must be acknowledged that we certainly did not evolve to forage for red spots on computer screens, even if the experimenter tells the observer to maximize yield or to exhaustively search. The observer's “optimal” behavior might be to complete the odd task with as little effort and as much speed as possible. Seen in those somewhat depressing terms, it may be considered a pleasant surprise that the results of these experiments are as orderly as they are and that the MVT serves as a useful description of the results of several of these experiments. 
These experiments have observers foraging in a realm of uniform, infinite resources. This leaves other large areas unexplored. For example, suppose that there are multiple target types in the same patch (Wolfe, 2012b). Birds, specifically blue jays, searching for digital moths, tend to search for one type until it becomes rare and then switch to another type (Bond & Kamil, 2002). Would human observers behave in a similar manner? There is some cost to switching from one target template to another (Maljkovic & Nakayama, 1994; Rangelov, Muller, & Zehetleitner, 2011; Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004). This can be thought of as an internal travel time that will vary with the difficulty of searching memory (Mayr & Kliegl, 2000). Leaving the search for one item to begin the search for another is a form of patch-leaving behavior (Hills et al., 2012). It will be interesting to see how this interacts with the external visual search. Suppose, for example, that the set of possible targets includes raspberries and apples (red ones, perhaps). If you are searching for raspberries at the moment and your eyes happen to light upon an apple, there are several possible consequences. You might miss the apple entirely (inattentional blindness of a sort; Mack, Tang, Tuma, & Kahn, 1992) because your search template is narrowly fixed on raspberries. You might "pick" the apple and continue searching for raspberries, or the apple might provoke an automatic task switch to search for apples (Beck, Hollingworth, & Luck, 2012). Depending on the relative costs for switching templates, selecting items, and moving between patches, it could be optimal to search for one target, then the next, or it could be optimal to determine if each item in the display matches any item in the memorized target set. Again, it will be interesting to discover if humans behave optimally in the sense of getting the greatest yield for their efforts. 
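One way to see how the costs traded off above could shape strategy is to treat the template-switch cost, as suggested, as an internal travel time. The sketch below compares two hypothetical strategies with made-up timing parameters; the logarithmic memory-search cost is loosely motivated by hybrid search (Wolfe, 2012b) but is not measured here.

```python
import math

def time_sequential(n_items, n_types, t_item=0.3, t_switch=1.0):
    """One exhaustive pass over the display per target type, paying a
    template-switch cost (an 'internal travel time') between passes."""
    return n_types * n_items * t_item + (n_types - 1) * t_switch

def time_hybrid(n_items, n_types, t_item=0.3, t_mem=0.1):
    """A single pass, checking each item against the whole memorized
    target set; per-item cost grows with the log of memory set size."""
    return n_items * (t_item + t_mem * math.log2(n_types))

# With a large display and few target types, the single hybrid pass is
# cheaper: the sequential strategy pays the display cost once per type.
assert time_hybrid(50, 2) < time_sequential(50, 2)
```

Which strategy yields more per unit time depends on the relative sizes of the switch cost, the memory-search cost, and the display, which is exactly the trade-off posed in the text.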
As discussed earlier, the real world tends not to be uniform in its distribution of resources. Experiments 5 and 6 looked at behavior in a world of nonuniform patches, finding quite rule-governed behavior but, again, raising many questions for future work. The world of Experiments 5 and 6 had a uniform distribution of patch qualities. How would behavior change if the mean probability of a good berry stayed fixed at 0.5 but the distribution of patch qualities changed? If only the mean were critical, a field composed of patches with target probabilities of 0.1 and 0.9 would produce the same behavior as a field composed of 0.4 and 0.6. This seems unlikely. Average yield from patches of unequal quality could be equated by making targets from poor patches worth more than targets from high-quality patches (something like a "supply-and-demand" valuation). If a berry from a patch with 20% good berries were worth four times a berry from an 80% patch, would patch-leaving times be equated? Such questions slide into the domain of behavioral economics (Glimcher & Rustichini, 2004). 
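The "supply-and-demand" valuation mentioned above amounts to choosing berry values inversely proportional to patch quality. A minimal sketch, with hypothetical numbers not drawn from the experiments:

```python
# P(good berry) -> value of a good berry, chosen so that p * value is
# constant across patch types (hypothetical numbers, for illustration).
patches = {0.2: 4.0, 0.8: 1.0}

# Expected value of selecting a random berry in each patch type.
expected_value = {p: p * value for p, value in patches.items()}

# Raw yield differs fourfold, but expected value per pick is equated,
# so a value-sensitive forager would have no MVT reason to prefer one
# patch type over the other.
assert expected_value[0.2] == expected_value[0.8]
```

Whether human patch-leaving times would actually equate under such a scheme is the open empirical question the text raises.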
In sum, humans engage in a great deal of visual foraging behavior. That behavior seems obviously rule-governed. The results of the six experiments reported here show that our observers changed rules depending on the specific conditions of the foraging task. It seems likely that we share the basis for our foraging decisions with other animals and it seems likely that there will be situations in our civilized world where those ancient rules are at odds with our modern desires. 
Acknowledgments
This research was supported by an ONR MURI (N000141010278), NIH-NEI (EY017001), and Google. I thank Jasper Danielson for his work on this project as part of the Research Science Institute. 
Commercial relationships: none. 
Corresponding author: Jeremy M. Wolfe. 
Email: wolfe@search.bwh.harvard.edu. 
Address: Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, Cambridge, MA, USA. 
References
Beck V. M. Hollingworth A. Luck S. J. (2012). Simultaneous control of attention by multiple working memory representations. Journal of Vision, 12 (9), 956, http://www.journalofvision.org/content/12/9/956, doi:10.1167/12.9.956. [Abstract] [CrossRef]
Bond A. B. Kamil A. C. (2002). Visual predators select for crypticity and polymorphism in virtual prey. Nature, 415 (6872), 609–613. [CrossRef] [PubMed]
Cain M. S. Vul E. Clark K. Mitroff S. R. (2011). A Bayesian optimal foraging model of human visual search. Psychological Science, 23, 1047–1054, doi:10.1177/0956797612440460. [CrossRef]
Charnov E. L. (1976). Optimal foraging, the marginal value theorem. Theoretical Population Biology, 9, 129–136. [CrossRef] [PubMed]
Eckstein M. P. (2011). Visual search: A retrospective. Journal of Vision, 11 (5): 14, 1–36, http://www.journalofvision.org/content/11/5/14, doi:10.1167/11.5.14. [PubMed] [Article] [CrossRef] [PubMed]
Ehinger K. A. Hidalgo-Sotelo B. Torralba A. Oliva A. (2009). Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17 (6), 945–978. [CrossRef] [PubMed]
Elazary L. Itti L. (2010). A Bayesian model for efficient visual search and recognition. Vision Research, 50 (14), 1338–1352, doi:10.1016/j.visres.2010.01.002. [CrossRef] [PubMed]
Glimcher P. W. Rustichini A. (2004). Neuroeconomics: The consilience of brain and decision. Science, 306 (5695), 447–452. [CrossRef] [PubMed]
Hills T. T. (2006). Animal foraging and the evolution of goal-directed cognition. Cognitive Science, 30, 3–41. [CrossRef] [PubMed]
Hills T. T. Jones M. N. Todd P. M. (2012). Optimal foraging in semantic memory. Psychological Review, 119 (2), 431–440. doi:10.1037/a0027373. [CrossRef] [PubMed]
Hutchinson J. M. C. Wilke A. Todd P. M. (2008). Patch leaving in humans: Can a generalist adapt its rules to dispersal of items across patches? Animal Behaviour, 75, 1331–1349, doi:10.1016/j.anbehav.2007.09.006. [CrossRef]
Klein R. M. MacInnes W. J. (1999). Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10 (July), 346–352. [CrossRef]
Mack A. Tang B. Tuma R. Kahn S. (1992). Perceptual organization and attention. Cognitive Psychology, 24, 475–501. [CrossRef] [PubMed]
Maljkovic V. Nakayama K. (1994). Priming of popout: I. Role of features. Memory & Cognition, 22 (6), 657–672. [CrossRef] [PubMed]
Mayr U. Kliegl R. (2000). Task-set switching and long-term memory retrieval. Journal of Experimental Psychology: Learning, Memory, & Cognition, 26 (5), 1124–1140. [CrossRef]
Mozer M. C. Baldwin D. (2008). Experience-guided search: A theory of attentional control. In Platt D. K. J. Singer Y. (Eds.), Advances in neural information processing systems (pp. 1033–1040). Cambridge, MA: MIT Press.
Nakayama K. Martini P. (2011). Situating visual search. Vision Research, 51 (13), 1526–1537, doi:10.1016/j.visres.2010.09.003. [CrossRef] [PubMed]
Pierce G. J. Ollason J. G. (1987). Eight reasons why optimal foraging theory is a complete waste of time. Oikos, 49, 111–118. [CrossRef]
Pirolli P. (1997). Computational models of information scent-following in a very large browsable text collection. Paper presented at the CHI 1997 Conference on Human Factors in Computing Systems, Atlanta, GA.
Pirolli P. (2007). Information foraging theory. New York: Oxford University Press.
Pirolli P. Card S. (1999). Information foraging. Psychological Review, 106 (4), 643–675. [CrossRef]
Posner M. I. Cohen Y. (1984). Components of attention. In Bouma H. Bouwhuis D. G. (Eds.), Attention and performance X (pp. 55–66). Hillside, NJ: Erlbaum.
Pyke G. H. Pulliam H. R. Charnov E. L. (1977). Optimal foraging: A selective review of theory and tests. Quarterly Review of Biology, 52 (2), 137–154. [CrossRef]
Rangelov D. Muller H. J. Zehetleitner M. (2011). Dimension-specific intertrial priming effects are task-specific: Evidence for multiple weighting systems. Journal of Experimental Psychology: Human Perception & Performance, 37 (1), 100–114. doi:10.1037/a0020364. [CrossRef]
Smith A. D. Hood B. M. Gilchrist I. D. (2008). Visual search and foraging compared in a large-scale search task. Cognitive processing (Vol. 9, pp. 121–126). Heidelberg: Springer. [CrossRef]
Smith T. J. Henderson J. M. (2011). Looking back at Waldo: Oculomotor inhibition of return does not prevent return fixations. Journal of Vision, 11 (1): 3, 1–11, http://www.journalofvision.org/content/11/1/3, doi:10.1167/11.1.3. [PubMed] [Article] [CrossRef] [PubMed]
Stephens D. W. Brown J. S. Ydenberg R. C. (2007). Foraging: Behavior and ecology. Chicago: University of Chicago Press.
Stephens D. W. Krebs J. R. (1986). Foraging theory. Princeton, NJ: Princeton University Press.
Thomas L. E. Ambinder M. S. Hsieh B. Levinthal B. Crowell J. A. Irwin D. E. (2005). Fruitful visual search: Inhibition of return in a virtual foraging task. Psychonomic Bulletin & Review, 13 (5), 891–895.
Thornton T. L. Gilden D. L. (2007). Parallel and serial processes in visual search. Psychological Review, 114 (1), 71–103. [CrossRef] [PubMed]
Townsend J. T. Wenger M. J. (2004). The serial-parallel dilemma: A case study in a linkage of theory and method. Psychonomic Bulletin & Review, 11 (3), 391–418. [CrossRef] [PubMed]
Verghese P. (2001). Visual search and attention: A signal detection approach. Neuron, 31, 523–535. [CrossRef] [PubMed]
Viswanathan G. M. Buldyrev S. V. Havlin S. Da Luz M. G. E. Stanley H. E. (1999). Optimizing the success of random searches. Nature, 401, 911–914. [CrossRef] [PubMed]
Wilke A. Hutchinson J. M. C. Todd P. M. Czienskowski U. (2009). Fishing for the right words: Decision rules for human foraging behavior in internal search tasks. Cognitive Science, 33 (3), 497–529, doi:10.1111/j.1551-6709.2009.01020.x. [CrossRef] [PubMed]
Wolfe J. Horowitz T. Kenner N. M. Hyle M. Vasan N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research, 44 (12), 1411–1426. [CrossRef] [PubMed]
Wolfe J. M. (1998). Visual search. In Pashler H. (Ed.), Attention (pp. 13–74). Hove, East Sussex, UK: Psychology Press Ltd.
Wolfe J. M. (2007). Guided search 4.0: Current progress with a model of visual search. In Gray W. (Ed.), Integrated models of cognitive systems (pp. 99–119). New York: Oxford University Press.
Wolfe J. M. (2010). Visual search. Current Biology, 20 (8), R346–R349, doi:10.1016/j.cub.2010.02.016. [CrossRef] [PubMed]
Wolfe J. M. (2012a). Approaches to visual search: Feature Integration Theory and Guided Search 2012. In Kastner S. (Ed.), Oxford handbook of attention. New York: Oxford University Press.
Wolfe J. M. (2012b). Saved by a log: How do humans perform hybrid visual and memory search? Psychological Science, 23 (7), 698–703. doi:10.1177/0956797612443968. [CrossRef]
Wolfe J. M. (2012c). Visual search. In Todd P. M. Hills T. T. Robbins T. W. (Eds.), Cognitive search: Evolution, algorithms, and the brain (pp. 159–176). Cambridge, MA: MIT Press.
Wolfe J. M. Cave K. R. Franzel S. L. (1989). Guided Search: An alternative to the Feature Integration model for visual search. Journal of Experimental Psychology: Human Perception & Performance, 15, 419–433. [CrossRef]
Wolfe J. M. Reynolds J. H. (2008). Visual search. In Basbaum A. I. Kaneko A. Shepherd G. M. Westheimer G. (Eds.), The senses: A comprehensive reference (Vol. 2, Vision II, pp. 275–280). San Diego: Academic Press.
Zelinsky G. (2008). A theory of eye movements during target acquisition. Psychological Review, 115 (4), 787–835. [CrossRef] [PubMed]
Figure 1
 
A Massachusetts blueberry farm would like you to search exhaustively even if Optimal Foraging Theory predicts otherwise. Reprinted by permission of Turkey Hill Farm, Haverhill, MA 01830.
Figure 2
 
Stimulus configuration for Experiment 1: Modified screenshot.
Figure 3
 
RT as a function of the “reverse click order.” Click 1 is the final click in a patch. HF = Hard, Fast condition; HS = Hard, Slow; EF = Easy, Fast; ES = Easy, Slow. Solid lines are averages of nine observers. Error bars show ±1 SEM.
Figure 4
 
Picked berry color as a function of the “forward click order.” Here, position 1 is the first click in the patch.
Figure 5
 
Positive predictive value (PPV) of each click, averaged from the final click in a patch. In this case, PPV = “good” berries/berries picked.
Figure 6
 
Instantaneous (solid lines) versus overall (dashed) rates of return for each of the four conditions of Experiment 1. Error bars = ±1 SEM.
Figure 7
 
z(Hit) as a function of z(False Alarm) for each subject in each condition of Experiment 2. Note that this is a portion of the ROC space. The d' = 0 line is shown in the lower right.
Figure 8
 
Rate (berries/s) for the last seven berries picked in each condition. Here those last clicks are plotted against time since the start of picking in that patch. Error bars are ±1 SEM.
Figure 9
 
Screen shot of stimuli for Experiment 3.
Figure 10
 
Average (± SEM) instantaneous rate for the final nine clicks for each of the four conditions of Experiment 3. Dashed lines show average rate of return for each condition. Slow travel conditions are shown with outlined symbols and coarsely dashed lines for the average rate.
Figure 11
 
Rate of return for the last seven berries picked as a function of the time in the patch for that selection. Data are averaged over all observers and set sizes. Error bars are ±1 SEM. Slower travel (dark red) leads to longer time in each patch than fast travel (light green). In the fast travel condition, observers leave the patch when the rate reaches the average rate (dashed line). However, in the slow travel condition, observers leave the patch well before they reach the average rate.
Figure 12
 
Rate of return as a function of set size. Averages work backward from the final berry picked in a patch. Data are averaged over all observers with different brightness/color lines showing different set sizes. Error bars are ±1 SEM. The upper panel shows the fast travel condition, and the lower panel shows the slow condition. Dashed lines show the average rate of return for the block.
Figure 13
 
Instantaneous rate as a function of time in patch. Data averaged over 10 observers for the last 10 selections. Dashed lines show average rates for the fast (light green) and slow (dark red) blocks.
Figure 14
 
Instantaneous rate as a function of time in patch shown for each level of patch quality (probability that a berry is a good berry). Data averaged over 10 observers relative to the last selection in the patch. Dashed lines show average rates for the fast and slow travel time blocks.
Figure 15
 
Probability of clicking on a “berry” as a function of the probability that a given berry is a target (patch quality). Filled green symbols show individual observer data for travel time of 1 s. Open green circles show the average of those data. Solid black line is the linear regression of those data. Dashed purple line shows the linear regression for travel time of 10 s. The dotted line has a slope of 1.0, representing perfect probability match behavior.
Figure 16
 
Rate in berries per second as a function of average time in patch for Experiment 6. Different curves represent different probabilities of a “good” berry. Points are plotted for those conditions where there was such a click for 75% of patches, across observers.
Figure 17
 
Probability of selecting an item P(click) as a function of the probability that the item will be a target (patch quality, P[target]). Smaller, paler data points represent individual observers. Large, dark points show average data. The solid line is the best-fit regression line. It is very close to the dashed line, showing the predictions of perfect probability matching.
Figure 18
 
False alarms (bad berries) per patch as a function of patch quality.