Abstract
In recent years, new challenges have emerged for vision scientists. First, the growing awareness that most of our theories rest on samples that do not reflect human diversity – namely, Western, Educated, Industrialized, Rich, and Democratic (WEIRD) samples – stresses the need to reach participants as diverse as possible in future studies. Second, the COVID-19 pandemic has placed severe constraints on our capacity to bring participants to the lab. In response to these challenges, new technologies have been developed to let researchers collect data over the internet. These technologies, however, are often ill-suited to the experimental paradigms developed in our field. For instance, they are often not designed to modify stimuli as a function of participants’ responses. Moreover, they are poorly adapted to data-driven classification-image methods. In the present study, we tested a new platform for online experiments, Pack&Go from VPixx Technologies, which runs Matlab/Psychtoolbox3 experiments online. We tested participants on a categorization task using Bubbles, a data-driven method that reveals which visual information observers use. In Phase 1, participants were tested in the lab in three conditions of 1,000 trials each: 1) the experimental code was run locally; 2) the experiment was run on the same hardware, but the code was executed by Pack&Go; 3) the experiment was run on a different computer, with the code executed by Pack&Go. In Phase 2, we tested participants who were recruited through a panel provider (Prolific) and completed the experiment from home using Pack&Go. The exact same experimental code was used in all conditions, making the results directly comparable. The pattern of findings replicated well across conditions. The pros and cons of running data-driven methods online are discussed.
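To make concrete the kind of trial-by-trial stimulus generation that an online platform must reproduce, the sketch below shows a simplified, single-scale version of Bubbles masking in plain Matlab. It is an illustration only, not the study's code: the image file name, the number of bubbles, the aperture size, and the background luminance are arbitrary placeholders, and the full Bubbles technique samples information at several spatial-frequency scales rather than one.

```matlab
% Simplified single-scale Bubbles masking (illustrative sketch; all
% parameter values below are assumptions, not those used in the study).
img = double(imread('face.png')) / 255;   % hypothetical grayscale stimulus
if size(img, 3) > 1
    img = mean(img, 3);                   % collapse RGB to one channel
end
[h, w]   = size(img);
nBubbles = 30;                            % number of Gaussian apertures (assumed)
sigma    = 12;                            % aperture std in pixels (assumed)

[X, Y] = meshgrid(1:w, 1:h);
mask   = zeros(h, w);
for b = 1:nBubbles
    cx = rand * w;                        % random bubble centre, x
    cy = rand * h;                        % random bubble centre, y
    mask = mask + exp(-((X - cx).^2 + (Y - cy).^2) / (2 * sigma^2));
end
mask = min(mask, 1);                      % cap overlapping apertures at 1

midGrey  = 0.5;                           % assumed background luminance
stimulus = mask .* img + (1 - mask) .* midGrey;  % reveal only the bubbled regions

imagesc(stimulus); colormap gray; axis image off;
```

Because the mask is resampled on every trial and later regressed against the observer's responses, the same random-sampling and rendering code must run identically whether it executes locally or through Pack&Go, which is what the three Phase 1 conditions were designed to verify.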