Furthermore, we would like to highlight the general difficulty of estimating a dimension when its value is high. Intuitively, a space is
\(d\)-dimensional if its points cover a (small) \(d\)-dimensional cube. However, the number of points needed to cover a
\(d\)-dimensional cube grows exponentially with the dimension
\(d\). To see this, imagine 10 data points that cover the one-dimensional interval \([0,1]\), for example, the grid points
\(0.1, 0.2, \dots, 1.0\). To cover a two-dimensional cube similarly well, we would already need
\(10 \times 10 = 100\) data points. In general, to cover a
\(d\)-dimensional cube we would need on the order of
\(10^d\) points. This fact makes it very difficult to estimate the dimension from a sample of points when
\(d\) is large. It is practically impossible to have enough sample points to distinguish between, say, a space of 50 and a space of 51 dimensions: our sample points will cover neither a cube of 50 dimensions nor one of 51, making every such estimate (or corresponding test) utterly unreliable. A more formal argument for the difficulty of estimating high dimensions can be found in
Block, Jia, Polyanskiy, & Rakhlin (2021). Consequently, although it is entirely possible in psychophysics to discriminate a two-dimensional from a three-dimensional space, it seems all but impossible to discriminate between, say, a 50-dimensional and a 51-dimensional space, or even a 50-dimensional and a 60-dimensional one. Even in a setting with very low noise, the high-dimensional scenario would require a prohibitively large number of data points (stimuli) and triplet trials for dimensionality estimation. In psychophysics, it might often be better to avoid high-dimensional spaces from the outset by using well-designed stimuli that observers judge by a few criteria.
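To make the exponential blow-up concrete, the following minimal Python sketch (an illustration under assumed parameters, not part of the original argument; the sample size of 10,000 and the grid resolution of 0.1 simply echo the example above) counts how many cells of a regular grid on \([0,1]^d\) are actually hit by a uniform random sample:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 10_000   # generous by the standards of a psychophysical experiment
cell_width = 0.1     # grid resolution, matching the example above

for d in (1, 2, 3, 5, 10):
    total_cells = 10 ** d                      # cells of side 0.1 in [0, 1]^d
    points = rng.uniform(size=(n_samples, d))  # uniform sample in the cube
    # Map each point to the integer coordinates of the grid cell containing it.
    cell_ids = np.floor(points / cell_width).astype(int)
    covered = len({tuple(row) for row in cell_ids})
    print(f"d = {d:2d}: {covered:6d} of 10^{d} cells covered "
          f"(fraction {covered / total_cells:.1e})")
```

Already at \(d = 10\), a sample far larger than any realistic experiment touches only about one in a million of the \(10^{10}\) cells; at \(d = 50\) the grid has \(10^{50}\) cells, which is the geometric reason why dimension estimates in that regime cannot be reliable.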