Abstract
In a recent issue of Nature Communications, Harrison, Bays, and Rideaux1 use electroencephalography (EEG) to infer population tuning properties of human visual cortex, delivering a major update to existing knowledge about the most elemental building block of visual perception: orientation tuning. Using EEG together with simulations in an approach they refer to as "generative forward modeling", the authors adjudicate between two competing population tuning schemes for orientation in visual cortex. They claim that a redistribution of orientation tuning curves can explain their observed pattern of EEG results, and that this tuning scheme embeds a prior of natural image statistics that exhibits a previously undiscovered anisotropy between vertical and horizontal orientations. If correct, this approach could become widely used to find unique neural coding solutions from population response data (e.g., from EEG) and to yield a "true" population tuning scheme deemed generalizable to other instances. However, here we identify major flaws that invalidate the promise of this approach, which we argue should not be used at all. First, we examine the premise of Harrison and colleagues1; we then explain why "generative forward modeling" cannot circumvent model-mimicry pitfalls and can deliver many possible solutions of unknowable correctness. Finally, we present a tentative alternative explanation for the data.
Conflict of interest
The authors declare no conflict of interest.
Publisher
Cold Spring Harbor Laboratory