Gintautas et al., 2011

From Ilya Nemenman: Theoretical Biophysics @ Emory


V Gintautas, M Ham, B Kunsberg, S Barr, S Brumby, C Rasmussen, J George, I Nemenman, L Bettencourt, G Kenyon. Model cortical association fields account for the time course and dependence on target complexity of human contour perception. PLoS Comput Biol 7, e1002162, 2011. PDF, arXiv.

Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this, we create a synthetic image set that prevents sole reliance on either local visual features or high-level context for the detection of target objects. Images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans were measured using a two-alternative forced-choice experiment with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by single exponentials with time constants of 50-175 ms, depending on amoeba complexity. The results of the psychophysics experiments agreed with predictions of a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and clutter images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes about 15 to 31 ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early and intermediate visual areas can account for a rich set of psychometric curves characterizing human contour perception.
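The core model idea described above can be illustrated with a minimal sketch: orientation-selective responses on a retinotopic array, iteratively modulated by a multiplicative association-field kernel. This is a hypothetical toy illustration, not the authors' code; the co-alignment kernel below stands in for the kernels computed in the paper from target-versus-clutter pairwise edge statistics, and all function names and parameters are assumptions.

```python
import numpy as np

def association_kernel(n_orient, sharpness=2.0):
    # Toy association field favoring co-aligned (smoothly varying) edge
    # orientations; a stand-in for the paper's kernels derived from the
    # differences in pairwise edge statistics between targets and clutter.
    theta = np.arange(n_orient) * np.pi / n_orient
    dtheta = theta[:, None] - theta[None, :]
    return np.exp(sharpness * (np.cos(2 * dtheta) - 1.0))

def iterate_lateral(responses, kernel, n_iter=5):
    # responses: (n_positions, n_orient) nonnegative activations of
    # orientation-selective elements on a retinotopic array.
    r = responses.copy()
    for _ in range(n_iter):
        support = r @ kernel                         # lateral support from co-aligned edges
        r = r * support                              # multiplicative modulation
        r /= r.sum(axis=1, keepdims=True) + 1e-12    # per-position normalization
    return r
```

In this sketch, each call through the loop is one iteration of the lateral interactions (which the paper estimates takes roughly 15-31 ms of cortical processing time), and activations consistent with a globally aligned contour gain support relative to randomly rotated clutter.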