Visual neural computation

From Ilya Nemenman: Theoretical Biophysics @ Emory


Last updated on 15 January 2012.


Neural dynamics

What is this about?

Despite all the progress in neuroscience, dynamical systems, and computer science, we still cannot design algorithms that identify objects in complex visual scenes with the accuracy of a human subject. It turns out that a hard part of this task is segmentation -- separating a visual scene into (potentially partially overlapping) objects. In collaboration with scientists at Los Alamos, we aim to take the next step in designing artificial neural networks that can segment complex natural images. This project is related to our more general studies of learning and adaptation.

Our approach is based on Einstein's famous dictum to "make things as simple as possible, but not simpler". We introduce neural organization features observed in experiments into our computational models and test the performance improvement contributed by each feature, one by one. The features we are testing are geometrically motivated lateral connectivity of neurons in the primary visual cortex, neural spiking that allows synchronization of neurons driven by images of the same object, and learning by means of spike-timing dependent plasticity.
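To illustrate the last of these features, here is a minimal sketch of a standard pair-based spike-timing dependent plasticity rule. The function name, amplitudes, and time constant are illustrative assumptions for exposition, not values from our models: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, and the reverse ordering weakens it.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP update for a synaptic weight w in [0, 1].

    dt = t_post - t_pre (ms): pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, with exponential decay of the
    effect as |dt| grows. Parameter values are purely illustrative.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)
    else:
        dw = -a_minus * np.exp(dt / tau)
    # keep the weight bounded
    return float(np.clip(w + dw, 0.0, 1.0))
```

The slight asymmetry (a_minus > a_plus) is a common choice that keeps runaway potentiation in check, though many variants exist.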

The image to the left, from Gintautas et al., 2011, shows how lateral co-circular neural connectivity suppresses clutter in an image, until only the longest contours survive. Two original images are shown in the first row, and each subsequent row represents one more cycle of lateral feedback.
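The iterative decluttering idea can be sketched in a toy form. In this hypothetical example (the function names, the Gaussian tuning widths, and the keep fraction are all assumptions for illustration, not the model of the paper), each oriented edge element accumulates support from neighbors whose positions and orientations are consistent with a shared smooth (co-circular) contour, and the least-supported elements are dropped on each feedback cycle:

```python
import numpy as np

def cocircular_support(edges, sigma_d=2.0, sigma_theta=0.3):
    """One cycle of lateral support for oriented edge elements.

    edges: list of (x, y, theta) tuples, theta a tangent direction in
    radians. Support falls off with distance and with deviation from
    co-circularity (the two tangents making equal and opposite angles
    with the connecting chord). All parameters are illustrative.
    """
    support = np.zeros(len(edges))
    for i, (xi, yi, ti) in enumerate(edges):
        for j, (xj, yj, tj) in enumerate(edges):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = np.hypot(dx, dy)
            phi = np.arctan2(dy, dx)  # direction of the connecting chord
            # co-circularity: (ti - phi) + (tj - phi) = 0; wrap to [-pi, pi)
            mismatch = abs(((ti - phi) + (tj - phi) + np.pi) % (2 * np.pi) - np.pi)
            support[i] += (np.exp(-d**2 / (2 * sigma_d**2))
                           * np.exp(-mismatch**2 / (2 * sigma_theta**2)))
    return support

def declutter(edges, cycles=3, keep_frac=0.7):
    """Iteratively drop the least-supported edges; smooth contours survive."""
    edges = list(edges)
    for _ in range(cycles):
        s = cocircular_support(edges)
        thresh = np.quantile(s, 1 - keep_frac)
        edges = [e for e, si in zip(edges, s) if si >= thresh]
    return edges
```

Running this on a few collinear edge elements plus an isolated, misoriented one removes the outlier while the contour survives, which is the qualitative behavior shown in the figure.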

Results

In a recent paper, Gintautas et al., 2011, we showed that co-circular lateral connectivity allows decluttering of images and identification of complex, intermittent contours at speeds comparable to those of a trained human subject.

Open problems

As always, we are interested in asking whether a phenomenological, coarse-grained model of the process can be built. That is, can we model the system without resolving single neurons, while still reproducing neural synchronization and co-activation? We believe this is doable, and some interesting mathematical connections with a statistical version of string theory should allow us to do it. In parallel, we would like to test human subjects on various image segmentation tasks to understand clearly which features of an image make it hard or easy for a human to segment.

I am looking for students and postdocs to help with these projects.