Physics 380, 2011: Lecture 13

From Ilya Nemenman: Theoretical Biophysics @ Emory

Back to the main Teaching page.

Back to Physics 380, 2011: Information Processing in Biology. This is the last lecture in the information theory block.

In-class presentation

Xiang Cheng presents the article by Pedraza and van Oudenaarden, 2005.

Warmup question

I am showing you a diagram of cellular information processing pathways of a certain kind (MAPK pathways -- more on them later), courtesy of Jim Faeder (U Pittsburgh). We know how to measure the amount of information that travels along these pathways -- we need to estimate the mutual information between the quantities of interest. But can information-theoretic ideas also help us understand which of these pathways contribute more to the information processing than the others and, therefore, need attention first?

Main lecture

  1. Information theory provides a measure for characterizing the quality of input-output relations. In addition, through the data processing inequality, it also provides a way of unambiguously reducing the dimensionality of a modeled biological system.
    • Indeed, say we have a high-dimensional signal <math>x</math> and response <math>y</math>. There's a certain mutual information between these, <math>I(x;y)</math>. If we propose a reduction of the signal and response to <math>\tilde{x}=f(x)</math> and <math>\tilde{y}=g(y)</math>, then <math>I(\tilde{x};\tilde{y})\le I(x;y)</math> by the data processing inequality.
  2. We can, for example, solve problems like the following: which of the inputs are informative about the output (and hence need to be accounted for in a model)? We omit different subsets of the inputs <math>\{x_i\}</math>, calculate <math>I(\{x_{j\ne i}\};y)</math>, and calculate the error due to omitting this signal, <math>\Delta I_i = I(\{x\};y)-I(\{x_{j\ne i}\};y)</math>. Those components that have a small <math>\Delta I_i</math> can be safely neglected (a numerical sketch of this procedure appears after this list). This type of analysis can be used, for example, to understand which features of the neural code are important. In the subsequent presentation by Farhan, we will hear about using this trick to understand whether the high precision of neural spikes is important or not.
  3. We have discussed the problem of lossless coding earlier in the class. What if one is willing to transmit the message with errors, but still wants to reconstruct it with a small loss? This is Shannon's rate distortion, or lossy coding, theorem. A good place to look this up is the book by Cover and Thomas.
    • Suppose there's a loss function <math>L(x,\tilde{x})</math> for recovering a value of the signal <math>\tilde{x}</math> when, in fact, it should have been <math>x</math>.
    • One can encode the signal as <math>P(\tilde{x}|x)</math>, and the number of bits one would need to store this encoding would be <math>I(x;\tilde{x})</math>.
    • The average loss experienced would be <math>\langle L\rangle=\sum_{x,\tilde{x}}P(x)\,P(\tilde{x}|x)\,L(x,\tilde{x})</math>.
    • We are interested in the shortest, most compressed encoding, that is, in a small <math>I(x;\tilde{x})</math>, and yet we want to have the smallest loss <math>\langle L\rangle</math> as well. We can then choose <math>P(\tilde{x}|x)</math> to minimize <math>F=I(x;\tilde{x})+\beta\langle L\rangle</math>, where <math>\beta</math> is an arbitrary constant that controls how much we value compression over quality (think of different bitrates in mp3 coding).
    • We minimize <math>F</math> over all <math>P(\tilde{x}|x)</math>. This can only be done numerically, but there are efficient algorithms (namely, the Blahut-Arimoto algorithm described in the Cover and Thomas textbook) to do this; a sketch of such an iteration appears after this list.
  4. In some cases, it is unclear how to define the loss function <math>L(x,\tilde{x})</math>. But maybe instead there's the following setup.
    • We observe <math>x</math> and compress it to <math>\tilde{x}</math> by <math>P(\tilde{x}|x)</math>. However, we really care not about <math>x</math>, but about some other relevant variable <math>y</math>, given by <math>P(y|x)</math>.
    • With compression, <math>I(\tilde{x};y)\le I(x;y)</math>.
    • We can now maximize the information the compressed variable has about the relevant variable, while maximizing the compression itself. That is, we want to maximize <math>F=I(\tilde{x};y)-\beta I(x;\tilde{x})</math>, where <math>\beta</math> is again a control parameter.
    • This should also be done numerically, with the same type of Blahut-Arimoto iterations. This approach is known as the Information Bottleneck method (see Tishby et al., 2000); a sketch appears after this list as well.
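
The following is a minimal numerical sketch of the input-omission idea from point 2, assuming the joint distribution of the inputs and the output has already been estimated as a discrete probability table. The array p, the binary inputs, and the function names are illustrative, not from the lecture.

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information (in bits) of a joint probability table p_xy[x, y]."""
    p_xy = p_xy / p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# Hypothetical example: two binary inputs (x1, x2) and one binary output y.
# p[x1, x2, y] is a made-up joint distribution; replace with estimated frequencies.
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

# Full information: treat the pair (x1, x2) as a single input with 4 states.
I_full = mutual_information(p.reshape(4, 2))

# Information left after omitting each input (marginalize it out),
# and the corresponding loss Delta I_i.
for i, name in enumerate(["x1", "x2"]):
    p_reduced = p.sum(axis=i)            # marginalize over the omitted input
    I_reduced = mutual_information(p_reduced)
    print(f"omit {name}: Delta I = {I_full - I_reduced:.4f} bits")
```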
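Below is a sketch of the Blahut-Arimoto iteration for the rate-distortion problem of point 3, minimizing <math>I(x;\tilde{x})+\beta\langle L\rangle</math> for a discrete source. The example source distribution and quadratic loss matrix are made up for illustration.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Blahut-Arimoto iteration for the rate-distortion trade-off.

    p_x  : marginal distribution of the source, shape (nx,)
    d    : loss (distortion) matrix d[x, xt], shape (nx, nt)
    beta : trade-off parameter; larger beta favors low loss over compression
    Returns the encoder q(xt|x), the rate I(x;xt) in bits, and the mean loss.
    """
    nx, nt = d.shape
    r = np.full(nt, 1.0 / nt)                      # marginal of the reproduction xt
    for _ in range(n_iter):
        q = r[None, :] * np.exp(-beta * d)         # unnormalized q(xt|x)
        q /= q.sum(axis=1, keepdims=True)
        r = p_x @ q                                # update marginal of xt
    rate = np.sum(p_x[:, None] * q * np.log2((q + 1e-300) / (r[None, :] + 1e-300)))
    mean_loss = np.sum(p_x[:, None] * q * d)
    return q, rate, mean_loss

# Hypothetical example: a 4-state signal compressed to 2 reproduction values,
# with a made-up quadratic loss; sweep beta to trace the rate-distortion curve.
p_x = np.array([0.4, 0.3, 0.2, 0.1])
x_vals, xt_vals = np.arange(4.0), np.array([0.5, 2.5])
d = (x_vals[:, None] - xt_vals[None, :]) ** 2
for beta in (0.1, 1.0, 10.0):
    _, R, D = blahut_arimoto(p_x, d, beta)
    print(f"beta={beta:5.1f}  rate={R:.3f} bits  mean loss={D:.3f}")
```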
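Finally, a sketch of the self-consistent, Blahut-Arimoto-style iterations for the Information Bottleneck of point 4. The joint table p(x,y), the number of clusters, and the value of beta are illustrative choices, not values from the lecture or from Tishby et al.

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=300, seed=0):
    """Iterative (Blahut-Arimoto-style) solution of the Information Bottleneck.

    p_xy       : joint distribution p(x, y), shape (nx, ny)
    n_clusters : number of values the compressed variable xt can take
    beta       : trade-off parameter; larger beta keeps more information about y
    Returns the encoder p(xt|x) and the informations I(x;xt), I(xt;y) in bits.
    """
    p_xy = p_xy / p_xy.sum()
    p_x = p_xy.sum(axis=1)                          # p(x)
    p_y_x = p_xy / p_x[:, None]                     # p(y|x)

    rng = np.random.default_rng(seed)
    q = rng.random((len(p_x), n_clusters))          # p(xt|x), random initialization
    q /= q.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_x @ q                               # p(xt)
        p_y_t = (q * p_x[:, None]).T @ p_y_x / p_t[:, None]   # p(y|xt)
        # KL divergence D[p(y|x) || p(y|xt)] for every (x, xt) pair
        # (natural log; the base only rescales beta)
        kl = np.sum(p_y_x[:, None, :] *
                    np.log((p_y_x[:, None, :] + 1e-300) / (p_y_t[None, :, :] + 1e-300)),
                    axis=2)
        q = p_t[None, :] * np.exp(-beta * kl)       # self-consistent update of p(xt|x)
        q /= q.sum(axis=1, keepdims=True)

    def mi(p_ab):
        pa, pb = p_ab.sum(1, keepdims=True), p_ab.sum(0, keepdims=True)
        m = p_ab > 0
        return float(np.sum(p_ab[m] * np.log2(p_ab[m] / (pa @ pb)[m])))

    p_xt = q * p_x[:, None]                         # joint p(x, xt)
    p_ty = (q * p_x[:, None]).T @ p_y_x             # joint p(xt, y)
    return q, mi(p_xt), mi(p_ty)

# Hypothetical example with a made-up 8x4 joint table p(x, y).
p_xy = np.random.default_rng(1).random((8, 4))
q, I_xt, I_ty = information_bottleneck(p_xy, n_clusters=3, beta=5.0)
print(f"I(x;xt) = {I_xt:.3f} bits, I(xt;y) = {I_ty:.3f} bits")
```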