Physics 380, 2010: Information Theory

From Ilya Nemenman: Theoretical Biophysics @ Emory

Back to Physics 380, 2010: Information Processing in Biology.

Lectures 6 and 7

In these lectures, we cover some background on information theory. A good physics-style introduction to this material can be found in the upcoming book by Bialek (Bialek 2010).

  • Setting up the problem: How do we measure information transmitted by a biological signaling system?
  • Shannon's axioms and the derivation of entropy: if a variable $x$ is observed from a distribution $\{p_i\}$, then the amount of information $S[\{p_i\}]$ we gain from this observation must obey the following properties.
    1. If the distribution is uniform over $K$ outcomes, then the measure of information grows monotonically with the cardinality $K$.
    2. The measure of information must be a continuous function of the distribution $\{p_i\}$.
    3. The measure of information is additive. That is, for a fine-graining of the outcomes $p_i$ into $p_{ij}$ (with $p_i = \sum_j p_{ij}$), we should have $S[\{p_{ij}\}] = S[\{p_i\}] + \sum_i p_i\, S[\{p_{ij}/p_i\}]$.

Up to a multiplicative constant, the measure of information is then $S[\{p_i\}] = -\sum_i p_i \log p_i$, which is also called the Boltzmann-Shannon entropy. We fix the constant by defining the entropy of a uniform binary distribution to be 1. Then $S[\{p_i\}] = -\sum_i p_i \log_2 p_i$. The entropy is then measured in bits.
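
As a quick illustration (this Octave/Matlab sketch is our own addition, not part of the lecture; the function name entropy_bits is arbitrary), the code below evaluates $S = -\sum_i p_i \log_2 p_i$ and confirms that the uniform binary distribution carries exactly 1 bit.

    % entropy_bits.m -- Boltzmann-Shannon entropy, in bits, of a discrete distribution.
    % (Illustrative sketch; save as entropy_bits.m to call it from the Octave/Matlab prompt.)
    function S = entropy_bits(p)
      p = p(:) / sum(p);                 % normalize, just in case
      nz = (p > 0);                      % use the convention 0*log2(0) = 0
      S = -sum(p(nz) .* log2(p(nz)));
    end

    % Examples:
    %   entropy_bits([0.5 0.5])    % -> 1 bit  (uniform binary distribution)
    %   entropy_bits([1 0 0])      % -> 0 bits (deterministic variable)
    %   entropy_bits(ones(1,8)/8)  % -> 3 bits (uniform over 8 outcomes)

The three examples also illustrate the bounds on the entropy listed below.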

  • Meaning of entropy: Entropy of 1 bit means that we have gained enough information to answer one yes or no (binary) question about the variable $x$.
  • Properties of entropy (positive, bounded, concave):
    1. $0 \le S[\{p_i\}] \le \log_2 K$, where $K$ is the cardinality of the distribution. Moreover, the first inequality becomes an equality iff the variable is deterministic (that is, one event has a probability of 1), and the second inequality is an equality iff the distribution is uniform.
    2. Entropy is a concave function of the distribution.
    3. Entropies of independent variables add.
    4. Entropy is an extensive quantity: for a joint distribution $P(x_1, \dots, x_N)$, we can define an entropy rate $\mathcal{S} = \lim_{N\to\infty} S[P(x_1, \dots, x_N)]/N$.
  • Differential entropy: a continuous variable can be discretized with a step $\Delta x$, and then the entropy is $S \approx -\int dx\, P(x) \log_2 P(x) - \log_2 \Delta x$. This formally diverges at fine discretization, $\Delta x \to 0$: we need infinitely many bits to fully specify a continuous variable. The integral in the above expression is called the differential entropy, and whenever we write $S[x]$ for continuous variables, we mean the differential entropy.
  • Entropy of a normal distribution with variance $\sigma^2$ is $S = \frac{1}{2} \log_2 (2\pi e \sigma^2)$.
  • Multivariate entropy is defined with summation/integration of log-probability over multiple variables, cf. entropy rate above.
  • Conditional entropy is defined as the negative log-probability of the conditional distribution, averaged over the joint distribution: $S[x|y] = -\sum_{x,y} P(x,y) \log_2 P(x|y)$.
  • Mutual information: what if we want to know about a variable $x$, but instead are measuring a variable $y$? How much are we learning about $x$ then? This is given by the difference of the entropies of $x$ before and after the measurement: $I[x;y] = S[x] - S[x|y]$.
  • Meaning of mutual information: mutual information of 1 bit between two variables means that by querying one of them, however thoroughly, we can get one bit of information about the other.
  • Properties of mutual information
    1. Limits: $0 \le I[x;y] \le \min(S[x], S[y])$. Note that the first inequality becomes an equality iff the two variables are completely statistically independent.
    2. Mutual information is well-defined for continuous variables (the divergent $-\log_2 \Delta x$ terms cancel in the difference of entropies).
    3. Reparameterization invariance: for any invertible functions $f(x)$ and $g(y)$, the following is true: $I[x;y] = I[f(x); g(y)]$.
    4. Data processing inequality: For a Markov chain $x \to y \to z$, $I[x;z] \le I[x;y]$. That is, information cannot be created in a transformation of a variable, whether deterministic or probabilistic.
    5. Information rate: Information is also an extensive quantity, so that it makes sense to define an information rate $\mathcal{I} = \lim_{T\to\infty} I[\{x(t)\}; \{y(t)\}]/T$, where the mutual information is taken between trajectories of duration $T$.
  • Mutual information of a bivariate normal with a correlation coefficient $\rho$ is $I = -\frac{1}{2} \log_2 (1-\rho^2)$.
  • For Gaussian variables $r = g s + \xi$, where $s$ is the signal, $r$ is the response, and $\xi$ is the noise, independent of the input, $I[s;r] = \frac{1}{2} \log_2\!\left(1 + g^2 \sigma_s^2/\sigma_\xi^2\right)$ (see the homework problem). A quick numerical check of these last two formulas is sketched right after this list.
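
The following rough Octave/Matlab sketch is our own addition (the parameter values are arbitrary): it checks numerically that, for $r = g s + \xi$ with Gaussian signal and noise, the mutual information computed from the measured correlation coefficient, $-\frac{1}{2}\log_2(1-\rho^2)$, agrees with $\frac{1}{2}\log_2(1 + g^2\sigma_s^2/\sigma_\xi^2)$ up to sampling error.

    % Numerical check: mutual information of the Gaussian channel r = g*s + xi,
    % computed (a) from the empirical correlation coefficient and (b) from the
    % signal-to-noise ratio. Parameter values below are arbitrary examples.
    g = 2; sigma_s = 1.5; sigma_xi = 0.7;
    N = 1e6;                                % number of Monte Carlo samples
    s  = sigma_s  * randn(N, 1);            % Gaussian signal, zero mean
    xi = sigma_xi * randn(N, 1);            % Gaussian noise, independent of s
    r  = g * s + xi;                        % response
    C  = corrcoef(s, r);                    % 2x2 correlation matrix
    rho = C(1, 2);                          % empirical correlation coefficient
    I_from_rho = -0.5 * log2(1 - rho^2)     % information from the correlation
    I_from_snr =  0.5 * log2(1 + g^2 * sigma_s^2 / sigma_xi^2)
    % The two numbers should agree to within sampling error (about 2.14 bits
    % for these parameter values).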

Homework (due Sep 24)

  1. We start with a simple problem. In class, we have defined the mutual information between $x$ and $y$ as a difference between a marginal and a conditional entropy, $I[x;y] = S[x] - S[x|y]$. Rewrite this expression to depend only on unconditional entropies.
  2. (Adapted from problem 76, Bialek 2010) In class we discussed that, naively adapting the definition of entropy to continuous variables, we get $S \approx -\int d^d x\, P(x) \log_2 P(x) - d \log_2 \Delta x$, where $d$ is the dimensionality of the space, and $\Delta x$ is the size of the grid, which we put on the space, and which we eventually take to zero. The integral (the differential entropy) in the above expression is some finite quantity, and so, as we take $\Delta x \to 0$, we get $S \to \infty$ as $-d \log_2 \Delta x$. In some sense, this scaling relation allows us to estimate the dimensionality of the space. Then we would expect that if we view some probability distribution as living in a high number of dimensions, but it in fact lives in a smaller-dimensional subspace, we should see it if we calculate the relevant entropies. To see this, let's suppose that a certain variable is uniformly distributed along a circumference (say, the unit circle $x^2 + y^2 = 1$). This is a 1-d subset of a 2-d space. Write a small Matlab/Octave program that
    • takes the region containing the circle (e.g., $-1 \le x, y \le 1$) and partitions it into squares with a variable side of $\Delta x$
    • moves carefully along the circumference and marks every square into which it falls
    • calculates the entropy of the uniform distribution along the circumference as $S = \log_2 N$, where $N$ is the number of marked squares
    • repeats this for several values of $\Delta x$, plots the dependence of $S$ on $\log_2 \Delta x$, and shows that this dependence predicts the dimensionality of the circumference.
  3. How much information can a spiking neuron transmit? This is limited from above by its entropy rate. Let's represent a neuron as releasing action potentials according to a Poisson process with a certain rate $\lambda$, and let's calculate the entropy rate of the Poisson process. First represent this process by discretizing time into intervals $\Delta t$. Explain why the entropy of the Poisson-generated sequence of duration $T$ (or, alternatively, $N = T/\Delta t$ symbols) is exactly proportional to time, that is, $S(T) = \mathcal{S} T$, where $\mathcal{S}$ is some constant. Thus we only need to calculate the entropy $S_1$ of a single symbol and then find the entropy rate as $\mathcal{S} = S_1/\Delta t$. Does this rate have a finite value as $\Delta t \to 0$? Why or why not?
  4. Consider the following simple model of signal transduction in photoreceptors. The ambient light intensity (the signal) $s$ is converted into the response current $r$ by means of many amplification steps, and a substantial amount of noise is added to the current in the process, that is, $r = g s + \xi$, where $g$ is the amplification factor and $\xi$ is the noise. For simplicity, let's suppose that the signal is distributed as a Gaussian with the mean $\bar{s}$ and the variance $\sigma_s^2$, and the noise is similarly Gaussian with zero mean and variance $\sigma_\xi^2$. What is the distribution of $r$? Calculate the mutual information between $s$ and $r$. Represent it as $I[s;r] = \frac{1}{2} \log_2(1 + Q)$. Would it make sense to call $Q$ the signal-to-noise ratio?