Physics 434, 2016: Discrete randomness
Back to the main Teaching page.

Back to Physics 434, 2016: Physical Biology.

During these lectures, we study concepts of probability theory, such as probability distributions, conditionals, marginals, expectations, etc. We derive the law of large numbers. We study some specific useful probability distributions over discrete variables. In the course of this whole lecture block, we should be thinking about E. coli chemotaxis, neural firing, and bacterial mutations in the background -- all of these concepts will be applicable.

Side story, Lecture 2

Let's talk a bit about life at low Reynolds number / high viscosity.

  1. We discussed E. coli swimming last time. Can E. coli swim by paddling an oar? It's fun to watch the following movie on the kinematic reversibility of low Reynolds number flows.
  2. Let's now watch a demo of E. coli flagellar bundling. A question is: how does the spiral motion propel the bug? In other words: how will a tilted bar fall in corn syrup?

The outcome of these discussions is that life for cells is very different from life in the macroscopic world that we are so used to. So throw away all your preconceived notions and keep on asking questions!

General notes

There are now two good collections of notes that you can follow for this block of lectures. The first is Chapters 3 and 4 of Nelson's book. Another good introduction to probability theory, one of my favorites, but more on the mathematical side, can be found at Introduction to Probability by CM Grinstead and JL Snell.

As we discuss probability theory, think of an E. coli that moves in a run/tumble strategy, a neuron that fires randomly, or the Luria-Delbruck experiment. All of these should give you a good intuition about the random distributions that we are discussing.

During these lectures, we will also often stop and do some simple Matlab simulations -- both to illustrate what we are discussing theoretically and to prepare us for more interesting computational problems.

Introducing concepts of randomness

Some examples of random variables are: the position of an E. coli; the time to the next neural action potential; the number of bacteria with a given mutation; the number of molecules of a nutrient near a bacterium. To define the necessary probabilistic concepts, we need:

  • To define a set of outcomes that a random variable can take (e.g., heads or tails, six sides of a die, etc.).
  • Then we define the probability of a certain outcome as a limit of frequencies after many random draws, or events. That is, if after <math>N</math> draws the outcome <math>i</math> happened <math>n_i</math> times, then its frequency is <math>f_i = n_i/N</math>, and the probability is <math>p_i = \lim_{N\to\infty} n_i/N</math>.
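
As a quick illustration of this frequency definition, here is a minimal Matlab sketch (an added illustration; the bias <math>p=0.3</math> and the number of draws are arbitrary choices): it simulates biased coin flips and shows the running frequency of heads approaching <math>p</math>.

  % Frequency of heads converges to the probability p as the number of draws grows
  p = 0.3;                             % probability of a head on each draw
  N = 1e5;                             % number of draws
  flips = rand(1, N) < p;              % 1 = head, 0 = tail
  runningFreq = cumsum(flips) ./ (1:N);
  disp(runningFreq([10 100 1000 N]))   % frequencies after 10, 100, 1000, and N draws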

Probabilities satisfy the following properties, which follow from their definition as limits of frequencies:

  • nonnegativity: <math>P(A) \ge 0</math>;
  • unit normalization: <math>\sum_i P(A_i) = 1</math>, where the sum runs over all disjoint outcomes;
  • nesting: if <math>A \subseteq B</math>, then <math>P(A) \le P(B)</math>;
  • additivity (for non-disjoint events): <math>P(A \cup B) = P(A) + P(B) - P(A \cap B)</math>;
  • complementarity: <math>P(\bar{A}) = 1 - P(A)</math>.
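
To make the additivity rule concrete, here is a small Matlab check (my own example, not from the lecture), using one roll of a fair die with A = {even outcome} and B = {outcome greater than 3}:

  % Additivity for non-disjoint events, checked on a fair six-sided die
  P = ones(1, 6) / 6;                 % P(1), ..., P(6)
  A = [2 4 6];                        % event A: even outcome
  B = [4 5 6];                        % event B: outcome greater than 3
  PA = sum(P(A)); PB = sum(P(B));
  PAandB = sum(P(intersect(A, B)));   % P(A and B)
  PAorB  = sum(P(union(A, B)));       % P(A or B)
  disp([PAorB, PA + PB - PAandB])     % the two numbers agree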

A good place for a randomness demo is http://faculty.rhodes.edu/wetzel/random/mainbody.html. Test yourself -- can you generate a random sequence?

What if we are studying more than one random variable?

The multivariate (joint) distribution <math>P(A,B)</math> is the probability of both events, <math>A</math> and <math>B</math>, happening. It contains all of the information about the variables, including:

  • The marginal distribution: <math>P(A) = \sum_B P(A,B)</math>.
  • The conditional distribution, which can then be defined as <math>P(B|A) = P(A,B)/P(A)</math>, so that <math>P(A,B) = P(A)P(B|A)</math>: the probability of both events is the probability of the first happening, times the probability of the second happening given that the first one has happened.

The conditional distributions are related using the Bayes theorem, which says <math>P(A)P(B|A) = P(B)P(A|B)</math>, so that <math>P(A|B) = \frac{P(B|A)P(A)}{P(B)}</math>.

We can also now formalize the intuitive concept of dependence among variables. Two random variables are considered to be statistically independent if and only if <math>P(A,B) = P(A)P(B)</math>, or, equivalently, <math>P(B|A) = P(B)</math> or <math>P(A|B) = P(A)</math>.
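
The following Matlab sketch (an added illustration with a made-up 2x2 joint table) computes marginals and conditionals from a joint distribution, and checks Bayes' theorem and the independence criterion:

  % A small joint distribution P(A,B) over two binary variables
  Pjoint = [0.1 0.3; 0.2 0.4];            % rows index A, columns index B
  PA = sum(Pjoint, 2);                    % marginal P(A), summing over B
  PB = sum(Pjoint, 1);                    % marginal P(B), summing over A
  PBgivenA = Pjoint ./ repmat(PA, 1, 2);  % conditional P(B|A) = P(A,B)/P(A)
  PAgivenB = Pjoint ./ repmat(PB, 2, 1);  % conditional P(A|B) = P(A,B)/P(B)
  % Bayes theorem: P(A|B) = P(B|A) P(A) / P(B); the difference below is all zeros
  disp(PAgivenB - PBgivenA .* repmat(PA, 1, 2) ./ repmat(PB, 2, 1))
  % Independence test: is P(A,B) = P(A) P(B)?  Nonzero entries mean A and B are dependent
  disp(Pjoint - PA * PB)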

Characterizing probability distributions

Probability distributions are typically characterized by what are known as expectation values, or the anticipated averages of various functions of the random variables. That is, the expectation of <math>f(x)</math> is defined as <math>\langle f(x)\rangle = \sum_x f(x) P(x)</math>. Expectation values add, whether for the same or for different variables, so that <math>\langle f(x) + g(x)\rangle = \langle f(x)\rangle + \langle g(x)\rangle</math> and <math>\langle f(x) + g(y)\rangle = \langle f(x)\rangle + \langle g(y)\rangle</math>. Importantly, for independent variables <math>x</math> and <math>y</math>, expectations of products are also products of expectations: <math>\langle f(x) g(y)\rangle = \langle f(x)\rangle \langle g(y)\rangle</math>.

A certain set of particular expectations is very useful and commonly used to characterize probability distributions. These are expectations of powers of the random variable, and they are called moments: <math>m_n = \langle x^n\rangle</math>. Moments do not always exist, specifically for long-tailed probability distributions, as discussed at length in Nelson's book. The lower-order moments are the most commonly used, when they exist, and they have their own names.

  • The first moment is the mean: <math>\langle x\rangle = \sum_x x P(x)</math>.
  • The second moment allows us to define the variance, or the spread of the distribution: <math>\sigma^2 = \langle x^2\rangle - \langle x\rangle^2 = \langle (x - \langle x\rangle)^2\rangle</math>.
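
A short Matlab sketch (added for illustration) computes the first two moments and the variance of a fair die directly from the definition of the expectation value:

  % Mean and variance of a fair die from its first two moments
  x = 1:6;                     % possible outcomes
  P = ones(1, 6) / 6;          % their probabilities
  m1 = sum(x .* P);            % first moment: the mean <x>
  m2 = sum(x.^2 .* P);         % second moment <x^2>
  variance = m2 - m1^2;        % variance = <x^2> - <x>^2
  disp([m1, variance])         % 3.5 and about 2.9167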

Interestingly, the additivity/multiplicativity of expectations discussed above then gives, for two independent variables <math>x</math> and <math>y</math>:

  • <math>\langle x + y\rangle = \langle x\rangle + \langle y\rangle</math>, and
  • <math>\sigma^2_{x+y} = \sigma^2_x + \sigma^2_y</math>.

That is, means and variances of independent variables add! This is a very important result, which will follow us through the entire course.
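
This additivity is easy to check by simulation; here is a minimal Matlab sketch with two independent random variables (the particular variables are my own choice):

  % Means and variances of independent variables add
  N = 1e6;
  x = rand(1, N);                          % uniform on [0,1]: mean 1/2, variance 1/12
  y = double(rand(1, N) < 0.3);            % Bernoulli(p = 0.3): mean 0.3, variance 0.21
  disp([mean(x + y), mean(x) + mean(y)])   % both close to 0.8
  disp([var(x + y),  var(x)  + var(y)])    % both close to 1/12 + 0.21, about 0.293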

Finally, it sometimes makes sense to define what are called the central moments, which measure properties such as the spread and skewness of the distribution relative to its mean: <math>\mu_n = \langle (x - \langle x\rangle)^n\rangle</math>. Note that the variance is the second central moment.

Specific probability distributions

We then discussed some useful discrete probability distributions. We built all of them from the simple coin-toss (Bernoulli) distribution, step by step. However, while working with coins, it is useful to keep some physics in mind: a coin coming up heads could be a mutation happening, an action potential generated in a neuron, or a ligand molecule grabbed by a bacterial receptor.

  • Bernoulli distribution: <math>P(\text{head}) = p</math>, <math>P(\text{tail}) = 1 - p</math>.
  • Binomial distribution: the number of heads <math>n</math> out of <math>N</math> trials: <math>P(n) = \binom{N}{n} p^n (1-p)^{N-n}</math>.
  • Geometric distribution: the number of trials <math>n</math> to the next head: <math>P(n) = (1-p)^{n-1} p</math>.
  • Poisson distribution: the number of heads <math>n</math> out of <math>N</math> trials, when the probability of a head is small. This is the <math>N \to \infty</math>, <math>p \to 0</math>, and <math>Np = \lambda = \text{const}</math> limit of the binomial distribution: <math>P(n) = \frac{\lambda^n e^{-\lambda}}{n!}</math>.
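
As a sketch of how these distributions arise from coin tosses, the following Matlab snippet (an added illustration; it uses only rand, and the parameter values are arbitrary choices) builds binomial samples as sums of Bernoulli draws and compares the resulting frequencies to the Poisson limit:

  % Binomial samples built from Bernoulli coin tosses, compared to the Poisson limit
  Ntrials = 500; p = 0.006;                        % many trials, small probability of a head
  lambda = Ntrials * p;                            % Poisson parameter N*p = 3
  Nsamples = 1e4;
  heads = sum(rand(Ntrials, Nsamples) < p, 1);     % number of heads in each of Nsamples experiments
  counts = zeros(1, 11);
  for n = 0:10
      counts(n + 1) = mean(heads == n);            % empirical probability of n heads
  end
  poissonP = exp(-lambda) * lambda.^(0:10) ./ factorial(0:10);
  disp([counts; poissonP])                         % the two rows should be close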

Again, for all of these distributions we can think of the number of mutations in a bacterium, or the number of spikes produced by a neuron, or a number of molecules captured by a cell. In class and in various homework problems, we then calculated the means and the variances of our basic discrete probability distributions.

  • Bernoulli: <math>\langle n\rangle = p</math>, <math>\sigma^2 = p(1-p)</math>.
  • Binomial distribution: <math>\langle n\rangle = Np</math>, <math>\sigma^2 = Np(1-p)</math>.
  • Geometric distribution: <math>\langle n\rangle = 1/p</math>, <math>\sigma^2 = (1-p)/p^2</math>.
  • Poisson distribution: <math>\langle n\rangle = \lambda</math>, <math>\sigma^2 = \lambda</math>.
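
Formulas like these are easy to verify numerically; for instance, a minimal Matlab sketch (added here) simulates coin tosses until the first head and checks the geometric-distribution results:

  % Number of tosses to the first head: mean should be 1/p, variance (1-p)/p^2
  p = 0.2; Nsamples = 1e5;
  waits = zeros(1, Nsamples);
  for k = 1:Nsamples
      n = 1;
      while rand() >= p                  % keep tossing until a head comes up
          n = n + 1;
      end
      waits(k) = n;
  end
  disp([mean(waits), 1/p])               % both close to 5
  disp([var(waits), (1 - p)/p^2])        % both close to 20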

Moment generating function

This is a complicated beast, and it's not immediately obvious why we even bother introducing it. So let's stay tuned for a few lectures.

The moment generating function (MGF) is defined as <math>M(t) = \langle e^{tx}\rangle = \sum_x e^{tx} P(x)</math>. It is thus an expectation of <math>e^{tx}</math>. Of course, it won't always exist. For it to exist, the distribution must fall off exponentially or faster in its tails. The utility of the MGF comes from writing down the exponential as its Taylor series, which then gives <math>M(t) = \sum_{n=0}^{\infty} \frac{t^n}{n!}\langle x^n\rangle</math>. In other words, <math>\langle x^n\rangle = \left.\frac{d^n M}{dt^n}\right|_{t=0}</math> -- one can calculate the MGF just once, and then get all moments of the distribution from it by a simple differentiation (see the Matlab sketch after the list below). Additionally, the MGF has the following useful properties:

  • <math>M(0) = 1</math>, which follows from the normalization of the probability distribution.
  • If <math>z = x + y</math>, and <math>x</math> and <math>y</math> are independent, then <math>M_z(t) = M_x(t) M_y(t)</math>. That is, the MGF of a sum of independent variables is the product of their individual MGFs.
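
To see the MGF at work, here is a minimal Matlab sketch (my own illustration; it uses a fair die and finite-difference derivatives rather than analytic ones): it recovers the mean and the variance from <math>M(t)</math>, and then checks the product rule on a sum of two independent coin tosses.

  % Moments from the MGF of a fair die, via numerical derivatives at t = 0
  x = 1:6; P = ones(1, 6) / 6;
  M = @(t) sum(exp(t * x) .* P);                 % M(t) = < exp(t*x) >
  dt = 1e-4;
  m1 = (M(dt) - M(-dt)) / (2 * dt);              % first moment, dM/dt at t = 0
  m2 = (M(dt) - 2 * M(0) + M(-dt)) / dt^2;       % second moment, d^2M/dt^2 at t = 0
  disp([m1, 3.5])                                % mean of a fair die
  disp([m2 - m1^2, 35/12])                       % variance of a fair die, about 2.917

  % MGF of a sum of two independent Bernoulli coins equals the product of their MGFs
  p = 0.3; q = 0.6; t = 0.7;                     % arbitrary test values
  Mx = (1 - p) + p * exp(t);                     % MGF of a coin with P(head) = p
  My = (1 - q) + q * exp(t);                     % MGF of a coin with P(head) = q
  Pz = [(1-p)*(1-q), p*(1-q) + (1-p)*q, p*q];    % distribution of z = x + y over {0, 1, 2}
  Mz = sum(Pz .* exp(t * (0:2)));
  disp([Mz, Mx * My])                            % the two numbers agree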

To illustrate this, in class we explicitly calculated the MGF for the Poisson distribution <math>P(n) = \frac{\lambda^n e^{-\lambda}}{n!}</math>, obtaining <math>M(t) = \sum_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!} e^{tn} = e^{\lambda(e^t - 1)}</math>. And, indeed, the known results for the mean and the variance of the Poisson distribution follow from this immediately.

While we didn't do this in class, it does make sense to define another generating function, the so-called cumulant generating function, <math>K(t) = \ln M(t)</math>. One can similarly expand this function near <math>t = 0</math> in a Taylor series, <math>K(t) = \sum_{n=1}^{\infty} \kappa_n \frac{t^n}{n!}</math>. In other words, one can define <math>\kappa_n = \left.\frac{d^n K}{dt^n}\right|_{t=0}</math>. The quantities <math>\kappa_n</math> are called cumulants. These are combinations of various moments of the probability distribution, whose utility will become clear when we study the Gaussian distribution. But notice that <math>\kappa_1 = \langle x\rangle</math> and <math>\kappa_2 = \sigma^2</math>. That is, the first cumulant is the mean, and the second cumulant is the variance. This gives us a taste of what the cumulants are -- they are special moments of the probability distribution that characterize its features in such a way that the measure of the distribution's width (the 2nd cumulant) is not influenced by the mean (the 1st cumulant), the one characterizing the skewness (the 3rd cumulant) is not influenced by the value of the 2nd, and so on. This is in contrast to the usual second moment, which is the sum of the variance and the square of the mean.
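
As a final check (my own sketch, using the Poisson MGF derived above), the first two cumulants of the Poisson distribution can be computed numerically from <math>K(t) = \ln M(t) = \lambda(e^t - 1)</math>, and both come out equal to <math>\lambda</math>:

  % First two cumulants of the Poisson distribution from K(t) = lambda*(exp(t) - 1)
  lambda = 3;
  K = @(t) lambda * (exp(t) - 1);                % cumulant generating function
  dt = 1e-4;
  kappa1 = (K(dt) - K(-dt)) / (2 * dt);          % first cumulant: the mean
  kappa2 = (K(dt) - 2 * K(0) + K(-dt)) / dt^2;   % second cumulant: the variance
  disp([kappa1, kappa2, lambda])                 % all three are (nearly) equal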