Physics 434, 2014: Random walks and diffusion
In these lectures, we are focusing on random walks and diffusion -- the two related fundamental processes underlying much of stochastic information processing in biology. While Nelson's book largely skips these subjects, other books that we have used have great sections on random walks. See, for example, Introduction to Probability by CM Grinstead and JL Snell. Another, more physics-like book on the subject that I recommend is "Random Walks in Biology" by H. Berg.
Warmup questions
The following two questions were asked in class in the beginning of this block. Not too many of you were able to answer them.
Come back to these questions again when we finish talking about random walks, and see if you can answer them now.
- Consider a neuron. Action potentials are generated by fluxes of ions through the channels in the neural membrane (read Dayan and Abbott, 2005). The channels open and close independently, with an exponentially distributed time in each state, and in the closed state they don't let ions pass through. Which is a better strategy to ensure that the neuron's voltage is nearly deterministic: one big channel, or many small ones?
- Chemical signals from the outside world, such as antigens in the case of immune cells, are typically sensed by receptors on the cell surface. The binding of a signaling molecule changes the receptor's conformational state. The receptor, still in the membrane, then meets with one or more enzymes that diffuse in from afar within the membrane and catalyze its various additional modifications. Finally, the receptor complex is cleaved, and part of its intra-cellular domain travels to other compartments of the cell, such as the nucleus, to initiate further signaling events (e.g., transcription). Can you explain why the opposite sequence, where a bound receptor is first cleaved and then its signal-relay component is modified in the cytosol, is used much less frequently?
Introducing random walk and diffusion
The CLT, which we studied previously, explains why some of the details of the E. coli motion that we glossed over are not that important -- the long-term behavior of the motion is, essentially, Gaussian, and hence it is largely independent of all cumulants but the first and the second. This is a simple illustration of universality, which we cherish so much in physics: for a very large class of microscopic models, as long as some (often mild) conditions are satisfied, the macroscopic behavior is more or less independent of the microscopic model. It makes sense then to take the simplest possible model resulting in the same universal behavior and make it the standard model for studies -- indeed, it's the simplest, and the results are independent of the model choice.
Here we introduce a few such models. The first one is a random walk in one dimension, with discrete steps and discrete times when these steps are taken (a DTDS random walk). Namely, at every time $t_i = i\Delta t$ with integer $i$, a particle makes a step of length $\Delta x$, with the probability $p$ to the right and $q = 1 - p$ to the left. If the total number of steps made is $N$, then the CLT tells us that the distribution of positions where the particle ends up will be Gaussian, with the mean $\langle x \rangle = N(p-q)\Delta x$ and the variance $\sigma_x^2 = 4Npq\,\Delta x^2$. This is an example of a diffusive process. Conventionally, for diffusive processes, we write $\langle x \rangle = vt$, where $v$ is called the drift velocity, and $\sigma_x^2 = 2dDt$, where $d$ is the dimension and $D$ is the diffusion coefficient. So, the 1d DTDS random walk is an example of a diffusive process on long time scales, and for this random walk $v = (p-q)\Delta x/\Delta t$ and $D = 2pq\,\Delta x^2/\Delta t$.
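As a quick numerical check, here is a minimal Matlab sketch (parameter values and variable names are mine, purely for illustration) that simulates many DTDS walkers and compares the sample mean and variance of the final position with $N(p-q)\Delta x$ and $4Npq\,\Delta x^2$:

```matlab
% Discrete-time, discrete-space (DTDS) random walk in 1d.
% Illustrative parameters, not taken from the lecture.
p  = 0.6;        % probability of a step to the right
q  = 1 - p;      % probability of a step to the left
dx = 1;          % step length
N  = 1000;       % number of steps per walker
M  = 5000;       % number of independent walkers

% Each step is +dx with probability p, and -dx with probability q.
steps = dx * (2*(rand(M, N) < p) - 1);
x     = sum(steps, 2);                 % final positions of all walkers

fprintf('mean:     %8.2f  (theory %8.2f)\n', mean(x), N*(p - q)*dx);
fprintf('variance: %8.2f  (theory %8.2f)\n', var(x),  4*N*p*q*dx^2);
```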
One can also introduce a continuous time - discrete space random walk (CTDS), where discrete steps are made at random times, with different possible distributions of these times. Here, if the distribution of time steps is sufficiently narrow, the resulting motion is still going to be diffusive. If, however, there are long tails in the distribution of time steps, it is possible for the particle to get stuck at the same point for a long time. In this case, one should expect that the variance of the position may scale sublinearly with time, and we will get subdiffusive motion. Similarly, we can introduce a discrete time - continuous space random walk (DTCS), where steps of random length are made at fixed times. Then, if the distribution of spatial steps doesn't have long tails, diffusion will emerge. For a long-tailed distribution of steps, the particle will make a very long jump once in a while. The variance can then grow faster than linearly, and this will be superdiffusive motion. Finally, continuous time - continuous space (CTCS) random walks are possible, and many different interesting asymptotic behaviors may emerge if either the distribution of step sizes or that of step times has long tails. We will largely omit these interesting topics for the rest of the class, but you should keep in mind that a random walk does not always result in diffusive motion.
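To get a feel for the subdiffusive case, here is a hedged Matlab sketch of a CTDS walk whose waiting times are drawn from a heavy-tailed (Pareto-like) distribution with a divergent mean; the tail exponent and all other parameters are my own illustrative choices. On a log-log plot the variance of the position grows roughly as $t^\alpha$ with $\alpha < 1$, i.e., slower than linearly; replacing the waiting times with exponentially distributed ones recovers ordinary diffusion.

```matlab
% Continuous-time, discrete-space (CTDS) random walk with heavy-tailed
% waiting times between jumps: psi(tau) ~ tau^(-(1+alpha)), 0 < alpha < 1.
% For such tails the variance of the position grows sublinearly in time.
alpha = 0.5;                  % tail exponent of the waiting-time distribution
M     = 5000;                 % number of independent walkers
Tmax  = 1e4;                  % total simulated time
tsamp = logspace(1, 4, 20);   % times at which positions are recorded

x = zeros(M, numel(tsamp));
for m = 1:M
    t = 0; pos = 0; k = 1;
    while t < Tmax
        tau = rand^(-1/alpha);              % Pareto(1, alpha) waiting time
        t   = t + tau;
        while k <= numel(tsamp) && tsamp(k) < t
            x(m, k) = pos; k = k + 1;       % position before the next jump
        end
        pos = pos + 2*(rand < 0.5) - 1;     % unbiased +/-1 step
    end
end

% Variance vs time; the slope on the log-log plot is roughly alpha < 1.
loglog(tsamp, var(x), 'o-'); xlabel('t'); ylabel('var of x(t)');
```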
One can generalize random walks to a higher number of dimensions. This would correspond to each of the coordinates of a particle making a random walk independently. One-, two-, and three-dimensional walks are trivially possible, and we will see later, when we study chemical kinetics, that higher-dimensional walks are also common. Since the walk in every dimension is independent of the other dimensions, their means and variances can be calculated independently, so that $\langle x_i \rangle = v_i t$ for each coordinate, and the total variance is $\sigma^2 = \sum_{i=1}^{d} \sigma_{x_i}^2 = 2dDt$, where $d$ is the dimension, and $D$ is the same single-coordinate diffusion coefficient as before.
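And here is a similar quick check (again with made-up parameters) that for an unbiased walk in $d$ dimensions the independent coordinates add up to a total mean squared displacement of $2dDt$:

```matlab
% Unbiased DTDS walk in d dimensions: each coordinate makes an
% independent +/-dx step at every time step.  Illustrative parameters.
d  = 3; dx = 1; dt = 1; N = 1000; M = 5000;
D  = dx^2 / (2*dt);                 % single-coordinate diffusion coefficient (p = q = 1/2)

x = zeros(M, d);                    % final positions, one coordinate per column
for k = 1:d
    x(:, k) = dx * sum(2*(rand(M, N) < 0.5) - 1, 2);
end

msd = mean(sum(x.^2, 2));           % mean squared displacement at t = N*dt
fprintf('MSD: %8.1f   theory 2*d*D*t = %8.1f\n', msd, 2*d*D*N*dt);
```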
Modeling diffusion: Langevin equation
It's clear how one models random walks -- in fact, we did a lot of simulations to this effect. But how does one model diffusion itself, where $\langle x \rangle = vt$ and $\sigma_x^2 = 2Dt$ for all times $t$, and not just for $t = N\Delta t$? The overall motion is a sum of motions over small individual steps of duration $\Delta t$, and the overall mean and variance add over every one of these individual time segments, so that the displacements over a time $\Delta t$ are themselves Gaussian distributed, with the mean $v\Delta t$ and the variance (in 1 dimension) $2D\Delta t$. We can write this equivalently as $x(t + \Delta t) = x(t) + v\Delta t + \sqrt{2D\Delta t}\,\eta$, where $\eta$ is a Gaussian random variable with zero mean and unit variance, drawn independently at every step. This numerical simulation procedure can be used as a definition of what we mean by diffusion, and it is a computational realization of what is called the Langevin equation.
Let's divide both sides of the Langevin numerical simulation equation by $\Delta t$ and take the limit $\Delta t \to 0$. We get $\frac{dx}{dt} = v + \sqrt{\frac{2D}{\Delta t}}\,\eta(t)$, where $\eta(t)$ is a Gaussian random variable with zero mean and unit variance, uncorrelated at every time point. Note the interesting feature of this equation -- the stochastic term, proportional to $1/\sqrt{\Delta t}$, diverges as $\Delta t \to 0$. That is, if we move in time in small steps $\Delta t$, then the noise at every time point is huge. However, the noises, sometimes positive and sometimes negative, average out, so that eventually, at long times, the first (deterministic) term in the equation starts dominating. Let's now define $\xi(t) = \lim_{\Delta t \to 0} \eta(t)/\sqrt{\Delta t}$. This gives $\frac{dx}{dt} = v + \sqrt{2D}\,\xi(t)$, with $\langle \xi(t) \rangle = 0$ and $\langle \xi(t)\xi(t') \rangle = \delta(t - t')$. This is an example of what is known as a stochastic differential equation, or a Langevin equation. The integral of the noise, $W(t) = \int_0^t \xi(t')\,dt'$, is called the Wiener process, after Norbert Wiener, who invented it.
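Where does the $\delta$-function correlation come from? In the discrete version, the unit-variance Gaussian variables at different time steps are independent, $\langle \eta_i \eta_j \rangle = \delta_{ij}$, so that

$$\langle \xi(t_i)\,\xi(t_j) \rangle = \frac{\langle \eta_i \eta_j \rangle}{\Delta t} = \frac{\delta_{ij}}{\Delta t} \to \delta(t_i - t_j), \quad \Delta t \to 0,$$

since this object vanishes whenever $t_i \neq t_j$, while its sum over time steps, $\sum_j (\delta_{ij}/\Delta t)\,\Delta t = 1$, stays equal to one -- exactly the defining properties of the Dirac $\delta$-function.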
What does the stochastic term in the equation really mean, especially given its strange correlation function? My suggestion is to view the Langevin equation as simply a shorthand for its finite-$\Delta t$ version, $x(t + \Delta t) = x(t) + v\Delta t + \sqrt{2D\Delta t}\,\eta$, which also immediately tells us how such equations need to be simulated.
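Concretely, here is a minimal Matlab sketch of that simulation recipe (the Euler-Maruyama scheme); the parameter values are arbitrary illustrations, not anything specific to the course:

```matlab
% Simulate the Langevin equation via x(t+dt) = x(t) + v*dt + sqrt(2*D*dt)*eta,
% with eta ~ N(0,1) drawn independently at every step (Euler-Maruyama).
% Parameter values below are arbitrary illustrations.
v  = 1; D = 0.5;          % drift velocity and diffusion coefficient
dt = 0.01; T = 10;        % time step and total time
nt = round(T/dt);         % number of time steps
M  = 1000;                % number of independent trajectories

x = zeros(M, nt+1);       % x(:,1) = 0 is the initial condition
for i = 1:nt
    x(:, i+1) = x(:, i) + v*dt + sqrt(2*D*dt) * randn(M, 1);
end

t = (0:nt) * dt;
plot(t, mean(x), t, var(x));          % should follow v*t and 2*D*t
legend('mean of x(t)', 'variance of x(t)'); xlabel('t');
```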
Modeling diffusion: The diffusion equation
We will now write down the differential equation for the probability distribution $P(x,t)$ of a random walker being at a point $x$ at large times, when the step of the walk, $a$, is much smaller than the spatial scales we are interested in. This time we will consider a CTDS random walk -- that is, the walker jumps among discrete sites at continuous time points, so that the rate of jumping to each of the two neighboring sites is fixed at $r$. We will also assume, for simplicity, that the walker is unbiased.
We start with what is known as the master equation for this probability. Namely, how does the probability of being at a point $x$ change with time? It changes when a particle from either $x - a$ or $x + a$ jumps to $x$, or a particle at $x$ jumps away to $x \pm a$. The rate for each of these processes is $r$ times the probability that the particle is actually at the point from which it jumps (that is, $x - a$, $x + a$, or $x$). This allows us to write $\frac{dP(x)}{dt} = r\left[P(x - a) + P(x + a) - 2P(x)\right]$. The assumption of long times is equivalent to saying that $P(x)$ changes very slowly with $x$. Thus, if viewed at coarse enough resolution, $P$ may be treated as a continuous function of $x$, and, moreover, we can write $P(x \pm a) \approx P(x) \pm a\,\frac{\partial P}{\partial x} + \frac{a^2}{2}\frac{\partial^2 P}{\partial x^2}$. Plugging this expansion into the master equation and realizing that $P$ is now a function of two variables, $(x, t)$, so that full derivatives need to be replaced by partial derivatives, we get $\frac{\partial P(x,t)}{\partial t} = r a^2 \frac{\partial^2 P(x,t)}{\partial x^2}$. Defining the diffusion coefficient $D = r a^2$, we get $\frac{\partial P}{\partial t} = D \frac{\partial^2 P}{\partial x^2}$. Further, suppose we have $N$ independent particles. Then their density at a given point is $\rho(x,t) = N P(x,t)$. Thus, multiplying the equation for $P$ by $N$, we get a deterministic equation for the particle density, $\frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial x^2}$, which is known as the diffusion equation.
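For completeness, here is a hedged Matlab sketch that integrates the lattice master equation directly and checks that the variance of $P$ grows as $2Dt$ with $D = ra^2$; the lattice size, rate, and time step are my own illustrative choices:

```matlab
% Forward-Euler integration of the lattice master equation
%   dP_i/dt = r*( P_{i-1} + P_{i+1} - 2*P_i )
% for an unbiased walker starting at the central site.
a = 1; r = 1;                 % lattice spacing and jump rate to each neighbor
D = r * a^2;                  % predicted diffusion coefficient
L = 401;                      % number of lattice sites (keep the walls far away)
x = a * ((1:L)' - (L+1)/2);   % site positions, centered at zero

P = zeros(L, 1); P((L+1)/2) = 1;      % start at the central site
dt = 0.01; T = 20; nt = round(T/dt);  % dt*r << 1 for a stable Euler scheme

for n = 1:nt
    Pl = [0; P(1:end-1)];     % P_{i-1}, with (far-away) zero boundaries
    Pr = [P(2:end); 0];       % P_{i+1}
    P  = P + dt * r * (Pl + Pr - 2*P);
end

variance = sum(P .* x.^2) - (sum(P .* x))^2;
fprintf('variance: %6.1f   theory 2*D*t = %6.1f\n', variance, 2*D*T);
```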
Note that the diffusion equation and the stochastic Langevin equation are mathematically equivalent, just as the master equation and the CTDS random walk are. One can either simulate the deterministic evolution of the probability, which is modeled either by a partial differential equation or by an infinite set of coupled ordinary differential equations, incorrectly named "the master equation". Or one can generate random trajectories, and then estimate the distribution of where the particle is from such samples. Averaging many trajectories of a random walk will result in the same solution as the master equation, and averaging many trajectories from the Langevin equation will result in the same thing as solving the diffusion equation.
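As one illustration of this equivalence, the following Matlab sketch (again with arbitrary parameters) overlays a histogram of many Langevin trajectories at time $T$ on the corresponding solution of the diffusion equation for a point source at the origin, $P(x,T) = e^{-(x - vT)^2/(4DT)}/\sqrt{4\pi D T}$:

```matlab
% Compare the histogram of many Langevin trajectories at time T with the
% Gaussian solution of the diffusion equation for a point source at x = 0.
% All parameter values are illustrative.
v = 1; D = 0.5; dt = 0.01; T = 5; M = 50000;

x = zeros(M, 1);
for i = 1:round(T/dt)
    x = x + v*dt + sqrt(2*D*dt) * randn(M, 1);
end

[counts, centers] = hist(x, 100);          % empirical distribution of x(T)
binw = centers(2) - centers(1);
bar(centers, counts / (M * binw), 1); hold on;

xs = linspace(min(x), max(x), 200);        % analytic solution P(x,T)
plot(xs, exp(-(xs - v*T).^2 / (4*D*T)) / sqrt(4*pi*D*T), 'r', 'LineWidth', 2);
xlabel('x'); legend('Langevin samples', 'diffusion equation solution');
```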
E. coli chemotaxis
E. coli chemotaxis is a biased random walk: when going up the gradient of an attractant, the time to a tumble increases. This is described very well in (Berg 2000, Berg and Brown 1972). If going up the gradient, the average run time increases (we will not focus on the mechanisms of how this happens at this point). Then the expected displacement over a single run in the direction parallel to the gradient is positive, while it is zero in the direction perpendicular to the gradient. Adding many such runs, we get a biased random walk: E. coli moves preferentially to better areas.
This all sounds nice, but can we actually prove that, with this random strategy, the bacterium finds the greener pastures as time goes by? Let's look at two nearby points $x_1$ and $x_2$, which are closer than the length of a single typical run, and where the nutrient concentration at $x_2$ is higher than at $x_1$. Then the rate of going from 1 to 2 is proportional to $P(x_1)/\tau(c_1)$, where $\tau(c)$ is the mean waiting time to a tumble at a concentration $c$, and $P(x)$ is the probability of being at $x$. Similarly, the rate of going from 2 to 1 is proportional to $P(x_2)/\tau(c_2)$. In steady state these must balance: $P(x_1)/\tau(c_1) = P(x_2)/\tau(c_2)$. Therefore, $P(x_2)/P(x_1) = \tau(c_2)/\tau(c_1) > 1$, so that $P$ is higher in the direction where $c$ increases. We can now compare all points in a chain and obtain a similar expression for each pair, thus showing that the randomized strategy indeed results in the bacterium, on average, having a higher probability of being where there is more food. Note that E. coli doesn't actually decrease its run time when going down the gradient; the downhill runs are, on average, of about the same duration irrespective of how steeply the concentration falls. This doesn't change our calculations much. Also note that this is an example of what is called a detailed balance calculation. Strictly speaking, for a non-equilibrium system like a bacterium, detailed balance is not necessarily satisfied, but it is probably a good approximation in this specific example.
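This argument is easy to check numerically. The sketch below is a minimal Matlab caricature of it (everything here -- the attractant profile, the dependence of $\tau$ on the concentration, and the parameters -- is invented for illustration, and this is not the course's downloadable code): a walker hops between discrete sites, waiting at each site for an exponentially distributed time whose mean $\tau$ grows with the local concentration $c(x)$; the fraction of time spent at each site then tracks $\tau(c(x))$, as argued above.

```matlab
% Discrete-site caricature of the detailed-balance argument: a walker hops
% between neighboring sites, and the waiting time before the next hop is
% exponential with mean tau(c(x)), which grows with the local attractant
% concentration c(x).
L    = 50;                          % number of sites
c    = (1:L) / L;                   % linear attractant profile (illustrative)
tau  = 1 + 2 * c;                   % mean waiting time at each site

nhops = 1e6;                        % number of hops to simulate
tocc  = zeros(1, L);                % total time spent at each site
i     = round(L/2);                 % start in the middle

for n = 1:nhops
    tocc(i) = tocc(i) - tau(i) * log(rand);   % exponential dwell, mean tau(i)
    step = 2*(rand < 0.5) - 1;                % attempt a hop left or right
    if i + step >= 1 && i + step <= L         % stay put at the walls
        i = i + step;
    end
end

% The time-averaged occupancy should be proportional to tau(c(x)).
plot(1:L, tocc / sum(tocc), 'o', 1:L, tau / sum(tau), '-');
xlabel('site'); legend('fraction of time at site', 'tau, normalized');
```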
We wrote a simulation code for E. coli trajectories in a gradient using Matlab. Feel free to download it and use it in your homework assignments from now on.