# Physics 380, 2010: Random Walks

## Lectures 4 and 5

In these lectures, we talk about random walks, investigating the properties of their trajectories.

• Biased random walk
• Biased random walk: ${\displaystyle T}$ steps, each of length ${\displaystyle \pm a}$, where the probabilities of left and right steps are not equal. For the total displacement, ${\displaystyle \mu =T\times {\mbox{(bias of single step)}}=Tv}$ and ${\displaystyle \sigma ^{2}=T\sigma _{\rm {onestep}}^{2}}$.
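These single-step and ${\displaystyle T}$-step moments are easy to check numerically. A minimal Monte Carlo sketch in Python (the course uses Matlab/Octave, but the logic translates directly; the values of ${\displaystyle T}$, ${\displaystyle p}$, ${\displaystyle a}$, and the sample count are arbitrary illustrations):

```python
import random
import statistics

def biased_walk_displacement(T, p, a, rng):
    """Total displacement after T steps of length a: right (+a) with
    probability p, left (-a) with probability 1 - p."""
    return sum(a if rng.random() < p else -a for _ in range(T))

rng = random.Random(0)
T, p, a = 100, 0.7, 1.0
samples = [biased_walk_displacement(T, p, a, rng) for _ in range(20000)]

v = a * (2 * p - 1)            # bias (mean) of a single step
var1 = 4 * a**2 * p * (1 - p)  # variance of a single step
print(statistics.mean(samples), "should be close to", T * v)
print(statistics.variance(samples), "should be close to", T * var1)
```

The sample mean and variance should land near ${\displaystyle Tv}$ and ${\displaystyle T\sigma _{\rm {onestep}}^{2}}$, up to sampling noise.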
• First passage times: what is the distribution of the time until a random walk or a diffusion reaches a particular point?
• Example: generation of a neural action potential, as a first passage to a threshold. We model the neural voltage as changing in discrete steps ${\displaystyle \pm \Delta V}$, starting from the potential ${\displaystyle V_{0}}$, releasing a spike when ${\displaystyle V=V_{\rm {threshold}}}$, and then resetting the voltage to ${\displaystyle V_{0}}$. The right step probability is ${\displaystyle p}$ and the left step probability is ${\displaystyle 1-p}$. The bias of the walk is set by the current into the cell. What is the distribution of the time to the next spike, ${\displaystyle \tau }$?
• First passage for the right-biased walk. Let's suppose that ${\displaystyle p\approx 1}$. Then always stepping right is the most probable trajectory. Reaching the threshold requires ${\displaystyle N=(V_{\rm {threshold}}-V_{0})/\Delta V}$ steps. The steps are independent, and their durations add. As a result, ${\displaystyle \mu _{\tau }=N\mu _{\rm {onestep}}}$ and ${\displaystyle \sigma _{\tau }={\sqrt {N}}\sigma _{\rm {onestep}}}$, so that the coefficient of variation is ${\displaystyle {\rm {c.v.}}={\frac {\sigma _{\rm {onestep}}}{\mu _{\rm {onestep}}{\sqrt {N}}}}={\rm {const}}\,{\frac {1}{\sqrt {N}}}}$. Hence the coefficient of variation for an extremely right-biased walk is small. In particular, this means that this neuron, if driven to fire often, will fire regularly, with very little noise. It becomes nearly deterministic.
• First passage for the left-biased walk. Let's now suppose ${\displaystyle p\to 0}$. In this case, the system will return to ${\displaystyle V_{0}}$ many times before it finally reaches ${\displaystyle V_{\rm {threshold}}}$. Starting at ${\displaystyle V_{0}}$, the probability of going forward all the way without turning back is ${\displaystyle p^{N}}$. If we turn back once, we have to make one more forward step as well, resulting in ${\displaystyle p^{N+1}(1-p)\approx p^{N+1}}$. But there are ${\displaystyle N-1}$ different places where the step back could have occurred, so the total probability of any such 1-step-back trajectory is ${\displaystyle (N-1)p^{N+1}}$. Hence the most direct, no-turn trajectory is more probable than the 1-turn ones when ${\displaystyle Np\lesssim 1}$. Thus for small enough ${\displaystyle p}$ the direct trajectory is the most probable exit trajectory. The exit process then consists of a long wait for this unlikely event, followed by the event itself, so the waiting time to exit is approximately exponentially distributed, with ${\displaystyle {\rm {c.v.}}\approx 1}$.
• Overall: the coefficient of variation of the exit time changes from 1 to 0 as ${\displaystyle p}$ goes from 0 to 1. We have discussed these processes in detail in (Bel et al., 2010).
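This crossover of the coefficient of variation can be seen in a small simulation of the voltage walk. A Python sketch (the threshold of ${\displaystyle N=6}$ steps, the trial count, and the two values of ${\displaystyle p}$ are arbitrary illustrative choices, not from the lecture):

```python
import random
import statistics

def time_to_fire(p, N, rng):
    """Steps until a +/-1 voltage walk, reflected at 0, first reaches N."""
    v, t = 0, 0
    while v < N:
        t += 1
        if rng.random() < p:
            v += 1
        elif v > 0:
            v -= 1
    return t

def firing_cv(p, N=6, trials=3000, seed=1):
    """Coefficient of variation of the first-passage (firing) time."""
    rng = random.Random(seed)
    times = [time_to_fire(p, N, rng) for _ in range(trials)]
    return statistics.stdev(times) / statistics.mean(times)

print(firing_cv(0.9))   # right-biased: nearly regular firing, small c.v.
print(firing_cv(0.35))  # left-biased: nearly exponential waiting, c.v. near 1
```

The right-biased walk fires almost like clockwork, while the left-biased one produces a broad, nearly exponential spread of firing times.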
• E. coli chemotaxis as a biased random walk: going up the gradient of an attractant, time to a tumble increases. This is described very well in (Berg 2000, Berg and Brown 1972).
• Showing that the E. coli can find the greener pastures with this protocol: consider two nearby points ${\displaystyle x_{1},x_{2}}$, closer than the length of a single typical run, and let ${\displaystyle \tau (c)}$ be the mean waiting time to a tumble at a concentration ${\displaystyle c}$. A run covers the distance ${\displaystyle |x_{1}-x_{2}|}$ without a tumble with probability ${\displaystyle \exp[-|x_{1}-x_{2}|/(v\tau )]}$. Since runs toward higher attractant are extended, the run time for a ${\displaystyle 1\to 2}$ run is set by the concentration the bacterium swims into, so ${\displaystyle p_{1\to 2}\propto p(x_{1})\exp \left[-{\frac {|x_{1}-x_{2}|}{v\tau (c(x_{2}))}}\right]}$ and, similarly, ${\displaystyle p_{2\to 1}\propto p(x_{2})\exp \left[-{\frac {|x_{1}-x_{2}|}{v\tau (c(x_{1}))}}\right]}$. In steady state, ${\displaystyle p_{1\to 2}=p_{2\to 1}}$. Therefore, ${\displaystyle {\frac {p(x_{2})}{p(x_{1})}}=\exp \left[{\frac {|x_{1}-x_{2}|}{v\tau (c(x_{1}))}}-{\frac {|x_{1}-x_{2}|}{v\tau (c(x_{2}))}}\right]}$, so that ${\displaystyle p}$ is higher in the direction where ${\displaystyle \tau }$ increases.
• Simulations of E. coli trajectories and intro to Matlab. See Matlab simulation code.
• Generation of exponential random numbers: minus the log of a uniform random number on ${\displaystyle (0,1]}$ is an exponentially distributed random number.
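In code, this is the standard inverse-transform trick. A Python sketch (the mean of 3.0 and the sample count are arbitrary):

```python
import math
import random
import statistics

def exponential_from_uniform(mean, rng):
    """If U is uniform on (0, 1], then -mean * log(U) is exponentially
    distributed with the given mean (inverse-transform sampling)."""
    # 1 - random() lies in (0, 1], which avoids log(0).
    return -mean * math.log(1.0 - rng.random())

rng = random.Random(2)
samples = [exponential_from_uniform(3.0, rng) for _ in range(50000)]
print(statistics.mean(samples))   # close to 3.0
print(statistics.stdev(samples))  # for an exponential, std = mean
```

A quick check that both the mean and the standard deviation come out near 3.0 confirms the exponential shape, since only the exponential has std equal to its mean among the usual candidates.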
• First passage and first return
• Connections between the two moment generating functions: problems of first/eventual passage/return/location analysis are typically solved using moment generating functions. E.g., the probability of being at point ${\displaystyle x}$ at time ${\displaystyle t}$ equals the probability of first passing through ${\displaystyle x}$ at some ${\displaystyle \tau \in [0,t]}$, times the probability of being back at ${\displaystyle x}$ a time ${\displaystyle t-\tau }$ later, summed over ${\displaystyle \tau }$. Since this is a convolution, the corresponding generating functions multiply: ${\displaystyle MGF_{being\;at\;x}=MGF_{first\;passage}\,MGF_{return\;to\;x}}$.
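In the time domain this identity is a convolution, which can be verified directly by dynamic programming for a simple ±1 walk. A Python sketch (the target ${\displaystyle x=2}$ and the time horizon are arbitrary choices):

```python
from collections import defaultdict

def occupation_and_first_passage(x_target, t_max, p=0.5):
    """For a +/-1 random walk started at 0, compute by dynamic programming
    u[t] = P(walker is at x_target at time t) and
    f[t] = P(walker first reaches x_target at time t)."""
    dist = {0: 1.0}       # free walk, for u
    absorbed = {0: 1.0}   # walk absorbed at x_target, for f
    u = [1.0 if x_target == 0 else 0.0]
    f = [0.0]
    for _ in range(t_max):
        nxt, nxt_abs, hit = defaultdict(float), defaultdict(float), 0.0
        for pos, pr in dist.items():
            nxt[pos + 1] += p * pr
            nxt[pos - 1] += (1 - p) * pr
        for pos, pr in absorbed.items():
            for step, w in ((1, p), (-1, 1 - p)):
                if pos + step == x_target:
                    hit += w * pr
                else:
                    nxt_abs[pos + step] += w * pr
        dist, absorbed = dict(nxt), dict(nxt_abs)
        u.append(dist.get(x_target, 0.0))
        f.append(hit)
    return u, f

t_max = 12
u2, f2 = occupation_and_first_passage(2, t_max)  # being at x = 2
u0, _ = occupation_and_first_passage(0, t_max)   # being back at the start
# Renewal identity: P(at x at t) = sum over first-passage times s of
# P(first passage at s) * P(back at x again, t - s steps later).
for t in range(t_max + 1):
    conv = sum(f2[s] * u0[t - s] for s in range(t + 1))
    assert abs(u2[t] - conv) < 1e-12
print("renewal identity verified up to t =", t_max)
```

Since the identity holds time step by time step, the generating functions of the two sequences on the right multiply to give the generating function of the left-hand side.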
• Return and passage probabilities in different dimensions: mean return times diverge in all dimensions; the probability of eventual return is 1 in 1-d and 2-d, but only about 0.34 in 3-d.
• Return times and Berg-von Hippel transcription factor searching for a binding site. What is an optimal strategy for a transcription factor to search for a binding site?
• For a diffusive process, the radius of the explored region grows as ${\displaystyle r\propto {\sqrt {T}}}$. The number of distinct sites in the explored region is ${\displaystyle V\propto T^{d/2}}$. But the total number of site visits is ${\displaystyle n\propto T}$. Hence each site in the region is visited about ${\displaystyle n/V\propto T^{1-d/2}}$ times: in 1-d each site is visited many times, in 2-d each site is (barely) visited, and in 3-d very few sites are ever visited.
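A quick numerical illustration of this counting argument in Python (the step count and seed are arbitrary): in 1-d only a vanishing fraction of steps land on never-visited sites, while in 3-d most steps do.

```python
import random

def distinct_sites_fraction(dim, T, seed):
    """Fraction of T lattice-walk steps that land on new, never-visited
    sites (simple random walk on the dim-dimensional integer lattice)."""
    rng = random.Random(seed)
    pos = (0,) * dim
    visited = {pos}
    for _ in range(T):
        axis = rng.randrange(dim)
        step = rng.choice((-1, 1))
        pos = tuple(c + (step if i == axis else 0) for i, c in enumerate(pos))
        visited.add(pos)
    return len(visited) / T

T = 20000
f1 = distinct_sites_fraction(1, T, 0)
f3 = distinct_sites_fraction(3, T, 0)
print(f1)  # 1-d: ~ T^(-1/2), a tiny fraction -- heavy re-exploration
print(f3)  # 3-d: an O(1) fraction -- most sites seen at most once
```

This matches the ${\displaystyle T^{1-d/2}}$ estimate: the 1-d walker keeps revisiting the same sites, the 3-d walker almost never does.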
• Why would a 1-d search fail? Too much time is spent re-exploring old territory -- the walker always comes back.
• Why would a 3-d search fail? Very few sites are ever visited, so the TF may never come close to its target.
• Why is the combined 1-d/3-d search faster? The protein can move quickly between patches (in 3-d), and then explore each patch thoroughly in the 1-d mode. Details of the 1-d/3-d search (following Slutsky and Mirny, 2004):
• Search partitioned into 1-3d search rounds.
• Total search time is the sum of search times in both modes: ${\displaystyle t=n(\tau _{3}+\tau _{1})}$, where ${\displaystyle n}$ is the number of rounds.
• In the 3-d search mode the protein almost never comes back to the same search patch.
• In 1-d search the protein explores ${\displaystyle N_{1}}$ sites. Hence ${\displaystyle n=M/N_{1}}$, where ${\displaystyle M}$ is the DNA length.
• We get ${\displaystyle t=(M/N_{1})(\tau _{3}+\tau _{1})}$.
• ${\displaystyle N_{1}={\sqrt {16D_{1}\tau _{1}/\pi }}}$ for this model, where ${\displaystyle D_{1}}$ is the 1d diffusion constant. In general, we get ${\displaystyle N_{1}\propto {\sqrt {D_{1}\tau _{1}}}}$.
• Thus ${\displaystyle t={\frac {M}{\sqrt {16D_{1}\tau _{1}/\pi }}}(\tau _{3}+\tau _{1})}$.
• Is there an optimal time to spend in the 1-d search mode? Differentiating ${\displaystyle t}$ w.r.t. ${\displaystyle \tau _{1}}$ and setting the derivative to zero, we get ${\displaystyle \tau _{1}=\tau _{3}}$. The transcription factor should spend equal amounts of time in the 1-d and 3-d search modes. Slutsky and Mirny (2004) review experimental confirmations of this.
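The optimum is easy to confirm numerically by scanning ${\displaystyle \tau _{1}}$ with ${\displaystyle \tau _{3}}$ fixed. A Python sketch (the constants ${\displaystyle M}$, ${\displaystyle D_{1}}$, and ${\displaystyle \tau _{3}}$ are arbitrary):

```python
import math

def total_search_time(tau1, tau3, M=1.0, D1=1.0):
    """t = (M / N1) * (tau3 + tau1), with N1 = sqrt(16 * D1 * tau1 / pi)."""
    N1 = math.sqrt(16 * D1 * tau1 / math.pi)
    return M / N1 * (tau3 + tau1)

tau3 = 2.0
grid = [i / 1000 for i in range(1, 8001)]  # tau1 from 0.001 to 8.0
best = min(grid, key=lambda t1: total_search_time(t1, tau3))
print(best)  # the minimizing tau1 sits at tau1 = tau3
```

Since ${\displaystyle t\propto (\tau _{3}+\tau _{1})/{\sqrt {\tau _{1}}}}$, the prefactors ${\displaystyle M}$ and ${\displaystyle D_{1}}$ shift the minimum value of ${\displaystyle t}$ but not its location.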
• Wiener process: A good model of a random walk at long temporal and spatial scales is diffusion. That is, ${\displaystyle x(t)\sim N(vt,\sigma ^{2}t)}$. It's useful to represent such an ${\displaystyle x}$ as a solution of the differential equation ${\displaystyle {\frac {dx}{dt}}=v+\sigma \eta (t)}$, where ${\displaystyle \eta (t)}$ is a Gaussian random variable with zero mean and the covariance ${\displaystyle \langle \eta (t)\eta (t')\rangle =\delta (t-t')}$. See the Homework problem No. 1 for the derivation of this. The noise ${\displaystyle \eta }$ is called Gaussian white noise, and its integral ${\displaystyle W(t)=\int _{0}^{t}\eta (t')\,dt'}$ is the Wiener process, after Norbert Wiener, who formalized it.
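This diffusion is straightforward to simulate by discretizing time; a Python sketch checking that ${\displaystyle x(t)}$ indeed ends up with mean ${\displaystyle vt}$ and variance ${\displaystyle \sigma ^{2}t}$ (all parameter values here are arbitrary illustrations):

```python
import random
import statistics

def simulate_diffusion(v, sigma, t_total, dt, rng):
    """Integrate dx/dt = v + sigma * eta(t) with the finite-difference
    update x(t + dt) = x(t) + v * dt + sigma * sqrt(dt) * nu,
    where nu is a standard Gaussian."""
    x = 0.0
    for _ in range(int(round(t_total / dt))):
        x += v * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(3)
v, sigma, t_total = 1.5, 0.8, 4.0
xs = [simulate_diffusion(v, sigma, t_total, 0.02, rng) for _ in range(2000)]
print(statistics.mean(xs))      # close to v * t_total = 6.0
print(statistics.variance(xs))  # close to sigma**2 * t_total = 2.56
```

The ${\displaystyle {\sqrt {dt}}}$ scaling of the noise term is the essential point: it is what makes the variance grow linearly in time, independently of the step size ${\displaystyle dt}$.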

## Homework (due Sep 17)

Note that from now on, we will have a lot of numerical simulations in our homeworks. Unless you have access to Matlab (university owned computers do), I suggest that you download Octave as described above and do all of these simulations in Octave. Save your programs -- we will keep reusing some pieces of them later in the course.

1. Suppose the variable ${\displaystyle x(t)}$ undergoes a diffusive motion with the mean drift of ${\displaystyle \mu (t)=At}$ and the variance of ${\displaystyle \sigma ^{2}(t)=B^{2}t}$. I would like to numerically simulate this stochastic dynamics on time scales ${\displaystyle \Delta t}$ much larger than the time of a single hop. For this, I write that ${\displaystyle x(t+\Delta t)=x(t)+a\,\Delta t+b\,\nu {\sqrt {\Delta t}}}$, where ${\displaystyle a,b}$ are deterministic numbers, and ${\displaystyle \nu }$ is a Gaussian random number with zero mean and unit variance. Find the relation between ${\displaystyle a,b}$ and ${\displaystyle A,B}$. Now let's take ${\displaystyle \Delta t\to 0}$, move ${\displaystyle x(t)}$ to the left of the equal sign, and divide everything by ${\displaystyle \Delta t}$. We will get ${\displaystyle {\frac {dx}{dt}}=a+\eta (t)}$, where ${\displaystyle a}$ is deterministic, and ${\displaystyle \eta }$ is a random number. Show that ${\displaystyle E(\eta )=0}$, ${\displaystyle E(\eta ^{2})={\frac {b^{2}}{\Delta t}}}$, and ${\displaystyle E(\eta (t)\eta (t'))=0}$ if ${\displaystyle t\neq t'}$. This is a very interesting differential equation: it has a random term of infinite variance on its right-hand side. However, the random terms are independent from one moment of time to the next, and the infinities cancel, leaving only a small random component over long times. As a shorthand, we write such equations as (recall our definition of the ${\displaystyle \delta }$-function): ${\displaystyle \left\{{\begin{array}{l}{\frac {dx}{dt}}=a+\eta (t),\\\langle \eta (t)\eta (t')\rangle =b^{2}\delta (t-t')\end{array}}\right.}$. This is called a stochastic differential equation (SDE); if ${\displaystyle b=1}$, then ${\displaystyle \eta (t)}$ is Gaussian white noise, and its integral is the Wiener process. While these definitions might sound confusing, especially with the infinities floating around, they will turn out to be very useful later.
To avoid confusion, whenever we see such an SDE, we always interpret it as ${\displaystyle x}$ obeying a finite difference equation above, ${\displaystyle x(t+\Delta t)=x(t)+a\,\Delta t+b\,\nu {\sqrt {\Delta t}}}$.
2. In class, we have discussed the first passage time in the random walk model of action potential generation. Let's compare our findings to numerical simulations. Suppose a neuron starts at rest with the voltage ${\displaystyle V_{0}<0}$. Every time step ${\displaystyle \Delta t}$, its voltage can either go up or down by ${\displaystyle \Delta V}$, with probabilities ${\displaystyle p}$ and ${\displaystyle 1-p}$, respectively. If the voltage is at ${\displaystyle V_{0}}$, it cannot be lowered any further. If it reaches ${\displaystyle V_{\rm {threshold}}}$, then the neuron releases an action potential (that is, it fires). Write an Octave program to simulate this random walk for arbitrary ${\displaystyle p,\,V_{0},\,V_{\rm {threshold}},\,\Delta V,\,\Delta t}$ and to record the time it takes the neuron to fire. Run this program a sufficient number of times to estimate the mean and the standard deviation of the time to firing for ${\displaystyle \Delta t=1\,{\rm {ms}}}$, ${\displaystyle \Delta V=5\,{\rm {mV}}}$, ${\displaystyle V_{0}=-65\,{\rm {mV}}}$, ${\displaystyle V_{\rm {threshold}}=-35\,{\rm {mV}}}$, and for the values ${\displaystyle p=[0.1:0.1:0.9]}$. Plot the curve of the coefficient of variation of the firing time as a function of ${\displaystyle p}$. Is it similar to what we saw in class? Note that Octave is very inefficient at executing for or while loops. However, it is very fast when operating on entire arrays (vectors) of numbers. Knowing this will help you write programs that run faster. The programs we wrote during the Wed and Thu study sessions can be downloaded here.
3. For Graduate Students and especially devoted Undergraduates (read: an Extra Credit assignment): Let's verify whether what I told you in class about random walk return probabilities is correct. We will solve Problem 1.1.17 in Grinstead and Snell book. Mathematicians have been known to get some of the best ideas while sitting in a cafe, riding on a bus, or strolling in the park. In the early 1900s the famous mathematician George Polya lived in a hotel near the woods in Zurich. He liked to walk in the woods and think about mathematics. Polya describes the following incident:
At the hotel there lived also some students with whom I usually took my meals and had friendly relations. On a certain day one of them expected the visit of his fiancee, what (sic) I knew, but I did not foresee that he and his fiancee would also set out for a stroll in the woods, and then suddenly I met them there. And then I met them the same morning repeatedly, I don’t remember how many times, but certainly much too often and I felt embarrassed: It looked as if I was snooping around which was, I assure you, not the case.
This set him to thinking about whether random walkers were destined to meet. Polya considered random walkers in one, two, and three dimensions. In one dimension, he envisioned the walker on a very long street. At each intersection the walker flips a fair coin to decide which direction to walk next. In two dimensions, the walker is walking on a grid of streets, and at each intersection he chooses one of the four possible directions with equal probability. In three dimensions (we might better speak of a random climber), the walker moves on a three-dimensional grid, and at each intersection there are now six different directions that the walker may choose, each with equal probability.
• Write a program to simulate a random walk in one dimension starting at 0. Have your program print out the lengths of the times between returns to the starting point (returns to 0). See if you can guess from this simulation the answer to the following question: Will the walker always return to his starting point eventually or might he drift away forever?
• The paths of two walkers in two dimensions who meet after n steps can be considered to be a single path that starts at (0, 0) and returns to (0, 0) after 2n steps. This means that the probability that two random walkers in two dimensions meet is the same as the probability that a single walker in two dimensions ever returns to the starting point. Thus the question of whether two walkers are sure to meet is the same as the question of whether a single walker is sure to return to the starting point. Write a program to simulate a random walk in two dimensions and see if you think that the walker is sure to return to (0, 0). If so, Pólya would be sure to keep meeting his friends in the park. Perhaps by now you have conjectured the answer to the question: Is a random walker in one or two dimensions sure to return to the starting point? Pólya answered this question for dimensions one, two, and three. He established the remarkable result that the answer is yes in one and two dimensions and no in three dimensions.
• Write a program to simulate a random walk in three dimensions and see whether, from this simulation and the results of (a) and (b), you could have guessed Pólya’s result.