<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_380%2C_2011%3A_Lecture_8</id>
	<title>Physics 380, 2011: Lecture 8 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_380%2C_2011%3A_Lecture_8"/>
	<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_380,_2011:_Lecture_8&amp;action=history"/>
	<updated>2026-05-17T09:40:08Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_380,_2011:_Lecture_8&amp;diff=311&amp;oldid=prev</id>
		<title>Ilya: 1 revision imported</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_380,_2011:_Lecture_8&amp;diff=311&amp;oldid=prev"/>
		<updated>2018-07-04T16:28:41Z</updated>

		<summary type="html">&lt;p&gt;1 revision imported&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 16:28, 4 July 2018&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_380,_2011:_Lecture_8&amp;diff=310&amp;oldid=prev</id>
		<title>nemenman&gt;Ilya: /* Main Lecture */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_380,_2011:_Lecture_8&amp;diff=310&amp;oldid=prev"/>
		<updated>2011-09-30T15:06:04Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Main Lecture&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{PHYS380-2011}}&lt;br /&gt;
&lt;br /&gt;
Today we are continuing the discussion of randomness in biological networks that we started in the previous class.&lt;br /&gt;
I suggest you read the article by Paulsson (2004) on the main course page and the Wiener process Wikipedia article linked from there. You may also want to read Schneidman et al., 1998, Kepler and Elston, 2001, Pedraza and van Oudenaarden, 2005, and Gillespie, 2007.&lt;br /&gt;
====Warmup question====&lt;br /&gt;
#We discussed in the last class that the noise standard deviation in biochemical networks is roughly proportional to the square root of the number of signaling molecules involved. Let's suppose the cell produces many copies of the LacY/Z proteins per mRNA. What determines the coefficient of variation of the number of produced proteins: the average number of proteins, or the average number of mRNAs?&lt;br /&gt;
#(courtesy of Schneidman et al., 2007). A similar question has been considered in neuroscience. Suppose we have a neuron with a very large number of ion channels. I will say a few words in class about how these channels work. Roughly speaking, the channels randomly transition among multiple open, closed, and inactivated states. In the open state, ions go through; in the closed state, they don't. In the inactivated state, ions don't go through either, and switching back to open is impossible for a long time. So channels flicker between open and closed, and if they stay open long enough, they may switch to the inactivated state, where they will spend a long time. The flux of ions through the channels may shift the bias from closed to open. In addition to the channels, there is a weak conductance through the membrane itself, in the direction opposite to the channels. When a large number of channels is open, the voltage gain due to the influx of ions is higher than the loss through the membrane, the voltage grows quickly, and we call this an action potential, or a spike. What is the dependence of the time to the next spike on the number of channels in a patch of the neural membrane?&lt;br /&gt;
&lt;br /&gt;
====Main Lecture====&lt;br /&gt;
*The easiest way to study such noise-propagation phenomena is the Langevin, or Wiener, approach.&lt;br /&gt;
*Suppose the variable &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt; undergoes diffusive motion with the mean drift of &amp;lt;math&amp;gt;\mu(t)=At&amp;lt;/math&amp;gt; and the variance of &amp;lt;math&amp;gt;\sigma^2(t)=B^2 t&amp;lt;/math&amp;gt;. I would like to simulate this stochastic dynamics numerically on time scales &amp;lt;math&amp;gt;\Delta t&amp;lt;/math&amp;gt; much larger than the time of a single hop.  For this, I write &amp;lt;math&amp;gt;x(t+\Delta t)=x(t)+a\,\Delta t+b\,\nu\sqrt{\Delta t}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;a,b&amp;lt;/math&amp;gt; are deterministic numbers, and &amp;lt;math&amp;gt;\nu&amp;lt;/math&amp;gt; is a Gaussian random number with zero mean and unit variance. Find the relation between &amp;lt;math&amp;gt;a, b&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A,B&amp;lt;/math&amp;gt;. Now let's take &amp;lt;math&amp;gt;\Delta t\to0&amp;lt;/math&amp;gt;, move &amp;lt;math&amp;gt;x(t)&amp;lt;/math&amp;gt; to the left of the equal sign, and divide everything by &amp;lt;math&amp;gt;\Delta t&amp;lt;/math&amp;gt;. We get &amp;lt;math&amp;gt;\frac{dx}{dt}=a+\eta(t)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt; is deterministic, and &amp;lt;math&amp;gt;\eta&amp;lt;/math&amp;gt; is a random variable with &amp;lt;math&amp;gt;E(\eta)=0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;E(\eta^2)=\frac{b^2}{\Delta t}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;E(\eta(t)\eta(t'))=0&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;t\neq t'&amp;lt;/math&amp;gt;. &lt;br /&gt;
*This is a very interesting differential equation that has a random term of infinite variance on its right-hand side. However, the random terms are independent from one moment of time to the next, and the infinities cancel, leaving only a small random component over long times. As a shorthand, we write such equations as (recall our definition of the &amp;lt;math&amp;gt;\delta&amp;lt;/math&amp;gt;-function):  &amp;lt;math&amp;gt;\left\{\begin{array}{l}\frac{dx}{dt}=a+\eta(t),\\\langle\eta(t)\eta(t')\rangle=b^2\delta(t-t')\end{array}\right.&amp;lt;/math&amp;gt;. This is called a stochastic differential equation (SDE), and, if &amp;lt;math&amp;gt;b=1&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\eta(t)&amp;lt;/math&amp;gt; is called white noise, and its integral is the Wiener process. While these definitions might sound confusing, especially with the infinities floating around, they will turn out to be very useful later. To avoid confusion, whenever we see such an SDE, we always interpret it as &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; obeying the finite difference equation above, &amp;lt;math&amp;gt;x(t+\Delta t)=x(t)+a\,\Delta t+b\,\nu\sqrt{\Delta t}&amp;lt;/math&amp;gt;.&lt;br /&gt;
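The finite-difference interpretation above is exactly the Euler-Maruyama scheme, which we can run directly. A minimal Python/NumPy sketch; the drift a and noise amplitude b are illustrative values, not from the lecture:

```python
# Euler-Maruyama sketch for dx/dt = a + eta(t), noise correlations b^2 delta(t-t').
# Values of a and b are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.5                      # drift and noise amplitude (illustrative)
dt, n_steps, n_paths = 1e-3, 1_000, 2_000

x = np.zeros(n_paths)                # many independent trajectories at once
for _ in range(n_steps):
    nu = rng.standard_normal(n_paths)    # zero mean, unit variance
    x += a * dt + b * nu * np.sqrt(dt)   # x(t+dt) = x(t) + a dt + b nu sqrt(dt)

t = n_steps * dt                     # total time t = 1
print(x.mean())                      # approx a*t  = 1.0
print(x.var())                       # approx b^2*t = 0.25
```

Across the ensemble of paths, the sample mean and variance grow linearly in time, as the drift and diffusion coefficients prescribe.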
*The Langevin (Wiener) and the Fokker-Planck (FP) representations are equivalent. For the FP equation of the birth-death process we discussed earlier, with production rate &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; and degradation rate &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt;, the corresponding Langevin equation is:&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{dn}{dt}=-rn+\alpha+\sqrt{rn+\alpha}\,\eta&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\langle\eta(t)\eta(t')\rangle=\delta(t-t')&amp;lt;/math&amp;gt;.&lt;br /&gt;
*In general, if a chemical species is produced in a reaction with rate &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; and degraded in a reaction with rate &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, and all reactions are independent, we get &amp;lt;math&amp;gt;\frac{dx}{dt}=F-G+\sqrt{F+G}\,\eta&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\eta&amp;lt;/math&amp;gt; is a unit-strength white noise.&lt;br /&gt;
*How do we simulate the Langevin formulation? We discuss this in class.&lt;br /&gt;
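One option is to apply the same Euler-Maruyama update to the birth-death Langevin equation above. A sketch with illustrative parameter values; at steady state the mean and the variance should both approach alpha/r, the Poisson result:

```python
# Euler-Maruyama for dn/dt = alpha - r*n + sqrt(alpha + r*n)*eta(t).
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
alpha, r = 100.0, 1.0                # production and degradation rates
dt, n_steps, n_paths = 1e-3, 20_000, 2_000

n = np.full(n_paths, alpha / r)      # start at the deterministic fixed point
for _ in range(n_steps):
    drift = (alpha - r * n) * dt
    # clip keeps the argument of sqrt nonnegative for rare large excursions
    noise = np.sqrt(np.clip(alpha + r * n, 0.0, None) * dt)
    n += drift + noise * rng.standard_normal(n_paths)

print(n.mean())                      # approx alpha/r = 100
print(n.var())                       # approx alpha/r = 100 (Poisson-like)
```

Note that this continuous approximation is only sensible when copy numbers are large; for a handful of molecules one would use an exact scheme such as Gillespie's algorithm, mentioned in the reading.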
*What happens if reactions are correlated, as in the protein-mRNA model? Let's suppose that, deterministically, &amp;lt;math&amp;gt;\left\{\begin{array}{l}\frac{dx}{dt}=Ky-r_xx\\\frac{dy}{dt}=\alpha-r_yy\end{array}\right.&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is the protein and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; is the mRNA copy number.&lt;br /&gt;
*Noise in &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; propagates into &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left\{\begin{array}{l}\frac{dx}{dt}=Ky-r_xx+\eta_x\\\frac{dy}{dt}=\alpha-r_yy+\eta_y\end{array}\right.&amp;lt;/math&amp;gt;, with &amp;lt;math&amp;gt;\langle\eta_y(t') \eta_y(t)\rangle=(\alpha+r_yy)\delta(t-t')&amp;lt;/math&amp;gt;. Similarly, since each mRNA produces proteins at rate &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\langle\eta_x(t')\eta_x(t)\rangle=(Ky+r_xx)\delta(t-t')&amp;lt;/math&amp;gt;. However, this noise strength now depends on &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;, which fluctuates itself, so the long-term behavior is not that simple. The propagation depends on the strength of the coupling between the variables and on their time scales.&lt;br /&gt;
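As a numerical check of this propagation, one can integrate the two coupled Langevin equations with the noise strengths just stated, and compare the measured protein noise with the intrinsic-plus-propagated prediction below. All parameter values here are illustrative assumptions:

```python
# Coupled Euler-Maruyama for the protein (x) / mRNA (y) Langevin system:
#   dx/dt = K*y - rx*x + eta_x,  noise strength (K*y + rx*x)
#   dy/dt = alpha - ry*y + eta_y, noise strength (alpha + ry*y)
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
alpha, ry = 10.0, 1.0                 # mRNA production and degradation rates
K, rx = 10.0, 0.2                     # translation rate per mRNA, protein decay
dt, n_steps, n_paths = 1e-3, 60_000, 1_000

y = np.full(n_paths, alpha / ry)              # start at the fixed points
x = np.full(n_paths, K * alpha / (ry * rx))
for _ in range(n_steps):
    ey = np.sqrt(np.clip(alpha + ry * y, 0.0, None) * dt)
    ex = np.sqrt(np.clip(K * y + rx * x, 0.0, None) * dt)
    y_new = y + (alpha - ry * y) * dt + ey * rng.standard_normal(n_paths)
    x += (K * y - rx * x) * dt + ex * rng.standard_normal(n_paths)
    y = y_new

cv2_x = x.var() / x.mean() ** 2
cv2_y = y.var() / y.mean() ** 2
# intrinsic term plus noise propagated from y, filtered by the time scales
pred = 1.0 / x.mean() + cv2_y * rx / (rx + ry)
print(cv2_x, pred)                    # the two should roughly agree
```

With these parameters the slow protein (lifetime 1/rx) time-averages the fast mRNA fluctuations, so only a fraction rx/(rx+ry) of the mRNA noise survives in the protein.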
*More generally, following the notation of Paulsson 2004, &amp;lt;math&amp;gt;{\rm c.v.}^2_x=\frac{1}{\langle x\rangle H_{xx}}+{\rm c.v.}^2_y\frac{H_{xy}^2}{H_{xx}^2}\frac{H_{xx}/\tau_x}{H_{yy}/\tau_y+H_{xx}/\tau_x}&amp;lt;/math&amp;gt;, where the &amp;lt;math&amp;gt;\tau&amp;lt;/math&amp;gt;'s are the average molecule lifetimes, &amp;lt;math&amp;gt;H_{xy}=\partial \log(G_x/F_x)/\partial \log y&amp;lt;/math&amp;gt;, and similarly for &amp;lt;math&amp;gt;H_{xx}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;H_{yy}&amp;lt;/math&amp;gt;.&lt;br /&gt;
*For our protein-mRNA system, &amp;lt;math&amp;gt;\tau_x=1/r_x&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\tau_y=1/r_y&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;H_{xx}=H_{yy}=1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;H_{xy}=-1&amp;lt;/math&amp;gt;. So &amp;lt;math&amp;gt;{\rm c.v.}^2_x=\frac{1}{\langle x\rangle}+{\rm c.v.}^2_y\frac{r_x}{r_y+r_x}&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>nemenman&gt;Ilya</name></author>
		
	</entry>
</feed>