<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_434%2C_2012%3A_Lecture_5</id>
	<title>Physics 434, 2012: Lecture 5 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_434%2C_2012%3A_Lecture_5"/>
	<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2012:_Lecture_5&amp;action=history"/>
	<updated>2026-05-17T09:39:54Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2012:_Lecture_5&amp;diff=425&amp;oldid=prev</id>
		<title>Ilya: 1 revision imported</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2012:_Lecture_5&amp;diff=425&amp;oldid=prev"/>
		<updated>2018-07-04T16:28:42Z</updated>

		<summary type="html">&lt;p&gt;1 revision imported&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 16:28, 4 July 2018&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2012:_Lecture_5&amp;diff=424&amp;oldid=prev</id>
		<title>nemenman&gt;Ilya: /* Main Lecture */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2012:_Lecture_5&amp;diff=424&amp;oldid=prev"/>
		<updated>2012-09-18T01:53:42Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Main Lecture&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{PHYS434-2012}}&lt;br /&gt;
&lt;br /&gt;
We are continuing our review of some basic concepts of probability theory, such as probability distributions, conditionals, marginals, expectations, etc. We will discuss the central limit theorem and will derive some properties of random walks. Finally, we will study some specific useful probability distributions. In the course of this whole lecture block, we should be thinking about ''E. coli'' chemotaxis in the background -- all of these concepts will be applicable.&lt;br /&gt;
&lt;br /&gt;
A very good introduction to probability theory can be found in &lt;br /&gt;
[http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/pdf.html Introduction to Probability] by CM Grinstead and JL Snell.&lt;br /&gt;
&lt;br /&gt;
====Main Lecture====&lt;br /&gt;
*We are still answering the question: what will the distribution of ''E. coli'' positions be if it starts at 0 and moves for time &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;?&lt;br /&gt;
*Using the addition of CGFs, we show that '''when independent random variables add, their cumulants, and, in particular, their means and variances add.'''&lt;br /&gt;
*''Why does measuring a quantity many times improve the measurement?''&lt;br /&gt;
**Frequencies and probabilities: '''Law of large numbers'''. If &amp;lt;math&amp;gt;S=\frac{1}{n}\sum x_i&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\mu_S=\mu_x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma^2_S=\sigma^2_x/n&amp;lt;/math&amp;gt;. This follows from the addition of means and variances. So, we can calculate the mean and the variance of the ''E. coli'' motion.&lt;br /&gt;
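This scaling can be checked numerically; here is a minimal sketch in Python with NumPy (the distribution, sample sizes, and random seed are arbitrary choices, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 2.0, 9.0      # mean and variance of each x_i (arbitrary)
n = 100                    # number of measurements averaged per trial
trials = 200_000

# S = (1/n) * sum(x_i): draw many independent sample means at once
x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
S = x.mean(axis=1)

print(S.mean())            # approaches mu_x = 2.0
print(S.var())             # approaches sigma2_x / n = 0.09
```

The empirical mean of the sample means stays at mu_x, while their variance shrinks by the factor n, exactly as the addition of means and variances predicts.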
*'''Warmup question:''' Now consider an idealized spherical cell of radius &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; whose entire surface is covered with disk-like receptors of radius &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt;. This is a reasonably good model for an immune cell, such as a mast cell. There are &amp;lt;math&amp;gt;N\approx 4\pi A^2/(\pi a^2)=4(A/a)^2&amp;lt;/math&amp;gt; such receptors. Using the Berg-Purcell limit from the first lecture, we know that the accuracy of determination of the concentration by a single receptor is &amp;lt;math&amp;gt;\delta C/C \sim 1/\sqrt{aCDt}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; is the diffusion coefficient and &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is the observation time. Since we have &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; receptors, the law of large numbers gives the overall accuracy of the concentration determination by the cell as &amp;lt;math&amp;gt;\delta C/C \sim 1/\sqrt{aCDtN}\propto 1/\sqrt{CDtA^2/a}&amp;lt;/math&amp;gt;. On the other hand, if we consider the entire cell as a single large receptor of size &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;, the Berg-Purcell limit gives &amp;lt;math&amp;gt;\delta C/C \sim 1/\sqrt{ACDt}&amp;lt;/math&amp;gt;. Can you reconcile the difference between these two estimates?&lt;br /&gt;
*'''Central limit theorem''': the sum of many i.i.d. random variables (with finite variances) approaches a certain distribution, which we call a Gaussian distribution. This is one of the most remarkable results in probability theory; it explains why experimental noise is often Gaussian distributed. More precisely, suppose &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; are i.i.d. random variables with mean &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; and variance &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt;. Then the CLT says that &amp;lt;math&amp;gt;S_N=\frac{1}{\sqrt{N}}\sum_{i=1}^N \frac{x_i-\mu}{\sigma}=\frac{1}{\sqrt{N}}\sum_{i=1}^N \xi_i&amp;lt;/math&amp;gt; is distributed according to &amp;lt;math&amp;gt;N(0,1)&amp;lt;/math&amp;gt; (called the ''standard'' normal distribution), provided &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is sufficiently large. We prove this assuming that none of the cumulants of the i.i.d. variables is infinite.&lt;br /&gt;
**The same holds if the &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; variables have different variances and means, but all variances are bounded. Convergence will be slower though.&lt;br /&gt;
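CLT.m (linked below) is the MATLAB demo used in class; an equivalent quick check in Python/NumPy, standardizing sums of exponential variables (array sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 1000, 100_000

# exponential(1) has mu = 1 and sigma = 1, so xi_i = x_i - 1
x = rng.exponential(1.0, size=(trials, N))
S_N = (x - 1.0).sum(axis=1) / np.sqrt(N)

# for a standard normal: mean 0, variance 1, vanishing third moment
print(S_N.mean(), S_N.var(), (S_N**3).mean())
```

Even though each exponential variable is strongly skewed, the standardized sum already has nearly normal low-order moments at this N.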
*Only the first two cumulants of the central limit distribution are nonzero. What is this distribution?&lt;br /&gt;
**It's a Gaussian with the given mean and variance. We show this by explicitly computing the CGF of a Gaussian.&lt;br /&gt;
**Numerical simulation of the CLT for exponential and binary distributions: [[media:CLT.m| CLT.m]]&lt;br /&gt;
**''E. coli'' motion has a Gaussian distribution of end points. Moreover, we will show in a homework that &amp;lt;math&amp;gt;\langle x^2\rangle\propto t&amp;lt;/math&amp;gt; for ''E. coli''. It's a ''diffusive'' motion as well, just like diffusion of small molecules. We demonstrate this by numerical simulations (homework).&lt;br /&gt;
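The diffusive scaling of the end points can be demonstrated with a caricature random walk (steps of +1 or -1; walker count, duration, and seed are arbitrary choices, not the actual homework simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
walkers, T = 50_000, 400

# each step is +1 or -1 with equal probability
steps = rng.choice([-1.0, 1.0], size=(walkers, T))
x = steps.cumsum(axis=1)        # position of every walker at every time
msd = (x**2).mean(axis=0)       # mean squared displacement vs. time

# diffusive motion: msd grows linearly, here msd(t) is approximately t
print(msd[99], msd[199], msd[399])
```

Doubling the elapsed time doubles the mean squared displacement, the hallmark of diffusion.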
*Additional distributions to remember:&lt;br /&gt;
**normal: diffusive motion &amp;lt;math&amp;gt;P(x)={N}(\mu,\sigma^2)=\frac{1}{\sqrt{2\pi}\sigma}\exp{\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]}&amp;lt;/math&amp;gt;&lt;br /&gt;
**&amp;lt;math&amp;gt;\delta&amp;lt;/math&amp;gt;-distribution: deterministic limit &amp;lt;math&amp;gt;\delta(x-\mu)=\lim_{\sigma\to0}\frac{1}{\sqrt{2\pi}\sigma}\exp{\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]}&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\delta(0)\to\infty,\;\delta(x\neq0)=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
**multivariate normal: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix, &amp;lt;math&amp;gt;\Sigma_{ij}=\langle(x_i-\mu_i)(x_j-\mu_j)\rangle&amp;lt;/math&amp;gt;:  &amp;lt;math&amp;gt;&lt;br /&gt;
\Sigma&lt;br /&gt;
= \left[\begin{array}{llll}&lt;br /&gt;
 \langle(x_1 - \mu_1)(x_1 - \mu_1)\rangle &amp;amp; \langle(x_1 - \mu_1)(x_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(x_1 - \mu_1)(x_d - \mu_d)\rangle \\ &lt;br /&gt;
 \langle(x_2 - \mu_2)(x_1 - \mu_1)\rangle &amp;amp; \langle(x_2 - \mu_2)(x_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(x_2 - \mu_2)(x_d - \mu_d)\rangle \\ &lt;br /&gt;
 \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\ &lt;br /&gt;
 \langle(x_d - \mu_d)(x_1 - \mu_1)\rangle &amp;amp; \langle(x_d - \mu_d)(x_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(x_d - \mu_d)(x_d - \mu_d)\rangle&lt;br /&gt;
\end{array}\right].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;/div&gt;</summary>
		<author><name>nemenman&gt;Ilya</name></author>
		
	</entry>
</feed>