<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_434%2C_2014%3A_Continuous_randomness</id>
	<title>Physics 434, 2014: Continuous randomness - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://nemenmanlab.org/~ilya/index.php?action=history&amp;feed=atom&amp;title=Physics_434%2C_2014%3A_Continuous_randomness"/>
	<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;action=history"/>
	<updated>2026-05-17T08:43:19Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3355&amp;oldid=prev</id>
		<title>Ilya: /* Specific probability distributions */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3355&amp;oldid=prev"/>
		<updated>2019-04-14T18:16:56Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Specific probability distributions&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 18:16, 14 April 2019&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l23&quot; &gt;Line 23:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 23:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Just like for the discrete case, there are some continuous distributions that happen more commonly than others, and are thus more useful. These include&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Just like for the discrete case, there are some continuous distributions that happen more commonly than others, and are thus more useful. These include&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Uniform probability density. An example of this would a probability of&amp;#160; ''E.coli'' tumbling at any moment of an interval of duration &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;P(t)=1/T,\; 0\le t\le T&amp;lt;/math&amp;gt;. Note that uniform random numbers between 0 and 1 are generated in Matlab using the function rand(). &amp;lt;math&amp;gt;\mu_{\rm uni}=T/2, \, \sigma^2_{\rm uni}=\frac{1}{12}T^2&amp;lt;/math&amp;gt;. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Uniform probability density. An example of this would a probability of&amp;#160; ''E.coli'' tumbling at any moment of an interval of duration &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;P(t)=1/T,\; 0\le t\le T&amp;lt;/math&amp;gt;. Note that uniform random numbers between 0 and 1 are generated in Matlab using the function rand(). &amp;lt;math&amp;gt;\mu_{\rm uni}=T/2, \, \sigma^2_{\rm uni}=\frac{1}{12}T^2&amp;lt;/math&amp;gt;. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Exponential probability density: distribution of time to the next ''E. coli'' tumble event, if such tumbles happen at a constant tumbling rate. You can take the geometric distribution, and take a limit of very many time steps, and small probability of tumbling at any given time. As a result, we get &amp;lt;math&amp;gt;p(t)=r e^{-rt}&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;\mu_{\rm exp}=1/r,\, \sigma^2_{\rm exp}=1/r^2&amp;lt;/math&amp;gt;. Notice also the connection between exponential, uniform, and Poisson distributions: an event that happens at a uniform rate happens with an exponential waiting time between &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;to &lt;/del&gt;successive events, and a number of such events in a fixed period of time is Poisson.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Exponential probability density: distribution of time to the next ''E. coli'' tumble event, if such tumbles happen at a constant tumbling rate. You can take the geometric distribution, and take a limit of very many time steps, and small probability of tumbling at any given time. As a result, we get &amp;lt;math&amp;gt;p(t)=r e^{-rt}&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;\mu_{\rm exp}=1/r,\, \sigma^2_{\rm exp}=1/r^2&amp;lt;/math&amp;gt;. 
Notice also the connection between exponential, uniform, and Poisson distributions: an event that happens at a uniform rate happens with an exponential waiting time between &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;two &lt;/ins&gt;successive events, and a number of such events in a fixed period of time is Poisson.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Finally, taking a Poisson distribution with the large parameter &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, we see that it starts looking similarly to another probability distribution that many of you have seen before, namely the normal, or the Gaussian distribution: &amp;lt;math&amp;gt;p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]&amp;lt;/math&amp;gt; (for the normal distribution resulting from the Poisson, we have a specific relation between the mean and the variance; namely,&amp;#160; &amp;lt;math&amp;gt;\mu=\sigma^2&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Finally, taking a Poisson distribution with the large parameter &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, we see that it starts looking similarly to another probability distribution that many of you have seen before, namely the normal, or the Gaussian distribution: &amp;lt;math&amp;gt;p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]&amp;lt;/math&amp;gt; (for the normal distribution resulting from the Poisson, we have a specific relation between the mean and the variance; namely,&amp;#160; &amp;lt;math&amp;gt;\mu=\sigma^2&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*One can also define a multivariate extension of the normal distribution: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, here &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix&amp;#160; &amp;lt;math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*One can also define a multivariate extension of the normal distribution: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, here &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix&amp;#160; &amp;lt;math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3354&amp;oldid=prev</id>
		<title>Ilya: /* Specific probability distributions */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3354&amp;oldid=prev"/>
		<updated>2019-04-14T18:14:42Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Specific probability distributions&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 18:14, 14 April 2019&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l23&quot; &gt;Line 23:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 23:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Just like for the discrete case, there are some continuous distributions that happen more commonly than others, and are thus more useful. These include&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Just like for the discrete case, there are some continuous distributions that happen more commonly than others, and are thus more useful. These include&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Uniform probability density. An example of this would a probability of&amp;#160; ''E.coli'' tumbling at any moment of an interval of duration &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;P(t)=1/T,\; 0\le t\le T&amp;lt;/math&amp;gt;. Note that uniform random numbers between 0 and 1 are generated in Matlab using the function rand(). &amp;lt;math&amp;gt;\mu_{\rm uni}=T/2, \, \sigma^2_{\rm uni}=\frac{1}{12}T^2&amp;lt;/math&amp;gt;. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Uniform probability density. An example of this would a probability of&amp;#160; ''E.coli'' tumbling at any moment of an interval of duration &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;P(t)=1/T,\; 0\le t\le T&amp;lt;/math&amp;gt;. Note that uniform random numbers between 0 and 1 are generated in Matlab using the function rand(). &amp;lt;math&amp;gt;\mu_{\rm uni}=T/2, \, \sigma^2_{\rm uni}=\frac{1}{12}T^2&amp;lt;/math&amp;gt;. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Exponential probability density: distribution of time to the next ''E. coli'' tumble event, if such tumbles happen at a constant tumbling rate. You can take the geometric distribution, and take a limit of very many time steps, and small probability of tumbling at any given time. As a result, we get &amp;lt;math&amp;gt;p(t)=r e^{-rt}&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;\mu_{\rm exp}=1/r,\, \sigma^2_{\rm exp}=1/r^2&amp;lt;/math&amp;gt;. Notice also the connection between exponential, uniform, and Poisson &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;distribution&lt;/del&gt;: an &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;even &lt;/del&gt;that happens at a uniform rate happens with an exponential waiting time between to successive events, and a number of such events in a fixed period of time is Poisson.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Exponential probability density: distribution of time to the next ''E. coli'' tumble event, if such tumbles happen at a constant tumbling rate. You can take the geometric distribution, and take a limit of very many time steps, and small probability of tumbling at any given time. As a result, we get &amp;lt;math&amp;gt;p(t)=r e^{-rt}&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;\mu_{\rm exp}=1/r,\, \sigma^2_{\rm exp}=1/r^2&amp;lt;/math&amp;gt;. 
Notice also the connection between exponential, uniform, and Poisson &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;distributions&lt;/ins&gt;: an &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;event &lt;/ins&gt;that happens at a uniform rate happens with an exponential waiting time between to successive events, and a number of such events in a fixed period of time is Poisson.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Finally, taking a Poisson distribution with the large parameter &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, we see that it starts looking similarly to another probability distribution that many of you have seen before, namely the normal, or the Gaussian distribution: &amp;lt;math&amp;gt;p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]&amp;lt;/math&amp;gt; (for the normal distribution resulting from the Poisson, we have a specific relation between the mean and the variance; namely,&amp;#160; &amp;lt;math&amp;gt;\mu=\sigma^2&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Finally, taking a Poisson distribution with the large parameter &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, we see that it starts looking similarly to another probability distribution that many of you have seen before, namely the normal, or the Gaussian distribution: &amp;lt;math&amp;gt;p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]&amp;lt;/math&amp;gt; (for the normal distribution resulting from the Poisson, we have a specific relation between the mean and the variance; namely,&amp;#160; &amp;lt;math&amp;gt;\mu=\sigma^2&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*One can also define a multivariate extension of the normal distribution: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, here &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix&amp;#160; &amp;lt;math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*One can also define a multivariate extension of the normal distribution: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, here &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix&amp;#160; &amp;lt;math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3353&amp;oldid=prev</id>
		<title>Ilya: /* Main lecture */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=3353&amp;oldid=prev"/>
		<updated>2019-04-14T17:27:54Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Main lecture&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 17:27, 14 April 2019&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l18&quot; &gt;Line 18:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 18:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Probability density satisfy most of the same properties that we discussed for probability distributions. For example, they must normalize to one: &amp;lt;math&amp;gt;\int dx p(x) =1&amp;lt;/math&amp;gt;. One distinction of probabilities is that probability densities can actually be larger than one (as long as they normalize). Joint, marginal, and conditional probability densities can be defines just like probabilities. Expectation values and, in particular, moments can be defined similarly, by replacing summations with integrals.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Probability density satisfy most of the same properties that we discussed for probability distributions. For example, they must normalize to one: &amp;lt;math&amp;gt;\int dx p(x) =1&amp;lt;/math&amp;gt;. One distinction of probabilities is that probability densities can actually be larger than one (as long as they normalize). Joint, marginal, and conditional probability densities can be defines just like probabilities. Expectation values and, in particular, moments can be defined similarly, by replacing summations with integrals.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Additionally, for continuous variables (or more generally, for continuous or discrete ordinal variables), one can define the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;commutative &lt;/del&gt;distribution, which is the probability that a random draw will be smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C(x)=\int_{-\infty}^x P(x')dx'&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Additionally, for continuous variables (or more generally, for continuous or discrete ordinal variables), one can define the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;cumulative &lt;/ins&gt;distribution, which is the probability that a random draw will be smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C(x)=\int_{-\infty}^x P(x')dx'&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Specific probability distributions===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Specific probability distributions===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=551&amp;oldid=prev</id>
		<title>Ilya: 1 revision imported</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=551&amp;oldid=prev"/>
		<updated>2018-07-04T16:28:43Z</updated>

		<summary type="html">&lt;p&gt;1 revision imported&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #222; text-align: center;&quot;&gt;Revision as of 16:28, 4 July 2018&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Ilya</name></author>
		
	</entry>
	<entry>
		<id>https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=550&amp;oldid=prev</id>
		<title>nemenman&gt;Ilya: /* Reparameterization and generation of continuous random variables */</title>
		<link rel="alternate" type="text/html" href="https://nemenmanlab.org/~ilya/index.php?title=Physics_434,_2014:_Continuous_randomness&amp;diff=550&amp;oldid=prev"/>
		<updated>2014-10-21T14:34:41Z</updated>

		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Reparameterization and generation of continuous random variables&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{PHYS434-2014}}&lt;br /&gt;
&lt;br /&gt;
==General Notes==&lt;br /&gt;
In these lectures, we are extending our ideas of randomness from discrete to continuous variables. After introducing basic concepts, we study some specific useful probability distributions. Throughout this lecture block, we should be thinking about ''E. coli'' chemotaxis in the background -- all of these concepts will be applicable when answering questions like: What is the duration of a single run? How far away would the ''E. coli'' move after many runs? etc. &lt;br /&gt;
&lt;br /&gt;
Here we are largely following Chapter 5 of Nelson's book. However, an additional very good introduction to probability theory can be found in &lt;br /&gt;
[http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/pdf.html Introduction to Probability] by CM Grinstead and JL Snell.&lt;br /&gt;
&lt;br /&gt;
Since there are good books to follow here, the notes are not very detailed.&lt;br /&gt;
&lt;br /&gt;
==Main lecture==&lt;br /&gt;
&lt;br /&gt;
====Introduction====&lt;br /&gt;
Some random variables that we discussed were discrete, like heads or tails of a coin toss, or the existence or non-existence of a mutation. However, other random variables can be continuous. For example, the height of people in the room, the position of an ''E. coli'' in an experiment, or the time to a neural action potential -- all of these variables are, in principle, continuous. It makes little sense to ask what is the probability that an individual has ''exactly'' a certain height; any two people in the room will have heights that differ, if only by a bit. It does make sense, however, to ask what is the probability that the height of a person is close to a certain number &amp;lt;math&amp;gt;x_0&amp;lt;/math&amp;gt;, within &amp;lt;math&amp;gt;\Delta x&amp;lt;/math&amp;gt; of it. Clearly, in the limit of small &amp;lt;math&amp;gt;\Delta x&amp;lt;/math&amp;gt;, this probability will be proportional to &amp;lt;math&amp;gt;\Delta x&amp;lt;/math&amp;gt; itself, &amp;lt;math&amp;gt;P(x\in[x_0,x_0+\Delta x])=p(x_0)\Delta x&amp;lt;/math&amp;gt;. Thus, having defined the set of possible outcomes of an experiment as a real number, we can define a ''probability density'' as a double limit -- the limit of frequencies of observing a certain interval after many independent draws, and the limit of taking the interval size small, &amp;lt;math&amp;gt;\Delta x\to0&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p(x_0)=\lim_{\Delta x\to 0}\lim_{N\to\infty}\frac{f_{[x_0,x_0+\Delta x]}}{\Delta x}=\lim_{\Delta x\to 0}\lim_{N\to\infty}\frac{n_{[x_0,x_0+\Delta x]}}{N\Delta x}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
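This frequency definition of the density can be checked numerically. Below is a minimal sketch, written in Python (rather than the Matlab mentioned later in these notes); the choice of a standard normal variable and all names here are illustrative, not part of the original lecture.

```python
import math
import random

# Estimate the density p(x0) of a standard normal variable by counting
# draws that land in the small interval [x0, x0 + dx), following the
# frequency definition p(x0) ~ n / (N * dx).
random.seed(0)
N = 400_000
x0, dx = 0.0, 0.02
n_in_bin = sum(1 for _ in range(N) if x0 <= random.gauss(0.0, 1.0) < x0 + dx)
p_hat = n_in_bin / (N * dx)

exact = 1.0 / math.sqrt(2.0 * math.pi)  # p(0) = 0.3989... for the standard normal
assert abs(p_hat - exact) < 0.03
```

Shrinking `dx` reduces the binning bias but requires a larger `N` to keep the counting noise small, which is exactly the order of the two limits in the definition above.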
Probability densities satisfy most of the same properties that we discussed for probability distributions. For example, they must normalize to one: &amp;lt;math&amp;gt;\int dx\, p(x) =1&amp;lt;/math&amp;gt;. One distinction from probabilities is that probability densities can actually be larger than one (as long as they normalize). Joint, marginal, and conditional probability densities can be defined just as for probabilities. Expectation values and, in particular, moments can be defined similarly, by replacing summations with integrals.&lt;br /&gt;
&lt;br /&gt;
Additionally, for continuous variables (or more generally, for continuous or discrete ordinal variables), one can define the cumulative distribution, which is the probability that a random draw will be smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C(x)=\int_{-\infty}^x p(x')dx'&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
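The cumulative distribution is also the standard tool for generating continuous random variables, the topic this page's reparameterization section refers to: since C is monotone, drawing u uniformly on (0,1) and solving C(t) = u yields samples with density p. A sketch of this inverse-transform idea, in Python rather than Matlab, using the exponential density as the worked case (the rate value and variable names are illustrative):

```python
import math
import random

# Inverse-transform sampling: for the exponential density p(t) = r*exp(-r*t),
# the cumulative distribution is C(t) = 1 - exp(-r*t), so t = -ln(1 - u)/r
# turns a uniform draw u into an exponential one.
random.seed(1)
r = 2.0
N = 200_000
samples = [-math.log(1.0 - random.random()) / r for _ in range(N)]

mean = sum(samples) / N
var = sum((t - mean) ** 2 for t in samples) / N

assert abs(mean - 1.0 / r) < 0.01       # mu_exp = 1/r
assert abs(var - 1.0 / r ** 2) < 0.01   # sigma^2_exp = 1/r^2
```

The same recipe works for any density whose cumulative distribution can be inverted, which is why `rand()`-style uniform generators are the building block for everything else.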
===Specific probability distributions===&lt;br /&gt;
Just like for the discrete case, there are some continuous distributions that happen more commonly than others, and are thus more useful. These include&lt;br /&gt;
*Uniform probability density. An example of this would be the probability of an ''E. coli'' tumbling at any moment of an interval of duration &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;p(t)=1/T,\; 0\le t\le T&amp;lt;/math&amp;gt;. Note that uniform random numbers between 0 and 1 are generated in Matlab using the function rand(). &amp;lt;math&amp;gt;\mu_{\rm uni}=T/2, \, \sigma^2_{\rm uni}=\frac{1}{12}T^2&amp;lt;/math&amp;gt;. &lt;br /&gt;
*Exponential probability density: the distribution of the time to the next ''E. coli'' tumble event, if such tumbles happen at a constant tumbling rate. You can take the geometric distribution and take the limit of very many time steps, with a small probability of tumbling at any given step. As a result, we get &amp;lt;math&amp;gt;p(t)=r e^{-rt}&amp;lt;/math&amp;gt;. &amp;lt;math&amp;gt;\mu_{\rm exp}=1/r,\, \sigma^2_{\rm exp}=1/r^2&amp;lt;/math&amp;gt;. Notice also the connection between the exponential, uniform, and Poisson distributions: an event that happens at a uniform rate has an exponential waiting time between two successive events, and the number of such events in a fixed period of time is Poisson distributed.&lt;br /&gt;
*Finally, taking a Poisson distribution with a large parameter &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, we see that it starts looking similar to another probability distribution that many of you have seen before, namely the normal, or Gaussian, distribution: &amp;lt;math&amp;gt;p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp \left[-\frac{(x-\mu)^2}{2\sigma^2}\right]&amp;lt;/math&amp;gt; (for the normal distribution resulting from the Poisson, we have a specific relation between the mean and the variance; namely, &amp;lt;math&amp;gt;\mu=\sigma^2&amp;lt;/math&amp;gt;).&lt;br /&gt;
*One can also define a multivariate extension of the normal distribution: &amp;lt;math&amp;gt;P(\vec{x}|\vec{\mu},\Sigma)=\frac{1}{[2\pi]^{d/2} \left|\Sigma\right|^{1/2}}\exp\left[-\frac{1}{2} \left(\vec{x}-\vec{\mu}\right)^T\Sigma^{-1}\left(\vec{x}-\vec{\mu}\right)\right]&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the covariance matrix  &amp;lt;math&amp;gt;&lt;br /&gt;
\Sigma&lt;br /&gt;
= \left[\begin{array}{llll}&lt;br /&gt;
 \langle(X_1 - \mu_1)(X_1 - \mu_1)\rangle &amp;amp; \langle(X_1 - \mu_1)(X_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(X_1 - \mu_1)(X_n - \mu_n)\rangle \\ &lt;br /&gt;
 \langle(X_2 - \mu_2)(X_1 - \mu_1)\rangle &amp;amp; \langle(X_2 - \mu_2)(X_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(X_2 - \mu_2)(X_n - \mu_n)\rangle \\ &lt;br /&gt;
 \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \\ &lt;br /&gt;
 \langle(X_n - \mu_n)(X_1 - \mu_1)\rangle &amp;amp; \langle(X_n - \mu_n)(X_2 - \mu_2)\rangle &amp;amp; \cdots &amp;amp; \langle(X_n - \mu_n)(X_n - \mu_n)\rangle&lt;br /&gt;
\end{array}\right].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
*To connect probabilistic and deterministic calculus, we can define a random variable with a probability distribution that forces it to take just one value. This is called a &amp;lt;math&amp;gt;\delta&amp;lt;/math&amp;gt;-distribution: &amp;lt;math&amp;gt;\delta(x-\mu)=\lim_{\sigma\to0}\frac{1}{\sqrt{2\pi}\sigma}\exp{\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]}&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;\delta(0)\to\infty,\;\delta(x\neq0)=0&amp;lt;/math&amp;gt;. An interesting property of the &amp;lt;math&amp;gt;\delta&amp;lt;/math&amp;gt; distribution is that &amp;lt;math&amp;gt;f(x)=\int_{-\infty}^{\infty}dx' \delta(x-x')f(x')&amp;lt;/math&amp;gt;. In other words, convolving with a &amp;lt;math&amp;gt;\delta&amp;lt;/math&amp;gt;-distribution simply replaces the variable name.&lt;br /&gt;
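The means and variances quoted above for the uniform and exponential densities can be verified by simulation. A Python sketch (the course itself uses Matlab's rand and related functions), with T and r chosen arbitrarily for illustration:

```python
import random

random.seed(2)

def sample_moments(data):
    """Sample mean and (biased) sample variance."""
    m = sum(data) / len(data)
    v = sum((d - m) ** 2 for d in data) / len(data)
    return m, v

N, T, r = 500_000, 3.0, 1.5

uni = [random.uniform(0.0, T) for _ in range(N)]
exp_draws = [random.expovariate(r) for _ in range(N)]

mu_uni, var_uni = sample_moments(uni)        # expect T/2 and T**2/12
mu_exp, var_exp = sample_moments(exp_draws)  # expect 1/r and 1/r**2
```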
&lt;br /&gt;
===Reparameterization and generation of continuous random variables===&lt;br /&gt;
Suppose we define a new variable &amp;lt;math&amp;gt;x'=x'(x)&amp;lt;/math&amp;gt;. The number of counts that land in a certain interval is the same, irrespective of whether this interval is indexed by &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; or by &amp;lt;math&amp;gt;x'&amp;lt;/math&amp;gt;. Therefore, &amp;lt;math&amp;gt;p(x')|dx'|=p(x)|dx|&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;p(x')= p(x(x'))\left|\frac{dx}{dx'}\right|&amp;lt;/math&amp;gt;. This can be used to generate continuous random variables with different probability distributions. For example, suppose that &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is uniform between 0 and 1. Then &amp;lt;math&amp;gt;p(x)=1&amp;lt;/math&amp;gt;. If I want to generate an exponentially distributed &amp;lt;math&amp;gt;x'&amp;lt;/math&amp;gt;, I need to find a function &amp;lt;math&amp;gt;x'(x)&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;\left|\frac{dx}{dx'}\right|=\exp(-x')&amp;lt;/math&amp;gt;. One can see that &amp;lt;math&amp;gt;x'=-\log x&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;x=\exp(-x')&amp;lt;/math&amp;gt;, satisfies this condition. In other words, to generate an exponentially distributed random variable we take the negative log of a uniform random number. More generally, one can generate other random variables (e.g., Cauchy, as we do in a homework) by finding a reparameterization with an appropriate derivative.&lt;br /&gt;
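This construction is known as inverse-transform sampling. A minimal Python sketch (Matlab's rand replaced by random.random), generalized to rate r via x' = -log(x)/r, checking the sample mean against 1/r:

```python
import math
import random

random.seed(3)

def exponential_from_uniform(r, n):
    """Generate n exponential(r) draws by reparameterizing uniform draws.

    If x is uniform on (0, 1), then x' = -log(x)/r has density r*exp(-r*x').
    Using 1 - random.random() keeps the argument of log strictly positive,
    since random.random() returns values in [0, 1).
    """
    return [-math.log(1.0 - random.random()) / r for _ in range(n)]

r, n = 2.0, 400_000
draws = exponential_from_uniform(r, n)
mean = sum(draws) / n   # expect 1/r = 0.5
```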
&lt;br /&gt;
For multi-dimensional variables, the analogous relation involves the Jacobian determinant: &amp;lt;math&amp;gt;p(\vec{x}')=p(\vec{x})\left|\det\frac{\partial x_\alpha}{\partial x'_\beta}\right|&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>nemenman&gt;Ilya</name></author>
		
	</entry>
</feed>