{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# High Performance Computing\n", "Module 5 of Phys212/Biol212 class\n", "### by Ilya Nemenman, 2016-2019" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Defining High Performance Computing\n", "\n", "Note to self: need to expand the introduction\n", "\n",
"Nowadays the news is full of articles about new technological breakthroughs in artificial intelligence: self-driving cars, automatic translation, and even computer-generated art. What makes all of this possible, among other things, is a dramatic increase in our computational capabilities over the last decade. Interestingly, the speed of computer processors has not changed that much over this time -- an individual processor now is only a bit faster than it used to be ten years ago. What has changed is that we figured out how to put more processors into a single computer (sometimes dramatically more -- literally thousands of specialized processors in some video cards) and how to make all of these processors work collectively on the large computational problems we want to solve. In this module, we discuss the basics of such parallel computing and figure out how to implement it on your laptops.\n", "\n",
"What is high performance computing? To answer this, we need to define a series of terms. \n",
" - **Processor**, or Central Processing Unit (**CPU**): The *brain* of the computer, the part in which most computing operations are controlled and executed. \n",
" - **Serial Processing**: Execution of a single program step by step on one CPU.\n",
" - **Sequential Processing**: Execution of multiple programs on a single CPU in a sequence.\n",
" - **Concurrent Processing**: Execution of multiple programs at the same time. \n",
" - **Parallel Processing**: Execution of multiple programs (or parts of the same one) on multiple CPUs. \n",
" - **High Performance Computing**: Essentially another name for solving computational problems concurrently on multiple CPUs.\n",
" \n",
"It gets a bit more complicated. One can have serial and sequential processing (completing one task before starting the other on a single CPU). One can also have serial and concurrent processing (alternating between two or more tasks on the same single CPU). Finally, one can have true parallel and concurrent processing, which means executing multiple tasks at the same instant of time on multiple CPUs.\n", "\n",
"### Organization of Parallel Computing\n",
"What are the different ways of organizing parallel processing? We can develop the intuition by considering how you can arrange for multiple people in a group to do a certain complex task that consists of many different subtasks. For example, let's suppose that a group of people arrives at a supermarket to buy a long list of supplies for a party. How should the long shopping list be split among the group? There are many different ways of doing this.\n",
" - The simplest option is not to delegate different parts of the shopping list to different people, but to have just one person complete all of the steps of the task. This is equivalent to sequential processing.\n",
" - The second option is to have the group captain assign a list of subtasks to each of the people in the group, and let each of them focus on their own assigned tasks. This incurs communication costs. First, the full list of tasks must be partitioned into pieces, and then these sublists must be communicated to the people. When they complete their individual tasks, the results must be communicated back to the leader. Further, there are additional inefficiencies. While the task is being partitioned, the people will be waiting for their assignments, doing nothing. Then some of them will finish their subtasks before the others, and will again wait, doing nothing.\n",
" - The arrangement above may possibly be improved: whenever a person finishes his/her task, s/he starts helping the others who are still doing theirs. But this is easier said than done. Whom should they help? To know this, each person must be constantly communicating his/her progress, either broadcasting it to everyone in the group, or at least sending it to the captain, so that either everyone knows who needs help, or the captain can tell everyone. Further, when a helper arrives, the task that is running behind now needs to be partitioned again, which will take additional time and resources.\n",
" - To avoid some of these problems, one may want to duplicate the tasks originally, but prioritize them, so that every person first works on his/her own task, and then switches to other pre-assigned tasks, constantly reporting results. You can fill in for yourself what kinds of problems this solution carries with it.\n", "\n",
">### Your turn\n", "Can you suggest other arrangements of how the tasks can be divided among multiple individuals?\n", "\n",
"Note that, in all of these (and your) suggestions, the more complicated the arrangement is, the more communication needs to be done. And communication takes time -- so, at a certain point, the communication cost outweighs the savings generated by a complex task-partitioning scheme.\n", "\n",
"With these different arrangements in mind, it makes sense to have a somewhat finer characterization of concurrent / parallel processing. \n",
" - **Parallel processing** per se: A collection of connected, tightly coupled, strongly communicating processors concurrently solving a problem.\n",
" - **Distributed processing**: Loosely coupled processors, perhaps at great distances from each other, working concurrently on problems with minimal communication among them.\n", "\n",
"True parallel processing is further distinguished by how the memory of the computers is organized:\n",
" - **Shared memory**: Parallel processors access the same memory during concurrent processing, requiring minimal communication beyond the shared memory.\n",
" - **Distributed memory**: Each processor has its own memory to operate with, and synchronization of tasks is achieved through communication.\n", "\n",
"Crucially, a program written for sequential processing won't execute on many processors by itself. It needs to be **parallelized** or **scaled** to many processors. This usually involves careful consideration of how concurrent processing will be organized, how the tasks will be partitioned, how multiple data streams and execution streams will be synchronized, etc. A useful metric of how well the code is parallelized is the **speedup**, the ratio of the time it takes to execute the code on $1$ processor to the time it takes on $n$ processors. It is essentially impossible to achieve a speedup larger than $n$, and it is usually less than $n$ due to communication costs. \n",
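"\n",
"To make the notion of speedup concrete, here is a minimal sketch (an illustration added to these notes, not the course's official code) that times the same toy workload on one CPU and on all available CPUs using Python's standard `multiprocessing` module; the function `slow_task` and the workload size are arbitrary placeholders.\n",
"```python\n",
"# Estimate the speedup T_1/T_n for an embarrassingly parallel toy workload.\n",
"import time\n",
"from multiprocessing import Pool, cpu_count\n",
"\n",
"def slow_task(x):\n",
"    # a deliberately slow stand-in for a real computation\n",
"    s = 0.0\n",
"    for i in range(100000):\n",
"        s += (x * i) % 7\n",
"    return s\n",
"\n",
"if __name__ == '__main__':\n",
"    data = list(range(200))\n",
"\n",
"    t0 = time.time()\n",
"    serial = [slow_task(x) for x in data]     # execution on 1 processor\n",
"    t_1 = time.time() - t0\n",
"\n",
"    n = cpu_count()\n",
"    t0 = time.time()\n",
"    with Pool(n) as pool:                     # the same work spread over n processors\n",
"        parallel = pool.map(slow_task, data)\n",
"    t_n = time.time() - t0\n",
"\n",
"    print('speedup T_1/T_n = {:.2f} on {} CPUs'.format(t_1 / t_n, n))\n",
"```\n",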
"\n",
"### Types of parallel algorithms\n",
"Note to self: need pictures of different parallelization schemes.\n",
"\n",
"Some problems are easier to parallelize than others, and different approaches are required for parallelization. These include:\n",
" - **Embarrassingly parallel algorithms**: The task can be divided into subtasks that require virtually no communication. Typically, problems such as generating statistics (e.g., producing many DLA clusters in Module 4) are embarrassingly parallelizable. For example, you could have run multiple Jupyter notebooks at the same time, each producing a single cluster, whose lengths can then be averaged.\n",
" - **Data partitioning algorithms**: For example, when calculating a large sum, the data set can be partitioned into parts, each part can be summed independently, and then the partial sums are added together. Here a *root* process partitions the data into subsets, and then collects the results. The root has a privileged status -- it distributes the tasks, but doesn't execute them itself. (A small sketch of this scheme appears below the list.)\n",
" - **Divide and conquer algorithms**: Work is done without a root. Here each processor gets a task and, if it can't execute it in the requested time, it subdivides the task into two, keeps one half for itself, and recruits another processor for the remaining half. Both processors then continue doing the same, until each recruited processor has a manageable task. When the tasks are completed, each processor reports its results to the processor that originally gave it the task. \n",
" - **Coarsening algorithms**: Here the processors solve the problem on multiple scales. Note to self: more to be added.\n",
"\n",
"For all of these algorithms, it is crucial to think about how the data must be organized to speed up communication."
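, "\n",
"As an illustration of the data-partitioning scheme described in the list above, here is a small sketch (added for illustration, not the course's official code): the root splits a long array into chunks, worker processes compute the partial sums, and the root adds them together. The chunk count, the number of workers, and the data size are arbitrary choices.\n",
"```python\n",
"# Data partitioning sketch: the root splits the data, workers sum the pieces, the root combines them.\n",
"import numpy as np\n",
"from multiprocessing import Pool\n",
"\n",
"def partial_sum(chunk):\n",
"    # the work each worker does on its own piece of the data\n",
"    return np.sum(chunk)\n",
"\n",
"if __name__ == '__main__':\n",
"    data = np.random.rand(10**7)         # the full data set, held by the root\n",
"    chunks = np.array_split(data, 8)     # the root partitions the data into 8 pieces\n",
"\n",
"    with Pool(4) as pool:                # 4 worker processes do the actual summing\n",
"        partial = pool.map(partial_sum, chunks)\n",
"\n",
"    total = sum(partial)                 # the root adds the partial sums\n",
"    print(total, np.sum(data))           # the two results should agree\n",
"```"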
 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problems to be solved with parallel processing\n",
"We already saw a few such problems in Module 4. Essentially every problem solved using agent-based simulations can be parallelized using the embarrassingly parallel approach. Embarrassing parallelization won't work, however, for approaches using cellular automata. The problems we discussed last time (DLA, annihilation) can be solved by data partitioning, where each of the processors analyzes a sub-part of the lattice.\n", "\n",
"There is a special type of cellular-automata-like model for which parallel processing is usually used (and for which, in fact, it was originally developed). These models are called *partial differential equations*. Consider a situation where one is trying to predict weather, or, for simplicity, just the temperature $T$ over some extended spatial range, which is what weather forecasters routinely do. So the variable $T$ that we care about depends on the spatial coordinates in addition to time, $T=T(x,y,z,t)$. This is in contrast to all previous examples we considered, where the dynamical variables were few, and they depended just on time, such as the coordinate and the angular velocity of the pendulum, $\\theta(t), \\omega(t)$, or the concentration of nutrients and the number of bacteria, $\\rho(t),n(t)$. Variables that are space and time dependent, like the temperature, are known as *fields*. On a digital computer, we usually approximate the continuous coordinates $x,y,z$ as a grid, and so $T(x,y,z,t)$ becomes a set of variables $T(x_i,y_j,z_k,t)$, with the dimensionality of the set being $N=\\frac{L_x}{\\Delta x}\\frac{L_y}{\\Delta y}\\frac{L_z}{\\Delta z}\\sim \\left(\\frac{L_x}{\\Delta x}\\right)^d$, where $L_x$ is the linear span of the system, $\\Delta x$ is the lattice spacing (and similarly for $y$ and $z$), and $d=3$ is the dimension. Even for reasonably small $\\frac{L_x}{\\Delta x}\\sim 100$, the number of dynamical variables we need to store to describe the dynamics of $T$ is $N\\sim 10^6$. That is, the spatially extended dynamical system is equivalent to a dynamical system with very many (potentially millions of) dynamical variables. As you have observed earlier, solving ODEs for even a few dynamical variables can take a substantial amount of time. It is also clear that this time complexity is going to scale at least as $O(N)$, and potentially even worse if many variables interact with each other. Thus the need to use all of the available computational resources (and hence to parallelize the code) for such spatially extended simulations is obvious. \n", "\n",
">### Track 2 \n",
"Notice that the probability distribution of having a certain number of molecules in the Chemical Master Equation (CME), which we studied in Track 2 in the previous module, depends on that number as well as on time, $P=P(n,t)$. So while the CME does not describe a function $P(n,t)$ for a continuous $n$, it becomes very similar to the discretized dynamics of continuous fields, such as $T$, when both are represented on a digital computer. The dimensionality of the dynamics of the CME thus scales as $N\\sim \\prod_i N_{{\\rm max},i}\\sim N_{{\\rm max}}^d$, where $N_{{\\rm max},i}$ is the maximum allowable number of molecules of a chemical species $i$ involved in the process, and $d$ is the dimensionality, or the number of these species. Thus if the number of chemical species involved in the dynamics is more than a few (and it can be arbitrary!), the CME becomes a system of astronomically many coupled ODEs, and parallelizing the code becomes even more important than for fields in real spatial dimensions. Note to self: maybe stick with CME instead of diffusion? We will introduce one of those here -- the heat diffusion equation, but the Chemical Master Equation from Track 2 in Module 4 is a related example. \n", "\n",
"### Newton's law of cooling and Fourier's law of heat conduction\n",
"Let's start with a simple example. Suppose you have two bodies in contact with temperatures $T_1$ and $T_2$. How do their temperatures depend on time? To develop a mathematical model of this process, we ... to be continued...\n", "\n", "\n",
"### Partial Differential Equations (PDEs): The Diffusion Equation\n",
"The heat diffusion equation -- one of many partial differential equations (differential equations involving partial derivatives).\n",
" - Fick's first law of diffusion $J=-D\\frac{d\\phi}{dx}$ or $\\vec{J}=-D\\nabla \\phi$ (known as the Fourier law of heat conduction if $\\phi=T$; Newton's law of cooling is its discrete, two-body analogue).\n",
" - Fick's second law of diffusion $\\partial_t \\phi = D\\nabla^2 \\phi$.\n",
" - Finite difference form of the heat diffusion equation; need boundary conditions to complete the solution.\n",
" - Boundary conditions (implemented by extending the grid matrix):\n",
" - Absorbing\n",
" - Reflecting\n",
" - Periodic\n",
"\n",
"### Solving the Diffusion Equation using Python\n",
"Talk about instabilities -- TBC."
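, "\n",
"One concrete point for the instability discussion (a sketch added here; the bound quoted is the standard von Neumann stability result for the explicit finite-difference scheme used below): the update is stable in two dimensions only if $D\\,\\Delta t/\\Delta x^2 \\le 1/4$, so it is worth checking the time step before running a long simulation.\n",
"```python\n",
"# Check the stability bound D * dt / dx**2 <= 1/4 for the explicit 2D diffusion update.\n",
"def stable_time_step(D, dx, safety=0.9):\n",
"    # largest dt allowed by the explicit-scheme stability bound, times a safety factor\n",
"    return safety * dx**2 / (4 * D)\n",
"\n",
"D, dx, dt = 0.1, 1.0, 1.0    # the values used in the cells below\n",
"print('requested dt =', dt, '; largest stable dt ~', stable_time_step(D, dx))\n",
"```"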
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Initialization block\n", "import numpy as np\n", "import time\n", "import matplotlib.pyplot as plt\n", "\n", "# The function below implements the single step of the diffusion equation\n", "def diffusion(D, dt, dx, u):\n", " \"\"\" compute one time step of diffusion\n", " D: diffusion constant\n", " dt: step size in time\n", " dx: step size in space\n", " u: concentration on lattice points of diffusing quantity, assumed (L+2)x(L+2) because of the\n", " boundary conditions\n", " return: u after one time step \"\"\"\n", "\n", " note that because of the boundary conditions, \n", " the assumed lattice has a boundary belt all around it, so that the lattice is (L+1)x(L+2)\n", " \n", " tmp = np.empty((u.shape[0] - 2, u.shape[1] - 2)) \n", " \n", " for x in np.arange(1, u.shape[0] - 1):\n", " for y in np.arange(1, u.shape[1] - 1):\n", " tmp[x - 1, y - 1] = u[x, y] + dt * D * (u[x - 1, y] + u[x, y - 1] - 4 * u[x, y] + \n", " u[x + 1, y] + u[x, y + 1]) / dx**2\n", " \n", " # we return the lattice LxL\n", " return tmp" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAO4AAAD7CAYAAABt9agKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAACaFJREFUeJzt22msXWUZR/H1QK1oCzSAAzMx1gGNGmMMGg0YcICIJE4RERFwjhqNOASl0khAjYkfHIgRRW2NsxJUTCQYSKrFMRicYhRbwRYRQaFQleHxw97Fw80d21vLX9YvOcntPvu87z67Z5093La6G0lZdtvVGyBp4QxXCmS4UiDDlQIZrhTIcKVA96twq+rMqrpgsdedx1hdVY+c4bnLq+rVizHPjqqqV1XVuok/b6mqRyzS2Pfsz6o6bNwnSxZp7EPGbd19McZLEBvu+CG7uqpur6rrq+r8qlox22u6+9zunlckC1n3vqiqjqqq63ZkjO5e3t3XLMY8i7k/q2pDVR0zMfafxm29azHGTxAZblW9Hfgg8A5gb+AI4FDg0qpaOsNrFuXbXQvnvt8JujvqAewFbAFeOmX5cuAG4LTxz2cDXwPWArcArx6XrZ14zSuBjcDfgLOADcAxE69fO/58GNDAKcCfgBuB90yM81RgPfB3YDPwMWDpxPMNPHKG93M58H7gB8CtwPeA/SaePwL44Tj2L4CjJp47FfjN+LprgNeNy5cBW4G7x321BThgmrn3BS4e98+Px+1YN912A8cBvx7n+jNwxkzzzLXvJ/bna4FN4z57+8S8nwXOmfjzUcB1489rxvm2jvO9c2K8JeM6B4zv6ybg98BrJsY6G/gK8PnxvfwKeMqu/lwv9JF4xH06sAfwjcmF3b0F+C7w7InFJzB8gFYAX5hcv6oOBz4BnATsz3DkPnCOuZ8BPBo4GlhVVY8dl98FvA3YD3ja+PwbF/CeXs4Q4UOBpQxRUFUHAt8BzgH2GZd/vaoeMr7uBuD5DF9mpwIfqaond/dtwLHAph5OIZd396Zp5v048M/x/Z82PmbyaYYvhj2BxwPfn2OeGff9hGcBK4HnAO+ePP2dSXefzPDlefw434emWe2LwHUMAb8YOLeqjp54/gXAl8Ztu5jhizZKYrj7ATd2953TPLd5fH6b9d19UXff3d1bp6z7YuBb3b2uu/8NrGL41p7N6u7e2t2/YDj6PRGgu3/W3Vd2953dvQH4JHDkAt7Thd39u3EbvwI8aVz+CuCS7r5kfA+XAj9lOPrR3d/p7j/04AqGo/Uz5zPheCPnRcCq7r6tu38JfG6Wl9wBHF5Ve3X3zd398zmmmG3fb7N6nPtq4ELgxPls+2yq6mCGL9h3dfc/u/sq4ALg5InV1o379C6GI/gTd3Te/7XEcG8E9pvhumn/8fltrp1lnAMmn+/u2xlOmWdz/cTPtzOcnlNVj6qqb483yW4BzuXeXyBzmXZchuv2l1TV37c9GD6U+4/zHltVV1bVTeNzxy1g3ocAS7j3Pto4y/ovGsffWFVXVNXT5hh/tn0/3TobGf5OdtQBwE3dfeuUsSfPpqbu7z3SrsMTw10P/At44eTCqlrGcNp22cTi2Y6gm4GDJl7/IIZrvu1xPvBbYGV37wWcCdR2jjXpWmBNd6+YeCzr7g9U1QOBrwMfBh7W3SuASybmnevs4a/AncDBE8sOmWnl7v5Jd5/AcDp/EcOZwWzzzOe/nU2de9tp9m3Agyeee/gCxt4E7FNVe04Z+8/z2J4YceF29z+A1cBHq+p5VfWAqjoM+CrDdc2aeQ71NeD4qnr6eCd6Ndsf254MN2G2VNVjgDds5zhTrR238blVtXtV7TH++uUghmvhBzIGWFXHMlwrbvMXYN+q2nu6gcfTxG8AZ1fVg8dr/lOmW7eqllbVSVW1d3ffMb7Xbb96mXWeOZw1zv04hmv0L4/LrwKOq6p9qurhwFunvO4vwLS/X+7uaxlu5p037q8nAKcz83V2pLhwAcYbEmcyHG1uAX7EcHQ6urv/Nc8xfgW8meEmxWaGO4w3MBzNF+oMhhtMtwKf4r8fwB0yfghPYHivf2V4j+8AdhtPBd/CcOS7eZz/4onX/pbhJs0142n2dKehb2I4Lb+e4U7uhbNszsnAhvFS4PUM19/znWcmVzDc9b0M+HB3f29cvobhHsIGhuv2qfvzPOC943xnTDPuiQx3mjcB3wTeN94f+L9R7X+kB6CqljP8ymVld/9xV2+PNJvII+5iqarjx1O1ZQxH76sZvuWl+7T7dbgMp6GbxsdK4GXtKYgCeKosBbq/H3G
lSIYrBVrQvxapKs+rpZ2su+f89wQecaVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgZbs6g24L/jM6Ufe8/Npn75iF26JND8ecaVAhisFqu6e/8pV819Z0nbp7pprHY+4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhTIcP/P9fpV9PpVu3oztMgMVwpU3T3/lavmv7Kk7dLdNdc6HnGlQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBTJcKZDhSoEMVwpkuFIgw5UCGa4UyHClQIYrBVqywPVvBDbujA2RBMCh81mpuntnb4ikReapshTIcKVAhisFMlwpkOFKgQxXCmS4UiDDlQIZrhToP2VF4NppvreoAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Sequential processing took 3.877124071121216s\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAO4AAAD7CAYAAABt9agKAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAEzhJREFUeJzt3VuMJNddx/Hff6579dqrdQxOsJ3YAQUkArzwAgKUcLEgghdCFExigh8QEgiIg0RwwIEQowABIYQQkBBiRYgkQkGAjExkWQkIhEQkSygSUUSydnyL7Xidvc5u9xwe6vyrTp2+zPRuz3T/x9+PtOrp7uqqmpn9z69O1alzLKUkALGsLHoHAMyOwgUConCBgChcICAKFwiIwgUCetkVrpndYmbnzGx1Duv6iJm9b8J7d5vZv13rNubFzJKZ3ZG//nMze8+c1tv7eZrZo2Z2zzzWndf3kJm9fV7rOyjWFr0De8XMvizpJknD4uVvTik9LunYQnZqTvL3dk9K6dNX8/mU0s/Pazvz/Hma2f2S7kgp3VWs/855rPugOeiJ+6aU0rHi31OL3qGDxMwO7B/+ZXfQC3eEmd2WDxvX8vNHzex3zOzfzeysmT1sZqeK5T9hZs+Y2Utm9hkz+7YZt/cHZvaimX3JzO4sXj9hZh8ys6fN7Ekze19xuHm7mT1iZi+Y2fNm9jEzuz6/96CkWyT9Yz5E/bUJ231XXvdTZvaO6r32EN/MTpnZP5nZGTP7mpl91sxWxm2n+Nn9nJk9LumR+ueZ3W5m/5V/Zv9gZifztr7fzL5S7cuXzeyNZvYjkt4t6afy9h4rfj/35K9XzOw+MzttZl81s4+a2Ynq9/p2M3s8/9x+Y5bfVSQvu8Kd4K2SflbSKyRtSLq3eO8hSa/N731O0sdmWO93S/pfSackfUDSh8zM8nt/I2kg6Q5J3ynphyR529AkPSDpZkmvk/RNku6XpJTSz0h6XN3RxAfqjeYiuFfSD+Z9f+OUfXynpK9IulFN0+LdzWambuf78n798IR1vk3SO/L+DyT9yZTtK39f/yLp/ZL+Lm/v9WMWuzv/+wFJr1FziP6n1TLfI+lbJL1B0m+a2et22nZEB71wP5WT5IyZfWrKcn+dUvpCSumipI9L+g5/I6X04ZTS2ZTSlprieb3/ld+F0ymlv0wpDdUU6jdKusnMbpJ0p6RfTimdTyl9VdIfSXpL3uYXU0r/mlLaSik9J+mDaoplt96cv6f/SSmdz/s9yZW8X7emlK6klD6bdu7Afn/e74sT3n+w2PZ7JL15HicDJf20pA+mlP4vpXRO0q9LekuV9u9NKV1MKT0m6TFJ4/4AhHfQ2yg/scsTOM8UX19QPtmS/7P9rqSfVJNI23mZU5JemmW9KaULOWyPSTopaV3S010Aa0XSE3m7r1CTUt8r6Xh+78VdbM/dLOm/i+enpyz7+2oK++G8L3+RUvq9Hdb/xAzvn1bzvZ6asOwsblb/ezmt5v/wTcVrY3+XB81BT9xr9VZJP67mUPOEpNvy6zbpA7v0hKQtSadSStfnf9ellLz9/ICkJOnbU0rXSbqr2uZOifi0msNrd8ukBfPRxDtTSq+R9CZJv2pmb9hhOzttv972FUnPSzov6Yi/kf8w3jjDep+SdGu17oGkZ3f43IFD4U53XE2BvaDmP9z757HSlNLTkh6W9Idmdl0+6XK7mfnh8HFJ5ySdMbNXSnpXtYpn1bTxJvm4pLvN7FvN7Iik35q0oJn9mJndkdveX1dz+cwvoe20nUnuKrb925I+mZsLX5B0yMx+1MzWJd0nabP6vm4zs0n/L/9W0q+Y2avN7Ji6NvHgKvYxNAp3uo+qORx7UtLnJf3nHNf9NjUnwj6v5jD4k2rampL0XknfpeZw/J8l/X312Qck3Zfb7vdW7yml9JCkP5b0iKQv5sdJXivp02r+UPyHpD9LKT26m+1M8aCkj6g5bD0k6Zfyfr0k6Rck/ZWan+l5NSfG3Cfy4wtm9rkx6/1wXvdnJH1J0iVJvzjDfh0Yxo30QDwkLhAQhQsEROECAVG4QEAULhDQTD2nzIxT0MAeSynt2MGHxAUConCBgChcICAKFwiIwgUConCBgChcICAKFwiIwgUConCBgChcICAKFwiIwgUConCBgChcICAKFwiIwgUConCBgChcICAKFwiIwgUConCBgA76xNa7MsvEs8AyIHGBgA504l7NtPGzfIZ0xqKQuEBAByZxpyWlWf1897laT/xdPq3XQgJjv5C4QEAULhBQ+EPl+nC1PAr2Q2J/baVaetoRc3tInBfazgfC5aGyH0b7a746Dpmx10hcIKCwievp5qlZp6vUJezKSpW8vuyU9Xtqbuc4XclTlm5vd3m6nVfgidueyOo/AHNH4gIBhUvcnZJ2tYhcT9q1lebv02p+vjImnWsenp64w5y0A9vu9iV/OWyztWrlkrzYIyQuEFCYxJ2UtJ6wXbp2Mbq2utJ/rBJ4ZWVy5HpbdrC9nR+b56vD7jODYfOe5ff8M8N2idHIJX0xDyQuENBSJ+64PBy5Nlsl7cbaarvsek7azfXV3vP1Non97HK3pZQzcTBsHq/kVPXHrStdnnpbWYPmtUF+vTvz3LxfdpvkWi/mgcQFAlrqxC11bdvm0du2ddJurHV/iw5vNN/eoZy4/nzDkzcvOy5xrwyahL2ck/bi5SZPV4t2sb/Wysmb8jVfP9tcnr1ORC3mgMQFAqJwgYCW8lB52kmpuhvjWnXCyQ+HJeno5lp+XO89P7yeD5nXRi8L+Ymly/lQ+eKVQW/9qytXRvYtpWaZtrOG33yQz2Ol3qFy6n2PHDnjapC4QEBLmbiuf4te81h3Y1yrLvn4iSipS9oThzckSccPN8+P5dc38wmtcYm7lU80ndu60tteybtBto9V98iUtnvrLL8PTlLhWpC4QEBLnbglb9u2l4OqjhfT2rietDcc2WyeH1rvLVte4vG09Es9vl7nXSClrlOGt4d9Xwb+uN3fd6m7IR+4FiQuENBSJ+640Ri9m+GK9du6nowbRUL62WNv03rSnjjStHmP5MQt26+eqGVHDqlL10uXuy6PF1b7qdzevJDv9/N9HKps4452gwRmReICAS114pbaLo/V8+56bm7rFknpqelnj71N60nrZ53LWwEH2/0k9Pbr5tagt85yW77teoicel8lceEWc0HiAgEtVeJezVw/3WdHb9HzBPTH1foacHVGurHdW2a1Wkd5zXfcNmff7wZBjFmQuEBAFC4Q0FIdKtdjJc722dR7lLquhttV18R6HCk/PC5f82WG1TrK7ovjtjn7fgOzI3GBgJYqcaepJgnoxj32hPQ
xogZdevqlHL9hwLsx1p0rxnXAuJCX9c/4Oi4X6/dt+ba325sLxu8rMC8kLhDQUidur1ugz5qXX/NHT8i2w/+wS0S/Cd5vzatvGPD0nHaTwdlLV3rr8HWW2/Jt+77U+zjxewKuEokLBLTUiVvy2+F81rzuDHF//ONy5MW687/zZb0b425upD97sXk8v9Wt37fVJW59I31/34F5IXGBgJY6cfuzvzeP29WseT6Xj88wULZX64HdvA3qt+bNMlicJ+35rW6dl/I2fduDuq1bJW/9NXC1SFwgIAoXCGgpD5XHdX30yyjbfq9rvurjh6c+2sTItCDqThb5SaR25IqrmILkUjHp18jJqWH/ENlPSo27BMQRM64FiQsEtJSJO07bjTA/thNqeaf/wXDMZ5pEHFYnnOY1zWY3ymM+OVWNr1zvMzAvJC4Q0FInbhlU7UgRqd8Cbi8P+YJF8tZz+axVI2CUl4Fq2xNuARwUXSoHVceLkZsMaNtij5C4QEBLnbilNOELz9cu7YobBqq5fHyGAR/3eMywzd32vMNHvY5iJoPtiW3a8bf3AfNC4gIBhUlcNyl5vc1bzhrQzk+bvC3aPPdrvtOGyGkHtanSsxy6prtO69shabE/SFwgoHCJ6+rAbWd6L2d/z197Sq5UqbybNq6r07XcZr3sXiftLIPpXc18vBwpLD8SFwiIwgUCslnGQDKzpT2Kmnb4WB8Sj5u+c5L65zPtx7VXP5x6b6/l+xlnp+9xaX/pB1Qqr2lOQOICAYU9OVUblwpdN8lq2Tn1+t+vk1CTEradarTI5EnLju1+6Ze4Rk5gTY5c0nc5kLhAQAcmccfZbTos07zT5b60E2RXCbtq1fShViZuf9nauMtZ3mWz68JZLzsauYv+Ob3ckbhAQAc6cXdrGdJjXHu2TtSVCRNzl7cn+nt1Wo/rLFKPRlnPVFgncPkaybtYJC4QEIm7YHXSlunpSbuWh9rphtxpHjfX+q+XX6+ujP+bPCxuSxwZlmfQH6vaX1cxeID/qSd5F4vEBQKicIGAOFRekEmHyOWlnfXqEHlzfVWSdHij+bUdqp5L0oZ/phozuh4vWpo8ZrQ/b/ex3HE/bOaQeaFIXCAgEnef1Unrl2vqE1FSl7SHcqIe22wej26uS5KOH17vPZekI1Ua+2gfPpJHORPDhcv9icx8KlG/zNTevFDMDtEmak7erjv8aOSSvnuHxAUCInEXrG7jlpd2vE3rSXv88IYk6YYjm5Kkk0ebx+uPbrSfOX6o+dqT16cd9VEqLxTpefbSZUnSmfPN48bqVm9fXHmDQj0Wl00YBQR7i8QFAiJx98G0Gwfatu1Kv3OF1J0t9jasJ+2N1x2SJH3DiSPN8+OH2s+cPNZ8fTx/ZjXPjzTM/RbPFhNzf+3cJUnSc5uXesu6uitk+fWwnr2hbusWXxLG80fiAgGRuAtSt239TK53Y5S6M8N+9tjbtJ60rzp5VJL0yhuOtp+56brDkqTrczp7m9m7L565sNUu++zXm/awt6Wdp/PlYXMGequYj8lnJhz6vEl+Bpq27r4icYGASNx95m1bH26mvmVvfUob188ee5vWk/bVp463n/HXvK27kRPc5wb2dq3UnXl2Pvdve103P57f6s5Ee6+quqfX0HtoleNaE797hsQFAqJwgYA4VF6Q+rKQXw4qD5X9hgE/pPXOFX4Y7CeiypNTrzp1rPkiL6N84mktHwYfOTT6K/dOGWcuNB0xXji3lbe71duPcv98fy/XU5ZydLwvSFwgIBJ3wbrkbR7LkSv81jy/LNQmr5+s8q6Px7oOGG3S5jT2xFVxc4E7ealJ2uvPXuqtt75RYb24RLU6MqbVbr5LzBuJCwRE4u6DcalUz/czbv4fvwneb83zGwa8a6K3NzeKRGwTtn6s3y8+145TtdrfTjcB+Jh9m7j/qXgtv0K7d+5IXCAgEncflInTpVA9Q96YuX1yevktdN6x37skejfGy8VwNH72eKRNO+Z1/5yvx9fb3kDg3RjHnCre1f6TtHuGxAUCInEXzFPJH3vjHudE9OFm/Hqrd0X0GwZ63Rjr67T1WeViWf+cr8fXe6EaPK4cYM73r95v7C8SFwiIwgUC4lB5QbpDzf5EW1eGo+Met4fIeYwoP8T1+2nru3ykrnPFtLuDnnzxfF7Pxd57vh3f7uXh6LQlg/aQmftwF4HEBQIicfeZJ9R2vixUTypdJu7FatxjH43Rx4iqR66QupT0bozTR8BoktaT97n8Gd+Ob/diMTKkr6ebgjPvvzx5id79QOICAZG4C+LBVI+kuFVcevHLMT7DgI97XI/GuFV0qvBb82Ya5TEn7TMvXWheP7/V2245+4Hv36Ae5ZGg3VckLhAQibsP0tgn3kZsnvlZWp9UWuralj4CZD3DgKfo+SJFu5vgZ5/JwJP2xdwOHtfGHVRnlYep37Ytk5cQ3jskLhAQibtgdVu3PKvczew3foYBH/e4bLf6cDPXMlufPz+XR3cs29D1WWXatotB4gIBkbj7LE34wtu6GpO4Pj9t18vKz0D7OMhdG3SeM9J70pZHAd7GndS2JYD3B4kLBEThAgFxqLwg9SGzn+zp/SnNh6Wp+owfKvsEXOXlmnb8qJXxf5N79/sO+zc2tJ0rqtcHxaFy3UWTQ+TFIHGBgEjcBZuYvFL3Z9UvwYyMPdW8XnbMWBsZ9zifnBrTQcI7UdTdLuvXh8WHSNrlQOICAdkst2GZGX9g91jZ1aJOzXa2g2pqztWig0a9bK3XJTH1E7VO2Ppm/95ru/x+MLuU0oTfXofEBQKijbtkpt2Q0KadJ7EnpsrE7a+vbuP2Vu/t6nr9I2MmT9g/LAyJCwRE4i6x0e6R/RfqBJ627NTtTEnY3a0B+43EBQKicIGAOFQOYKej32mX9K5mqksOjZcfiQsEROIGNEsiMkLFwUTiAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4QEIULBEThAgFRuEBAFC4Q0NqMyz8v6fRe7AgASdKtu1nIUkp7vSMA5oxDZSAgChcIiMIFAqJwgYAoXCAgChcIiMIFAqJwgYAoXCCg/wda+yxsHmNFSQAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# define integration step sizes\n", "dx = 1\n", "dt = 1\n", "\n", "L = 100 # system shape is square for convenience\n", "T = 100 # integration time\n", "D = 0.1 # diffusion constant\n", "\n", "# initial heat distribution\n", "grid = np.zeros(2 * [L * int(dx**-1)])\n", "gridsize = grid.shape\n", "grid[int(gridsize[0] / 2), int(gridsize[1] / 2)] = 70 # put a point source in the center, with intensity 70\n", "grid[int(gridsize[0] / 4), int(gridsize[1] / 4)] = 35 # put a point source at a quarter corner, intensity 35\n", "\n", "\n", "def sequential(grid, T, dt):\n", " ''' sequential processing solution '''\n", " ts = time.time() # measure computation time\n", " for t in np.arange(T/dt):\n", " # insert upper and lower boundary: reflecting boundary\n", " tmp = np.insert(grid, 0, grid_s[0, :], axis=0)\n", " tmp = np.vstack((tmp, grid[-1, :]))\n", " # insert left and right boundary: reflecting boundary\n", " tmp = np.insert(tmp, 0, tmp[:, 0], axis=1)\n", " tmp = np.hstack((tmp, np.array([tmp[:, -1]]).T)) # note: slicing gives a row vector therefore transpose to get column vector\n", " grid = diffusion(D, dt, dx, tmp)\n", " print('Sequential processing took {}s'.format(time.time() - ts))\n", " return grid\n", "\n", "plt.imshow(grid, cmap=plt.cm.copper, extent=[-1,1,-1,1]);\n", "plt.xticks([]); \n", "plt.yticks([]);\n", "plt.title('Original heat distribution')\n", "plt.show()\n", "plt.imshow(sequential(grid, T, dt), cmap=plt.cm.copper, extent=[-1,1,-1,1]);\n", "plt.xticks([]); \n", "plt.yticks([]);\n", "plt.title('Final heat distribution')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">### Your turn\n", "Time the solution of the problem for different $L$. How do you expect the complexity of the problem to grown with $L$? How does it grow with $L$?\n", "\n", ">### Your turn\n", "The code above does two nested `for` loops over the grid. Vectorize the code.\n", "\n", ">### Your turn\n", "Explore other initial conditions (absorbing and periodic) and explore how the heat diffuses in this system." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }