Physics 212, 2019: Lecture 1

Review of the class

  • What is this class about
  • What are the class goals
  • Review of the syllabus

Introduction to computational modeling

Western science has been built over the centuries on various versions of the philosophy of empiricism, dating back at least to Francis Bacon. Roughly speaking, science works by making observations about the world; these observations are called experiments. One then makes generalized inferences from those observations, called theories, which allow us to make predictions about the outcomes of experiments we haven't yet done. These predictions are then compared to the results of new experiments. Our generalizations continue to stand as long as their predictions are empirically verified. However, if empirical facts contradict the predictions, then the theories are wrong and need to be modified.

It has similarly been understood for centuries, at least since Galileo's famous phrase that "the book of Nature is written in the language of mathematics", that mathematics plays a big role in this enterprise. First, the only things we can really compare are numbers. We can't compare an apple to an orange, but we can compare their weights (numbers), their colors (numbers representing intensities of various spectral lines), their chemical compositions (again, numbers), their shapes and sizes (also numbers), and so on. Similarly, to say that a prediction is or is not supported by an experiment, one must express the prediction and the experimental data in terms of numbers (binary, integer, real-valued, etc.) and compare the two. Numbers are in the domain of mathematics, which is the first way that mathematics enters science. Second, mathematics is a language for operating with abstract quantities, of which numbers are just one example. The language is useful because it affords brevity and, if handled properly, leaves no room for misinterpretation. It is thus the language that we use to write down generalizations from our experiments and to make predictions of future observations. For example, Newton's second law of motion is a succinct way of writing the following: each body is characterized by a single number, its mass; the rate of change of the rate of change of the body's position in space is determined by the effects (forces) of other bodies; and the effect is smaller if the mass is larger. Solving this equation, which could also be expressed as long, convoluted English sentences but is deceptively simple in the language of math, can then be used to make predictions about the trajectory of any body with a known mass, as long as we know the forces exerted on it by the other bodies.
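
In the language of math, the whole verbal description above collapses into a single line. Writing r for the position of the body, m for its mass, and F for the total force that the other bodies exert on it, the law reads

  \[ m \, \frac{d^2 \vec{r}}{dt^2} = \vec{F}(\vec{r}, t), \]

that is, the rate of change of the rate of change of the position (the acceleration) equals the force divided by the mass, so the same force has a smaller effect on a body with a larger mass.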

In view of this, since its early days, science has always involved two complementary approaches: experimentation (gathering new data) and mathematical theory (generalizing from the data and making predictions). In some disciplines, such as physics, the importance of the two approaches has been well understood for centuries, and traditional curricula have been built to train students in the tools of the trade in both the experimental and the theoretical domains. Other disciplines, such as biology, have consciously cultivated themselves as a refuge from mathematics. But this was disingenuous, to say the least: the most important work in biology has been explicitly influenced by mathematics. Examples include the development of Darwin's ideas, which were to a large extent influenced by the earlier mathematical work of Malthus on population growth. More recently, foundational Nobel-prize-winning work in biology, such as the Hodgkin-Huxley studies of action potential generation in neurons, the Luria-Delbrück proof that evolution proceeds by the Darwinian (rather than Lamarckian) route of accumulation and natural selection of random mutations, and the celebrated Watson-Crick reconstruction of the structure of DNA, are all heavily mathematical pieces of work. And now, when biological experiments have finally started to measure precise real numbers, rather than vague trends, the culture of refuge from math is very quickly disappearing, and many of us (myself included) have made our careers analyzing living systems using mathematical approaches. Other disciplines sit somewhere on this spectrum between biology and physics. For example, chemistry turned mathematical with the advent of quantum mechanics, and the social sciences are following the trend now, in the era of Big Data. All in all, you cannot be a research scientist nowadays anywhere in the natural sciences (and, to some extent, also in the social sciences) without recognizing the importance of math and having the capacity to do it.

However, something else has happened in roughly the last fifty years that has completely changed how science is done: the appearance and ubiquitous availability of digital computers. Instead of standing on the two legs of theory and experiment, science has now become a three-legged stool, supported additionally by computation. Computational modeling is not quite theory and not quite experiment; it combines features of both and additionally contributes its own. We can use computation as a way of speeding up theory: computers are far faster than humans at transforming numbers, so we can use them to calculate the predictions of our theories quickly and efficiently. For example, a few years ago physicists at CERN, a particle physics laboratory on the outskirts of Geneva, reported the discovery of the Higgs boson, the last missing component of what is called the Standard Model of particle physics, the theory of the subatomic world. They made the detection by comparing the findings of their experiments to thousands of computer-generated realizations of solutions of the Standard Model equations, and then identifying which specific Standard Model parameters agreed with the data best. Similarly, when scientists at Los Alamos National Lab study interactions between HIV and the human immune system, they verify which of their theories of immune dynamics are correct by comparing computer-generated predictions to experiments.
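
As a toy illustration of this workflow (certainly not the actual CERN or Los Alamos analyses; the model, the data, and every number below are invented for illustration), the following Python sketch evaluates a simple theory's predictions for many candidate parameter values and reports which value agrees with a made-up data set best:

  import numpy as np

  # Made-up "experimental" data: some quantity measured at a few time points.
  t_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
  y_data = np.array([10.0, 6.1, 3.6, 2.3, 1.4])

  def model(t, rate):
      """A toy theory: exponential decay, y(t) = 10 * exp(-rate * t)."""
      return 10.0 * np.exp(-rate * t)

  # Let the computer evaluate the theory's predictions for many candidate
  # parameter values (far faster than any human could by hand)...
  rates = np.linspace(0.1, 1.0, 91)
  errors = [np.sum((model(t_data, r) - y_data) ** 2) for r in rates]

  # ...and report which parameter value agrees with the data best.
  best = rates[int(np.argmin(errors))]
  print(f"best-fitting decay rate: {best:.2f}")

The real analyses differ enormously in scale and sophistication, but the logic is the same: the computer evaluates the theory's predictions quickly, so many candidate versions of the theory can be confronted with the data.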

The other way of using computational modeling is as a fast way of doing experiments. For example, we understand quite well the microscopic laws that govern interactions among individual amino acids in a protein, or between molecules in a glass. However, an important goal of these fields has been the discovery of phenomenological, coarse-grained laws that describe, for example, the equation of state of the glass or the sequence-structure-function relation in proteins. Some of the relevant experiments are very hard to do (for example, relaxation of a glass to thermal equilibrium may take many millions of years). This is where computation helps again: we can run a digital, in silico experiment reasonably cheaply, both in terms of time and of actual financial resources. Any putative phenomenological theory can then be tested against the findings of such computer experiments.
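
As a minimal example of such an in silico experiment (a toy of my own, not the protein or glass simulations mentioned above), the Python sketch below runs many random walkers and "measures" how far they stray from their starting points, recovering the phenomenological law of diffusion: the mean squared displacement grows linearly with time.

  import numpy as np

  rng = np.random.default_rng(0)

  n_walkers = 2000   # number of independent in silico "experiments"
  n_steps = 100      # duration of each experiment, in steps

  # Each walker takes unit steps left or right with equal probability.
  steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
  positions = np.cumsum(steps, axis=1)

  # The "measurement": mean squared displacement as a function of time.
  msd = np.mean(positions.astype(float) ** 2, axis=0)

  # For a simple random walk the law is diffusive: msd(t) grows like t.
  for t in (10, 50, 100):
      print(f"t = {t:3d}: mean squared displacement ~ {msd[t - 1]:.1f}")

Running two thousand such digital "experiments" takes a fraction of a second on a laptop, and the resulting numbers can then be compared against any putative coarse-grained theory, exactly as described above.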

This course is an introduction to this new and exciting way of doing science: computational science. We will learn some basics of how to build mathematical models, how to implement them on computers, how to solve them, and how to draw conclusions from them. I have three goals for the students in this class:

  1. To learn to translate a verbal description of a natural phenomenon into a mathematical / computational model (a small example of such a translation follows this list).
  2. To learn how to solve such models using computers, and specifically using the Python programming language. This includes learning how to verify that the solution you produce is correct.
  3. To learn basic algorithms used in computational science.
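
As a preview of what the first goal looks like in practice, here is a deliberately simple (and entirely made-up) example: the verbal statement "a bacterial population roughly doubles every hour" translates into the update rule N(t + 1) = 2 N(t), which a few lines of Python can then iterate to predict the population at later times.

  # Verbal description: "the population roughly doubles every hour."
  # Mathematical translation: N(t + 1) = 2 * N(t), with an assumed N(0) = 100.
  n = 100               # initial population (an illustrative, made-up number)
  doubling_factor = 2   # "doubles every hour"

  for hour in range(1, 6):
      n = doubling_factor * n
      print(f"hour {hour}: predicted population = {n}")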