Physics 212, 2019: Lecture 17

From Ilya Nemenman: Theoretical Biophysics @ Emory

Back to the main Teaching page.

Back to Physics 212, 2019: Computational Modeling.

Avoid blind fits

Follow the Optimization notebook for this section.

As we have stressed again and again, there are no guarantees that a nonlinear optimization routine will find the global minimum, or even a "good" minimum, where, for example, the curve that we are trying to fit passes through the data points. More commonly, you will find a local minimum, where the parameter values produce a slightly better fit than nearby parameter values, but the overall fit will be bad and won't make much sense. To avoid this problem, one must again put on one's thinking hat and try to figure out what reasonable parameter values are. Remember that every optimization algorithm requires an initial guess from which to start searching, and if the initial guess is close to a good optimum, the chances that this optimum will be found are much higher.
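To see this concretely, here is a minimal sketch (a hypothetical illustration, not the notebook's code) in which scipy.optimize.minimize is started from several different points on a function with multiple local minima; which minimum it reports depends entirely on where it starts.

```python
import numpy as np
from scipy.optimize import minimize

# A simple objective with several local minima (a made-up example):
# oscillations on top of a slowly rising envelope.
def objective(x):
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

# Starting near the global minimum vs. far from it gives different answers.
for x0 in [-0.5, 2.0, 4.0]:
    res = minimize(objective, x0=[x0])
    print(f"start {x0:5.1f} -> minimum at x = {res.x[0]:6.3f}, f = {res.fun:6.3f}")
```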

How do we find such good initial guesses? The main idea is, again, to explore special cases. The curves that you fit to data are generated by models, and it is frequently the case that different parts of the curve are affected differently by different model parameters. The trick is to find ranges of the fitted curve that look nearly linear, with the parameters of the full model corresponding to the parameters of the linear fit. The beauty of linear models is that there is just one global minimum, which is easy to find. So one can do linear least-squares fitting, find parameter guesses from pieces of the full curve, and then use these guesses as initial conditions for the full fit.

The following Module 3 Python script shows how we can do this using the example of Michaelis-Menten enzymatic kinetics, where the change in the amount of substrate is given by $\frac{dS}{dt} = -\frac{v_{\max} S}{K_M + S}$. There are two unknown parameters -- the maximum reaction velocity $v_{\max}$ and the Michaelis constant $K_M$. We notice that if the substrate concentration is large, $S \gg K_M$, then $\frac{dS}{dt} \approx -v_{\max}$. Thus one can fit the initial part of the $S$ vs. $t$ data, and the slope of the linear fit will give us a good estimate of $v_{\max}$. If the substrate concentration is small, $S \ll K_M$, then $\frac{dS}{dt} \approx -\frac{v_{\max}}{K_M} S$. This has the usual exponential solution $S = S_0 e^{-v_{\max} t / K_M}$. This exponential curve becomes linear in semi-logarithmic coordinates: $\ln S = C - \frac{v_{\max}}{K_M} t$, where $C$ is some constant. One can do a linear fit of this semi-logarithmic data and get a good initial guess for $v_{\max}/K_M$. But then, knowing $v_{\max}$ from the first part of the curve, one can get an estimate of $K_M$ too. Finally, with these estimates, one can do a 2-d global fit to find really good parameter values.
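A minimal sketch of this three-step strategy is below; the true parameter values, the noise level, and the cut-offs that define the "large $S$" and "small $S$" regions are all assumptions for illustration and may differ from the actual Module 3 notebook.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# Synthetic Michaelis-Menten data; parameter values, noise level, and
# regime cut-offs below are assumptions for illustration only.
v_max_true, K_M_true, S0 = 1.0, 0.5, 10.0
t_data = np.linspace(0, 15, 60)

def mm_rhs(t, S, v_max, K_M):
    # dS/dt = -v_max * S / (K_M + S)
    return -v_max * S / (K_M + S)

def mm_solution(t, v_max, K_M):
    # Substrate vs. time, obtained by integrating the rate equation.
    sol = solve_ivp(mm_rhs, (t[0], t[-1]), [S0], t_eval=t,
                    args=(v_max, K_M), rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(0)
S_data = mm_solution(t_data, v_max_true, K_M_true) \
         * (1 + 0.03 * rng.standard_normal(t_data.size))

# Step 1: at early times S >> K_M, so dS/dt ~ -v_max and S(t) is nearly linear.
early = t_data < 5
slope_early, _ = np.polyfit(t_data[early], S_data[early], 1)
v_max_guess = -slope_early

# Step 2: at late times S << K_M, so ln S is linear in t with slope -v_max/K_M.
late = t_data > 12
slope_late, _ = np.polyfit(t_data[late], np.log(S_data[late]), 1)
K_M_guess = v_max_guess / (-slope_late)

# Step 3: full two-parameter nonlinear fit, started from the linearized guesses.
popt, pcov = curve_fit(mm_solution, t_data, S_data, p0=[v_max_guess, K_M_guess])
print("initial guesses:", v_max_guess, K_M_guess)
print("full fit       :", popt)
```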

Which data to fit?

The Optimization notebook also shows that fits often depend on what exactly is being fitted. If data comes as $\{x_i, y_i\}$ pairs, one can fit these data, or their transformed versions, such as $\{x_i, \ln y_i\}$, $\{\ln x_i, y_i\}$, $\{\ln x_i, \ln y_i\}$, $\{x_i, 1/y_i\}$, or any other transformed combination. Which choice should we make? The sum-of-squares objective function assumes that the noise is of the same scale for every data point, and you should transform your data (typically the $y$ coordinate) to satisfy this property. For example, for exponentially decaying data with multiplicative noise, such a transformation is $y_i \to \ln y_i$ -- and it produces much better fits, as the script shows. Similarly, sum-of-squares algorithms usually work much better when the distribution of points is not skewed and there are no outliers -- and one often can achieve this by transforming the $x$ variable as well.
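As a sketch of this point (with assumed parameter values and noise level, not necessarily the notebook's), compare fitting noisy exponential-decay data directly with fitting it after a log transform:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponentially decaying data with multiplicative noise (values assumed).
rng = np.random.default_rng(1)
a_true, tau_true = 2.0, 3.0
x = np.linspace(0, 10, 50)
y = a_true * np.exp(-x / tau_true) * np.exp(0.2 * rng.standard_normal(x.size))

# Fit 1: raw data; the large-y points dominate the sum of squares.
popt_raw, _ = curve_fit(lambda x, a, tau: a * np.exp(-x / tau), x, y,
                        p0=[1.0, 1.0])

# Fit 2: log-transformed data; the noise now has the same scale for every
# point, and the model is linear in log space, so a linear fit suffices.
slope, intercept = np.polyfit(x, np.log(y), 1)
popt_log = [np.exp(intercept), -1.0 / slope]

print("raw fit      :", popt_raw)
print("log-space fit:", popt_log)
```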

Your turn

Generate data using a nonlinear model for some fixed value of its parameter. Then use the curve_fit function to fit the model to these data. Explore how the fitted value depends on the initial guess for the parameter. This should illustrate for you how important it is to start close to the correct value of the fitted parameter.
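One possible way to set this up is sketched below; the sine model, the true parameter value, and the noise level here are assumptions for illustration, so substitute the model specified in the assignment.

```python
import numpy as np
from scipy.optimize import curve_fit

# A hypothetical one-parameter model; sin(a*x) is notoriously sensitive
# to the initial guess for a.
def model(x, a):
    return np.sin(a * x)

a_true = 2.5
rng = np.random.default_rng(2)
x = np.linspace(0, 5, 100)
y = model(x, a_true) + 0.05 * rng.standard_normal(x.size)

# Try a range of initial guesses and see where each fit ends up.
for a0 in [0.5, 1.5, 2.3, 3.5, 5.0]:
    try:
        popt, _ = curve_fit(model, x, y, p0=[a0])
        print(f"initial guess a0 = {a0:3.1f} -> fitted a = {popt[0]:6.3f}")
    except RuntimeError:
        print(f"initial guess a0 = {a0:3.1f} -> fit did not converge")
```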