The method of least squares uses a particular measure of goodness of fit.
a. Total squared error, E
First of all, never forget that the word "error" in this context means uncertainty. Now, let {x_i, f_i}, i = 0, 1, ..., n, be the n+1 data values and let f(x) be the assumed function. Then E is defined to be

E = \sum_{i=0}^{n} w_i \left[ f_i - f(x_i) \right]^2 .

The {w_i} are weighting factors that depend on the nature of the uncertainties in the data {f_i}. For measured values, w_i = 1/\sigma_i^2, where the \sigma_i are the experimental uncertainties. Often, we just take all w_i = 1, perhaps implying that the experimental uncertainties are all the same. In that case,

E = \sum_{i=0}^{n} \left[ f_i - f(x_i) \right]^2 .
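The sum defining E can be evaluated directly. A minimal NumPy sketch, using made-up data and a made-up trial function (none of the values below come from the text), with the weighted case taking w_i = 1/\sigma_i^2:

```python
import numpy as np

# Illustrative data {x_i, f_i} (hypothetical values, n+1 = 4 points).
x = np.array([0.0, 1.0, 2.0, 3.0])
f_data = np.array([1.1, 2.9, 5.2, 6.8])

def f(x):
    # An assumed trial function f(x) = 1 + 2x, purely for illustration.
    return 1.0 + 2.0 * x

# Unweighted total squared error: E = sum_i [f_i - f(x_i)]^2
E = np.sum((f_data - f(x)) ** 2)

# Weighted version with w_i = 1/sigma_i^2 (sigma_i are hypothetical
# experimental uncertainties on each f_i).
sigma = np.array([0.1, 0.1, 0.2, 0.2])
E_weighted = np.sum((f_data - f(x)) ** 2 / sigma ** 2)
```

Note that the weighted form makes each term a squared residual measured in units of its own uncertainty, so points with small \sigma_i count more.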
b. Least squares fit
We wish to derive an f(x) that minimizes E. That means taking the derivative of E with respect to each adjustable parameter in f(x) and setting it equal to zero. Provided f(x) is linear in those parameters, we obtain a set of simultaneous linear equations with the adjustable parameters as the unknowns. These are called the normal equations.
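As a concrete sketch, take the simplest case f(x) = a + bx with all w_i = 1. Setting \partial E/\partial a = 0 and \partial E/\partial b = 0 gives two linear normal equations in a and b, which can be solved directly (the data below are made up for illustration):

```python
import numpy as np

# Illustrative data (hypothetical, not from the text).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = np.array([1.0, 2.1, 2.9, 4.2, 4.8])

# For f(x) = a + b*x, dE/da = 0 and dE/db = 0 yield the normal equations:
#   a*(n+1)    + b*sum(x)   = sum(f)
#   a*sum(x)   + b*sum(x^2) = sum(x*f)
A = np.array([[len(x),      x.sum()],
              [x.sum(), (x**2).sum()]])
rhs = np.array([f.sum(), (x * f).sum()])

a, b = np.linalg.solve(A, rhs)  # least-squares slope and intercept
```

The same pattern extends to any f(x) that is linear in its parameters (e.g. a polynomial): one normal equation per parameter, all linear in the unknowns.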