# How to | Fit Models with Measurement Errors

Particularly in the physical sciences, it is common to use measurement errors as weights to incorporate measured variation into the fitting. Weights have a relative effect on the parameter estimates, but an error variance still needs to be estimated in weighted regression, and this impacts error estimates for results. The VarianceEstimatorFunction and Weights options to LinearModelFit and NonlinearModelFit can be used to get the desired results when weights are from measurement errors.

Define a dataset where the first two elements of each point are predictors and the third element is a measured value:

In[1]:= |
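As a sketch, such a dataset might be defined like this; the specific values and the name data are illustrative assumptions, not the original input:

```wolfram
(* hypothetical data: {x, y, measured value} triples *)
data = {{1., 1., 0.72}, {2., 1., 1.41}, {1., 2., 1.38},
   {2., 2., 1.97}, {3., 2., 2.24}, {2., 3., 2.21},
   {3., 3., 2.68}, {4., 3., 2.95}, {3., 4., 2.92}, {4., 4., 3.21}};
```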

Define the measurement errors associated with the measured values:

In[2]:= |
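Continuing the sketch, the errors would be a list with one entry per data point; the name errors and the values are assumptions:

```wolfram
(* hypothetical measurement errors, one per measured value *)
errors = {0.04, 0.09, 0.07, 0.11, 0.06, 0.1, 0.08, 0.12, 0.05, 0.09};
```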

You can use NonlinearModelFit to fit this data to a logarithmic function of the predictors. Using the Weights option, normally distributed variability based on the measurement errors can be incorporated into the fitting. Each data point is weighted by 1/σᵢ², where σᵢ is the measurement error for that data point.

When using Weights alone, the variance scale is estimated using the default method. Error estimates will depend on both the weights and the estimated variance scale. However, if the weights are from measurement errors, you would want error estimates to depend solely on the weights.

Fit the nonlinear model and include the errors in the weighting, with the variance scale estimated using the default method:

In[3]:= |

Out[3]= |
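Assuming the hypothetical names data and errors from above and an illustrative model a Log[b + c x y] (the actual model used here is not shown), the fit might be set up as follows; Weights -> 1/errors^2 is the standard way to weight by inverse squared measurement errors:

```wolfram
(* weight each point by 1/σᵢ²; the model form is an illustrative assumption *)
nlm = NonlinearModelFit[data, a Log[b + c x y], {a, b, c}, {x, y},
  Weights -> 1/errors^2]
```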

You can query the resulting FittedModel object for results about the fitting.

Use the "BestFit" and "ParameterTable" properties to obtain the best-fit function and a table of parameter values:

In[4]:= |

Out[4]= |
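Assuming the fitted model was stored as nlm (a hypothetical name), these properties would be queried like this:

```wolfram
nlm["BestFit"]         (* best-fit function of x and y *)
nlm["ParameterTable"]  (* estimates, standard errors, t-statistics, p-values *)
```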

For such a weighted fitting, the scale of the error variance needs to be estimated to obtain standard errors for the parameter estimates. The typical estimate, which is used by linear and nonlinear models by default, involves a weighted sum of squares.

It is important to note that only the relative sizes of the weights affect the fitting and error estimates. For example, multiplying all weights by a constant increases the estimated variance by that constant, but does not change the parameter estimates or standard errors.

Fit the same model with all weights increased by a factor of 100:

In[5]:= |

Out[5]= |
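Under the same assumed names and illustrative model as above, increasing all weights by a factor of 100 only requires scaling the Weights setting:

```wolfram
(* same hypothetical fit, with every weight multiplied by 100 *)
nlm100 = NonlinearModelFit[data, a Log[b + c x y], {a, b, c}, {x, y},
  Weights -> 100/errors^2]
```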

Note that the best-fit function and error estimates are the same as before:

In[6]:= |

Out[6]= |

Use the "EstimatedVariance" property of each result to compare the variance estimates. This shows that the estimate has increased by the same factor of 100 as the weights:

In[7]:= |

Out[7]= |
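With the hypothetical names nlm and nlm100 for the two fits, the comparison can be sketched as follows; because the variance estimate scales linearly with the weights while the residuals are unchanged, the ratio should be exactly 100:

```wolfram
(* "EstimatedVariance" gives each fit's estimated error variance *)
nlm100["EstimatedVariance"]/nlm["EstimatedVariance"]  (* should be 100. *)
```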

The weights in the examples above act purely as relative weights. They have a relative impact on the fitting, so rescaling them leaves the parameter estimates and standard errors unchanged.

To treat the weights as being computed from measurement errors, you can use the VarianceEstimatorFunction option in addition to Weights. VarianceEstimatorFunction explicitly defines the variance scale estimator that is used. For measurement errors, you want standard errors to be computed only from the weights, so the variance estimate should be the constant 1:

In[8]:= |

Out[8]= |
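Continuing the sketch with the same assumed data, errors, and illustrative model, the measurement-error fit combines both options; (1 &) is a pure function that always returns 1:

```wolfram
(* fix the variance scale at 1 so errors come only from the weights *)
nlmErr = NonlinearModelFit[data, a Log[b + c x y], {a, b, c}, {x, y},
  Weights -> 1/errors^2, VarianceEstimatorFunction -> (1 &)]
```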

View the best-fit function and parameter table for this model:

In[9]:= |

Out[9]= |

Comparing this table with the one from the first fitting shows that the parameter estimates are the same, but the standard errors and related results have changed:

In[10]:= |

Out[10]= |

The results from this fitting all incorporate the weights and a variance estimate consistent with weights coming from measurement errors. While the weights have an impact on the parameter estimates, the variance estimate does not. It is therefore possible to override the variance estimate defined at the time of the fitting to get the measurement-error results from the first fitting.

Obtain the same measurement-error parameter table from the first fitting by overriding its variance estimate:

In[11]:= |

Out[11]= |
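Assuming the first fit was stored as nlm (a hypothetical name), the variance estimator can be overridden when the property is extracted, since FittedModel objects accept options in property queries:

```wolfram
(* override the fit-time variance estimate for this property only *)
nlm["ParameterTable", VarianceEstimatorFunction -> (1 &)]
```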