Interpreting OLS results

Output generated from the OLS Regression tool includes an output feature class containing the regression residuals, a statistical report that is printed to the screen as the tool runs, and optional coefficient and diagnostic output tables.

Each of these outputs is shown and described below as a series of steps for running OLS regression and interpreting the results.

(A) Run the OLS tool:


OLS Tool

You will need to provide an input feature class with a unique ID field, the dependent variable you want to model/explain, and all of the explanatory variables. You will also need to provide a pathname for the output feature class, and optionally, pathnames for the coefficient and diagnostic output tables. As the OLS tool runs, statistical results are printed to the screen.
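
This step can also be scripted. Below is a minimal sketch, assuming ArcGIS Pro's arcpy.stats module alias and the documented Spatial Statistics signature for OrdinaryLeastSquares; the workspace, feature class, and field names are hypothetical placeholders you would replace with your own data.

```python
import arcpy

# Hypothetical workspace and dataset; substitute your own.
arcpy.env.workspace = r"C:\Data\Burglary.gdb"

arcpy.stats.OrdinaryLeastSquares(
    "census_blocks",                      # input feature class
    "UniqueID",                           # unique ID field
    "ols_results",                        # output feature class (receives residuals)
    "BURGLARY",                           # dependent variable
    ["POP", "MED_INCOME", "DIST_URBAN"],  # explanatory variables
    "ols_coefficients",                   # optional coefficient output table
    "ols_diagnostics")                    # optional diagnostic output table

# The statistical report is written to the geoprocessing messages.
print(arcpy.GetMessages())
```

Supplying the two optional table paths at the end produces the coefficient and diagnostic tables discussed in step (D).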

(B) Examine the statistical report using the numbered steps described below:


OLS Report

Dissecting the Statistical Report

  1. Assess model performance. Both the Multiple R-Squared and Adjusted R-Squared values are measures of model performance. Possible values range from 0.0 to 1.0. The Adjusted R-Squared value is always a bit lower than the Multiple R-Squared value because it reflects model complexity (the number of variables) as it relates to the data, and consequently is a more accurate measure of model performance. Adding an explanatory variable to the model will increase (or at least not decrease) the Multiple R-Squared value, but may decrease the Adjusted R-Squared value if the variable adds little explanatory power. Suppose you are creating a regression model of residential burglary (the number of residential burglaries associated with each census block is your dependent variable, y). An Adjusted R-Squared value of 0.84 would indicate that your model (your explanatory variables modeled using linear regression) explains approximately 84% of the variation in the dependent variable, or said another way: your model tells approximately 84% of the residential burglary "story". (Several of the diagnostics described in steps 1 through 5 are reproduced in the code sketch that follows this list.)


  2. Assess each explanatory variable in the model: Coefficient, Probability or Robust Probability, and Variance Inflation Factor (VIF). The coefficient for each explanatory variable reflects both the strength and type of relationship the explanatory variable has to the dependent variable. When the sign associated with the coefficient is negative, the relationship is negative (e.g., the larger the distance from the urban core, the smaller the number of residential burglaries). When the sign is positive, the relationship is positive (e.g., the larger the population, the larger the number of residential burglaries). Coefficients are given in the same units as their associated explanatory variables (a coefficient of 0.005 associated with a variable representing population counts may be interpreted as 0.005 people). The coefficient reflects the expected change in the dependent variable for every 1 unit change in the associated explanatory variable, holding all other variables constant (e.g., a 0.005 increase in residential burglary is expected for each additional person in the census block, holding all other explanatory variables constant). A t-test is used to assess whether or not an explanatory variable is statistically significant. The null hypothesis is that the coefficient is, for all intents and purposes, equal to zero (and consequently is NOT helping the model). When the probability or robust probability is very small, the chance of the coefficient being essentially zero is also small. If the Koenker test (see below) is statistically significant, use the robust probabilities to assess explanatory variable statistical significance. Statistically significant probabilities have an asterisk "*" next to them. An explanatory variable associated with a statistically significant coefficient is important to the regression model if theory/common sense supports a valid relationship with the dependent variable, if the relationship being modeled is primarily linear, and if the variable is not redundant to any other explanatory variables in the model. The variance inflation factor (VIF) measures redundancy among explanatory variables. As a rule of thumb, explanatory variables associated with VIF values larger than about 7.5 should be removed (one by one) from the regression model. If, for example, you have a population variable (the number of people) and an employment variable (the number of employed persons) in your regression model, you will likely find them to be associated with large VIF values indicating that both of these variables are telling the same "story"; one of them should be removed from your model.


  3. Assess model significance. Both the Joint F-Statistic and Joint Wald Statistic are measures of overall model statistical significance. The Joint F-Statistic is trustworthy only when the Koenker (BP) statistic (see below) is not statistically significant. If the Koenker (BP) statistic is significant you should consult the Joint Wald Statistic to determine overall model significance. The null hypothesis for both of these tests is that the explanatory variables in the model are not effective. For a 95% confidence level, a p-value (probability) smaller than 0.05 indicates a statistically significant model.


  4. Assess stationarity. The Koenker (BP) Statistic (Koenker's studentized Breusch-Pagan statistic) is a test to determine if the explanatory variables in the model have a consistent relationship to the dependent variable (what you are trying to predict/understand) both in geographic space and in data space. When the model is consistent in geographic space, the spatial processes represented by the explanatory variables behave the same everywhere in the study area (the processes are stationary). When the model is consistent in data space, the variation in the relationship between predicted values and each explanatory variable does not change with changes in explanatory variable magnitudes (there is no heteroscedasticity in the model). Suppose you want to predict crime and one of your explanatory variables is income. The model would have problematic heteroscedasticity if the predictions were more accurate for locations with small median incomes than they were for locations with large median incomes. The null hypothesis for this test is that the model is stationary. For a 95% confidence level, a p-value (probability) smaller than 0.05 indicates statistically significant heteroscedasticity and/or non-stationarity. When results from this test are statistically significant, consult the robust coefficient standard errors and probabilities to assess the effectiveness of each explanatory variable. Regression models with statistically significant non-stationarity are especially good candidates for GWR analysis.


  5. Assess model bias. The Jarque-Bera statistic indicates whether or not the residuals (the observed/known dependent variable values minus the predicted/estimated values) are normally distributed. The null hypothesis for this test is that the residuals are normally distributed and so if you were to construct a histogram of those residuals, they would resemble the classic bell curve, or Gaussian distribution. When the p-value (probability) for this test is small (smaller than 0.05 for a 95% confidence level, for example), the residuals are not normally distributed, indicating model misspecification (a key variable is missing from the model). Results from a misspecified OLS model are not trustworthy.


  6. Assess residual spatial autocorrelation. Always run the Spatial Autocorrelation (Moran's I) tool on the regression residuals to ensure they are spatially random (a scripted example appears under step C below). Statistically significant clustering of high and/or low residuals (model under and over predictions) indicates a key variable is missing from the model (misspecification). OLS results cannot be trusted when the model is misspecified.


  7. Finally, review the section titled "How Regression Models Go Bad" in the Regression Analysis Basics document as a check that your OLS regression model is properly specified. Notice, too, that there is a section titled "Notes on Interpretation" at the end of the OLS statistical report to help you remember the purpose of each statistical test.

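Many of the report values discussed in steps 1 through 5 can be reproduced outside ArcGIS, which is a useful way to build intuition for what each test measures. The following is a minimal sketch using statsmodels on made-up data; the variable names are illustrative, and statsmodels' Breusch-Pagan test is the LM form rather than the Koenker-studentized variant the OLS tool reports, so the values will differ somewhat.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import jarque_bera

# Made-up data standing in for census-block attributes.
rng = np.random.default_rng(0)
n = 200
pop = rng.uniform(100, 5000, n)          # population per block
dist = rng.uniform(0, 30, n)             # distance from urban core
burglary = 0.005 * pop - 1.2 * dist + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([pop, dist]))
results = sm.OLS(burglary, X).fit()

# Step 1: model performance.
print("Multiple R-squared:", results.rsquared)
print("Adjusted R-squared:", results.rsquared_adj)

# Step 2: coefficients and t-test probabilities; VIF values above
# roughly 7.5 flag redundant explanatory variables (column 0 is the
# constant, so it is skipped).
print("Coefficients:", results.params)
print("Probabilities:", results.pvalues)
print("VIFs:", [variance_inflation_factor(X, i) for i in (1, 2)])

# Step 3: overall model significance (Joint F-Statistic).
print("Joint F:", results.fvalue, "p:", results.f_pvalue)

# Step 4: Breusch-Pagan heteroscedasticity test. A small p-value
# suggests non-constant residual variance; in that case refit with
# robust (HC3) standard errors, which play the role of the tool's
# robust probabilities.
lm_stat, lm_p, _, _ = het_breuschpagan(results.resid, X)
print("Breusch-Pagan LM:", lm_stat, "p:", lm_p)
print("Robust probabilities:", sm.OLS(burglary, X).fit(cov_type="HC3").pvalues)

# Step 5: Jarque-Bera normality test on the residuals. A small
# p-value suggests misspecification.
jb_stat, jb_p, _, _ = jarque_bera(results.resid)
print("Jarque-Bera:", jb_stat, "p:", jb_p)
```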

(C) Examine output feature class residuals. Over and under predictions for a properly specified regression model will be randomly distributed. Clustering of over and/or under predictions is evidence that you are missing at least one key explanatory variable. Examine the patterns in your model residuals to see if they provide clues about what those missing variables are. Sometimes running Hot Spot Analysis on regression residuals will help you see the broader patterns in over and under predictions.


Residual Map
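
The residual checks in step 6 and step (C) can also be scripted. The sketch below assumes the documented Spatial Statistics signatures for SpatialAutocorrelation and HotSpots, and assumes the OLS output feature class stores residuals in a field named Residual (verify the actual field name in your output).

```python
import arcpy

arcpy.env.workspace = r"C:\Data\Burglary.gdb"   # hypothetical workspace

# Step 6: Moran's I on the regression residuals; statistically
# significant clustering indicates misspecification.
arcpy.stats.SpatialAutocorrelation(
    "ols_results", "Residual", "GENERATE_REPORT",
    "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "ROW")

# Step (C): Hot Spot Analysis on the residuals to visualize the
# broader patterns of over and under prediction.
arcpy.stats.HotSpots(
    "ols_results", "Residual", "ols_residual_hotspots",
    "FIXED_DISTANCE_BAND", "EUCLIDEAN_DISTANCE", "NONE")
```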

(D) View the coefficient and diagnostic tables. Creating the coefficient and diagnostic tables is optional. While you are in the process of finding an effective model, you may elect not to create these tables. The model building process is iterative and you will likely try a large number of different models (different explanatory variables) until you settle on a few good ones. You can use the Akaike Information Criterion (AIC) value in the report to compare different models. The model with the smaller AIC value is the better model (that is, taking into account model complexity, the model with the smaller AIC provides a better fit to the observed data). You should always create the coefficient and diagnostic tables for your final OLS models in order to capture the most important elements of the OLS report including the list of explanatory variables used in the model with their coefficients, standard errors, and probabilities, and results for each diagnostic test. The diagnostic table includes a description of each test along with some guidelines for how to interpret test results.


Coefficient table


OLS Diagnostics


AICc output
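
As a closing illustration of the AIC comparison described in step (D), here is a sketch on made-up data (the comparison itself is tool-agnostic): the model with the smaller value provides the better fit once complexity is penalized. Note that the OLS tool reports the small-sample corrected AICc, while statsmodels' results.aic is the uncorrected AIC, so only compare values computed the same way.

```python
import numpy as np
import statsmodels.api as sm

# Made-up data, as in the earlier sketch.
rng = np.random.default_rng(0)
n = 200
pop = rng.uniform(100, 5000, n)
dist = rng.uniform(0, 30, n)
burglary = 0.005 * pop - 1.2 * dist + rng.normal(0, 5, n)

# Candidate models: all explanatory variables vs. population only.
full = sm.OLS(burglary, sm.add_constant(np.column_stack([pop, dist]))).fit()
reduced = sm.OLS(burglary, sm.add_constant(pop)).fit()

# Smaller AIC wins the fit/complexity trade-off.
print("Full model AIC:   ", full.aic)
print("Reduced model AIC:", reduced.aic)
```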