INTRODUCTION TO ECONOMETRICS II ECO 306 NOUN

Let's look at multicollinearity in a model with two explanatory variables. The higher the correlation between the explanatory variables, the larger the population variances of the distributions of their coefficient estimates, and the greater the risk of obtaining erratic estimates of the coefficients. Bear in mind, however, that a high correlation does not necessarily lead to poor estimates. If all the other elements determining the variances of the regression coefficients are favourable (that is, the number of observations and the sample variances of the explanatory variables are large, and the variance of the disturbance term is small), good estimates may still be obtained. Multicollinearity, therefore, must be caused by a combination of a high correlation and one or more of the other elements being unfavourable. It is a matter of degree, not of kind: any regression will suffer from it to some extent unless all the explanatory variables are uncorrelated. The problem is only taken seriously when it is clearly going to have a serious effect on the regression results. It is common in time series regressions, particularly where the data consist of a series of observations on the variables over a number of time periods; multicollinearity arises when two or more of the explanatory variables share a strong time trend.

Using Table 3.1 as an example, let us first consider the case of exact multicollinearity, where the explanatory variables are perfectly correlated.

Table 3.1

X2    X3    Y         Change in X2    Change in X3    Approximate change in Y
10    19    51 + u1        1               1                    5
11    21    56 + u2        1               1                    5
12    23    61 + u3        1               1                    5
13    25    66 + u4        1               1                    5
14    27    71 + u5        1               1                    5
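A small numerical sketch can confirm the exact collinearity in Table 3.1. In every row X3 = 2·X2 − 1, so the design matrix is rank-deficient and ordinary least squares cannot identify separate coefficients for X2 and X3. This is only an illustration, not part of the course material: the variable names follow the table, and the disturbance terms u1, ..., u5 are set to zero for simplicity.

```python
import numpy as np

# Data from Table 3.1 (disturbance terms u_i set to zero for illustration)
x2 = np.array([10, 11, 12, 13, 14], dtype=float)
x3 = np.array([19, 21, 23, 25, 27], dtype=float)  # note: x3 = 2*x2 - 1 exactly
y = np.array([51, 56, 61, 66, 71], dtype=float)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x2), x2, x3])

# Because x3 is an exact linear function of x2, X'X is singular and the
# OLS normal equations have no unique solution: the matrix rank is 2, not 3.
print(np.linalg.matrix_rank(X))        # prints 2

# The sample correlation between x2 and x3 is exactly 1
print(np.corrcoef(x2, x3)[0, 1])       # prints 1.0 (up to rounding)
```

In practice one would detect this by checking the rank (or condition number) of the design matrix before estimation; with near-perfect rather than exact correlation, the rank stays full but the coefficient variances become very large.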