The classical linear regression model assumes that the errors are normally distributed with mean zero and constant variance. Several tests can be used to check this assumption, including the Jarque-Bera (JB) test, the Anderson-Darling normality test, a histogram of the residuals, and the normal probability plot, a graphical device. This study used the JB test to investigate the normality of the residuals under the null hypothesis that the errors are normally distributed. The null hypothesis is not rejected if the computed probability value is reasonably above 0.05.
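As a minimal sketch, the JB test on a vector of regression residuals can be run with SciPy; the residuals below are simulated for illustration, since the study's actual fitted residuals are not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical residuals; in practice these come from the fitted regression.
rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=1.0, size=500)

jb_stat, p_value = stats.jarque_bera(residuals)

# Fail to reject normality when the p-value is comfortably above 0.05.
if p_value > 0.05:
    print(f"JB = {jb_stat:.3f}, p = {p_value:.3f}: no evidence against normality")
else:
    print(f"JB = {jb_stat:.3f}, p = {p_value:.3f}: normality rejected")
```

The JB statistic combines the sample skewness and excess kurtosis of the residuals, so it is only informative in reasonably large samples.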
Regressors can be perfectly or imperfectly collinear; under perfect multicollinearity, the least squares estimates are undefined (Gujarati, 2009). In the presence of multicollinearity, least squares estimators have large variances, which affects hypothesis testing. The pairwise correlation test, the variance inflation factor, and auxiliary regressions can be used to test for multicollinearity; this study adopted the pairwise correlation test. A pairwise correlation coefficient between two regressors in excess of 0.8 (a rule of thumb) implies that the variables are collinear, in which case one of the variables is dropped.
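The pairwise screening step can be sketched as follows; the design matrix is hypothetical, with one regressor deliberately constructed to be nearly collinear with another so the 0.8 rule of thumb fires:

```python
import numpy as np

# Hypothetical design matrix: three regressors, x2 built to track x0 closely.
rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
x1 = rng.normal(size=200)
x2 = x0 + 0.1 * rng.normal(size=200)   # nearly collinear with x0
X = np.column_stack([x0, x1, x2])

# Pairwise correlation matrix of the regressors.
corr = np.corrcoef(X, rowvar=False)

# Flag any pair whose |correlation| exceeds the 0.8 rule of thumb.
threshold = 0.8
flagged = [(i, j)
           for i in range(corr.shape[0])
           for j in range(i + 1, corr.shape[1])
           if abs(corr[i, j]) > threshold]
print("collinear pairs:", flagged)
```

One member of each flagged pair would then be dropped before re-estimating the model, as the text describes.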
The Ramsey RESET test is used to check the validity of the whole model under the null hypothesis that the original model is adequate and correctly specified. Failure to reject the null hypothesis implies the test has not detected any model misspecification. The coefficient of determination (R²) was used to measure the proportion of the variation in the dependent variable that is explained by the model. A value of R² in excess of 50% implies that the data fit the model well; however, a very high R² may signal model overfitting (Gujarati, 2009). If the probability values of the t- and F-statistics are higher than 0.05, we fail to reject the null hypothesis and conclude that the whole model is correctly specified.
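A hand-rolled sketch of the RESET idea, together with the R² computation, is shown below on simulated data (the regressors and coefficients are hypothetical). The restricted model is re-estimated with squared and cubed fitted values added, and an F-test compares the two sums of squared residuals:

```python
import numpy as np
from scipy import stats

# Hypothetical data from a correctly specified linear model.
rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=n)

def ssr(A, b):
    """Sum of squared residuals from an OLS fit of b on A."""
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coef
    return resid @ resid

# Restricted model, then RESET augmentation with powers of the fitted values.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
X_aug = np.column_stack([X, fitted**2, fitted**3])

ssr_r, ssr_u = ssr(X, y), ssr(X_aug, y)
q = 2                               # number of added regressors
df_denom = n - X_aug.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / df_denom)
p_value = stats.f.sf(F, q, df_denom)

# R^2 of the original model: 1 - SSR/SST.
r2 = 1.0 - ssr_r / ((y - y.mean()) @ (y - y.mean()))
print(f"RESET F = {F:.3f}, p = {p_value:.3f}, R^2 = {r2:.3f}")
```

A p-value above 0.05 here means the added powers of the fitted values add no explanatory power, i.e. no misspecification is detected. In practice a packaged implementation such as `statsmodels.stats.diagnostic.linear_reset` would typically be used instead of this manual version.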