University of Zimbabwe, Faculty of Social Studies, Department of Economics



3.5 Model Diagnostic Tests






In the presence of autocorrelation, least squares estimates remain linear and unbiased but have inflated variances. The Durbin-Watson (DW) d-statistic and the Breusch-Godfrey test are commonly used to detect autocorrelation. However, the DW d-statistic cannot be used in autoregressive models because the computed d-statistic always tends towards 2 (Wooldridge, 2008). Hence this study employed an alternative, the Lagrange Multiplier (DW h-statistic) test, under the null hypothesis of no serial autocorrelation. If the absolute value of the calculated h-statistic exceeds the critical value of 1.96, the null hypothesis of no serial autocorrelation is rejected, implying the model suffers from autocorrelation; otherwise the model is taken to be free of the problem.

In the presence of heteroskedasticity, least squares estimates are inefficient. The Goldfeld-Quandt test, the autoregressive conditional heteroskedasticity (ARCH) test and the Breusch-Pagan test can be used to test for heteroskedasticity, and the problem may be corrected using heteroskedasticity and autocorrelation consistent (HAC) standard errors. This study employed the ARCH test under the null hypothesis that the error variance is constant. The null hypothesis is not rejected if the probability value is greater than 0.05, implying that the error term is homoscedastic.

The classical linear regression model assumes that errors are normally distributed with mean zero and constant variance. Several tests can be used to check this assumption, including the Jarque-Bera (JB) test, the Anderson-Darling normality test, a histogram of residuals and the normal probability plot, a graphical device. The JB test was used to investigate the normality of residuals under the null hypothesis that the errors are normally distributed. This null hypothesis is not rejected if the computed probability value is reasonably higher than 0.05.
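As a rough illustration of two of the diagnostics above, the Durbin-Watson d-statistic and the Jarque-Bera statistic can both be computed directly from a fitted model's residuals. The sketch below is illustrative only and is not part of the original analysis; it assumes the residuals are available as a plain Python list.

```python
def durbin_watson(resid):
    """Durbin-Watson d-statistic.

    d lies between 0 and 4; values near 2 suggest no first-order
    autocorrelation, values near 0 positive autocorrelation, and
    values near 4 negative autocorrelation.
    """
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den


def jarque_bera(resid):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4).

    S is sample skewness and K is sample kurtosis; under the null
    hypothesis of normally distributed errors, JB is asymptotically
    chi-squared with 2 degrees of freedom.
    """
    n = len(resid)
    mean = sum(resid) / n
    m2 = sum((e - mean) ** 2 for e in resid) / n  # second central moment
    m3 = sum((e - mean) ** 3 for e in resid) / n  # third central moment
    m4 = sum((e - mean) ** 4 for e in resid) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

In practice one would compare JB against the chi-squared(2) critical value (about 5.99 at the 5% level), or simply read off the p-value reported by an econometrics package.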
Regressors can be perfectly or imperfectly collinear; with perfect multicollinearity, least squares estimates are undefined (Gujarati, 2009). In the presence of multicollinearity, least squares estimators have large variances, which compromises hypothesis testing. The pairwise correlation test, the variance inflation factor and auxiliary regressions can be used to test for multicollinearity, but this study adopted the pairwise correlation test. If the pairwise correlation coefficient between two regressors is in excess of 0.8 (a rule of thumb), the variables are regarded as collinear and one of them is dropped. The Ramsey RESET test is used to check the validity of the whole model under the null hypothesis that the original model is adequate and correctly specified; failure to reject the null hypothesis implies the test has failed to detect any model misspecification. The coefficient of determination (R²) was used to measure the proportion of the variation in the dependent variable explained by the model. A value of R² in excess of 50% implies the data fit the model well; however, a very high R² may signal model overfitting (Gujarati, 2009). If the probability values of the t- and F-statistics are higher than 0.05, we fail to reject the null hypothesis and conclude that the model is correctly specified.
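The pairwise correlation rule of thumb and the R² measure described above amount to a few lines of arithmetic. The sketch below is an illustrative computation under stated assumptions (regressors and fitted values supplied as plain lists), not part of the study's estimation procedure.

```python
import math


def pearson_corr(x, y):
    """Pairwise (Pearson) correlation coefficient between two regressors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def collinear(x, y, threshold=0.8):
    """Rule of thumb: |r| > 0.8 flags the pair as collinear (drop one)."""
    return abs(pearson_corr(x, y)) > threshold


def r_squared(y, fitted):
    """Coefficient of determination: share of variation in y explained."""
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, fitted))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```

For example, two regressors that move almost one-for-one would be flagged by `collinear`, while an unrelated pair would not; a perfect fit gives R² = 1.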