shown in Equation 2. A slope change may have been induced by the intervention, or the baseline data may already exhibit a slope. If the null hypothesis is accepted, then β1 and β3 can be assumed to be zero, and the resulting equation reduces to the form of Model II, which represents only a level change. In this case, Model II is more suitable for representing the observed data, and the statistical power of the test is higher because fewer parameters are involved.
On the other hand, if the alternative hypothesis is accepted, then Model II would bias the level-change estimate because the slope change is not adequately represented. The second step tests the assumption of independent errors, i.e., the absence of autocorrelation. When repeated measures are gathered on a continual basis, the errors measured at a given time t may carry enough information to predict the errors at subsequent periods (e.g., t+1). Autocorrelation in the observed data can lead to erroneous statistical inferences if the selected model does not account for it. Thus, we used the Durbin-Watson test to test the null hypothesis that the lag-1 autocorrelation among the errors is zero (ρ = 0). Because the Durbin-Watson test is inconclusive when the test statistic falls between its two critical values, we also computed the Huitema-McKean test statistic for autocorrelation. If the tests reveal that the errors are autocorrelated, then Model III should be used when a slope is present in either phase, and Model IV should be used if a slope is absent in both phases.
F = [(SS_Reg(Model I) − SS_Reg(Model II)) / 2] / MS_Res(Model I)    (2)
where SS_Reg(Model I) is the regression sum of squares based on Model I, SS_Reg(Model II) is the regression sum of squares based on Model II, and MS_Res(Model I) is the residual mean square based on Model I.