Guide to Advanced Empirical Software Engineering (2008)
7.2. Types of Validity
As noted above, we also want to make sure that our survey instrument measures what we intend it to measure. This is called survey validity. Four types of validity are discussed below.
Face validity is a cursory review of items by untrained judges. It hardly counts as a measure of validity at all, because it is so subjective and ill-defined.
Content validity is a subjective assessment of how appropriate the instrument seems to a group of reviewers (i.e. a focus group) with knowledge of the subject matter. It typically involves a systematic review of the survey’s contents to ensure that it includes everything it should and nothing that it shouldn’t. The focus group should include subject domain experts as well as members of the target population.
There is no content validity statistic, so it is not a scientific measure of a survey instrument’s validity. Nonetheless, it provides a good foundation on which to base a rigorous assessment of validity. Furthermore, if we are developing a new survey instrument in a topic area that has not previously been researched, it is the only form of preliminary validation available.
Criterion validity is the ability of a measurement instrument to distinguish respondents belonging to different groups. This requires a theoretical framework to determine which groups the instrument is intended to distinguish. Criterion validity is related to concurrent validity and predictive validity. Concurrent validity is based on confirming that an instrument is highly correlated with an already validated measure or instrument that it is meant to be related to. Predictive validity is based on confirming that the instrument predicts a future measure or outcome that it is intended to predict.
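As a minimal sketch of how a concurrent validity check might look in practice (the respondent scores and variable names here are hypothetical, not from any real study), one can correlate scores from the new instrument with scores from an already validated measure administered to the same respondents:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: the new instrument administered alongside an
# already validated measure of the same attribute, one pair per respondent.
new_scores = [12, 15, 9, 20, 17, 11, 14]
validated_scores = [14, 16, 10, 21, 18, 12, 15]

# A strong positive correlation supports concurrent validity.
r = pearson_r(new_scores, validated_scores)
print(f"concurrent validity correlation: r = {r:.2f}")
```

The same correlation machinery serves predictive validity, with the second list replaced by the future outcome the instrument is intended to predict.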
Construct validity concerns how well an instrument measures the construct it is designed to measure. This form of validity is very important for validating summated measurement scales (Spector 1992). Convergent construct validity assesses the extent to which different questions that are intended to measure the same concept give similar results. Divergent construct validity assesses the extent to which a concept does not correlate with similar but distinct concepts. Like criterion validity, divergent and convergent construct validity can be assessed by correlating a new instrument with an already validated instrument. Dybå (2000) presents a software engineering example of the validation process for a software survey using summated measurement scales.
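The convergent/divergent assessment above can be sketched as follows. All item responses and scale names here are hypothetical illustrations: the new summated scale should correlate strongly with a validated scale for the same concept (convergent) and only weakly with a validated scale for a distinct concept (divergent):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical Likert item responses on the new scale, three items
# per respondent; a summated scale simply sums each respondent's items.
new_items = [[4, 5, 4], [2, 2, 3], [5, 5, 4], [3, 3, 3], [1, 2, 1]]
new_scale = [sum(items) for items in new_items]

# Hypothetical scores from two already validated instruments.
similar_scale = [12, 8, 15, 10, 5]   # measures the same concept
distinct_scale = [8, 7, 10, 11, 9]   # measures a different concept

convergent = pearson_r(new_scale, similar_scale)   # expected to be high
divergent = pearson_r(new_scale, distinct_scale)   # expected to be low
print(f"convergent r = {convergent:.2f}, divergent r = {divergent:.2f}")
```

A high convergent correlation together with a near-zero divergent correlation is the pattern that supports construct validity for the new scale.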