statistical analyses are only applicable in well-constrained situations. The type of data collected in field studies often requires nonparametric statistics. Nonparametric statistics are often called "distribution-free" in that they do not have the same requirements regarding the modeled distribution as parametric statistics. Additionally, there are many nonparametric tests based on simple rankings, as opposed to strict numerical values. Finally, many nonparametric tests can be used with small samples. For more information about nonparametric statistics, Siegel and Castellan (1988) provide a good overview. Briand et al. (1996) discuss the disadvantages of nonparametric statistics versus parametric statistics in software engineering. They point out that a certain amount of violation of the assumptions of parametric statistics is legitimate, but that nonparametric statistics should be used when there are extreme violations of those assumptions, as there may well be in field studies.
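To illustrate, the following is a minimal sketch of a rank-based nonparametric comparison, assuming two small, hypothetical samples of task-completion times gathered from two groups of developers in a field study; the data and variable names are invented for illustration, not taken from any cited study.

```python
# A minimal sketch of a rank-based nonparametric comparison.
# The samples below are hypothetical task-completion times (in minutes)
# for two small groups of developers observed in a field study.
from scipy.stats import mannwhitneyu, ttest_ind

group_a = [12.5, 9.0, 15.2, 48.0, 11.1, 10.3]   # contains an extreme value
group_b = [14.0, 16.5, 13.2, 17.8, 15.9]

# The Mann-Whitney U test works on ranks, so it tolerates skew, outliers,
# and small samples better than the parametric alternative.
u_stat, u_p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")

# For contrast, the parametric t-test assumes roughly normal data --
# an assumption that field data often violate.
t_stat, t_p = ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {t_p:.3f}")
```

With the extreme value in the first sample, the rank-based test is far less affected than the t-test, which is exactly the property that makes nonparametric tests attractive for field data.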
Qualitative analyses do not rely on quantitative measures to describe the data. Rather, they provide a general characterization based on the researcher's coding schemes. Again, the different types of qualitative analysis are too complex to detail in this paper. See Miles and Huberman (1994) for a very good overview.
Both quantitative and qualitative analysis can be supported by software tools. The most popular tools for quantitative analysis are SAS and SPSS. A number of different tools exist for helping with qualitative analysis, including NVivo, Atlas/ti, and Noldus Observer. Some of these tools also help with analysis of video recordings.
In summary, the way the data is coded will affect its interpretation and the possible courses for its evaluation. Therefore it is important to ensure that coding schemes reflect the research goals. They should tie into particular research questions. Additionally, coding schemes should be devised with the analysis techniques in mind. Again, different schemes will lend themselves to different evaluative mechanisms. However, one way to overcome the limitations of any one technique is to look at the data using several different techniques (such as combining qualitative and quantitative analyses). A triangulation approach (Jick, 1979) will allow for a more accurate picture of the studied phenomena. Bratthall and Jørgensen (2002) give a very nice example of using multiple methods for data triangulation. Their example is framed in a software engineering context examining software evolution and development. In fact, many of the examples cited earlier use multiple methods to triangulate their results.
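As a purely hypothetical sketch of combining the two kinds of analysis (not drawn from any of the cited studies), qualitative codes assigned to interview transcripts can be tallied per participant and then related to a quantitative measure with a rank-based correlation; all code names and data below are invented for illustration.

```python
# Hypothetical sketch: triangulating qualitative coding with a quantitative measure.
from collections import Counter
from scipy.stats import spearmanr

# Qualitative codes assigned to each participant's interview segments
# (the code names and data are invented for illustration).
coded_segments = {
    "P1": ["tool_friction", "workaround", "tool_friction"],
    "P2": ["workaround"],
    "P3": ["tool_friction", "tool_friction", "workaround", "tool_friction"],
    "P4": [],
    "P5": ["tool_friction"],
}

# A quantitative measure for the same participants, e.g. defects reported
# against their components during the observation period (also invented).
defects = {"P1": 7, "P2": 3, "P3": 11, "P4": 1, "P5": 4}

participants = sorted(coded_segments)
friction_counts = [Counter(coded_segments[p])["tool_friction"] for p in participants]
defect_counts = [defects[p] for p in participants]

# Spearman's rank correlation is nonparametric, matching the small,
# ordinal-scale data typical of field studies.
rho, p_value = spearmanr(friction_counts, defect_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Agreement (or disagreement) between the qualitative characterization and the quantitative relationship is what gives the triangulated result its added credibility.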
As a final note, with any type of analysis technique, it is generally useful to go back to the original participant population to discuss the findings. Participants can tell researchers whether they believe an accurate portrayal of their situation has been achieved. This, in turn, can let researchers know whether they used appropriate coding schemes and analysis techniques.