Module 18. Impact assessment studies: diffusion, adoption and effects of agricultural research and technologies

Rationale: understanding the diffusion, adoption and effects of agricultural research recommendations

Throughout the world, researchers have generated a large number of improved technologies, but farmers have adopted these technologies only to a limited extent. This state of affairs has triggered the emergence of impact assessment studies.
Understanding diffusion and adoption processes and knowing the effects of agricultural research recommendations is crucial for many reasons. Impact studies can motivate researchers by providing feedback from the farm community and other research clients on the use and effects of research results. Ex post studies can provide managers with evidence of the value of research, to argue for continued investments. Adoption studies can help researchers refocus their research efforts by providing insights into farmers’ assessments of new technologies (in comparison with their current practices) and into farm-level adoption processes. Lessons learned from impact assessments can be used to improve future research strategies, plans, and management. Finally, impact assessments can show how economic policies and technology interact in determining the ultimate benefits of agricultural research. This can be useful for discussions between research leaders and policymakers.
Objectives This module will enable participants to:
Explain the role of impact assessment studies in the context of an M&E system.
Define innovation, adoption and impact, and describe dissemination and adoption processes.
Know the general guidelines for doing impact assessment studies, as well as the two major approaches to studying adoption.
Mention the main topics of an adoption study and use different practical tools.
Understand the different types of impact studies and the need for a more integrated approach.
Explain why it is necessary to prepare for successful adoption and positive impact during all phases of the agricultural technology development and dissemination process.
Content

18.1 Impact assessment studies in the context of a monitoring and evaluation system
18.2 General guidelines for doing adoption and impact assessment studies
18.3 Diffusion and adoption of innovations
18.4 Diffusion and adoption studies
18.5 Analysis of factors affecting adoption
18.6 Specific and integrated impact assessments
18.7 Rural livelihoods and the adoption of innovations in Tanzania
Annex 1: Diffusion and adoption of innovations
Key words: diffusion, adoption and impact of technologies; monitoring; evaluation
18.1 Impact assessment studies in the context of a monitoring and evaluation system

The importance of planning
It is during the planning phase that the conditions for successful monitoring and evaluation are set. Ex-post evaluation of research activities and results presupposes ex-ante evaluation: it is against the original plan, with its targets and assumptions, that an activity is judged.
There are different levels of ex-ante evaluation in national agricultural research. The highest level determines how well the research system responds to national development objectives. At the next levels, programs, projects within programs and activities within projects are defined in turn. At each planning level there should be a clear indication of the objectives, the inputs to be used, the results/outputs to be produced and the beneficiaries thereof. This implies that adoptability and/or pre-impact screening are part of the planning phase of the research cycle.
Adoptability and pre-impact screening are necessary in order to integrate elements in the research design that might determine adoption of technologies and impact on end-users. It helps a research project to be as practically relevant as possible. In their research proposals for adaptive agricultural research, scientists therefore have to indicate:
How they will organise themselves to arrive at technologies that are adapted to the prevailing agro-economic and socio-economic environment (adaptability analysis);
How the likelihood of adoption of the proposed innovation by different user groups will be studied (adoptability analysis);
How the potential social, economic and/or environmental effects of the proposed technology will be studied (potential impact analysis).
Project planning should also indicate key objectively verifiable indicators (OVIs) and means of verification (MoVs). The Logical Framework matrix is a technique that can be used for this purpose. Any assumptions used in planning should also be explicitly mentioned, so that they can be subject to evaluation later on. Other elements of planning are: task lists, timeframes or implementation schedules, milestone charts, and the budget.
Monitoring is primarily focused on the performance of a research project or program. Monitoring compares what actually takes place with what was planned (in the approved research proposal). Close monitoring is crucial to obtain the planned results and output. The goals of monitoring are to improve the performance of current research, to take timely corrective action at operational level, to learn from experience to improve future research, to motivate and give feedback to researchers, and to enhance external support for research (by reporting on the use of funds and other resources).
Most of the attention goes to the use of resources (inputs), the process of implementation, quality control and timely delivery of outputs. Some key issues typically addressed in monitoring: degree to which objectives are achieved, extent to which objectives or activities need to be modified in response to new information, timeliness of activities, effectiveness of communicating results, capability of staff and functioning of interdisciplinary teams, evidence of successful day-to-day problem solving, extent of interaction with other institutes and extension services.
Monitoring comprises a five-step process: recording data on key indicators, analyzing and processing data, storing and retrieving data, reporting research results and providing feedback. Many monitoring mechanisms can be used in research institutes. Some examples are: internal program review, periodic reporting, field evaluation, symposia and seminars. The important issue is to link what is scheduled to what actually takes place. Comparing periodic progress reports with the original work plan, for example, forces project leaders to keep track of their activities and to take corrective action when needed. It also facilitates communication of problems to higher levels of management.
Evaluations look beyond the monitoring of performance (implementation in relation to planning, outputs in relation to inputs). They also consider the effectiveness, efficiency, quality and relevance of research. In addition, the recommendations from evaluation studies are oriented more toward the medium and long term.
The primary method employed for ongoing research evaluation is peer and expert review. During annual project evaluations, colleagues generally assess the performance, relevance and quality of research projects. Peer review can take the form of senior scientists coaching less-experienced researchers, but it can also take the form of annual program evaluation meetings.
Comprehensive program evaluations take place periodically, normally every three to five years. They need experienced scientific leadership and staff, and are greatly facilitated if good annual project reviews and specific impact assessment studies have taken place and are well documented. Most agricultural research programs are responsive to larger development objectives. Comprehensive program evaluation should therefore include representatives from development and extension organizations, and a mechanism for bringing user feedback into the process.
What is Impact Assessment?
Impact assessment in the broadest sense is the evaluation of effects, in our case the evaluation of the effects of agricultural research in general, and the effects of certain agricultural technologies in particular. An impact assessment study may look at whether farmers accept or reject new technology, or it may focus on increases in yields and production that can be attributed to new technology. It can also estimate changes in income, employment, nutritional status, pollution, erosion, or rural-urban migration. Different evaluation methods may be used to assess these impacts, but in any case, the purpose is to provide managers, scientists, or those who sponsor agricultural research with indications of its benefits or negative effects.
Types of impact assessment studies
Although various kinds of effects can be assessed, the most common types of impact studies carried out for agricultural research are dissemination and adoption studies and economic evaluations (especially Cost-Benefit Analysis (CBA) and rate-of-return studies). Relatively few Social Impact Assessments (SIA), Environmental Impact Assessments (EIA), Gender Impact Analysis or Poverty Assessments have been done, but the demand for them is increasing.
The place of impact assessment in the R&D process (and the M&E system)
The objective of applied/adaptive agricultural research is to identify new materials, farming practices and marketing strategies that will improve the farmers’ production system and increase their productivity and incomes in a way that can be sustained. The ultimate goal is the adoption of research findings by the targeted farmer communities, leading to results that are economically beneficial, socially acceptable and environmentally sound. The agricultural research and development process has three main phases, which are related to certain components of the M&E system.
The first phase is the diagnostic phase. The focus is on developing a clear understanding of the current production system: the farm and its environment, as well as farmers’ goals, constraints and opportunities. The diagnostic phase is mainly focused on ex-ante evaluation:
Statistical database that includes baseline data on current situation and practices
Informal and formal surveys
Priority setting and strategic planning exercises
Research and information needs assessments that consider agro-ecological and socio-economic heterogeneity.
The second phase is the experimentation phase during which scientists, in close co-operation with the farmers, choose and design appropriate innovations, and test them under farmer conditions. The outputs of this phase and the components of M&E system related to it, show that the focus is mainly on process monitoring and quality control (although a critical analysis of the implementation process may take the form of a mid-term evaluation):
Research proposals according to accepted format (including logical framework, timing of activity implementation, objectively verifiable indicators, detailed budget)
Quality score forms and procedures for review of proposals
Formats for periodic (quarterly/ annual) progress monitoring reports
Research project database
Final scientific reports that include SE analyses as well as adaptability and adoptability analysis
Review procedure assuring quality control of scientific output
Seeds, planting materials, machinery, and other physical output
Inventory of available materials and machinery
Flexible recommendations for farming and marketing practices, according to recommendation domains
Overview, e.g. compilation of technologies (Technology reference books, District Technology overviews)
User-friendly output that makes end-users understand the message
Formats and procedures for testing of extension materials at target group level
The third phase is the extension phase, during which all attention goes to the wider dissemination of recommended technologies and practices. At this stage, extension services will focus on process monitoring, i.e. close attention to the implementation and progress of dissemination activities. For R&D institutes, the focus is on ex-post evaluation:
Planning of extension program (including logical framework, timing of activity implementation, objectively verifiable indicators, detailed budget)
Monitoring implementation of extension program
Dissemination and adoption studies
Specific impact assessments: gender, social, economic, environmental
Integrated impact studies
Timing of impact assessment studies: ex-ante and ex-post
This reminder on the participatory technology development and dissemination process shows that impact assessment studies are situated at the end of the cycle (demonstrating results), but also at the beginning of the cycle (disguised in the form of formal surveys, baseline and prospective studies that aim to contribute to priority setting exercises).
Of course, ex-post evaluations can contain important information that is very useful for planning and designing future activities. Also, adoption and impact studies only make sense if the situation before and after the start of interventions can be compared. Therefore, as much information as possible should be made available about the baseline situation. Thus, impact assessments do not only focus on the effects that research and/or technologies have had in the past, but also on the effects that they may have in the future.
It is therefore of great importance to have state-of-the-art papers on the major commodities and topics addressed by agricultural research. Ex ante studies not only give an overview of the actual situation; they also try to estimate future trends.
Assessments done when research is being planned, or while it is still underway, can provide information to help decision makers identify the most promising directions for future research. Although ex ante impact assessments may be done as an aid to priority setting and decision-making, the systematic use of impact assessment studies in planning or reorienting research and development activities is not yet very common. Adoptability studies, which are carried out during the experimentation phase, are an exception to this rule. In addition, environmental impact studies can be mentioned. These studies look at the potential impact of proposed alternatives (mostly for large infrastructure projects) on the environment. The results of these studies are meant to be taken up during the decision making process.
When talking about adoption analysis and impact assessment, it appears that most people think of ex-post studies that analyze the effects that research has had after completion. This does not come as a surprise; in practice adoption studies and impact assessments generally look back at what happened in the past. Agricultural research managers can use ex post impact assessments to get an indication of the benefits (and side effects) that have resulted from agricultural research. In case of positive results (often successful cases are deliberately chosen), this information can justify requests for continued funding and support. Assessments done after research is completed can also extract lessons to improve the design of future research.
Usefulness of impact assessment studies
Most impact assessment studies are requested and sponsored by development agencies and serve their own decision-making and accountability needs. However, agricultural research managers and policymakers increasingly realize that impact assessment is useful in setting research priorities and demonstrating results.
The main benefits of impact assessment studies for NARS can be summarized as follows:
All kinds of impact studies can motivate researchers by providing feedback from the farm community and other research clients on the use and effects of research results.
Adoption studies can help researchers refocus their research efforts by providing insights into farmers’ assessments of new technologies (compared to their current practices) and into farm-level adoption processes.
Ex post studies can provide managers with evidence of the value of research, to argue for continued investments.
Lessons learned from impact assessments can be used to improve future research strategies, plans, and management.
Impact assessments can show how economic policies and technology interact in determining the ultimate benefits of agricultural research. This can be useful for discussions between research leaders and policymakers.
18.2 General guidelines for doing adoption and impact assessment studies

When eager for information on the dissemination of innovations and the effects of research, managers are often tempted to begin collecting information as soon as possible. However, experience shows that evaluations are most successful when the urge to collect information is resisted until a good evaluation design is prepared. Insufficient preparation may lead to superficial results that can hardly be used. In this section, some general principles are presented that apply to all kinds of evaluation studies, including impact assessment studies.
Focusing: think before you start!
Impact assessment generally needs considerable time and money. For this reason, the first task in impact assessment is to focus the study.
What research effort or technology should be evaluated? What information is needed? Many different things can be evaluated in the context of agricultural research, including research activities, projects or programs; research resources, such as scientists, funds and physical inputs; research organisations, such as stations, laboratories or entire institutes; the national research system, which may be composed of several different organisations; and research outputs, such as technologies and information, and their impact.
Why is the evaluation being done? Two primary purposes of an evaluation are to satisfy accountability requirements and to contribute to decision-making. Impact assessment studies may be appropriate to give an indication of the socio-economic values of the research activities funded, i.e. the quantity and quality of research outputs and the nature and magnitude of the impacts or effects resulting from research.
For whom is the evaluation being done? Evaluations can be aimed at different users. These include producer groups, extension agencies, government policy-makers, external donors and the scientific community itself. Each type and level of user may require different information. At present, external demands for impact assessment exceed demands for internal management purposes.
What key issues should be addressed and what types of effects should be assessed? It is important to define the questions that the principal audiences of the evaluation have, and to attempt to answer them. Only by addressing the most important issues can an evaluation have an impact on decision-making.
The focus of an impact assessment should reflect the purpose of the evaluation, the interests and information needs of those requesting it and the expected use of the results. It should also consider the available resources (including trained and experienced personnel).
If the principal purpose of an impact assessment study is to estimate the benefits of research in a way that is comparable to other public investments, then an economic rate-of-return study may be appropriate.
If, on the other hand, there is an interest in understanding the distribution of benefits among different farming groups or different regions within a country, a more descriptive and illustrative adoption study may be called for.
If there is a concern for the effects of a new technology (such as a pesticide or a new tillage system) on pollution or soil erosion, then an environmental impact assessment may be needed.
In each of these cases, the impact assessment would attempt to evaluate the effects (be it agronomic, economic, social, or environmental) of the selected variable.
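The rate-of-return option above can be illustrated with a small numerical sketch. The cash flows below are entirely hypothetical (early research and extension costs, later adoption benefits); the internal rate of return is found by simple bisection on the net-present-value function:

```python
# Hypothetical annual cash flows of a research program (year 0..9):
# negative = research/extension costs, positive = adoption benefits.
cash_flows = [-100, -80, -60, 20, 60, 90, 110, 120, 120, 110]

def npv(rate, flows):
    """Net present value of a series of annual cash flows."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection.
    Assumes costs precede benefits, so NPV falls as the rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"NPV at 10% discount rate: {npv(0.10, cash_flows):.1f}")
print(f"Internal rate of return:  {irr(cash_flows):.1%}")
```

A rate of return above the opportunity cost of public capital would then be the argument for continued investment; real studies must of course attribute the benefit stream to research with far more care than this sketch suggests.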
Design and information collection
Once the focus has been determined, attention can turn to designing the evaluation. An evaluation design outlines the purpose and focus, the procedures for information collection and analysis, and the strategy for reporting and follow-up. A general rule of thumb is that responsibility for designing and conducting M&E should be closely tied to decision-making bodies, rather than to administrative departments. Another important piece of advice is to avoid spending too much time gathering information and too little time designing the evaluation, doing the analysis and preparing the report. For this and other reasons, it is important to review existing sources before gathering primary data.
Many different methods can be used for collecting and analysing data for evaluation purposes. The orientation can be on situational analysis and interpretation, or on statistical evidence suggesting certain relationships. Both informal and formal data collection methods can be used. The most common include review of statistics and documents, surveys of various sorts, interviews with farmers and other stakeholders, and field observations. Methods should be chosen to suit the task at hand, based on the type of information and degree of accuracy needed, the expectations of key audiences, the environment in which the evaluator is working, and the time and resources available. In most evaluations, much more information is collected than is finally included in the evaluation report. But even then, managers often still criticise evaluation reports for being too long and detailed. An important rule for information collection is therefore: select sources of information in relation to key evaluation questions that will give you usable information; do not collect what you cannot analyse or use.
Evaluation methods should of course be valid: they should measure what they are supposed to measure. Triangulation, using different methods and sources of information, is also important for evaluation studies. Methods must also be credible in order to make sure that the audience trusts and understands the results. Finally, evaluation methods have to be feasible. Some methods might be too costly, time consuming or complex to be carried out.
Analysis and reporting
The time allocated to analysis and reporting is generally too short. This leads to frustrations on both sides: evaluators and audiences. Reporting is often seen as a one-time event to be carried out at the end of an evaluation, to package evaluation findings in a formal report or to present the main conclusions in a wrap-up meeting. However, reporting is best thought of as a continuous communication process between the evaluators and the various audiences.
Reporting may include oral, visual and written communications. The quality of communication is extremely important and needs careful attention, as does the reporting strategy. Different audiences need different types of reports often at different times. It is usually a mistake to use a ‘shotgun’ approach in reporting, with general all-purpose reports. Important questions to ask when reporting: Who should get evaluation reports? When? What information should the reports include? How could the reports be delivered? How can audiences be helped in interpreting and using the reports?
18.3 Diffusion and adoption of innovations

An innovation is an idea, method, or object which is regarded as new by an individual, but which is not always the result of recent research. An innovation always has two components: the hardware and the software. This is true not only for computers, but also for a plant variety, where we have the plants (the hardware) and the techniques for growing them (the software).
Adoption is the acceptance of innovations. Or, in other words: the incorporation of new elements in an existing situation. The term ‘adoption’ originally came from kinship and descent studies in anthropology.
People related to one another by descent are referred to as blood relations. Anthropologists use the term consanguineal kin to refer to all those people who are linked to one another by birth as blood relations. In addition, however, a consanguineal kinship group may decide to include individuals whose membership in the group was established not by birth but by means of culturally specific rituals of incorporation that resemble what Euro-Americans call adoption. Incorporation via adoption often is seen to function in a way that parallels consanguinity, because it makes adopted persons and those who adopt them of the ‘same flesh’ (Lavenda & Schultz, 2000: 139-140).
The process by which many types of innovations diffuse and are adopted has been studied extensively. Many years pass between the time people first hear about some innovations and the time they adopt them. Mass media play an important part in the early stages of the process by making people aware of the innovations. Personal contact with people who are known and trusted, such as extension agents and opinion leaders, is more important as the process progresses.
The literature on diffusion assumes that the cumulative proportion of adoption follows an S-shaped curve: there is slow initial growth in the use of the new technology, followed by a more rapid increase, and then a slowing down as the cumulative proportion of adoption approaches its maximum (which may be well below 100% of the farmers).
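The S-shaped pattern described above is often approximated by a logistic function. The sketch below uses purely illustrative parameters (a 60% adoption ceiling, a growth rate of 0.8 per year, and a midpoint at year 6) to compute the cumulative proportion of adopters over time:

```python
import math

def adoption_share(t, ceiling=0.60, rate=0.8, midpoint=6):
    """Cumulative proportion of adopters at time t (years), following
    a logistic diffusion curve. The ceiling reflects that adoption may
    level off well below 100% of the farmer population."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Slow start, rapid middle phase, plateau near the ceiling:
for year in range(0, 13, 2):
    print(f"year {year:2d}: {adoption_share(year):5.1%}")
```

Fitting such a curve to observed adoption data over several seasons gives an empirical estimate of both the speed of diffusion and the likely adoption ceiling.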
Adopters and opinion leaders
People who are quick to adopt innovations appear to be characterised by:
having many contacts with extension agents and other people outside their own social group;
active participation in many organisations;
making intensive use of messages from the mass media, especially those which carry expert information;
being well educated and having a positive general attitude to change;
having a relatively high income and standard of living;
having high aspirations for themselves and for their children.
Opinion leaders play an important role in diffusing innovations. They tend to be people who are capable, willing and in a position to help others with important problems. The position of opinion leader in a group depends on group norms and current problems facing the group. Messages about innovations will be most successful when the receiver trusts the source and shares similar attitudes towards the innovation.
The rate of adoption is influenced by the farmers’ perception of the characteristics of the innovation and the changes this innovation requires in farm management and in the roles of the farm family. Innovations are usually adopted most rapidly when they:
have a high relative advantage for the farmer;
are compatible with his values, experiences and needs;
are not complex;
can be tried first on a small scale; and
are easy to observe.
For more information on diffusion and adoption of innovations, please refer to Annex 1 of this module. Module 6 (participatory technology dissemination, networking and collaboration) also contains related background information.
18.4 Dissemination and adoption studies

The objective of adoption research is to accelerate the rate of adoption of innovations, or to change adoption processes in such a way that certain categories of farmers adopt innovations more rapidly. Adoption studies help to find out whether recommended agricultural technologies and messages are known, accepted and adopted by the targeted farmer categories. Since a lack of compatibility between technologies and the implementation environment is generally the main factor inhibiting agricultural development, these studies are also useful for illustrating the degree to which acceptance of new technologies is limited by insufficient inputs, credit or marketing infrastructure.
From an institutional point of view, it is most desirable that a country has mechanisms that generate a continuous and reliable flow of information on current farmer practices. For example, if a new technology involves purchased inputs (fertiliser, herbicides, machinery), then information on the sale of these inputs may give an indication of the adoption of the technology. The same holds true for other statistical data, like the distribution and sale of seeds and planting materials, the value of rural credits, the volume of marketed food crops, acreage of crops, production figures, livestock treatments, quantity of processed milk and meat, etc. All this information helps to keep track of recent developments and should normally be part of the monitoring and evaluation system of an agricultural sector development program. When these data are lacking, formal surveys have to be organised in an ad hoc manner. But even when a good agricultural information system is in place, additional data gathering on dissemination and adoption is often necessary, if only to gain understanding about information flows, dissemination processes and the logic of farmer decision-making.
The eventual adoption of innovations should be the constant concern of researchers. The application of adoption analysis techniques during the three major phases of the participatory technology development and dissemination process can maximise the likelihood of adoption of new technologies:
Planning phase. Diagnostic surveys provide information on farmers' present use of technology; they can be thought of as "adoption studies" for previous technology generation efforts. When planning a research proposal, some kind of adoptability screening is needed to integrate elements in the research design that might influence adoption (see module 10).
Experimentation phase. Adoptability analysis, carried out during the experimentation phase, tries to determine the likelihood that proposed new technologies will be adopted by different categories of farmers.
Extension phase. Although constant monitoring of farmers' opinions and experience is essential during the design and testing of agricultural technology, it is also necessary to carry out some sort of assessment after a new technology has been recommended or introduced. In the remainder of this chapter we will concentrate on this type of ex-post adoption analysis.
Studying adoption: interpretative or survey approach?
Adoption studies generally trace the results of innovations from the research station or on-farm trials through networks of adopters, some time after the release of the technology or the start of the extension program. These studies analyse the underlying patterns of adoption and the use of new practices. Adoption surveys attempt to determine why a technology is or is not being used, and compare the benefits of old versus new technologies. The results are normally presented in terms of percentages of adopters and non-adopters, the reasons why the technology was or was not adopted, or the reasons why the technology was discontinued.
There are two principal strategies for helping to understand why farmers accept or reject a particular technology. One is to seek the opinions and observations of the farmers. The second is to do a statistical comparison of adoption behaviour with the characteristics of the farm (production environment), the farmers (household categories), their production goals and the institutional environment. Combinations of statistical and interpretative evidence generally give the most telling results.
A key word of qualitative (or, more correctly, interpretative) research is empathy (“feeling in”: trying to see a situation from within, as if through the eyes of the people under study themselves). Other characteristics of qualitative evaluation research are the holistic approach and the participation in, and interpretation of, ‘real-life’ situations. Quantitative data sets are most often lacking. Reports are written in a journalistic style that tries to convince readers.
Informal surveys and situational analysis can be very useful for providing researchers with feedback about the acceptability of a technology. They can also provide information about policy-related problems that may impede the spread of a technology. For instance, farmers from FRGs and FEGs or from farmer field schools can provide insight into the dissemination and adoption of a new technology. Do they continue to use the technology after being exposed to it during the experimentation phase? That is a first indication. Also, FRG and FEG members are often encouraged to conduct field days, are involved in farmer-to-farmer training, and hear the reactions of neighbours, friends and relatives to the new technology. They are therefore well placed to assess the adoptability of a technology and can offer valuable observations on constraints to its dissemination and adoption. The fact that the members of these farmer groups are well known to the scientists makes the empathetic approach easier and the interpretation of qualitative data more reliable.
Most adoption studies, however, deal with large areas and extensive extension programs. For that reason, adoption studies generally focus on quantifiable data and rely on formal methods, large samples and many enumerators. In attempting to explain adoption patterns through statistical analysis, the most common approach is to compare the characteristics of farmers who have adopted a technology with those of farmers who have not, and to see whether some of these differences offer insights into the rationale for adoption. It should be noted that such differences do not in themselves constitute proof of an association. Formal surveys are the most common method for assessing adoption. The results of a formal adoption study can be combined with data on changes in farm production, farm incomes or consumer gains to develop a more complete impact study, and general statistical data can complement the information base.
Given the intricacies of measuring adoption, the quality of the questionnaire is of great importance. Questions should be short and clear and ordered in a logical sequence. Leading and ambiguous questions should always be avoided; questions should be asked in a neutral way. Questions may be closed-ended or open-ended. Closed questions are limited in scope but easy to analyse; open-ended questions are more difficult to analyse. In the case of large surveys, the potential answers to open-ended questions can be categorised before the survey is conducted in the field, or the answers given can be coded afterwards. In any case, the questionnaire should aim at documenting and quantifying the degree of adoption.
Box 18.1 Steps of survey implementation
The questionnaire should go through several drafts and should be proofread by other professionals.
The questionnaire should be disaggregated by socio-economic and agro-ecological circumstances, so as to capture the different variables that are likely to influence adoption.
Thereafter, the questionnaire should be pre-tested by a representative sample of the enumerators, so that the necessary feedback is obtained and acted upon.
If answers to open-ended questions are to be pre-coded, this should be done after the questionnaire has been field-tested.
The questionnaire should be written in the local languages to avoid misinterpretation by the enumerators, and all enumerators must be trained to use it.
After all logistics have been carefully thought through, the fieldwork can commence with interviewing the respondents.
The survey team leader must carefully go through the completed questionnaires at the end of each survey day, so that any problems can be rectified promptly.
Coding of the questionnaires (where the coding system is based on answers to open-ended questions).
Data entry, analysis and interpretation, and report writing.
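The final steps in Box 18.1, data entry and analysis, often begin with simple adoption tallies per category of respondent. A minimal sketch of such a tally is given below; the survey records, field names ("adopted", "land_prep") and counts are all hypothetical, invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical survey records after data entry; the field names
# and values are invented for illustration.
records = [
    {"adopted": True,  "land_prep": "tractor"},
    {"adopted": True,  "land_prep": "tractor"},
    {"adopted": False, "land_prep": "tractor"},
    {"adopted": True,  "land_prep": "manual"},
    {"adopted": False, "land_prep": "manual"},
    {"adopted": False, "land_prep": "manual"},
]

# Tally adopters and totals per land-preparation category.
counts = defaultdict(lambda: {"adopters": 0, "total": 0})
for r in records:
    cell = counts[r["land_prep"]]
    cell["total"] += 1
    cell["adopters"] += r["adopted"]

for category, cell in sorted(counts.items()):
    rate = 100 * cell["adopters"] / cell["total"]
    print(f"{category}: {cell['adopters']}/{cell['total']} adopters ({rate:.0f}%)")
```

Tallies of this kind are the raw material for the contingency tables discussed below.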
One of the simplest and most useful ways of examining differences in adoption patterns is with contingency tables, in which the cells of the table compare the proportions of adopters and non-adopters having a particular characteristic. This is particularly appropriate if the variable is a nominal one, i.e. represented by non-numerical categories, such as access to irrigation (yes or no) or previous crop in the rotation (potatoes, barley, or other). Even where the variable is a continuous one (such as farm size or the number of days between land preparation and planting) it is sometimes useful to divide it into a few simple categories (large versus small; low, medium, high) and develop contingency tables. The relevant statistical test of this association is the chi-square test. If the variable is continuous, another option is to compare the means for adopters and non-adopters; an appropriate statistical test in this case is the t-test (for examples, see Box 18.2 and Lyimo, 1997).
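The comparison of means for a continuous variable can be sketched as follows. The example computes Welch's t statistic (which does not assume equal variances in the two groups) for hypothetical farm sizes of adopters and non-adopters; the data are invented, and in practice a statistics package would also report the p-value.

```python
import math
from statistics import mean, variance

# Hypothetical farm sizes (ha) for adopters and non-adopters.
adopters     = [5.2, 6.1, 4.8, 7.0, 5.5, 6.3]
non_adopters = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3]

# Welch's t statistic: the difference in means divided by the
# standard error of that difference (unequal variances assumed).
se = math.sqrt(variance(adopters) / len(adopters)
               + variance(non_adopters) / len(non_adopters))
t = (mean(adopters) - mean(non_adopters)) / se

# A value well above the critical value (roughly 2 at the 5% level
# for samples of this size) suggests the difference in mean farm
# size between adopters and non-adopters is unlikely to be chance.
print(f"t = {t:.2f}")
```

As the text cautions, a large t value establishes only that the two groups differ in farm size, not that farm size causes adoption.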
Statistical tests can provide a quantitative estimate of the likelihood that the association observed between two variables could have occurred by chance, even if there were no relation between them. Most statistical handbooks alert readers to the difference between statistical significance and importance: a relationship may be shown to be very significant (i.e. very unlikely to have occurred by chance) and yet be quite unimportant. If two factors are associated, there are a number of ways of explaining the observed association.
Box 18.2 Examples of statistical analysis
Contingency table with nominal variables: Planting method by land preparation
There is a strong association. Farmers who use a tractor for land preparation are much more likely to adopt row planting (84%), while only 29% of the farmers using manual land preparation do so. The chi-square test shows that it is improbable that this association could have occurred by chance. However, the contingency table does not tell us why these two factors are related.
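The chi-square statistic for a table like the one in Box 18.2 can be computed by hand, as the sketch below shows. The counts are hypothetical, chosen only to reproduce the 84% and 29% adoption rates quoted above; the actual survey counts are not given here.

```python
# Hypothetical 2x2 contingency table: rows are land preparation
# (tractor, manual), columns are planting method (row, other).
# Counts invented to match the 84% / 29% rates in Box 18.2.
observed = [
    [42, 8],    # tractor: 42/50  = 84% row planting
    [29, 71],   # manual:  29/100 = 29% row planting
]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
n = sum(row_totals)

# Chi-square: sum over cells of (observed - expected)^2 / expected,
# where "expected" assumes planting method is independent of land
# preparation (cell expectation = row total * column total / n).
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

# For a 2x2 table (1 degree of freedom) the 5% critical value is
# 3.84, so a statistic this large is very unlikely to be chance.
print(f"chi-square = {chi2:.1f}")
```

Note that, exactly as the text says, the statistic measures only the strength of the association; it says nothing about why tractor users prefer row planting.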