Federal Democratic Republic of Ethiopia, Ministry of Agriculture


Sub-component 5.2 Monitoring, Evaluation and Learning (ME&L)




The program needs proper follow-up and evaluation to improve performance, including learning and sharing. These will rely on the monitoring and evaluation system for AGP II, which builds mainly on the strengths and weaknesses of AGP I identified during its implementation. Thus, a learning environment supported by an effective ME&L system is an important element of the AGP II ME&L design.


The system will assess and document the provision of inputs, the achievement of outputs and the implementation process as agreed in the annual work plans, as well as progress towards program outcomes and intermediate results. The ME&L system will also analyze and interpret such data to evaluate impacts and outcomes, track progress, and monitor how well agreed-upon processes are being carried out. It will further identify implementation gaps and challenges for proactive corrective action by implementing agencies or for discussion by Steering Committees at each level, and document and incorporate lessons learned into program implementation. ME&L information/data will be obtained through formal and informal feedback from program implementers at federal, regional, woreda and kebele levels, as well as from qualitative and quantitative surveys related to AGP II outcome indicators.
In general, ME&L at the AGP II coordination unit will be characterized by strong support to implementing agencies on progress monitoring and supervision, and by greater use of data triangulation (use of different data sources) and methods of analysis. Stronger emphasis will also be given to capacity building (training) of staff in charge of ME&L, supervision and data collection, and to a functional program management information system. The objectives of monitoring, evaluation and learning in AGP II are therefore to:

  1. support information gathering and analysis for program management and other stakeholders on the quality of program implementation, so as to facilitate appropriate and timely program decisions;

  2. institutionalize a learning mechanism and set up a social accountability mechanism; and

  3. assess the outcomes and impact of the program against its objectives.

The AGP II ME&L system will thus comprise the following major activities: monitoring of inputs, implementation processes and outputs; evaluation of outcomes and impacts (including safeguards); participatory M&E and learning; and capacity building.

5.2.1 Inputs, Outputs and Outcomes Progress Monitoring

Program implementation performance monitoring includes tracking of inputs, outputs, processes and outcomes. The focus will be on keeping the system simple and interactive, allowing regular reporting and learning by stakeholders at all levels. Monitoring will also include regular assessment of AGP processes, e.g. assessment of the effectiveness of trainings provided, mainstreaming of cross-cutting issues (gender, nutrition and climate-smart agriculture), study of profitability and rates of return on investments by CIGs, technical audit of the CLPP, physical audit of small-scale investments, and environmental and social safeguard management.


The Federal and Regional PCUs will take responsibility for program implementation monitoring and assessment of activities. PCUs will be responsible for regular monitoring of annual plan implementation and will conduct assessments on selected indicators to measure achievement of outputs and progress towards outcomes, as per the agreed indicators of the results framework. PCUs will also prepare the annual program progress report (outsourcing technical assistance where needed), including annual assessments on selected thematic areas that will help complement the AGP II MTR findings. The PMIS developed under AGP I will be updated to reflect the new design and hierarchy of objectives of AGP II, and monitoring data and qualitative information will be entered into the Program Monitoring Information System (PMIS). The PMIS will serve as a major source of information for quarterly and annual reports submitted to the steering committee at each implementation level. This can be done by the program coordination units, with technical assistance (TA) procured where desirable.
In AGP II, reporting on program implementation performance is mandatory; stakeholders are required to meet the periodic reporting deadlines so that reports can be consolidated at the FCU and distributed in a timely manner. Program performance reports will be produced at four levels (federal, regional, woreda and kebele) and collected on a quarterly, bi-annual and annual basis. This will permit review of the implementation of activities against annual work plans and budgets, and ensure that corrective measures are quickly applied, as shown in Figure 3.

Figure 3: Report flow arrangement (reporting entities: Kebele DA, WoA FP, Zone FP, Regional CU, Federal CU, RARI, NARC, EIAR, regional AGP implementers, woreda implementers, federal AGP implementers, MoA-PPD, MoFED and DPs)

5.2.2 Outcome and Impact Evaluation



There are a number of methods for assessing or evaluating outcomes and impacts. The most important point to consider is the nature, extent and timing of the review and evaluation process, as set out in the program design and the results framework. In AGP II, result evaluation surveys will be conducted twice during the program life (midterm evaluation and terminal evaluation), and results will be measured against the indicators of the Results Framework (see Annex I).

For the program result evaluation, the end-of-program evaluation of AGP I will serve as a preliminary baseline. However, a more comprehensive baseline survey is planned for the first year of AGP II to capture new elements of the program design. This will be followed by a midterm survey and evaluation in the third year of program implementation and a terminal evaluation at the end of the program. The evaluations will be outsourced competitively to a competent consultancy firm, but the related surveys will be undertaken by the Central Statistics Agency, as in AGP I. The midterm and terminal evaluations of AGP II will be carried out under the oversight of the FCU with technical support from the World Bank.


Program midterm and terminal evaluations will be complemented by thematic assessments (e.g., adoption and impact analysis of agricultural technologies promoted by the program, irrigation performance assessment, and analysis of changes along selected value chains from the point of view of increased commercialization) to be conducted on a yearly basis. Case studies (on selected FREGs, FTCs, CIGs, micro irrigation and SSI) and qualitative studies will also be conducted on changes in service delivery (extension, animal health, AI, …). In addition, assessments will be conducted focusing on cross-cutting issues: gender (access to services and technologies), nutrition and climate-smart agriculture.
The assessments will be agreed by the federal steering committee at the outset of each fiscal year. AGP II will also oversee gender evaluations to study the effectiveness of the program's gender mainstreaming and other complementary initiatives for scale-up through the latter years of the program. Findings from thematic assessments (cross-cutting issues) and case studies will be incorporated into a Midterm Review Report (MTR) and an Implementation Completion Report (ICR), to be compiled by the FCU in the third year of program implementation and at the terminal evaluation of the program, respectively.
Result indicators are quantitative and qualitative facts or variables that provide a simple and reliable means to measure program achievement, to reflect the changes connected to program implementation, or to help assess the performance of a program. In other words, indicators are variables that help respond to two fundamental questions raised during program implementation: "How do we know the success or achievement of program-supported interventions?" and "Is the program moving towards achieving the desired outcomes?" To respond to these questions and track changes due to program implementation, AGP II has four Program Development Objective (PDO) level indicators, 15 intermediate (component-level) indicators and 24 sub-component-level indicators, detailed in Annex I.

5.2.3 Participatory M&E and Internal Learning

The purpose of promoting participatory monitoring, evaluation and learning is to improve learning through practice, experience and best practices, and to ensure that beneficiaries learn from one another's experiences and best practices and take advantage of opportunities for mutual learning.


The results from participatory monitoring will potentially feed the scaling up of best practices, allow adaptation of the program design as necessary (especially in the early stages), support a productive dialogue on findings, and empower communities to take corrective actions. Information collected from participatory M&E will be kept at FTCs and compiled once a year by the Regional AGP M&E Officer.
AGP II will promote participatory ME&L by organizing community learning platforms. Farmers in communities where AGP II is active will discuss results achieved, progress on intended objectives, and implementation problems and/or best practices using simple visual formats. This will include cross-community monitoring (farmers monitoring other farmers' activities/sub-programs); use of report findings, stocktaking and case studies (gender, nutrition, micro irrigation, FTCs, …); and dissemination of M&E findings and other relevant thematic studies (e.g. irrigation sub-programs, impact on women) undertaken within the program.
Participatory ME&L will be conducted continuously throughout the life of the program, as appropriate. To enhance this, different monitoring and evaluation tools can be used, such as Participatory Impact Assessment (PIA), rapid appraisal methods (RAM) and the community scorecard (CSC). Participatory impact assessment will look at particular interventions under the program and outside it (for example, other interventions with potential for scaling up) and allow consideration of the cost-benefit of program interventions (for example, cost-benefit analysis of irrigation sub-programs, CIGs and other intervention areas of the program).

Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders in order to respond to decision-makers' needs for information. Beneficiaries' opinions can be obtained through key informant interviews, focus group discussions, community group interviews, direct observation and simple surveys. These provide a qualitative understanding of complex socio-economic changes. The community scorecard (CSC) method is simple and user-friendly and helps to track inputs or expenditures; monitor the quality of services/sub-programs; generate benchmark performance criteria; compare performance across facilities, kebeles and woredas; create a direct feedback mechanism between providers and users; build local capacity; and strengthen community empowerment.



5.2.4 Capacity Development of the Program Coordination Unit





  1. Physical capacity development

AGP II will provide support to program coordination units from federal to woreda level through the purchase of office equipment and facilities such as computers, chairs and tables. The provision of computers and laptops will mainly focus on new regions and program woredas. In addition, the federal and regional program coordination units will each be supported with one vehicle.

  2. Human resource development

Human capacity development will focus on the program coordination units and cross-cutting issues. This intervention will mainly involve the provision of TOT training to experts at different levels, production of training materials, and the organization of cross-learning programs and lessons learned from implementation. To strengthen and facilitate implementation and performance monitoring and evaluation, training will be provided to program monitoring and evaluation officers, coordinators, and planning and monitoring officers of line implementing agencies on program planning (CLPP), financial management and procurement guidelines, the program MIS, results-based M&E, M&E methodologies and tools, internal learning, knowledge management and dissemination, and communication and facilitation skills. These trainings can be provided in-house or outsourced as technical assistance (TA). To make the trainings more practical and enhance internal learning, experience-sharing programs/study tours (within and outside the country) will be organized for M&E officers and program coordinators.



