Prepared by: Kais Al‐Momani, Nour Dados, Marion Maddox, Amanda Wise




SUSTAINABILITY AND EVALUATION



As a number of participants pointed out, programs targeting leadership and media have multiplied in recent years, yet there appears to be little co‐ordination among them. Increased co‐ordination and cooperation could contribute to sustainability in several ways, for example by avoiding the replication of multiple small‐scale programs of limited impact. Many programs are run only once or twice (largely because of funding constraints) and do not share information with one another. A lack of evaluation and information sharing can lead to a situation where each new program ‘re‐invents the wheel’ and the lessons learnt by previous programs are lost. Greater co‐ordination would allow resources to be pooled in some instances, and could also support articulation pathways from one program to the next. Articulated pathways would allow particular organisations to focus on different participant profiles while feeding successful graduates into the next level of program. For example, a program for disadvantaged and disengaged youth could feed into a program for more experienced professionals or those with some leadership experience; alternatively, different programs could focus on different skill‐sets.
A further issue, not confined to media and leadership interventions, is a lack of good-quality evaluation. Serious program evaluation is key to long-term sustainability. However, many programs, including a number of those listed in the audit of initiatives, were either not evaluated or evaluated only at the simplest level. Evaluations tended to be of the single-page ‘tick-a-box’ variety, administered to participants to register their satisfaction with aspects of a program.

For example, a report on the La Trobe University ‘Leadership Training Program for Young Muslims’ (2007–2008) gave an outline of activities run under the initiative. A summary of the program’s accomplishments was provided; however, because the report made limited use of participatory evaluation processes or qualitative data collection, it did not evaluate the program’s success in any longitudinal depth. Similarly, the UK-based Nasiha Active Citizenship Program (ACP) published participant responses on its ‘Accreditations’ page, but without any analysis or evaluation of the program’s effectiveness.


Hanberger (2001) defines three approaches to evaluating the effects of public policy and programs on civil society and democracy: technocratic, advocacy, and mediating. The technocratic and advocacy approaches promote expert-oriented and group-oriented evaluations respectively. Hanberger favours the mediating approach, in which the evaluator acts as a counsellor and mediator: inquiring, learning and working together with various stakeholders, trying to describe the current situation in fair ways, taking account of critical arguments in the face of difference and conflict, and finding practical solutions to collective problems (Hanberger 2001).
As part of the ‘Preventing Violent Extremism’ campaign run in the UK, the Department for Communities and Local Government published a set of guidelines, Evaluating Local PREVENT Projects and Programs, that local authorities could use to evaluate the relative success of their initiatives. The guidelines provided accessible information on evaluation parameters, data collection methods and data analysis models. They also favoured a participatory approach, encouraging program authorities to involve community-based partners in decision making and evaluation.
However, Mayo and Rooke (2008) identify some problems with participatory approaches to monitoring and evaluation. Such approaches may amount to little more than tokenism if participatory evaluation is taken to mean the occasional use of particular techniques. They examine the scope and limitations of participatory approaches through a case study of the UK-based program Active Learning for Active Citizenship, noting in particular the challenge of tracking the impact of such evaluations on wider policy decisions and of understanding the diffuseness of ‘ripple effects’. Nevertheless, the Active Learning for Active Citizenship evaluation did employ some useful methods, for example mapping participants’ spheres of influence before their participation in the program and again one year afterwards. Participants were asked to detail any new spheres of influence that had opened up for them as a result of participation, and these new opportunities for influence were mapped at different scales, from ‘self’ to family, neighbourhood, local, regional and national influence. Diffuse multiplier effects were best tracked through qualitative case studies of individuals over time.
Good-quality evaluations do not rely on the judgements of program convenors alone (who are often cautious about negative findings because of anxieties around continued funding), nor solely on participants’ broad statements of satisfaction. Serious evaluation needs to take an in-depth participatory approach, assess specific and diffuse impacts in the short, intermediate and long term, and involve both qualitative and quantitative components. Comparative evaluations have an important place in detailing differential outcomes for different participant groups and program types. Evaluations need to be customised for individual programs and adequately funded to cover participative approaches, observational work, in-depth qualitative interviews and participant case studies, and return research at a reasonable interval of a year or more after the end of the program. Considered indicators need to be established, but allowance must be made for diffuse impacts that do not follow a mechanistic or direct cause-and-effect path. For example, media intervention projects may not lead to widespread change in mainstream media representations, but participants who succeed in publishing or being interviewed positively in mainstream media may contribute subtly to the public conversation, and also gain a sense of self-efficacy and ‘voice’.


