Monitoring
28. Monitoring of a programme during its lifetime will focus in the first instance on whether the programme is meeting its operational objectives and on whether it is reaching those firms and activities which it was designed to support. Evidence needs to be gathered on how the detailed specification and the procedures of the programme were implemented. Early evidence of the outcomes of the programme should be collected. Where the innovation activity being supported is relatively short in duration, e.g. installation of a new IT system, evidence of results and even impacts will be available in respect of early participants. In the case of longer-term investments in innovation, e.g. applied research, it will only be possible to collect evidence on the early effects of the supported projects.
29. Monitoring arrangements should be designed with the needs of ex-post evaluation in mind. Some data needed for evaluation will be much easier to collect while the programme is still running: most of the individuals in participating firms who are directly involved with the programme will still be in post and the details will be fresh in their minds. Similarly, firms who applied but were rejected will remember the details of what happened, and those who did not apply at all will be much more likely to remember why. Also, if the financial assistance is being given to firms in staged tranches then, while money remains to be paid, firms will have a much greater incentive to reply conscientiously to monitoring surveys etc. Data collection during the programme's lifetime and data collection after it is finished should be planned together as a coherent package.
Evaluation
30. Ex-post evaluation of programmes and policies has to serve a number of purposes and meet the needs of several groups of stakeholders. These purposes are to:
- enable those people managing programmes and implementing policies to make them more effective;
- enable those who design new programmes and new policies to make them more appropriate and to increase their likely effectiveness;
- demonstrate that money (or other resources) has been well spent and for the purposes envisaged by managers, budget-holders, Ministers, Parliaments and the general public;
- provide, as a by-product, information about the way the economy and society works.
Each of these is considered in turn below.
31. Programme Management. As indicated above, programmes need a clear set of operational objectives including measurable targets whose achievement can be measured during the lifetime of the programme or policy. Information on the achievement of these targets should be collated into a form which is readily absorbed and understood by managers and can be readily communicated to programme funders, evaluators and other stakeholders. Long monitoring reports full of data which have not been properly organised, analysed and summarised tend to be put aside and not read. Procedures should be put in place to allow mid-course redirection of programmes or policies if monitoring returns indicate that things are not going to plan.
32. Programme and Policy Design. The outcome of programme monitoring and evaluation must be effectively communicated to those designing similar or related programmes. This feedback loop is extremely important and is emphasised in the UK by the F at the end of ROAMEF. In the UK Department of Trade and Industry4 all evaluation reports concerning support for technology and innovation were circulated to the relevant programme committee and the programme committee chairman was required to provide a written statement of resulting action. In order to foster effective feedback evaluation reports should clearly state findings of wider policy interest and the conditions or assumptions on which those findings rest. Appraisal of new programmes should take explicit account of the findings of evaluations of earlier programmes.
33. Programmes should normally be time limited, and follow-on programmes should not be agreed without consideration of evidence on the effectiveness of the existing programme. This will often require evaluations that are carried out before programmes are finished. In some cases it may be appropriate to carry out an interim evaluation around the mid-point of the programme's lifetime. Limited resources for evaluation should be concentrated on novel programmes to maximise opportunities for learning. Where programmes are particularly novel it may be appropriate to conduct a pilot before the full programme is launched. On the other hand, if a programme has been repeated several times and shown by ex-post evaluation to have been successful in each case, and if monitoring of the latest version shows that it is going to plan, then only limited resources need be devoted to ex-post evaluation this time around.
34. Demonstration of Value for Money. There is an increasing demand for quantitative indicators of programme and policy effectiveness, for example in UK Government Departments' Public Service Agreements (PSAs) and under the US Government Performance and Results Act (GPRA). These demands tend to emphasise the need for outcome rather than input or process measures. However, it may be inferred from what is written above that while indicators based on programme results are not too difficult to produce, those based on estimates of impacts are more difficult to construct. We lack a fully specified dynamic model of how technology and innovation impact on the economy, and public action may be only one factor in the complex processes which determine the outcomes which the programme or policy is designed to influence. This makes it very difficult to devise reliable indicators of how individual support programmes affect overall economic performance or social wellbeing. It also means that reliable quantitative indicators will tend to be specific to quite narrowly defined types of programmes, since they will tend to reflect direct results. Moreover, a set of indicators which covered the full range of government programmes supporting technology and innovation could be very large indeed and might not prove to be of much practical use.
35. Consequently, comparisons between programmes with even moderately different aims can be difficult. At best we can hope to compare programmes in terms of very rough estimates of value for money based on a separate assessment of each programme in terms of its own costs, results and impacts. Comparisons become more complex because, in the case of innovation particularly, programmes and policies may be mutually reinforcing rather than substitutes for one another, so that the effectiveness of one programme will depend on the operation of others5.
36. Budget Allocation. For a particular budget, the elimination of programmes yielding poor value for money and the addition of programmes promising excellent value for money will improve the cost-effectiveness of the whole budget. The budget can also be made more effective by an integrated approach to ex-ante appraisal, monitoring and ex-post evaluation. This not only reduces the number of poorly designed programmes which go into operation but also improves the quality of ex-post evaluation. Feedback of evaluation results into the design of new programmes is an important part of the process. Stipulating that no programme should run for more than a certain period of time without being evaluated will further improve the effectiveness of the process. In a fast-moving world this gradual bottom-up approach needs to be complemented from time to time by a more strategic top-down approach6.
37. To provide information on the way the economy and society works. Since the routes by which science, technology and innovation affect economic growth and social wellbeing are not well understood, government interventions can be regarded as experiments which will provide evidence on how the whole system works. Although there are some research institutes, such as PREST at Manchester and ISI at Karlsruhe, which utilise the results of the evaluations they undertake in their theoretical research, one gets the impression that, in Europe at least, the results of evaluations are insufficiently used as input to academic research, particularly by economists.
HOW SHOULD WE UNDERTAKE EVALUATION?
38. Most evaluation involves the application of well-known methods which have been adapted to the characteristics of the programme being evaluated. The standard techniques include surveys (face-to-face interviews, postal, telephone, via the internet), indicators, regression analysis, modelling etc. The art (not science) of evaluation lies in deciding how they should be applied in any particular case. The description of the innovation processes it is desired to influence (which should underpin the programme rationale), the objectives and modus operandi of the programme, and its administrative and political context will be the main determinants of what questions should be asked and of whom, what form of survey would be most appropriate and cost-effective, what indicators should be collected, what variables should be used in regressions and models etc. The theoretical underpinnings of the programme will determine how the information, results and indicators which are collected should be interpreted. This interpretation will form the basis not only of a judgement about success or failure but also of an explanation of why the outcomes were what they were and of what lessons can be learned for future policy-making and programme design. It is therefore important to make sure that the descriptions of innovation used by policymakers are properly grounded in what actually happens in the real world. This is not always the case.
39. A well designed programme with clear objectives and a well-founded modus operandi should naturally suggest what the parameters of an appropriate evaluation should be. Evaluations of similar schemes in the past, many of which are available on or through the internet, will usually offer a suitable menu of broad approaches to choose from.
40. There are several different questions which need to be asked of a policy or programme which is being subjected to ex-post evaluation.
- Was the programme appropriate? Did it address a significant weakness in the country's or region's innovation performance or innovation system? Did it complement or compete with other programmes funded by the innovation budget? Did it take account of the main features of the innovation process it was trying to influence? For example, if it was encouraging firms to develop new processes, did the programme sponsor investigate whether the assisted firms had the necessary market access for the resulting products;
- Was the programme effective in achieving its objectives? Even if not all of the objectives have been achieved the programme might still be regarded as a success, particularly in those cases where there are benefits which may not have been anticipated;
- Did the programme or policy give value for money? In other words, did the estimated additional benefits which resulted from the programme exceed the identifiable costs, or are they expected to do so within the foreseeable future;
- Was the programme efficient? Did it achieve the estimated benefits at the lowest possible cost?
Appropriateness of public support for innovation
41. Experience of evaluation of national innovation support programmes shows that the appropriateness of policies and programmes is a key issue. For example, joint industry-university programmes designed to undertake leading edge research are not appropriate to a region where the vast majority of the businesses are small firms in traditional sectors which lack the complementary capabilities and absorptive capacity to exploit the results of such research. Creating new spin-off companies to exploit the results of such research may seem very attractive, but their failure rate can be high, particularly in traditional regions which lack both a supportive environment and potential leading edge customers. It is often forgotten that the key early players in Silicon Valley were spin-offs7 from existing companies. The IT sector around Cambridge in England similarly owes a great deal to the corporate children and grandchildren of Acorn Computers.
42. The success of a programme or policy often depends on its being a close fit to the needs and innovation characteristics of the firms or sectors at which it is targeted. About 12 years ago DTI introduced a scheme called the Motor Industry Component Forum. This arranged for leading engineers from multinational automobile manufacturers to visit the factories of UK component manufacturers to offer advice about how these component manufacturers could increase productivity and offer a better service to their existing or to potential new customers. An interim evaluation showed that this scheme was showing signs of considerable success and it was decided to extend it to another ten sectors. This was against the advice of some innovation experts in DTI, who argued that these other sectors did not have those characteristics which had made the scheme a success in motor vehicles. Subsequent evaluation by the University of Reading showed that the cloned schemes were a complete waste of money.
43. Appropriateness is not merely a matter of a programme being well suited to the sector/firms/innovation activities at which it is directed but also of whether it addresses a significant problem with innovation in the region, country or sector concerned. This means that it should fit within an overall strategy based on an analysis of the innovation system of the region concerned, its strengths and weaknesses, and the economic and social opportunities and threats which may impact upon the region in the future. In mapping out a regional innovation system it must always be remembered that while for some firms and sectors within a region the main influences affecting their innovation performance will be found within the region itself, in the case of others innovation performance will be heavily influenced by what is going on in adjacent regions, in the rest of the country or abroad. For example, the commercial and innovation performance of engineering firms in the Netherlands or Austria will often be influenced by what is happening in the German automobile and aerospace sectors, as they are often part of the latter's supply chains. Any proposal to support the innovation activities of firms should take account of the markets into which these firms can or might sell.
44. Experience in the UK (which I am sure is repeated in other countries) shows that if a proposal for supporting innovation is grounded in a properly constructed analysis of both the innovation system and the innovation processes of the firms concerned, and is clear about what it is trying to achieve and about what is to be delivered, by what means and to whom, then it has a good prospect of achieving its objectives, provided a sound rationale (see paragraph 24(b) above) is established.
Meeting objectives
45. A central part of any evaluation is to collect evidence in order to establish whether the objectives have been met. Paragraph 25 above identified three sets of objectives – operational, results and impact. While evidence on the achievement of the first two types of objective is usually obtainable within a reasonable timescale, for the reasons set out in paragraph 6 above evidence on the achievement of the high-level objectives may sometimes be delayed until some years after the programme has finished. This is particularly true of support for long-term research and for novel technology.
46. For example, the UK Alvey Programme, which supported research into information technology, ran from 1983 to 1987 but experts are still identifying beneficial effects8. In 1982 DTI provided Acorn Computers with a grant of £2 million to develop a computer for use in UK schools. The technology developed by Acorn at that time has continued to be developed in various ways, so that its linear descendants include the ARM chips used in most mobile phones, the browser used in PlayStations sold outside of the US and Japan and the protocols used for transmitting TV over the Internet.
47. The timing of evaluation therefore needs to be carefully considered. Early evaluation has the advantage that individuals who participated on behalf of their organisations are easily reached, data and information are readily to hand and the programme is fresh in everyone's mind. Early evaluation also helps the design of follow-on programmes. However, while data on 'failed' projects may become available early, success may take some time to come through. By contrast, late evaluation may encounter difficulties in data collection and in tracing key individuals, and may suffer from 'survivor' bias. One solution is to have an interim evaluation based on monitoring returns and/or a preliminary survey, with a more considered evaluation later. Whether this is appropriate will depend on the needs of the policy cycle, on the resources available for evaluation, on the size and significance of the programme and on the burden which evaluation places on programme participants.
48. Where the results of public support for innovation take time to emerge, or are diffuse and indirect, it may be appropriate to show that a programme achieved certain well chosen intermediate objectives or results which are known, on the basis of academic research or case study histories, to be likely to lead to certain types of economic and social benefits. For example, if firms can be induced to develop in-house capabilities in a particular technology this may well enhance future innovation performance without early manifestation in new products and processes. Sometimes this involves research teams moving several times before they end up in a business fully able to exploit the results of their work9.
Value for Money
49. If a programme was appropriately designed and met its objectives, then whether it offers good value for money will depend on whether paragraph 24(b) above is satisfied and a sound rationale for the programme exists. Traditionally the rationale for innovation and technology policy has been couched in terms of market failure within the standard textbook neoclassical economics paradigm. This has become increasingly unsatisfactory10 over time. One reason is that analysis of national systems of innovation shows that government's role in innovation performance is not confined to intervention at the margin in activities which are primarily the responsibility of firms but includes areas where government has prime responsibility. These include training and education, the science and public research system, public procurement, taxation, institutions and infrastructure, and the business environment generally. In many of these areas public authorities play the main role, and it is important that policies in these areas are innovation-friendly. In others, such as vocational training, they play a significant but minority role; in these areas public-private partnerships will often have an important part to play. This is an important issue for ERDF since much of the support it provides is for infrastructure or other elements of the wider environment in which firms innovate.
50. Value for money depends on the policy or programme yielding additional benefits which would not otherwise occur and that they exceed any additional costs. Where support is provided to individual firms or groups of firms the benefits can be partitioned into benefits to the firms themselves and those which accrue to other participants in the innovation process including other producers, suppliers and customers (externalities). Quantifying externalities will usually require the evaluator to survey non-participants or to carry out some rather sophisticated econometrics.
51. Past evaluations of innovation support programmes have sought to identify three kinds of additionality:
- Input additionality. For example, did the firm increase the amount of its own resources which it devoted to the supported project, or did the inputs simply increase by the amount of the subsidy (or even less)? Evidence on this should emerge fairly quickly;
- Output additionality. What are the extra benefits? These may take the form of direct innovation outputs such as patents or prototypes, or of consequent effects on the firm's innovation performance or profitability. Evidence on these will take longer to emerge; and, more recently,
- Behavioural additionality11. Changes in the strategy, processes and administrative practices of the firm. Behavioural additionality is complementary to output and input additionality but should be seen as providing independent evidence of the impact of the support. This evidence may appear in both the long and the short run.
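The input additionality question above can be expressed as a simple ratio. The following sketch is illustrative only: the function name and the way the ratio is defined are assumptions, not a metric taken from the evaluations discussed here.

```python
def input_additionality(own_spend_before: float, own_spend_after: float,
                        subsidy: float) -> float:
    """Illustrative input-additionality ratio: the rise in total project
    inputs (firm's own spend plus subsidy) divided by the subsidy.
    > 1: the firm added its own money on top of the subsidy;
    = 1: inputs rose exactly by the amount of the subsidy;
    < 1: part of the subsidy leaked away from the project (deadweight)."""
    total_rise = (own_spend_after + subsidy) - own_spend_before
    return total_rise / subsidy

# Illustrative figures: the firm spent 100 of its own money before,
# 120 during the supported project, and received a subsidy of 50.
ratio = input_additionality(100.0, 120.0, 50.0)  # -> 1.4
```

A ratio well below one would be early evidence of the deadweight discussed later in this paper.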
52. The identification of the additional benefits of a programme will depend on the framework of analysis which is applied. A good example of this was the UK CARAD programme, which provided support for collaborative applied research in aerospace. Critics of the programme, starting from the linear model of innovation (research leading to prototype leading to development leading to the market launch of new products and processes), argued that if a technology was necessary for the launch of a new aeroplane or aero-engine the companies would fund the R&D required. Any government funding would not add to aerospace business enterprise R&D but would simply find its way into shareholders' pockets. In fact aerospace does not operate according to the linear model of innovation. Large aerospace manufacturers develop knowledge and expertise in a portfolio of promising new technologies which may or may not be incorporated into future aircraft and aero-engine 'platforms'. Only when the design of a platform reaches a more advanced stage can the company be sure what technologies will or will not be included.
53. Given (i) the long timescales involved in developing and manufacturing new platforms and (ii) that the ex-ante expectation of the payoff from supporting any particular line of research is highly uncertain, companies may be deterred from investing in any particular technology. This also makes it difficult for evaluators to measure the outcome of the programme in terms of products.
54. An evaluation of the programme found that the main identifiable benefits were:
(a) The discussions which led up to the formulation of a new CARAD programme broadened the perspective of the participating companies and led to a more considered choice of technologies to be covered and a wider range than even the larger firms could research on their own. Aerospace firms were content to engage in discussions of long-term research because their competitive advantage does not lie in their possession of knowledge of any particular technology but in their in-house engineering design and system integration skills: CARAD funding encouraged them to devote the resources to play an active part and increase the amount and variety of their research. A wider and better choice of technologies being researched would improve their ability to meet future market needs and regulations.
(b) CARAD provided the funds which enabled smaller specialist suppliers to the Aerospace sector to take part thus increasing their technological knowledge and capability. They used these enhanced technological capabilities to help the innovation efforts of their non-Aerospace customers.
The evaluation was based on a series of in-depth face-to-face interviews with programme participants. It also drew on a survey by consultants of beneficial spin-offs to other sectors which had already been undertaken for another purpose.
55. In the case of public financial support for an R&D project carried out by a single firm the benefits may take a variety of forms. If the support is provided by a non-selective, non-incremental tax credit then the firm may or may not increase the value of its R&D. Whether it does so depends formally on whether the price elasticity of demand for R&D is greater or less than one: a tax credit reduces the cost (price) of R&D to the firm, and the volume of R&D may increase by more or less than the percentage value of the credit. Incremental tax credits, while more difficult to administer, are only payable on increases in a firm's R&D above a predetermined base level and can usually be expected to result in a larger increase in R&D per euro given up in tax revenues. Econometric evidence suggests that incremental tax credits increase R&D by at least the amount of the tax revenue forgone.
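The elasticity argument can be made concrete with a stylised sketch. The constant-elasticity demand form, the function name and the figures below are all illustrative assumptions, not drawn from the econometric evidence referred to above.

```python
def rd_after_credit(base_rd: float, credit_rate: float, elasticity: float) -> float:
    """Stylised constant-elasticity sketch: a credit at rate c cuts the
    effective price of R&D to (1 - c); R&D volume responds as
    price raised to the power of minus the elasticity."""
    return base_rd * (1.0 - credit_rate) ** (-elasticity)

base, credit = 100.0, 0.20                      # illustrative: 20% non-incremental credit

elastic = rd_after_credit(base, credit, 1.5)    # volume rises by ~40%, more than the credit
inelastic = rd_after_credit(base, credit, 0.5)  # volume rises by ~12%, less than the credit

# At unit elasticity the firm's net-of-credit outlay (volume x price)
# is unchanged, so the extra R&D is paid for entirely by the tax credit.
unit = rd_after_credit(base, credit, 1.0)
net_outlay = unit * (1.0 - credit)              # equals base
```

In this sketch, whether the firm's own outlay on R&D rises or falls turns precisely on whether the elasticity exceeds one, which is the distinction the paragraph above draws.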
56. Non-incremental tax credits offer less value for money because they support a lot of R&D which would take place anyway. Support for activities which would occur without it is usually referred to as deadweight. However, automatic non-incremental R&D tax credits have lower administrative costs, and the costs to firms of applying are much lower.
57. In the case of ERDF any support for single-company R&D projects will probably be provided by grants (or possibly loans). A firm receiving an R&D grant may do one of several things, depending on how selectively the grant is administered:
- It may undertake the supported project in the same form as it would have done anyway and use the money from the grant elsewhere in the business. This constitutes deadweight, and if the programme providing the grant is selective then its appraisal procedures should as far as possible be designed to exclude such cases;
- It may use the grant to enhance the project in ways that will increase the expected benefits. For example, it may add additional capabilities to a product, thus increasing its value to the customer and therefore the price at which it can be sold. Alternatively it may explore a wider range of options for some or all features of the proposed new product, which may also result in better quality and a higher selling price;
- It may put together a completely new project designed to qualify under the rules of the grant-awarding programme. This may or may not substitute for another R&D project which the firm might have undertaken.
58. The net benefit of providing support to innovation and technology development is the difference between total benefits and total costs with the support, compared to the difference between total benefits and total costs without it. As pointed out above, the benefits of a supported project consist of those accruing to the supported firm plus those accruing to others (externalities). The costs of the supported projects consist of:
- The financial cost to the public finances of the support provided, i.e. the value of the grant, the present value of the loan etc;
- The cost to government of administering the support provided;
- The cost to the firm, which consists of the value of the resources which the firm puts into the supported project or activity plus the costs to the firm of applying for the support;
- Any negative effects on other firms, individuals etc. This includes displacement, where the supported project causes another firm to abandon a similar project which it might otherwise have carried out.
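The cost components just listed can be summed into a single opportunity-cost figure and set against the additional benefits. The sketch below is a minimal illustration: the field names and the figures used in the example are assumptions, not values from any actual evaluation.

```python
from dataclasses import dataclass

@dataclass
class SupportCosts:
    """Illustrative cost components of a supported project (all in euros)."""
    public_grant: float        # value of the grant / present value of the loan
    admin_cost: float          # government's cost of administering the support
    firm_resources: float      # firm's own inputs plus its cost of applying
    displacement_cost: float   # negative effects on other firms, individuals etc.

    def opportunity_cost(self) -> float:
        # Sum of the four components, taken at market prices.
        return (self.public_grant + self.admin_cost
                + self.firm_resources + self.displacement_cost)

def net_benefit(additional_benefits: float, costs: SupportCosts) -> float:
    """Net benefit = additional benefits (to the firm plus externalities)
    minus the opportunity cost of the resources used."""
    return additional_benefits - costs.opportunity_cost()

# Illustrative example: a 2m euro grant with 0.1m administration,
# 1.5m of the firm's own resources and 0.2m of displacement costs.
example = SupportCosts(2_000_000, 100_000, 1_500_000, 200_000)
```

This framing makes explicit that the evaluator must estimate every component, including the displacement term that the following paragraphs show to be the hardest to pin down.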
59. An evaluation should attempt to add up these costs, which together represent the opportunity costs of the project going ahead, in other words the value of the resources used in their next best alternative use. This is conventionally assumed to be equal to the amount of the costs at market prices (financial value), which is assumed to include any capital costs at the going market rate. This is all that one can do in the case of 56(a) and 56(b), though some economists would want to add in an estimate of the costs of the economic distortions caused by the taxation needed to finance the public expenditures involved.
60. However, in the case of 56(c) one must consider the possibility that the supported project displaced or substituted for another project which might have earned the firm a return above the going cost of capital12. The alternative project should be specified and any excess returns which it might have earned over and above the going cost of capital added to the costs of the supported project. This is standard practice in the appraisal of many types of infrastructure projects, but experience of ex-post evaluation indicates that it is very difficult to do in the case of public support for innovation. One reason for this is that any displacement may not take place at the R&D stage but in the much larger downstream expenditures needed to bring the results of the R&D to market, on which the return to the R&D depends. In fact the supported project may take the firm along a completely different technological and/or commercial trajectory or path13 from that which it would otherwise have followed. Moreover, evaluation experience suggests that most firms will be unable to specify the counterfactual to the supported project with any precision, and that even an interview questionnaire is unlikely to elicit answers which would enable the evaluator to quantify any excess returns with an acceptable degree of precision14.
61. The economic cost of displacement of innovation related activities in other firms is also difficult to assess. First, one needs to ask whether the displacement takes place in the region or country concerned or within Europe at all. Helping local, national or European firms compete more successfully against firms from elsewhere is usually an important aim of innovation policy. Even if displacement takes place in other regions of the same country this may not matter if these other regions are more prosperous as it is often an aim of policy to shift economic activity from more prosperous regions to the less prosperous15. In any case innovation is often associated with rapid market growth which means that small or medium sized firms can innovate and expand without any adverse effects on similar firms operating in the same market. Increasing the number of firms innovating in an expanding market can improve customer choice and thus help the market expand further.
62. Even if the firms suffering from displacement are in the same region or in another less prosperous region of Europe, evaluators still might not find it worthwhile to try and estimate the economic magnitudes involved. First, asking non-supported firms whether they have suffered from support given to their competitors under the programme concerned might not elicit reliable answers: it is all too easy for such firms to blame problems which have their cause elsewhere on support given under the programme. A key issue here is whether both supported and non-supported firms were made equally aware of the existence of the programme, given an equal opportunity to apply, and accepted or rejected on fair and equal terms. It should be part of any evaluation to make sure that this was the case, and this should involve asking rejected firms the reasons they were given for their rejection, as one test of whether the criteria used by the programme were fair and appropriate. Subject to these checks, if a firm decided not to apply, or if it submitted a project which did not satisfy the criteria laid down, then that was its own choice. Nevertheless, if too many reasonably successful firms in qualifying sectors, or using the technologies covered by a programme, were turned down, then the evaluator would need to ask whether the programme was large enough, whether it was appropriate to the needs of the sector or region and whether it was implemented effectively.
63. Using support for innovation-related activities to enhance the competitive dynamism and technological capabilities of firms in a region may help to improve the long-run intensity of competition within the region and, as a result, the competitiveness and overall performance of the local economy. Less dynamic firms may suffer in the process, but this can be seen as collateral damage incurred on the way to making the region as a whole better off.
64. If allegations were made that ERDF funding in an Objective 1 or 2 region in one European country was damaging firms in a similar region in another country then they would need to be taken seriously. One might question how far SMEs in one region compete directly with similar firms in a region of another European country unless the two regions are close geographically, serve the same major customers or operate in a small specialised international market. It is important that policy makers, whether at the regional, national or European level, avoid actions which might encourage such direct competition unnecessarily. Public innovation support should focus on those areas where the region is likely to have a comparative advantage and should exploit those features which provide a 'unique selling point'. Although often politically attractive, simply following the fashion set by other European regions may lead to disappointing results.
65. Making sure that policies and programmes are appropriate to a region (or sector or nation) requires a strategic analysis of the characteristics of its innovation system and an analysis of the strengths and weaknesses of its innovation performance. An assessment should also be made of likely future opportunities and threats. Linkages with the innovation systems of adjacent regions should be examined.