Personal Research Database



34 (2), 263-283.

Full Text: 1995\Scientometrics34, 263.pdf

Abstract: This paper seeks to compare the research priorities of thirty-three countries in five macrofields (Physics, Chemistry, Biology, Mathematics, and Engineering & Technology) in two time spans: 1980-1984 and 1985-1989. Comparative analysis is based on the distribution of publications across fields. Since raw publication counts are confounded by the size of the countries and the size of the subject fields, a relative index, the Research Priority Index (PI), is computed for cross-national comparisons. Correspondence analysis is applied to the asymmetrical matrices of priority profiles to reveal the structure of multivariate relationships between countries and fields. The configurations for the two time spans, obtained through correspondence analysis, are compared to reveal the dynamics of the research priorities of these countries.

Keywords: Analysis, Correspondence Analysis, Dynamics, Indicators, Publications, Research, Size, Structure

Egghe, L., Rao, I.K.R. and Rousseau, R. (1995), On the influence of production on utilization functions: Obsolescence or increased use. Scientometrics, 34 (2), 285-315.

Full Text: 1995\Scientometrics34, 285.pdf

Abstract: We study the influence of production on utilization functions. A concrete example of this is the influence of the growth of literature on the obsolescence (aging) of this literature. Here, synchronous as well as diachronous obsolescence is studied. Assuming an increasing exponential function for production and a decreasing one for aging, we show that, in the synchronous case, the larger the increase in production, the larger the obsolescence. In the diachronous case the opposite relation holds: the larger the increase in production, the smaller the obsolescence rate. This has also been shown previously by Egghe, but the present proof is shorter and yields more insight into the derived results. If a decreasing exponential function is used to model production, the opposite results are obtained. It is typical for this study that there are two different time periods: the period of production (growth) and, per year appearing in the production period, the period of aging (measured synchronously and diachronously). The interaction of these periods is described via convolutions (discrete as well as continuous).

Keywords: Aging, Citation Age, Concrete, Function, Functions, Growth, Interaction, Literature, Model, Obsolescence, Time, Utilization

Notes: Country

Gabolde, I. (1995), First international conference on the evaluation of research technology and development - 26, 27 & 28 April 1995, Thessaloniki, Greece - Opening address. Scientometrics, 34 (3), 317-320.

Full Text: 1995\Scientometrics34, 317.pdf

Keywords: Development, Evaluation, Greece, International, Research, Technology

Piquer, C.R. (1995), Invited speech. Scientometrics, 34 (3), 321-323.

Full Text: 1995\Scientometrics34, 321.pdf

Bach, L., CondeMolist, N., Ledoux, M.J., Matt, M. and Schaeffer, V. (1995), Evaluation of the economic effects of Brite-Euram programmes on the European industry. Scientometrics, 34 (3), 325-349.

Full Text: 1995\Scientometrics34, 325.pdf

Abstract: This article deals with an evaluation performed by the BETA group of the economic effects of EU R & D programmes (Brite, Euram and Brite-Euram I) on European industry. The approach used is based on an original methodology designed by BETA, which aims at evaluating those effects at a micro level (i.e., the participants in the programmes) by means of direct interviews with 176 partners involved in 50 projects. The definition of these economic effects is first described, as well as the different steps of the evaluation work. Then the overall results of the study are presented, showing the importance of both ‘direct’ and ‘indirect’ observed effects in monetary terms. Finally, some more detailed results highlight the positive impact of some aspects of the organizational structure set up for the analyzed R & D projects on the amount of observed effects: i) the participation of a university lab, ii) the participation of at least one partner involved in fundamental research work, iii) the diversity of research tasks over a scale ranging from fundamental research to industrialization work, iv) the combination of ‘user-type’ and ‘producer-type’ activities in one given organisation (integration effect) or in one given project (consortia effect), etc.

Keywords: EU, Evaluation, Integration, Interviews, Methodology, Research, Research Work, Structure, University, Work

Cozzens, S. (1995), U.S. research assessment: Recent developments. Scientometrics, 34 (3), 351-362.

Full Text: 1995\Scientometrics34, 351.pdf

Abstract: Over the last decade, ex post research assessment at the program level in the United States has seemed much less active than the equivalent activities in Europe, both west and east. This seeming lull was the result of a decline in program evaluation activity across the U.S. government in the 1980s, which slowed the rate of formal evaluations. Program review activities within agencies, however, were common, especially at such mission-oriented research-supporting organizations as the Department of Energy and the Office of Naval Research. Review processes at these agencies relied primarily on expert assessment, sometimes at the project level, supplemented by user inputs. Quantitative performance measures were seldom used. That situation is about to change. In 1993, Congress passed the Government Performance and Results Act, which requires all agencies, including those supporting research, to set quantitative performance targets and report annually on their progress toward them. Agencies with clear technological goals are rapidly developing sets of indicators for this use, including peer assessments, bibliometric measures including patents, and customer satisfaction ratings. But fundamental research agencies do not find such measures satisfactory, and are just beginning to develop alternative ones.

Keywords: Alternative, Assessment, Bibliometric, Europe, Evaluation, Indicators, Patents, Program Evaluation, Research, Research Assessment, Review, Satisfaction, United States, US

Cunion, K.M. (1995), UK government departments experience of RT & D programme evaluation and methodology. Scientometrics, 34 (3), 363-374.

Full Text: 1995\Scientometrics34, 363.pdf

Abstract: The UK Department of the Environment is responsible for a range of policy issues within Government related to many aspects of the environment in its broadest sense. It spends about £ 100 M annually on Science and Technology in support of its policy functions. Over recent years a system of research assessment has been established which consists of the development of ROAME statements for the appraisal of programmes and regular independent evaluation of the success and impact of the research on the basis of a five year cycle. The mechanisms and process of the assessment system are described. Effective evaluation of policy-oriented research programmes has provided valuable information to the Department on the success and impact of the research, and guidance on future direction and balance of the programmes.

Keywords: Assessment, Development, Environment, Evaluation, Functions, Information, Methodology, Policy, Research, Research Assessment, UK

Gonda, K. and Kakizaki, F. (1995), Research, technology and development evaluation, developments in Japan. Scientometrics, 34 (3), 375-389.

Full Text: 1995\Scientometrics34, 375.pdf

Abstract: Research, technology and development (RTD) evaluation in terms of science and technology policy has become important in stimulating research activities and in continuously maintaining the vitality and high quality of research in RTD institutions. There are two criteria for RTD evaluation, i.e., in-house evaluation from the standpoint of RTD management, and independent macroscopic evaluation for the decision making of companies and/or policy making for science and technology policy.

The most important point for RTD evaluation in the former criteria is in the mission itself. RTD in universities, public research institutes, and enterprises have different objectives and characteristics. Therefore, the mission and methodology of RTD evaluations should be different, by categorized type and objectives of research institutions, and be developed in-house. Results of RTD evaluations should be fed back to researchers or engineers and disclosed principally if the mission was to stimulate knowledge creation through RTD activities.

The in-house RTD evaluation can be classified in general into three categories: prior evaluation, mid-term review, and ex-post facto review. The methodologies to evaluate RTD in each phase of the RTD process are different, even among institutes categorized into the same type, such as national and regional research institutes. In this paper, two cases of RTD evaluation are presented: a) Riken, which was founded in 1917 as a private research foundation and later reorganized as a semi-public research corporation of the Science and Technology Agency, and b) regional public research institutes.

RTD evaluation from the viewpoint of policy assessment of governmental science and technology policy is discussed through analysis of data obtained by a survey of research activities in regional public research institutes. It can be concluded that the development and introduction of RTD evaluation as a new management system in these institutes is improving the research environment and advancing the quality of research. The differences in RTD evaluation between a Center of Excellence (COE) such as Riken and local technology centers will be compared, and the policy implications of RTD evaluation will also be discussed in terms of the promotion of science and technology.

Keywords: Analysis, Assessment, Criteria, Decision Making, Decision-Making, Development, Enterprises, Environment, Evaluation, Institutions, Japan, Knowledge, Management, Methodology, Policy, Promotion, Quality, Research, Review, Science, Science and Technology, Science and Technology Policy, Survey, Technology, Universities

Helander, E. (1995), Evaluation activities in the Nordic countries. Scientometrics, 34 (3), 391-400.

Full Text: 1995\Scientometrics34, 391.pdf

Abstract: There has been extensive experience with evaluations in the Nordic countries. The paper gives a brief overview of work related to: evaluations of research fields, bibliometric studies, evaluations of research programmes, performance of research institutes, evaluation of bodies supporting research, evaluation of universities, indicators and databases. Evaluations of whole areas of research started in the Nordic countries in the early 1980s. Another Nordic speciality is the evaluation of research-funding bodies. These evaluations comprise the Swedish Council for Planning and Co-ordination of Research, the Norwegian Research Council for Science and Humanities, the Academy of Finland and the Technology Development Centre (TEKES). Many research programmes, research institutes and narrower research fields have been evaluated in the Nordic countries. The evaluations have covered the tasks, performance and structure of these organisations. Lately, whole universities have been evaluated. A number of theoretical and methodological studies on evaluation have been published. Indicators of scientific, technological and educational performance and output have been developed in the Nordic countries. The paper deals mainly with ex post and to some extent also mid-term evaluations. However, ex ante evaluation, including peer review, has actively been developed and applied in the Nordic countries, though these developments lie outside the scope of this paper. Typical for many Nordic evaluations is the use of foreign evaluators. Others have been based on surveys of potential users of research results and the scientists involved. Some of the evaluations have combined these approaches. Bibliometric studies have been performed in parallel with some of the evaluations. Other bibliometric studies have compared the performance of the Nordic countries in an international perspective. In most cases the results of the evaluations are actively made public.
Many of the evaluations combine an assessment of quality and relevance. According to Nordic experience, important conditions for useful evaluations are: credibility, implying the use of impartial and recognised experts and professionally done surveys; careful timing; active publicising of evaluation results; transparency of the evaluation procedure; and concrete measures and action following the evaluation. When possible, the data required for the evaluation should be collected in connection with the application or the reporting of the projects.

Keywords: Assessment, Bibliometric, Bibliometric Studies, Bodies, Concrete, Credibility, Evaluation, Finland, Indicators, International, Peer Review, Peer-Review, Potential, Quality, Relevance, Research, Research Funding, Review, Structure, Transparency, Universities, Work

Hills, P. (1995), PREST’s experience of evaluation. Scientometrics, 34 (3), 401-414.

Full Text: 1995\Scientometrics34, 401.pdf

Abstract: PREST’s experience of evaluation is not as an isolated activity, but as one that has grown out of, and is still embedded in, a broader programme of work on science policy and management. This reflects a conviction that evaluation should be embedded in a wider management system including verifiable objectives and sound feedback mechanisms. The key to successful evaluation is meticulous planning and evaluation design. PREST’s evaluation work has been based mostly on surveys of opinion supplemented by statistical data. In any evaluation the different actors may all agree overtly on their objectives, but covertly have different and incompatible aims. In this situation, PREST applies transparent principles of procedure. Evaluation has had a significant effect on the science and technology management culture. In a few cases it is possible to distinguish a direct link between evaluation findings and subsequent decisions. Usually, however, it is difficult to do so with precision, because evaluation is but one of several influences on policy development. The demand for evaluation will probably intensify, perhaps including simpler, more automatic approaches. There may also be an increased interest in more refined qualitative approaches.

Keywords: Culture, Demand, Development, Evaluation, Management, Planning, Policy, Policy Development, Principles, Qualitative, Science, Science and Technology, Science Policy, Technology, Technology Management, Work

Johnston, R. (1995), Research impact quantification. Scientometrics, 34 (3), 415-426.

Full Text: 1995\Scientometrics34, 415.pdf

Abstract: The development of methods for the quantification of research impact has taken a variety of forms: the impact of research outputs on other research, through various forms of citation analysis; the impact of research and technology, through patent-derived data; the economic impact of research projects and programs, through a variety of cost-benefit analyses; the impact of research on company performance, where no relationship with profit, but a strong positive correlation with sales growth, has been established; and calculations of the rates of social return on the investment in research. However, each of these approaches, which have had varying degrees of success, is being challenged by a substantial revision in the understanding of the ways in which research interacts with, and contributes to, other human activities. First, advances in the sociology of scientific knowledge have revealed the complex negotiation processes involved in the establishment of research outcomes and their meanings. In this process, citation is little more than a peripheral formalisation. Second, the demonstration of the limitations of neo-classical economics in explaining the role of knowledge in the generation of wealth, and of the importance of learning processes and interaction in innovation within organisations, has finally overturned the linear model on which so many research impact assessments have been based. A wider examination of the political economy of research evaluation itself reveals the growth of a strong movement towards managerialism, with the application of a variety of mechanisms (foresight, priority setting, research evaluation, research planning) to improve the efficiency of this component of economic activity. However, there are grounds for questioning whether the resulting improved efficiencies have indeed improved overall performance.
A variety of mechanisms are currently being experimented with in a number of countries; these provide both the desired accountability and direction for research, but rely less on the precision of measures and more on promoting a research environment that is conducive to interaction, invention, and connection.

Keywords: Analysis, Citation, Citation Analysis, Cost Benefit, Development, Economics, Efficiency, Environment, Evaluation, Examination, Foresight, Generation, Growth, Human, Innovation, Interaction, Knowledge, Learning, Methods, Model, Outcomes, Planning, Profit, Research, Research Evaluation, Sociology, Technology, Understanding

Kameoka, A. (1995), Evaluating research projects at Toshiba. Designing a conceptual framework of evaluating research and technology development (RTD) programs. Scientometrics,


