Personal Research Database



34 (3), 427-439.

Full Text: 1995\Scientometrics34, 427.pdf

Abstract: Toshiba, a broadly-based electric/electronics manufacturer, operates diversified businesses. A sophisticated research and technology management system supports those businesses, based on a research and technology development (RTD) organization consisting of three layers: corporate, business group, and divisional laboratories. Evaluation of RTD projects varies in accordance with their characteristics. To promote future inter-divisional business, the Corporate Incentive Program (CIP) funds corporate projects which are authorized and evaluated by the Corporate Technology Committee (CTC). In parallel, under the Corporate Strategic Program (CSP), committees monitor and evaluate specific, rapidly-advancing technologies so as to promote early acquisition and diffusion. Additionally, transnational strategic alliances (TSAs) are promoted on the basis of their merits and in accordance with Toshiba’s corporate philosophy of Competition, Cooperation and Complementarity (CC&C). The corporate Research and Development Center (RDC) conducts pre- and intermediate evaluations every year as part of the Long- and Middle-range Planning. When new technologies are transferred to the business divisions, post-evaluation starts and future monetary impacts are estimated; subsequently, actual monetary contributions are monitored annually. Another style of pre-evaluation can be observed at the RDC in the Exploratory Programs by the Young (EPY). First, some actual cases at Toshiba are introduced. Next, the discussion is extended to the evaluation framework, the corporate technology model and RTD productivity. Also noted is the importance of recognizing that the consumer is the ultimate evaluator and that evaluation quality is improved by feedback from the market. Concept creation and target clarification must come first; only then does the evaluation make sense.

Keywords: Business, Development, Diffusion, Evaluation, First, Framework, Management, Market, Model, Philosophy, Research, Technologies, Technology, Technology Management

Krull, W. (1995), The Max Planck experience of evaluation. Scientometrics, 34 (3), 441-450.

Full Text: 1995\Scientometrics34, 441.pdf

Abstract: The Max-Planck-Gesellschaft (MPG) is a nonprofit organization founded in 1948 as a successor to the Kaiser-Wilhelm-Gesellschaft, which was originally established in 1911. Institutes run by the MPG are mainly devoted to basic research, to a large extent in the sciences and, to a smaller extent, in the humanities. In contrast to the university system, which must cover all academic disciplines, the MPG can concentrate its funds and its energy on selected key areas of basic research. In all of the decision-making processes concerning structural or institutional changes, as well as the reallocation of resources, evaluation has a crucial role to play. The paper outlines the various ways and levels of quality assessment within the Max Planck system. In particular, it emphasizes the importance of ex ante evaluation and the need for an assessment of ongoing research work at regular intervals. Furthermore, the strengths and weaknesses of quantitative indicators are discussed and, finally, some principles for policy-relevant evaluations are formulated.

Keywords: Assessment, Changes, Concentrate, Decision Making, Decision-Making, Evaluation, Humanities, Indicators, Intervals, Principles, Quality, Research, Research Work, Sciences, University, Work

Kyriakou, D. (1995), Macroeconomic aspects of S/T programme evaluation. Scientometrics, 34 (3), 451-459.

Full Text: 1995\Scientometrics34, 451.pdf

Abstract: Understanding the macroeconomic aspects of S/T programme evaluation exercises must be anchored in exploring S/T and its impact in the context of the modern competitive economy, starting at the level of the firm and moving up to the country and EU regional level. Whereas monitoring focuses on the continuous managerial review of project operations, evaluation is concerned with what is being achieved, with maximizing the programme’s impact, and with providing guidelines for new ones. The economic context, and the placement of S/T in it, is crucial both in ex-ante evaluation (setting goals and projecting evolution corridors) and in ex-post evaluation (assessing proximity to targets and updating projected technological and economic paths). The paper will briefly draw this connection and then proceed to explore the multi-level interface between S/T and the economic context, whose characteristics should inform ex-ante and ex-post evaluation efforts. Particular emphasis will be placed on the role of S/T - and hence of evaluating S/T programmes - vis-à-vis the effects of S/T on market structure, sustainability and EU cohesion. S/T will be viewed in terms of its projected effects on the viability of monopolistic/oligopolistic arrangements and on the incontestability of markets, namely the ability of incumbents to deter entry by new challengers. It will also be argued that S/T is, and should be, the bridge linking growth and sustainability, the two towering preoccupations that are often deemed to be at odds. Finally, and most immediately critical for the EU, the vicissitudes of cohesion in the EU will be explored, and the role of S/T in alleviating them will be underscored. Successful and properly evaluated S/T programmes can help steer the EU away from the tensions generated by asymmetric shocks to liberalizing, integrating economies, specializing on the basis of comparative advantage.

Keywords: EU, Evaluation, Evolution, Growth, Guidelines, Market, Markets, Review, Structure, Sustainability

Kuhlmann, S. (1995), German government department’s experience of RT&D programme evaluation and methodology. Scientometrics, 34 (3), 461-471.

Full Text: 1995\Scientometrics34, 461.pdf

Abstract: In Germany the interest in the evaluation of RT&D programmes has increased markedly in the recent past, not least because of cut-backs in public budgets, which put considerable pressure on prioritising and posterioritising financially effective state intervention. The paper reports on a comprehensive analysis of evaluation practice to date in the field of RT&D programmes in Germany: within the framework of a ‘Metaevaluation’, the Federal Ministry for Research and Technology (BMFT) had over 50 evaluation studies, which it had commissioned since 1985, documented and critically analysed. On the basis of this analysis and its recommendations, a rough outline for a systematised future evaluation practice has been developed and discussed. The basic meaning of evaluation is reflected upon, and the basic functions that evaluation studies can fulfil during the planning and implementation of RT&D policy measures - for government departments, for policy-makers and for the public - are considered. In order to achieve a minimum of compatibility for future evaluation activities, a ‘Basic Pack’ of standards for the implementation of evaluation studies was developed (as regards evaluation planning, choice of evaluators, content/scope/range, methods and indicators, and editing and utilising the results), and more ambitious possibilities for use were discussed (e.g. the combination of technology foresight and ex ante policy analyses).

Keywords: Analysis, Evaluation, Evaluation Studies, Foresight, Framework, Functions, Germany, Indicators, Methodology, Methods, Planning, Policy, Practice, Pressure, Recommendations, Standards, Technology, Technology Foresight

Laredo, P. (1995), Structural effects of EC RT&D programmes. Scientometrics, 34 (3), 473-487.

Full Text: 1995\Scientometrics34, 473.pdf

Abstract: Taking advantage of both ‘vertical’ evaluations (of the JOULE and MHR programmes) and of the ‘transversal’ study of the effects of all shared-cost programmes in France, the paper argues that such actions have already built large, heterogeneous, trans-border networks, most of which are nearly stabilized but still engaged in a learning process about collaborative research practices. It also shows that most networks fall under a limited set of collaborative patterns which focus on different outcomes and, in turn, have different structural effects. The paper then questions both the articulation and the implementation mechanisms of the present framework programme.

Keywords: Collaborative Research, EC, Framework, France, Learning, Outcomes, Research

Narin, F. (1995), Patents as indicators for the evaluation of industrial research output. Scientometrics, 34 (3), 489-496.

Full Text: 1995\Scientometrics34, 489.pdf

Abstract: Patent indicators are used in the evaluation of industrial research at many different levels of aggregation. They are used in policy-level applications to look at industrial research capability from a national or regional viewpoint comparing, for example, EU regional technology with that of Japan and North America. They are used in strategic-level applications to look at industrial research from a company viewpoint. For example, CHI Research, Inc. has used them to compare auto company research output company-by-company and technology-by-technology. They are used in tactical-level applications, typically involving technology tracing - where the performance of research groups is measured against one another within the domain of a specific technology. At the tactical level these indicators can characterize industrial research in three planes or stages: The early Precursor Plane, the current Technology Plane and the future-oriented Successor Plane. Finally, at the most precise level of evaluation, patent indicator techniques are now beginning to be used in the United States in establishing the value of patent portfolios for cross-licensing purposes, and in patent infringement litigation, where citation techniques demonstrate the importance and utility of patented technology.

Keywords: Aggregation, Citation, EU, Evaluation, Indicator, Indicators, Japan, Litigation, Patent, Research, Techniques, Technology, United States, Utility

Nauwelaers, C. and Reid, A. (1995), Methodologies for the evaluation of regional innovation potential. Scientometrics, 34 (3), 497-511.

Full Text: 1995\Scientometrics34, 497.pdf

Abstract: This contribution is based on a SPRINT-EIMS project involving a ‘horizontal’ inventory and critical analysis of existing studies on the measurement and evaluation of regional technological innovative potential. After the presentation of a conceptual scheme aiming to reflect the functioning of a ‘Regional System of Innovation’, the main trends in methodological approaches to the evaluation of regional innovative potential in the European Union are discussed, pointing to the necessity of moving progressively towards a methodology that takes into account interactions, both local and external, between the various components and actors of the innovation process. There is no single best-practice methodology in this respect: the use of an ‘eclectic’ assortment of methodological approaches is investigated, and the recommendation is given to develop databases on innovation at the regional level.

Keywords: Analysis, European Union, Evaluation, Innovation, Measurement, Methodology, Potential, Trends

O’Herlihy, J. (1995), RT&D, regional development and evaluation. Scientometrics, 34 (3), 513-518.

Full Text: 1995\Scientometrics34, 513.pdf

Keywords: Development, Evaluation

Rinaldini, C. (1995), Experience on research evaluation at the Joint Research Centre of the European Commission. Scientometrics, 34 (3), 519-525.

Full Text: 1995\Scientometrics34, 519.pdf

Abstract: For more than 10 years, the obligation to perform a research evaluation of JRC activities has been included in Council decisions on research programmes. From 1984 to 1986, eight peer panel reviews were performed, one for each programme, and they were followed by an overall assessment by the JRC Scientific Council. For the 1988-1991 research programme, mid-term and final evaluations covering the whole JRC were entrusted to expert panels. For the last programme, 1992-1994, a new approach was introduced by charging Visiting Groups with evaluating each JRC Institute. Internal evaluation through questionnaires and bibliometric analyses was also attempted at the JRC. The merits of the various approaches are highlighted, and specific considerations are briefly discussed concerning the ‘control’ and ‘support’ functions of evaluations, quantitative and qualitative assessments, distributed or centralised evaluations, and single- or multi-stage evaluations.

Keywords: Assessment, Bibliometric, Evaluation, Function, Obligation, Qualitative, Questionnaires, Research, Research Evaluation

Smith, W.A. (1995), Evaluating research, technology and development in Canadian industry: Meeting the challenges of industrial innovation. Scientometrics, 34 (3), 527-539.

Full Text: 1995\Scientometrics34, 527.pdf

Abstract: Canadian firms respond to the challenges and opportunities of global competition by increasing their research productivity and the rate of innovation. The competitive edge for Canadian industry must now be based on a new appreciation of the dynamics of R&D, as well as management practices and strategies which are relevant to the systems which underpin innovation. New R&D and management models are being adopted by firms to cope with the dynamic and complex nature of innovation, the growing importance of transactions and linkages within innovation systems, and the range of financial, human, social and environmental factors which now impact on technology assessment and decision-making. Given this new paradigm, evaluation techniques are being created and adopted by Canadian industry which provide a greater understanding of the value of research and enhance the agility of technology management. But these developments are not confined to industry. Of equal importance is the convergence of evaluation methods used in both industry and governments to assess research and technology. The methods used by industry are now the techniques employed by governments to assess their own R&D and to formulate industrial S&T policies and strategies.

Keywords: Assessment, Competition, Decision Making, Decision-Making, Development, Dynamics, Environmental, Evaluation, Human, Innovation, Management, Methods, Models, Paradigm, Research, Research Productivity, Techniques, Technology, Technology Assessment, Technology Management, Understanding

Hodges, S., Hodges, B., Meadows, A.J., Beaulieu, M. and Law, D. (1996), The use of an algorithmic approach for the assessment of research quality. Scientometrics, 35 (1), 3-13.

Full Text: 1996\Scientometrics35, 3.pdf

Abstract: Recent years have seen a growing interest in the use of quantitative parameters for assessing the quality of research carried out at universities. In the UK, university departments are now subject to regular investigations of their research standing. As part of these investigations, a considerable amount of quantitative (as well as qualitative) information is collected from each department. This is made available to the panels appointed to assess research quality in each subject area. One question that has been raised is whether the data can be combined in some way to provide an index which can help guide the panels’ deliberations. This question is looked at here via a detailed examination of the returns from four universities for the most recent (1992) research assessment exercise. The results suggest that attempts to derive an algorithm are only likely to be helpful for a limited range of subjects.
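As a rough illustration of what such an index might look like, the sketch below combines min-max-normalised departmental indicators into a weighted sum. The indicator names, weights and figures are hypothetical; the paper itself only asks whether any algorithm of this kind can usefully guide the panels.

```python
def research_index(indicators, weights):
    """Combine departmental indicators into a single index via a weighted
    sum of min-max-normalised values. Names and weights are hypothetical."""
    scores = {}
    for name, values in indicators.items():
        lo, hi = min(values.values()), max(values.values())
        for dept, v in values.items():
            norm = (v - lo) / (hi - lo) if hi > lo else 0.0
            scores[dept] = scores.get(dept, 0.0) + weights[name] * norm
    return scores

# Hypothetical returns from three departments
indicators = {
    "papers_per_staff": {"A": 3.1, "B": 2.2, "C": 4.0},
    "grant_income_k": {"A": 410, "B": 950, "C": 300},
    "phd_completions": {"A": 6, "B": 9, "C": 4},
}
weights = {"papers_per_staff": 0.5, "grant_income_k": 0.3, "phd_completions": 0.2}
print(research_index(indicators, weights))
```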

Melin, G. (1996), The networking university: A study of a Swedish university using institutional co-authorships as an indicator. Scientometrics, 35 (1), 15-31.

Full Text: 1996\Scientometrics35, 15.pdf

Abstract: This article examines the subject of research collaboration, and elaborates on this subject at an institutional rather than an individual level. An empirical case study is presented: the research collaboration of Umeå University in Sweden during the period 1991-1993 is investigated. Institutional co-authorships, based on the addresses of the departments, are used as an indicator of this collaboration. The results are separated into three levels: the local level, the national level, and the international level. It is obvious that the research collaboration is most extensive. Finally, the university’s collaboration is discussed and a scheme is proposed for understanding research collaboration in a social as well as a cognitive context. The guiding terms here are access, visibility and attractiveness.
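A minimal sketch of the indicator used here: counting institutional co-authorship links from the address lists on each paper. The record structure and institution names are hypothetical; the paper derives its counts from departmental addresses.

```python
from itertools import combinations
from collections import Counter

def coauthorship_counts(records):
    """Count institutional co-authorship links from address lists.

    Each record is the list of institutional addresses on one paper;
    every unordered pair of distinct institutions on the same paper
    counts as one collaboration link."""
    links = Counter()
    for addresses in records:
        for a, b in combinations(sorted(set(addresses)), 2):
            links[(a, b)] += 1
    return links

# Hypothetical records for illustration
papers = [
    ["Umea University", "Uppsala University"],
    ["Umea University", "University of Oslo", "Uppsala University"],
    ["Umea University"],  # single-institution paper: no link
]
for pair, n in coauthorship_counts(papers).items():
    print(pair, n)
```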

Leta, J. and De Meis, L. (1996), A profile of science in Brazil. Scientometrics, 35 (1), 33-44.

Full Text: 1996\Scientometrics35, 33.pdf

Abstract: The Brazilian contribution to publications in science and humanities increased from 0.29% of the worldwide total in 1981 to 0.46% in 1993. In science, but not in humanities, Brazilian publications tend to follow the world publication trend; thus, during the period 1981-1993, 57.9% of Brazilian publications were in life sciences, 35.4% in exact sciences, 3.9% in earth sciences and 2.9% in humanities. The ten institutions with the largest number of publications are universities, which account for half of all Brazilian publications. The total number of authors on the Brazilian 1981-1993 publications was 52,808. Among these, 57.8% appear in only one publication and 17.5% have their publications cited more than 10 times.

Keywords: Biochemists

Davis, G. and Royle, P. (1996), A comparison of Australian university output using journal impact factors. Scientometrics, 35 (1), 45-58.

Full Text: 1996\Scientometrics35, 45.pdf

Abstract: We weighted the output of SCI items from Australian universities using journal impact factors. This provides an accessible quality indicator of science journal publishing, and allows us to scale for institutional size in terms of output and research staff. Use of this indicator for the 20 pre-1987 Australian universities demonstrates that although some universities rank highly on output, when scaled for institutional size they are overtaken by some of the smaller, more recently established universities.
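On one natural reading of the method, the weighting reduces to summing journal impact factors over a university's SCI items and dividing by research staff numbers. A minimal sketch under that assumption, with made-up figures:

```python
def weighted_output(items, impact_factors, staff_count):
    """Impact-weighted publication output, scaled by research staff.

    items: list of journal names, one per published SCI item.
    impact_factors: mapping journal name -> journal impact factor.
    Returns total impact-weighted output and the per-staff figure."""
    total = sum(impact_factors.get(journal, 0.0) for journal in items)
    return total, total / staff_count

# Hypothetical data for illustration
jifs = {"Nature": 27.1, "Scientometrics": 1.1}
items = ["Nature", "Scientometrics", "Scientometrics"]
total, per_staff = weighted_output(items, jifs, staff_count=50)
print(f"weighted output {total:.1f}, per staff member {per_staff:.3f}")
```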

Rodríguez, K. and Moreiro, J.A. (1996), The growth and development of research in the field of ecology as measured by dissertation title analysis. Scientometrics, 35 (1), 59-70.

Full Text: 1996\Scientometrics35, 59.pdf

Abstract: This study assesses the growth, the patterns of development and the complexity of research in the field of ecology from 1976 to 1993 in Spain and the five Spanish-speaking countries of the Caribbean. Using dissertation titles as a yardstick of research and development in the field, titles were counted for each region; the total length and the keywords per title were recorded and analysed statistically. Results show that the growth of research in ecology was greater in Spain and peaked earlier than in the Caribbean countries. However, the titles in the latter region were more complex than those in Spain.
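A minimal sketch of this kind of title analysis, counting title length and keywords per title. The tokenisation and stopword list are assumptions; the paper does not specify how keywords were identified.

```python
import re

STOPWORDS = {"the", "of", "and", "in", "a", "on", "for", "de", "la", "y", "en"}

def title_stats(titles):
    """Per-title length (in words) and keyword counts, as the study records."""
    stats = []
    for title in titles:
        words = re.findall(r"[\w'-]+", title.lower())
        keywords = [w for w in words if w not in STOPWORDS]
        stats.append({"length": len(words), "keywords": len(keywords)})
    return stats

# Hypothetical dissertation titles for illustration
titles = [
    "Population dynamics of coastal wetlands in eastern Spain",
    "Ecology of coral reef fish communities of the Caribbean",
]
for t, s in zip(titles, title_stats(titles)):
    print(s, "-", t)
```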

Urban, D. (1996), Quantitative measurement of public opinions on new technologies - An application of SEM-methodology to the analysis of beliefs and values toward new human applications of genetic engineering. Scientometrics, 35 (1), 71-92.

Full Text: 1996\Scientometrics35, 71.pdf

Abstract: The article presents the methodology of structural equation modeling (SEM) to study social perceptions of new technologies. It argues that the SEM-methodology offers a better statistical approach for the analysis of technology-related attitudes than the techniques most often applied in the field of public opinion research. SEM eliminates, compensates for, or at least reduces many problems raised by common surveying practices researching attitudes on new technologies. In particular, SEM-methodology reduces difficulties of testing the validity and reliability of measuring instruments when those are applied to vague and weakly established opinions on new technologies. To demonstrate these advantages of SEM the research presented here concentrates on the cognitive formation of public attitudes toward the particular gene technologies of prenatal genetic testing (pGT) and prenatal genetic engineering (pGE). The study explores whether a statistical analysis of various opinions on these technologies can reveal a set of underlying, structured attitudes, and if so, whether these attitudes form an entire syndrome or are differentiated into several distinct, coherent complexes.
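A minimal sketch of the kind of measurement model the abstract describes, assuming the Python semopy package and its lavaan-style model syntax (Model, fit, inspect); the data are synthetic and the item names hypothetical.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 500
# Hypothetical latent attitude driving three survey items
attitude = rng.normal(size=n)
data = pd.DataFrame({
    "item1": 0.8 * attitude + rng.normal(scale=0.6, size=n),
    "item2": 0.7 * attitude + rng.normal(scale=0.7, size=n),
    "item3": 0.6 * attitude + rng.normal(scale=0.8, size=n),
})

# One latent variable measured by three observed indicators
model = semopy.Model("attitude =~ item1 + item2 + item3")
model.fit(data)
print(model.inspect())  # loadings and variances with standard errors
```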

Magri, M.H. and Solari, A. (1996), The SCI journal citation reports: A potential tool for studying journals? I. Description of the JCR journal population based on the number of citations received, number of source items, impact factor, immediacy index and cited half-life. Scientometrics, 35 (1), 93-117.

Full Text: 1996\Scientometrics35, 93.pdf

Abstract: In this paper, we analysed six indicators of the SCI Journal Citation Reports (JCR) over a 19-year period: number of total citations, number of citations to the two previous years, number of source items, impact factor, immediacy index and cited half-life. The JCR seems to have become more or less an authority for evaluating scientific and technical journals, essentially through its impact factor. However, it is difficult to find one’s way about in the impressive mass of quantitative data that the JCR provides each year. We proposed the box plot method to aggregate the values of each indicator so as to obtain, at a glance, portrayals of the JCR population from 1974 to 1993. These images reflected the distribution of the journals into four groups designated low, central, high and extreme. The limits of the groups became a reference system with which, for example, it was possible rapidly to situate a given journal visually within the overall JCR population. Moreover, the box plot method, which gives a zoom effect, made it possible to visualize a large sub-population of the JCR usually overshadowed by the journals at the top of the rankings. These top-level journals implicitly play the role of reference in evaluation processes, which often incites categorical judgements when the journals being evaluated are not part of the top level. Our ‘rereading’ of the JCR, which presents the JCR product differently, made it possible to qualify these judgements and shed new light on journals.
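A minimal sketch of the box-plot grouping, assuming Tukey's conventional quartiles and 1.5x IQR upper fence as the cut points; the paper's exact boundaries for the four groups are not given in the abstract.

```python
import numpy as np

def boxplot_groups(values):
    """Classify indicator values into the four JCR groups the paper uses
    (low, central, high, extreme) from box-plot quartiles and fences.
    The fence choice (Tukey's 1.5 * IQR) is an assumption."""
    q1, q3 = np.percentile(values, [25, 75])
    upper_fence = q3 + 1.5 * (q3 - q1)
    groups = []
    for v in values:
        if v > upper_fence:
            groups.append("extreme")
        elif v > q3:
            groups.append("high")
        elif v >= q1:
            groups.append("central")
        else:
            groups.append("low")
    return groups

# Hypothetical impact factors for illustration
ifs = np.array([0.2, 0.5, 0.9, 1.3, 2.1, 3.0, 9.8])
print(list(zip(ifs, boxplot_groups(ifs))))
```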

Schwartz, S. and Hellin, J.L. (1996), Measuring the impact of scientific publications. The case of the biomedical sciences. Scientometrics, 35 (1), 119-132.

Full Text: 1996\Scientometrics35, 119.pdf

Abstract: The bibliometric indicators currently used to assess scientific production have a serious flaw: a notable bias is produced when different subfields are compared. In this paper we demonstrate the existence of this bias using the impact factor (IF) indicator. The impact factor is related to the quality of a published article, but only when each specific subfield is taken separately: only 15.6% of the subfields we studied were found to have homogeneous means. The bias involved can be very misleading when bibliometric estimators are used as a basis for assigning research funds. To improve this situation, we propose a new estimator, the RPU, based on a normalization of the impact factor that minimizes bias and permits comparison among subfields. The RPU of a journal is calculated with the formula RPU = 10(1 - exp(-IF/x)), where IF is the impact factor of the journal and x is the mean IF of the subfield to which the journal belongs. The RPU retains the advantages of the impact factor - simplicity of calculation, immediacy and objectivity - and increases the proportion of subfields with homogeneous means from 15.6% to 93.7%.
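The RPU formula as given can be computed directly; a small sketch with illustrative numbers showing how subfield normalisation separates two journals with the same raw impact factor:

```python
import math

def rpu(impact_factor, subfield_mean_if):
    """Relative Publication Unit from the paper's formula:
    RPU = 10 * (1 - exp(-IF / x)), x = mean IF of the journal's subfield."""
    return 10.0 * (1.0 - math.exp(-impact_factor / subfield_mean_if))

# Two journals with the same IF read very differently once their
# subfield means are taken into account (illustrative numbers).
print(round(rpu(2.0, subfield_mean_if=1.0), 2))  # well above its subfield mean
print(round(rpu(2.0, subfield_mean_if=4.0), 2))  # below its subfield mean
```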

Katz, J.S. and Hicks, D.M. (1996), A systemic view of British science. Scientometrics,


