INF5180 Product and Process Improvement in Software Development












Table of Contents

1 Introduction
2 Improvement strategy
2.1 Capability Maturity Models
2.2 Agile project management
2.2.1 Defining a standard
2.2.2 Measurements and feedback
2.2.3 Double loop learning
2.2.4 Reduce the cost of quality control
2.3 Epistemology of SPI
2.4 The psychology of measurements and control
2.5 An algorithmic summary of the improvement strategy
2.5.1 Complex Adaptive Systems (CAS)
2.5.2 Action research
3 Research approach and setting
4 Case description and analysis
4.1 First iteration: Creating the framework for doing SPI
4.1.1 Diagnosis
4.1.2 Action planning
4.1.3 Action taking
4.1.4 Evaluation
4.1.5 Specified learning
4.2 Second iteration: Starting to predict the improvement rates
4.2.1 Diagnosis
4.2.2 Action planned
4.2.3 Action taken
4.2.4 Evaluation
4.2.5 Specified learning
4.3 Third iteration: Trying to improve the SPI system
4.3.1 Diagnosis
4.3.2 Action planned
4.3.3 Action taken
4.3.4 Evaluation
4.3.5 Specified learning
4.4 Fourth iteration: Improving the standard (“double loop learning”)
4.5 Fifth iteration: Calibrating old data due to revised standard
4.6 Sixth iteration: Insights from information infrastructure theory
4.6.1 Diagnosis
4.6.2 Action planned
4.6.3 Action taken
4.6.4 Evaluation
4.6.5 Specified learning
5 Discussion
6 Conclusion
References




1 Introduction


At the Norwegian Directorate of Taxes (NTAX) one of the COBOL programmers died in 1997, and other programmers had to step in. Due to a lack of a standard way of programming, this caused major problems, and everybody quickly realized that there was a severe need for a way of programming that would make the software maintainable. A standard was suggested by the programmers, it was accepted by the management, it was monitored by quality assurance personnel, and it has now been running for six years producing statistical trends that show continuous improvement (Øgland, 2006).
Despite the fact that this particular process of producing maintainable software has been a success on the outside, it has continuously been marred by conflicts on the inside. While most of the programmers agree on the need for a common standard, few are willing to apply the standard to themselves. Every programmer would like his or her colleagues to follow strict rules, while retaining total flexibility for himself or herself.
The situation seems to reflect a type of discussion among software academics and professionals that emerged quickly after the introduction of the early SPI and software capability maturity models (CMM). Critics claimed that adoption of the ideas would result in increased bureaucratization and control, leading to decreased developer creativity and process innovation capability; cf. (Bach, 1994; Bollinger and McGowan, 1991; Curtis, 1998; Bach, 1995). Proponents argued that the predictability and transparency of development and management practices, and the continuous systematic reflection on an organization’s software processes associated with higher maturity levels, would actually decrease management control and release the software developer’s creative potential (Curtis, 1994).
One way of seeing this conflict is to identify the goals of productivity, quality and predictability of the group as a management view, while the “individual hero” perspective is a view that benefits individual programmers who hold no responsibility for the group as a whole. Which view is the “right” view depends on whether one is a systematic software manager or a heroic software developer.
The problem for the SPI people, however, is to design an SPI system that is sufficiently rigid to aid the systematic improvement of quality, productivity and predictability, while at the same time being sufficiently flexible to prevent programmers from losing their ability to do creative work.
The document is structured by presenting a vision for such a flexible and standardized design (“improvement plan”) in chapter two, while chapters three to five go into the NTAX case in more detail: how we selected data and did data analysis for the case (chapter three), the case itself (chapter four), and a discussion of the results as compared with the improvement plan of chapter two (chapter five). The total experience is summarized in chapter six.


2 Improvement strategy


Our vision for a sustainable SPI system consists of four simple ideas, corresponding to Deming’s four components of a “system of profound knowledge” (Deming, 1992).


  • Appreciation of a system

  • Understanding variation

  • Theory of knowledge

  • Psychology

A possible translation of these themes into the world of SPI could be (1) capability maturity models, (2) agile project management, (3) knowledge management, and (4) the psychology of measurements and control.


2.1 Capability Maturity Models


According to the capability maturity model entry on Wikipedia (2006), a capability maturity model (CMM) may broadly refer to a process improvement approach that is based on a process model, or it may more specifically refer to the first such model, developed by the Software Engineering Institute (SEI) in the mid-1980s, as well as the family of process models that followed.
Based on interviews with an Indian company that has been certified at SEI-CMM level 5 for many years, we were told that a good way of starting the quality improvement process would be to get certified against the ISO 9001:2000 requirements for quality management systems as a first step, before moving on to the more software process specific capability maturity models (NTAX, 2002).


Figure 1 – ISO 9000:2000 process model (www.smqe.de/smqe/index.php?page=iso 9000)
The ISO 9001:2000 standard contains requirements for quality management systems, and is as such not a capability maturity model. In the appendix A of ISO 9004:2000, however, there is a five level maturity model for assessing quality management systems according to the structure given in ISO 9001:2000.
The structure is as follows:

Level 1 (No formal approach): No systematic approach evident; no results, poor results or unpredictable results.
Level 2 (Reactive approach): Problem- or corrective-based systematic approach; minimum data on improvement results available.
Level 3 (Stable formal system approach): Systematic process-based approach; early stage of systematic improvements; data available on conformance to objectives and existence of improvement trends.
Level 4 (Continual improvement emphasized): Improvement process in use; good results and sustained improvement trends.
Level 5 (Best-in-class performance): Strongly integrated improvement process; best-in-class benchmarked results demonstrated.

If one would like to make the model compliant with ISO/IEC 15504 “SPICE”, then one could also use a maturity level 0 to indicate incomplete process or unknown status (NTAX, 2002).
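As an illustration, the five-level assessment above can be mechanized as a small lookup. The boolean criteria and the scoring rule (return the highest level whose criteria are met) are our own simplifying assumptions, not part of the ISO 9004:2000 text:

```python
# Sketch of an ISO 9004:2000-style maturity assessment. The level names
# follow the table above; the criteria flags are illustrative assumptions.

LEVELS = [
    (1, "No formal approach"),
    (2, "Reactive approach"),
    (3, "Stable formal system approach"),
    (4, "Continual improvement emphasized"),
    (5, "Best-in-class performance"),
]

def assess(has_systematic_approach, has_conformance_data,
           has_improvement_trends, is_benchmarked_best_in_class):
    """Return (level, label), checking the strictest criteria first."""
    if is_benchmarked_best_in_class:
        return LEVELS[4]
    if has_improvement_trends and has_conformance_data:
        return LEVELS[3]
    if has_systematic_approach and has_conformance_data:
        return LEVELS[2]
    if has_systematic_approach:
        return LEVELS[1]
    return LEVELS[0]
```

A process with conformance data and sustained improvement trends, but without best-in-class benchmarks, would land on level 4 under this rule.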


2.2 Agile project management


The Toyota Production System is a system that has gradually evolved since the 1950s, based on continuous series of small changes in order to make it more fit with the problems it is supposed to solve (Fujimoto, 1999; Womack, Jones and Roos, 1990; Womack and Jones, 2003).

2.2.1 Defining a standard


The purpose of a standard is to have something to measure against (Imai, 1986).

2.2.2 Measurements and feedback


One of the key practices of software process improvement is goal-oriented measurement and systematic feedback (e.g. Dybå, Dingsøyr and Moe, 2002: chapter 4). Although there are many ways to measure and give feedback, in this evolutionary approach we suggest simply starting to measure whatever the software developers consider even vaguely relevant. When giving the software developers feedback, <…>
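Measurements of the kind used in the NTAX case, counting deviations from the programming standard, can be sketched roughly as follows. The two metrics (GOTO statements and over-long paragraphs), the paragraph-detection rule and the 50-line threshold are illustrative assumptions, not the actual NTAX metrics software:

```python
import re

# Hypothetical maintainability metrics: count GO TO statements and
# paragraphs longer than a threshold in a COBOL source text.
# Rules and thresholds here are illustrative assumptions.

def cobol_metrics(source: str, max_paragraph_lines: int = 50) -> dict:
    lines = source.upper().splitlines()
    gotos = sum(1 for line in lines if re.search(r"\bGO\s+TO\b", line))
    # Very rough paragraph split: a paragraph starts at a lone label line.
    paragraphs, current = [], 0
    for line in lines:
        if re.match(r"^[A-Z0-9][A-Z0-9-]*\.\s*$", line.strip()):
            if current:
                paragraphs.append(current)
            current = 0
        else:
            current += 1
    if current:
        paragraphs.append(current)
    long_paragraphs = sum(1 for n in paragraphs if n > max_paragraph_lines)
    return {"gotos": gotos, "long_paragraphs": long_paragraphs}
```

Running such a function over each software package would yield the deviation counts that the feedback and trend diagrams are built from.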

2.2.3 Double loop learning


If the software developers start complaining about the measurements, for instance saying that they are being measured against the wrong standard, then a next step could be to engage the software developers in updating the standard. As pointed out by Imai (1986), the purpose of updating standards is to improve the standards, not to make them permanent.

Figure 2 – Double loop learning (Argyris & Schön, 1978)
An organization may learn through the methods of quality management, i.e. identifying errors and opportunities for improvement and working out ways to improve the system based on such insights, which corresponds to “single loop learning”. However, from time to time it may strike the organization that the whole system should have been designed in a completely different manner, so by challenging the current assumptions and beliefs, the variables governing the single loop learning may be adjusted. This external perspective on the system is referred to as “double loop learning”.

2.2.4 Reduce the cost of quality control


As the system matures, there should be less need for detailed feedback, and <…>

2.3 Epistemology of SPI


When Deming (1992) talks of theory of knowledge, he refers to the philosophy of C. I. Lewis (1929). What appears to make Lewis different from fellow American pragmatists, like Peirce, Dewey and James, is that Lewis was concerned with Kant’s solution of the mind/body problem, and interpreted this into an epistemology that emphasized prediction as the key essence of knowledge.
What seems to be a rough interpretation of the Deming/Lewis epistemology is that process knowledge may be described in terms of flowcharts and evaluated through the use of SPC diagrams. As pointed out by Kjersti Halvorsen (2006), this type of understanding also seems to be the epistemological foundation of Taylor’s “Principles of Scientific Management” (1911), although it was only with Shewhart’s invention of the SPC diagram (1926) that it was possible to talk of “scientific management” being scientific in the way that Shewhart and Deming interpreted the philosophy of science.
For the practical purpose of this study, however, when talking of continuous improvement and double loop learning, we expect to be able to identify this learning in terms of changes in flowcharts and corresponding changes in patterns on SPC diagrams.
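The SPC evaluation referred to above can be illustrated with a minimal Shewhart-style computation: control limits (mean plus/minus three standard deviations) are derived from a stable baseline period, and later observations outside the limits signal that the process has changed. The weekly deviation counts below are made up for illustration:

```python
import statistics

# Shewhart-style control limits from a baseline period; later points
# outside the limits are flagged as signals of process change.
# All numbers are invented for illustration.

def control_limits(baseline):
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(baseline, observations):
    lcl, ucl = control_limits(baseline)
    return [x for x in observations if x < lcl or x > ucl]

baseline = [12, 11, 13, 12, 10, 11, 12, 13]    # stable period
print(out_of_control(baseline, [12, 14, 30]))  # [30]
```

Computing the limits from the baseline rather than from all data is what makes the chart able to detect change: a large shift would otherwise inflate the standard deviation and hide itself.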

2.4 The psychology of measurements and control


Deming (1992) was concerned with internal motivation for work, and was frustrated by the fact that measurement systems tended to make people more interested in getting good scores than doing good work. Although this mostly resulted in him attacking management by objectives (MBO) for not understanding statistical variations, or complaining about management implementing SPC wrongly, sometimes he also questioned the use of numbers at all, making controversial statements against the use of marks and “gold stars” in school etc.
More interesting than what Deming was actually saying on this topic is perhaps the reason why he was so concerned with it, namely that “what gets measured gets done”. Measurement is a powerful tool for creating change. One of the single most important issues in the improvement strategy we propose is thus always to measure.

2.5 An algorithmic summary of the improvement strategy


We choose to call the approach an improvement strategy rather than an improvement plan, as the aim is long-term and the choices of what to do at regular assessment dates depend on issues that cannot be seen until the assessment date has been reached.

2.5.1 Complex Adaptive Systems (CAS)


If we look at complex adaptive systems (CAS) as applied in organizational theory (e.g. Axelrod and Cohen, 2000), insights from evolutionary biology, computer science (agent-based artificial intelligence) and game-theoretic social theory seem to provide a framework for designing management systems that are loosely coupled, locally controlled, and constantly subjected to reviews for making them better. In the theory of CAS, there are three fundamental processes at work:


  • variation

  • interaction

  • selection

In any adaptive process there has to be variation within the species, there has to be some interaction in order to produce different “children”, and there is a selection deciding which of the “children” will grow up. However, rather than looking at evolution in the jungle, in order to draw insights from these simple principles into the world of SPI, it is more useful to think of how human ideas and beliefs evolve. Somebody may read two books, linking an idea from the first book with a different idea from the second, and by testing this newly created idea in practice he is performing a type of selection. The new idea may survive in his mind as interesting, or it may be discarded as unfruitful.
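A toy illustration of the three processes is a minimal evolutionary loop over bit-string “ideas”. The encoding, the ones-counting fitness function and all parameters are illustrative assumptions, chosen only to make variation, interaction and selection concrete:

```python
import random

# Variation = mutation, interaction = recombination of two parent
# "ideas", selection = keeping the fitter candidates.

random.seed(1)

def mutate(idea, rate=0.1):                        # variation
    return [bit ^ (random.random() < rate) for bit in idea]

def recombine(a, b):                               # interaction
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def select(candidates, fitness, keep):             # selection
    return sorted(candidates, key=fitness, reverse=True)[:keep]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
best_at_start = max(sum(p) for p in population)
for _ in range(20):
    children = [mutate(recombine(random.choice(population),
                                 random.choice(population)))
                for _ in range(8)]
    population = select(population + children, fitness=sum, keep=8)

print(best_at_start, sum(population[0]))
```

Because selection retains the best of parents and children alike, the fittest “idea” never gets worse across generations, which is the essence of the cumulative adaptation described above.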


2.5.2 Action research


The idea that ideas evolve according to some evolutionary algorithm could be used as a description of how the scientific method works, and if we look at action research (e.g. Susman and Evered, 1978) we get a description of a scientific model (for creating organizational change) that runs through a five-step circular algorithm.


Figure 3 – Phases within an action research cycle (adapted from Susman & Evered 1978)
The first step of the model is called DIAGNOSING and consists of doing an analysis of the organization, finding out what the improvement…
Unlike other process improvement models, such as QIP (Basili, ref?) or IDEAL (ref?), the aim of the action research model is not only to produce change but to develop knowledge about efficient ways of creating change. In other words, the action research model fits with the epistemological ideas of Shewhart and Deming as described in section 2.3 above, and it is also a framework necessary for making Taylor’s scientific management scientific, in terms of producing new knowledge to be published in academic outlets.
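The five-phase cycle of Susman and Evered can be summarized as a loop over a shared state. The phase handlers below are placeholders; in the NTAX case, “action taking” meant running the metrics program and “evaluating” meant statistical analysis of the results:

```python
# Sketch of the action research cycle as a loop; handlers are
# placeholder functions that transform a shared state.

PHASES = ["diagnosing", "action planning", "action taking",
          "evaluating", "specifying learning"]

def run_cycle(handlers, state, iterations):
    """Run the cycle; each phase handler transforms the shared state."""
    log = []
    for i in range(iterations):
        for phase in PHASES:
            state = handlers[phase](state)
            log.append((i + 1, phase))
    return state, log

# Trivial handlers: each phase increments a counter.
handlers = {phase: (lambda s: s + 1) for phase in PHASES}
final, log = run_cycle(handlers, 0, 2)
print(final, len(log))  # 10 10
```

The log corresponds to the structure of chapter 4, where each iteration of the NTAX case is reported phase by phase.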

3 Research approach and setting


Our approach for collecting knowledge and understanding NTAX has consisted of interviews, observations and document analysis.

4 Case description and analysis


As the process of collecting and analyzing data from the COBOL programmers has been going on for several years, it seems reasonable to present the case by explaining the development within each iteration.

4.1 First iteration: Creating the framework for doing SPI

4.1.1 Diagnosis


In 1997, one of the programmers died, and others had to take over the software. The ones who had to take over realised that they were dealing with “spaghetti code”, difficult to understand, as there had been no requirements or standards on how to program in a way that would make the software maintainable for the community at large. In the IT strategy plan of 1998, it was thus stated that standards should be designed and implemented, and one of the programmers had started writing a draft suggestion for such a standard, but as of late 2000, nothing had happened. The first research question thus seemed to be how to get the standard finalized and accepted by the programming community and the managers, how to define metrics for making sure the standard was being followed, and how to make the metrics system sustainable.

4.1.2 Action planning


The SPI change agent (“action researcher”) believed the best way to make the metrics system work would be to have the programmers themselves complete the standard, present it to management for acceptance, define the metrics, and create the software needed for producing the statistics, while the role of the SPI change agent would be restricted to doing statistical analysis of the measurements, as this was the only task requiring a competence not found among the programmers (statistical competence).

4.1.3 Action taking


A draft standard was completed and circulated among the programmers for comments. It was then revised and presented to management for acceptance. IT management then decided to ask the director general of the Directorate of Taxes to give a lecture to the programmers on the importance of following standards. This was followed by one of the programmers developing the metrics software, in order to produce data on how the various software packages deviated from the requirements of the standard. The SPI change agent was given the data, analysed them, and discussed the results with the programmers.



Figure 6 – Generic NTAX software lifecycle model (adapted from NTAX, 1998)

In figure 6 we illustrate how measurements of software maintainability are made in quality control phase IV, as part of the measurements that are supposed to be done with quality control procedure V10 at step 4 to evaluate system documentation.


4.1.4 Evaluation


As it had been possible to obtain some historical data as well as new data, the results showed that most software projects had been producing software that got more and more filled up with GOTOs, long paragraphs and other constructs that the programmers themselves considered “bad practices”. The major achievement of the first iteration was that the current measurements defined a baseline for further improvement and that an SPI system was now up and running. The results were documented in a project report (NTAX, 2001) that was distributed among programmers and management.

4.1.5 Specified learning


<…>

4.2 Second iteration: Starting to predict the improvement rates

4.2.1 Diagnosis


The SPI change agent believed in the …

4.2.2 Action planned


<…>

4.2.3 Action taken


<…>

4.2.4 Evaluation


<…>

4.2.5 Specified learning


<…>

4.3 Third iteration: Trying to improve the SPI system

4.3.1 Diagnosis


<…>

4.3.2 Action planned


<…>

4.3.3 Action taken


Regression analyses, based on both linear and exponential regression, were made in order to see if the improvement predictions could be improved. The theory from Sommerville (2004) was used for evaluating the metrics program and getting ideas on how to improve it.
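The two regression models can be sketched as follows: a linear trend fitted by least squares, and an exponential trend obtained by fitting a line to the logarithms of the observations. The yearly deviation counts are made up for illustration (roughly 20 % fewer deviations per year), not NTAX data:

```python
import math

# Least-squares linear fit, and an exponential fit via log-transform.
# Data are invented for illustration.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx           # y = slope * x + intercept

def fit_exponential(xs, ys):
    # y = a * exp(b * x) becomes linear after taking logarithms of y
    b, log_a = fit_linear(xs, [math.log(y) for y in ys])
    return math.exp(log_a), b

years = [0, 1, 2, 3, 4]
deviations = [100, 80, 64, 51, 41]
a, b = fit_exponential(years, deviations)
slope, intercept = fit_linear(years, deviations)
print(round(a), round(a * math.exp(b * 5)))  # fitted start value, year-5 forecast
```

For a process improving at a roughly constant percentage rate, the exponential model gives the better extrapolation, while the linear model would eventually predict negative deviation counts.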

4.3.4 Evaluation


<…>

4.3.5 Specified learning


<…>

4.4 Fourth iteration: Improving the standard (“double loop learning”)

4.5 Fifth iteration: Calibrating old data due to revised standard

4.6 Sixth iteration: Insights from information infrastructure theory

4.6.1 Diagnosis


<…>

4.6.2 Action planned


<…>

4.6.3 Action taken


<… here are some diagrams … the data for 2006 have not been fully collected, so the measurements are approximations … although I choose to include the full data series in all the diagrams, it might be better to comment on them for the years where something happens; for instance, the diagrams below show interesting jumps for 2003/2004 following the revision of the standard, and those diagrams should probably be presented in connection with the fifth iteration … the problem now is that the improvement rate is not as good as it used to be, more of the programmers are sabotaging the scheme, and management no longer shows commitment to the process … the scheme may well die unless we come up with something clever …>



4.6.4 Evaluation


<…>

4.6.5 Specified learning


<…>

5 Discussion


<…>


6 Conclusion


<…>

References


Argyris, C. & Schön, D. (1978): “Organizational Learning: A Theory of Action Perspective”, Addison-Wesley: Reading, Massachusetts.
Axelrod, R. and Cohen, M. D. (2000). Harnessing Complexity: Organizational Implications of a Scientific Frontier. Basic Books: New York.
Bach, J. (1994) The Immaturity of the CMM, American Programmer, 7(9), 13-18.

Bach, J. (1995) Enough about Process: What We need are Heroes, IEEE Software, 12 (March), 96-98.


Bollinger, T. B. and McGowan, C. (1991) A Critical Look at Software Capability Evaluations, IEEE Software, 8 (4), 25-41.


Callon, M. (1991): “Techno-economic networks and irreversibility”. In Law J. (Ed.) A sociology of monsters. Essays on power, technology and domination. Routledge: London.


Ciborra, C. U. et al (2000). From Control to Drift: The Dynamics of Corporate Information Infrastructures. Oxford University Press: Oxford.
Coghlan, D. & Brannick, T. (2001): “Doing Action Research in Your Own Organization”, SAGE: London.
Curtis, B. (1994) A Mature View of the CMM, American Programmer, 7 (9), 13-18.

Curtis, B. (1998) Which Comes First, the Organization or Its Processes? IEEE Software, Nov/Dec, 10-13.


Dahlbom, B. (2000): “Postface. From Infrastructure to Networking”. In Ciborra, C. e. a. (Ed.) From Control to Drift, Oxford University Press: Oxford.
Davenport, Thomas H. & Laurence Prusak (1998): “Working Knowledge: How Organizations Manage What They Know”. Harvard Business School Press.
Deming, W. E. (1992): “The New Economics for Industry, Government, Education”, 2nd edition, The MIT Press: Cambridge, Massachusetts.
Dybå, T., Dingsøyr, T. and Moe, N. B. (2002). Praktisk prosessforbedring: En håndbok for IT-bedrifter. Fagbokforlaget: Bergen.
EFQM (2006): http://efqm.org, [Accessed on May 12th 2006]
Fujimoto, T. (1999). The Evolution of a Manufacturing System at Toyota. Oxford University Press: Oxford.
Hanseth, O. & Monteiro, E. (1998): “Understanding Information Infrastructure”, Oslo.
Imai, M. (1986): “Kaizen: The Key to Japan’s Competitive Success”, McGraw-Hill/Irwin: New York.
ISO (2000a): “Quality Management Systems – Terms and Definitions (ISO 9000:2000)”, International Organization for Standardization: Geneva.
ISO (2000b): “Quality Management Systems – Requirements (ISO 9001:2000)”, International Organization for Standardization: Geneva.
ISO (2000c): “Quality Management Systems – Guidelines for Performance Improvement (ISO 9004:2000)”, International Organization for Standardization: Geneva.
Jashapara, A. (2004): “Knowledge Management: An Integrated Approach”, Prentice-Hall: London.
Juran, J. (1964): “Managerial Breakthrough”, McGraw-Hill: New York.
Kuhn, T. (1962): “The Structure of Scientific Revolutions”, The University of Chicago Press: Chicago.
Latour, B. (1987): “Science in Action”, Harvard University Press: Cambridge, Massachusetts.
Law, J. (1992): “Notes on the Theory of the Actor-Network: Ordering, Strategy, and Heterogeneity”. Systems practice, 5/4, 379-393.
Lee, A. (1991). “A Guide to Selected References on Hermeneutics”. Available: http://www.people.vcu.edu/~aslee/herm.htm [Accessed 11th of May 2006].
Lewis, C. I. (1929): “Mind and the World Order: Outline of a Theory of Knowledge”, Dover Publications: New York.
Mintzberg, H. (1983): “Structures in Fives: Designing Effective Organizations” Prentice-Hall: Englewood Cliffs.
Monteiro, E. (2000): “Actor-Network Theory and Information Infrastructure” in Ciborra, C. U. & Hanseth, O. (Eds.) From Control to Drift, Oxford University Press: Oxford.
Nonaka, I. (1994): “A Dynamic Theory of Organizational Knowledge Creation”, Organization Science, Vol. 5, No. 1, pp. 14-37.
Nonaka, I. & Takeuchi, H. (1995): “The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation”, Oxford University Press: Oxford.
NTAX (1998): “Strategisk plan for bruk av IT i skatteetaten”, SKD nr 62/96, Oslo.
NTAX (2002). Forbedring av IT-prosesser ved bruk av ISO 15504, SKD 2002–032. Skattedirektoratet: Oslo.
NTAX (2005): ”Stokastisk modell for forvaltningsløp PSA-2004 til støtte for håndtering av klarsignal”, SKD 2005-012, Oslo.
Skatteetaten (2006) online at http://www.skatteetaten.no [Accessed 9th of May 2006].
Snow, C. P. (1964): ”The Two Cultures: A Second Look”, Cambridge University Press: Cambridge.
Statskonsult (2002): “Organisering av IT-funksjonen i Skatteetaten”, 2002:13, Oslo.
Tsutsui, W. M. (1998): “Manufacturing Ideology: Scientific Management in Twentieth-Century Japan”, Princeton University Press: Princeton.
Wikipedia (2006) online at http://en.wikipedia.org/wiki/Capabilities_Maturity_Model [Accessed 30th of October 2006].
Weaver, W. & Shannon, C. (1949): “The Mathematical Theory of Communication”, University of Illinois Press: Urbana, Illinois.
Weber, M. (1979): “Economy and Society”, University of California Press.
Womack, J. P., Jones, D. T. and Roos, D. (1990). The Machine that changed the World: The Story of Lean Production. Harper Perennial: New York.
Womack, J. P. and Jones, D. T. (2003). Lean Thinking. Second Edition. Harper Perennial: New York.
Øgland, P. (2006). Using internal benchmarking as strategy for cultivation: A case of improving COBOL software maintainability. In Proceedings of the 29th Information Systems Research in Scandinavia (IRIS 29): “Paradigms Politics Paradoxes”, 12-15 August, 2006, Helsingør, Denmark.


Autumn 2006


