Reidar Conradi (Ed.):
Version 3.0 of 4 June 2007, IDI, NTNU
Compiled by Reidar Conradi, Dept. of Computer and Information Science (IDI), Software Engineering (SU) Group – based on a draft by PhD student Torgrim Lauritsen, IDI.
Some of the terms have alternative sources for their definitions, marked by “Def.1)… Def.2)…”. If there is only one source, the “Def.1)” prefix is omitted. For some terms, the same source also has provided alternative definitions, marked as “1)… 2)…”.
Comment: … are used for clarification.
Where possible, the source of each definition is given, usually a standards document such as [IEEE SESC] or [ISO terms], or an acknowledged textbook or overview paper such as [Leveson95] or [Avizienis04].
There is not always consensus about a term’s name and meaning, and some older definitions look archaic.
Accident Def.1) An undesired and unplanned (but not necessarily unexpected) event that results in (at least) a specified level of loss [Leveson95]. Def.2) An unplanned event or series of events that results in death, injury, illness, environmental damage, or damage to or loss of equipment or property [IEEE 1228].
Availability Def.1) The degree to which a system or component is operational and accessible when required for use [IEEE 610.12]. Def.2) Readiness for correct service [Avizienis04]. Comment: reliability then means that the requested functionality remains available. Often expressed as the probability of being “on-line” or ready.
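As an illustrative sketch of the comment above (the formula is the standard steady-state estimate from reliability engineering, not taken from any of the cited sources; all numbers are made up):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the long-run fraction of time the
    system is 'on-line', estimated from mean time between failures
    (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system failing on average every 1000 h and repaired in 2 h:
a = availability(1000.0, 2.0)
print(round(a, 4))  # 0.998
```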
Business-critical That core computer and other support systems of a business have sufficient QoS to preserve the stability of the business [indirectly after Sommerville04].
Business-safe Def.1) That “relevant issues like reputation, employment practices, intellectual property, competition, supply chains, fraud and data security need to be considered” [Jolly03]. Def.2) That core computer and other support systems of a business are sufficiently safe, i.e. do not threaten the stability of the business [Own definition after Sommerville04]. Comment: subset of business-critical.
Computer system A system containing one or more computers and associated software [IEEE 610.12]. Comment: i.e. an information processing (or ICT) system.
Confidentiality Absence of unauthorized disclosure of information [Avizienis04].
Cost Sum of all expenses incurred in making a piece of software or an entire computer system [Own definition].
Dependability The trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers [Avizienis01]. Dependability is an integrating concept that encompasses the following attributes:
- Availability: readiness for correct service;
- Reliability: continuity of correct service;
- Safety: absence of catastrophic consequences on the user(s) and the environment;
- Security: the concurrent existence of (a) availability for authorized users only, (b) confidentiality, and (c) integrity.
In the later [Avizienis04], security is split off as a separate quality, and dependability is rephrased as:
- Availability: readiness for correct service;
- Reliability: continuity of correct service;
- Safety: absence of catastrophic consequences on the user(s) and the environment;
- Integrity: absence of improper system alterations;
- Maintainability: ability to undergo modifications and repairs.
Comment: How to measure dependability? Not defined in IEEE 610.12!
Efficiency The degree to which a system or component performs its designated functions with minimum consumption of resources [IEEE 610.12].
Error Def.1) That at least one (or more) internal state of the system deviates from the correct service state. The adjudged or hypothesized cause of an error is called a fault. In most cases, a fault first causes an error in the service state of a component that is a part of the internal state of the system and the external state is not immediately affected. … many errors do not reach the system’s external state and cause a failure [Avizienis04]. Def.2) The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. For example, a difference of 30 meters between a computed result and the correct result [IEEE 610.12]. Def.3) Any detected deviation between specification and implementation/expected-result [Popular usage], see below. Comment-1: consider the three most common usages of “error” by Google-based rankings, taken from http://www.softwaredevelopment.ca/bugs.shtml:
“An error has occurred” (~75,000 pages, vs. 68 for “defect”),
“Unknown error” (~57,700 pages, vs. 478 for “defect”), and
“Unrecoverable error” (~26,900 pages, vs. 3 for “defect”).
Comment-2: see explanation under fault. Another term: “active” error. Conclusion: total chaos in terminology, so try to avoid the term “error”.
Failure Def.1) The non-performance or inability of the system or component to perform its intended function for a specified time under specified environmental conditions. A failure may be caused by design flaws – the intended, designed and constructed behaviour does not satisfy the system goal [Leveson95]. Def.2) The inability of a system or component to perform its required function within specified performance requirements [IEEE 610.12]. Def.3) Since a service is a sequence of the system’s external states, a service failure means that at least one (or more) external state of the system deviates from the correct service state [Avizienis04]. Comment: Pr(failure) = 1 – reliability. Other terms: malfunction, “externally visible” error.
Fault 1) A defect in a hardware device or component; for example, a short circuit or a broken wire. 2) An incorrect step, process, or data definition in a computer program [IEEE 610.12]. Shared comment for Fault, Error and Failure: … Faults can be internal or external to a system. … [taken from Avizienis04]. Contextual (dynamic) execution of a dormant (static) fault usually leads to an internal error, and possibly later to an external failure. The “incorrectness” of a fault means that it violates stated functional requirements. Other terms: “passive” or “dormant” error, defect, bug (but try to avoid the last one).
FMEA Failure Mode and Effects Analysis (FMEA) is a risk assessment technique for systematically identifying potential failures in a system or a process [Wikipedia].
Functional safety Part of the overall safety relating to the EUC (Equipment under Control) and the EUC control system which depends on the correct functioning of the E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related systems, other technology safety-related systems and external risk reduction facilities [point 3.1.9 in IEC 61508].
Functionality A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs [ISO 9126]. Not defined in IEEE 610.12!
Hardware Physical equipment used to process, store, or transmit computer programs or data. Contrasts with: software [IEEE 610.12]. Comment: Hardware may have design faults (permanently), fabrication faults (initially), and disintegration faults (eventually).
Hazard Def.1) A state or set of conditions that, together with other conditions in the environment, will lead to an accident (loss event). Note that a hazard is not equal to a failure [Leveson95]. Comment: “will” should rather be “may”? Def.2) A software condition that is a prerequisite to an accident [IEEE 1228].
Hazard level A combination of severity (worst potential damage in case of an accident) and likelihood of occurrence of the hazard [Leveson95].
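A hypothetical sketch of how severity and likelihood might be combined into a discrete hazard level; the category names and the additive scale below are illustrative only, not taken from [Leveson95]:

```python
# Illustrative qualitative hazard-level scale: severity = worst potential
# damage in case of an accident, likelihood = how often the hazard occurs.
SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]
LIKELIHOOD = ["improbable", "remote", "occasional", "frequent"]

def hazard_level(severity: str, likelihood: str) -> int:
    """Combine severity and likelihood into a 1..7 scale (higher = worse)."""
    return SEVERITY.index(severity) + LIKELIHOOD.index(likelihood) + 1

print(hazard_level("catastrophic", "frequent"))   # 7 (worst case)
print(hazard_level("negligible", "improbable"))   # 1 (best case)
```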
Hazop HAZard and OPerability analysis is a systematic method for examining complex facilities or processes to find actual or potentially hazardous procedures and operations so that they may be eliminated or mitigated [Wikipedia].
Incident An event that involved no loss (or only minor loss) but with the potential for loss under different circumstances [Leveson95].
Integrity Absence of improper system alterations [Avizienis04].
Maintainability Ability to undergo modifications and repairs [Avizienis04].
Performance The degree to which a system or component accomplishes its designated functions within given constraints, such as speed, accuracy, or memory usage [IEEE 610.12].
Portability A set of attributes that bear on the ability of the software to be transferred from one environment to another [ISO 9126]. Not defined in IEEE 610.12!
Project stakeholder Anyone who is a direct user, indirect user, manager of users, senior manager, operations staff member, support (help desk) staff member, tester, developer working on other systems that integrate or interact with the one under development, or maintenance professional potentially affected by the development and / or deployment of a software project [Ambler01].
Quality Def.1) 1) The degree to which a system, component or process meets specified requirements. 2) The degree to which a system, component or process meets customer or user needs or expectations [IEEE 610.12]. Def.2) The totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs [ISO 8402] (now being withdrawn). Comment: [ISO 9126] specifies six main “quality characteristics”: functionality, reliability, usability, efficiency, maintainability, and portability – with 21 subcharacteristics in total. In short, quality means a satisfied user or customer.
Quality of Service (QoS) Def.1) In telephony, QoS can simply be defined as “user satisfaction with the service” [ITU-T E.800]. Def.2) "A set of quality requirements on the collective behavior of one or more objects" [ITU standard X.902]. Comment: That is, the behavioral properties of a service must be acceptable (of high enough quality) for the user, which can be another system, an end-user, or a social organization. Such properties encompass technical aspects like dependability (i.e. trustworthiness), security, and timely performance (transfer rate, delay, jitter, and loss), as well as human-social aspects (from perceived multimedia reception to sales, billing, and service handling). NB: not defined in IEEE 610.12! See popular paper on QoS [Helvik03] where the more subjective term QoE (Quality of Experience) is introduced, and also [Zekro99]. But how to measure such a complex property?
Reliability Def.1) The characteristic of an item expressed by the probability that it will perform its required function in the specified manner over a given time period and under specified or assumed conditions. Reliability is not a guarantee of safety [Leveson95]. Def.2) Continuity of correct service [Avizienis04]. Def.3) The ability of a system or component to perform its required functions under stated conditions for a specified period of time [IEEE 610.12]. Def.4) A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time [ISO 9126]. Comment: Such attributes may be measured by Mean-Time-Between-Failures (MTBF) or by the probability of non-failure, 1 – Pr(failure).
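As a hedged illustration of the MTBF-based view in the comment above, assuming for simplicity an exponential failure model (a common textbook assumption, not something the cited definitions mandate):

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability of surviving t hours without failure, assuming an
    exponential failure model with the given MTBF."""
    return math.exp(-t_hours / mtbf_hours)

# With MTBF = 500 h, probability of running 100 h failure-free:
r = reliability(100.0, 500.0)
print(round(r, 3))        # 0.819
print(round(1.0 - r, 3))  # 0.181 = unreliability, i.e. Pr(failure)
```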
Requirement 1) A condition or capability needed by a user to solve a problem or achieve an objective. 2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents. 3) A documented representation of a condition or capability as in 1) or 2) [IEEE 610.12]. Comment: often used in plural form, and described in a document called a requirements specification. This has two parts: functional requirements and non-functional requirements. The latter are often called quality requirements and specify the desired ambition level for quality attributes like reliability and safety. When discussing whether a suspected fault, error or failure is genuine, i.e., formally incorrect, we must always relate to explicitly stated functional requirements – which, however, may be ambiguous, incomplete, or inconsistent.
Risk Def.1) A function of 1) the likelihood of a hazard occurring, 2) the likelihood of the hazard leading to an accident (including duration and exposure), and 3) the severity of consequences of the accident [Leveson95]. Def.2) A measure that combines both the likelihood that a system hazard will cause an accident and the severity of that accident [IEEE 1228]. Comment: How to measure a risk – by the most probable capitalized loss? But certain accidents cannot be quantified, just ask the insurance companies! NB: risk is not defined in IEEE 610.12.
- Operational risk Primarily a technical responsibility, focusing on requirements stability, design performance, code complexity, and test specifications. Operational risk deals with intermediate and final work product characteristics. Because software requirements are often perceived as flexible, software operational risk is difficult to manage [Hall98].
- Process risk Deals with management and technical work procedures. Management procedures cover activities such as planning, staffing, tracking, quality assurance, and configuration management. Technical procedures cover requirements analysis, design, coding, and testing [Hall98].
- Product risk Primarily a technical responsibility, focusing on requirements stability, design performance, code complexity, and test specifications. Product risk deals with intermediate and final work product characteristics. Because software requirements are often perceived as flexible, software product risk is difficult to manage [Hall98].
- Project risk Primarily a management responsibility, covering operational, organizational, and contractual software development parameters. Project risk includes resource constraints, external interfaces, supplier relationships, and contract restrictions [Hall98].
- Supply risk The potential occurrence of an incident associated with inbound supply from individual supplier failures or the supply market, in which the outcomes result in the inability of the purchasing firm to meet customer demand or cause threats to customer life and safety [Zsidisin03].
- Tolerable risk How willing we are to live with a risk to secure certain benefits, in the confidence that the risk is one worth taking and that it is being properly controlled. Risk which is accepted in a given context based on the current values of society [IEC 61508].
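The three factors in Def.1 of risk above can be sketched as a simple expected-loss computation. This is a toy model, not taken from [Leveson95] or any cited standard, and all numbers are made up; as the comment under risk notes, many accidents cannot meaningfully be quantified this way:

```python
def risk(p_hazard: float, p_accident_given_hazard: float,
         severity_cost: float) -> float:
    """Expected loss: likelihood of the hazard occurring, times the
    likelihood that the hazard leads to an accident, times the
    (capitalized) severity of that accident's consequences."""
    return p_hazard * p_accident_given_hazard * severity_cost

# Hazard arises with probability 0.01 per year, leads to an accident in
# 10% of cases, and the accident would cost 5,000,000 (arbitrary units):
print(round(risk(0.01, 0.10, 5_000_000), 2))  # 5000.0 expected loss/year
```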
Robustness The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions [IEEE 610.12].
Safety Def.1) Freedom from unacceptable risk of physical injury or of damage to the health of people, either directly or indirectly as a result of damage to property or to the environment [IEC 61508]. Def.2) Freedom from software hazards [IEEE 1228]. Comment: The higher the risk, the lower the confidence in safety. Safety is always evaluated against real events, not what some requirements may have specified. But on what scale is safety measured – e.g. by the maximum number of human deaths per 10 million airplane flights or by the expected number of fatalities per billion person-kilometers in a city metro? NB: safety is not defined in IEEE 610.12 or ISO 9126!
Safety function Def.1) A function to be implemented by an E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related system, other technology safety-related system or external risk reduction facilities, which is intended to achieve or maintain a safe state for the EUC (Equipment Under Control), in respect of a specific hazardous event (see 3.4.1) [point 3.5.1 in IEC 61508]. Comment: IEC 61508 does not specify how to meet the design requirements for safety functions, meaning that hazard elimination is not within the scope of IEC 61508 [taken from http://www.cs.york.ac.uk/hise/safety-critical-archive/1999/0115.html]. Def.2) A function that reduces the probability for a hazard to arise and lead to accident or incident. Can be done by hazard elimination (substitution, simplification, decoupling, elimination of specific human errors, reduction of hazardous situations), or by hazard reduction (design for control, barriers, failure minimization) or by hazard control and damage minimization (hazard detection (warnings and actions) that transfers the software into a safe state as soon as possible) [Own definition].
Security Def.1) The protection of computer hardware and software from accidental or malicious access, use, modification, destruction or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations [IEEE SESC]. Def.2) The concurrent existence of (a) availability for authorized users only, (b) confidentiality, and (c) integrity [Avizienis04]. Comment: How is security measured [Littlewood93]? NB: security is not defined in IEEE 610.12!
Service The service delivered by a system (in its role as a provider) is its behavior as it is perceived by its user(s); a user is another system that receives service from the provider. The part of the provider’s system boundary where service delivery takes place is the provider’s service interface. The part of the provider’s total state that is perceivable at the service interface is its external state; the remaining part is its internal state [Avizienis04]. Comment: a service is some piece of functionality offered to a human user by a service provider (computer tool or application) in a telecommunication network.
Software Computer programs, procedures and possibly associated documentation and data pertaining to the operation of a computer system [IEEE 610.12]. Comment: software does not wear out; so if it fails, the underlying fault must have been latent.
System An entity that interacts with other entities, i.e., other systems, including hardware, software, humans, and the physical world with its natural phenomena. These other systems are the environment of the given system. The system boundary is the common frontier between the system and its environment. Computing and communication systems are characterized by fundamental properties: functionality, performance, dependability and security, and cost. Other important system properties that affect dependability and security include usability, manageability, and adaptability [Avizienis04].
Unreliability The probability of failure [Leveson95]. Comment: equals 1 – reliability.
Usability The ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component [IEEE 610.12].
Note conflict between reliability and safety:
To maximize reliability, errors (i.e. executed faults) should be unable to disrupt the operation of a weapon (i.e. cause a failure); while for safety, errors should often lead to non-operation.
In other words, reliability requires multi-point failure modes, while safety may, in some cases, be enhanced by a single-point failure mode.
Distinguishing hazards from failures is implicit in understanding the difference between safety and reliability engineering [Miller85].
Some IEC 61508 definitions:
Safety integrity Probability of a safety-related system satisfactorily performing the required safety functions under all the stated conditions within a stated period of time.
Safety integrity level (SIL) Discrete level (one out of a possible four) for specifying the safety integrity requirements of the safety functions to be allocated to the E/E/PE safety-related systems, where SIL 4 has the highest level of safety integrity and SIL 1 the lowest.
Each hazard carries a risk.
Process of SIL derivation:
Hazard identification, analysis, and risk assessment.
Safety requirements: specification of functional and safety integrity requirements.
Allocation of safety requirements to safety functions.
Allocation of safety functions to safety-related systems.
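The derivation steps above end by allocating a SIL to each safety function. A hedged sketch of that final step: the PFDavg band boundaries below are the IEC 61508 low-demand targets (average probability of dangerous failure on demand), while the lookup code itself is merely illustrative:

```python
# IEC 61508 low-demand mode: target PFDavg bands per SIL.
# SIL 4 is the highest safety integrity level, SIL 1 the lowest.
SIL_BANDS = [
    (4, 1e-5, 1e-4),
    (3, 1e-4, 1e-3),
    (2, 1e-3, 1e-2),
    (1, 1e-2, 1e-1),
]

def required_sil(target_pfd: float) -> int:
    """Return the SIL whose PFDavg band contains the required target."""
    for sil, low, high in SIL_BANDS:
        if low <= target_pfd < high:
            return sil
    raise ValueError("target PFDavg outside the SIL 1-4 bands")

print(required_sil(5e-4))  # 3
print(required_sil(2e-2))  # 1
```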
Common abbreviations (most standardization bodies charge ca. 150 USD for a paper version):
ANSI American National Standards Institute, http://www.ansi.org
IEEE Institute of Electrical and Electronics Engineers, http://www.ieee.org
IEC International Electrotechnical Commission, http://www.iec.ch/
ISO International Organization for Standardization, http://www.iso.org
ITU International Telecommunication Union, http://www.itu.int (w/ free downloadable standards)
[Ambler01] S. W. Ambler, Agile Modeling: Effective Practices for Extreme Programming and the Unified Process, Chapter 1, ISBN: 0-471-20282-7, John Wiley & Sons, February 1, 2001.
[Avizienis01] Algirdas Avizienis, Jean-Claude Laprie, and Brian Randell, Fundamental Concepts of Dependability, Research Report No 1145, LAAS-CNRS, April 2001.
[Avizienis04] Algirdas Avizienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr, “Basic Concepts and Taxonomy of Dependable and Secure Computing”, IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, Jan.-March 2004.
[Hall98] Elaine Hall, Managing Risk: Methods for Software Systems Development, ISBN-10: 0-201-25592-8, Addison Wesley Longman, Inc., 1998.
[Helvik03] Peder J. Emstad, Bjarne E. Helvik, Svein J. Knapskog, Øivind Kure, Andrew Perkis and Peter Swensson, “A Brief Introduction to Quantitative QoS”, in Annual Report for 2003 from Q2S Centre of Excellence, NTNU, pp. 18-29, http://www.q2s.ntnu.no/AnnualReport2003.pdf.
[IEC 61508] “Functional safety and IEC 61508 – A basic guide”, IEC, Geneva, Switzerland, Nov. 2002, 11 p., http://www.iee.org/oncomms/pn/functionalsafety/HLD.pdf (searchable).
[IEEE 1228] “Standard for Software Safety Plans”, IEEE STD 1228-1994, 17 logical p. of 23 physical pages, http://ieeexplore.ieee.org/iel1/3257/9808/00467427.pdf?tp=&isnumber=9808&arnumber=467427 (.pdf, searchable)
[IEEE 610.12] “IEEE Standard Glossary of Software Engineering Terminology”, IEEE STD 610.12-1990, created in 1990 and reaffirmed in 2002, 84 p. http://standards.ieee.org/reading/ieee/std_public/description/se/610.12-1990_desc.html (header; must buy textual document, formally no legal .pdf file).
[IEEE SESC] “Master Plan – Vocabulary” (fairly exhaustive term list), IEEE Software and Systems Engineering Standards Committee (S2ESC), 1993, 5 p. http://standards.computer.org/sesc/s2esc_pols/SP-06_Vocabulary_Objectives.htm. General link: http://standards.computer.org/sesc/.
[ISO 8402] “Quality Management and Quality Assurance – Vocabulary: ASQ A8402”, ISO, 1994. This is withdrawn and will be merged into the revised ISO 9000:2000 series.
[ISO 9126] “Information Technology – Software Product Evaluation – Quality characteristics and guidelines for their use: ISO/IEC 9126”, 1991, one page mini-version. http://www.cse.dcu.ie/essiscope/sm2/9126ref.html. ISO 9126 is under revision, and will be overseen by the project SquaRE: ISO 25000:2005, which follows the same general concepts.
[ISO terms] “ISO Terms and Guidelines – Terminology” (on software), ISO, 1995, two page mini-version, firstname.lastname@example.org, http://www.issco.unige.ch/ewg95/node69.html.
[ITU-T E.800] “Telephone Network and ISDN, Quality of Service, Network Management and Traffic Engineering – Terms and Definitions Related to Quality of Service And Network Performance Including Dependability, ITU-T Recommendation E.800”, ITU, Geneva, Switzerland, August 1994, 54 p., http://www.itu.int/rec/T-REC-E.800-199408-I/en (.pdf , 237 KB), article no. E 5867; same as IEC’s EV191.
[ITU-T X.902] “Open Distributed Processing – Reference Model – Part 2: Foundations, ITU-T Recommendation X.902”, ITU, Geneva, Switzerland, 1995, 20 p. Same as ISO/IEC ISO 10746-2. http://citeseer.ist.psu.edu/cache/papers/cs/4022/ftp:zSzzSzftp.gte.comzSzpubzSzodpzSz1994zSzpart2isd.pdf/open-distributed-processing-reference.pdf.
[Jolly03] Adam Jolly, Managing business risk: a practical guide to protecting your business, ISBN-10: 0-749-44081-3, Publisher: Kogan Page, 2003.
[Leveson95] Nancy Leveson, Safeware – System safety and computers, ISBN: 0-201-11972-2, Addison-Wesley, 1995.
[Leveson07] Nancy Leveson, System Safety Engineering: Back To The Future (web version of updates to 1995 book), http://sunnyday.mit.edu/book2.pdf . – RC: p.t. not cited in this mini-glossary.
[Littlewood93] Bev Littlewood, Sarah Brocklehurst, Norman E. Fenton, P. Mellor, Stella Page, David Wright, J. Dobson, J. McDermid, and Dieter Gollmann, Towards Operational Measures of Computer Security, Journal of Computer Security 2(2-3):211-230, 1993.
[Miller85] C. O. Miller, “A comparison of military and civil approaches to aviation system safety”, Hazard Prevention, pp. 29-34, May/June 1985.
[Sommerville04] Ian Sommerville, Software Engineering, Addison-Wesley, 7th Ed., 2004, 784 pages, ISBN-13: 978-0-321-21026-5. Ch. 3 on Critical Systems, http://www.comp.lancs.ac.uk/computing/resources/IanS/SE7/Presentations/PDF/ch3.pdf.
[Zekro99] Zlatica Cekro, “Quality of Service – Overview of Concepts and Standards”, Free University of Brussels, April 1999 (report for COST 256), http://www.iihe.ac.be/internal-report/1999/COSTqos.doc.
[Zsidisin03] Georg A. Zsidisin, “Managerial perceptions of supply risk”, Journal of Supply Chain Management, 39(1):14-25, 2003.
This file: http://www.idi.ntnu.no/grupper/su/publ/ese/se-qual-glossary-v3_0-rc-4jun07.doc
Some extra definitions for Torgrim Lauritsen:
BUCS BUsiness Critical Software – an R&D project at NTNU in 2003-07 under the ICT-2010 program at the Research Council of Norway, led by Professor Tor Stålhane. See www.idi.ntnu.no/grupper/su/bucs.html.
COTS (Commercial Off-The-Shelf) software Ready-made software that can be acquired from a commercial vendor for a certain price and with certain usage conditions [Own definition].
NTNU Norwegian University of Science and Technology in Trondheim, Norway. See www.ntnu.no.
RUP The Rational Unified Process [Kruchten01] [Kroll03], an incremental development process around UML [Fowler04], see www.rational.com/products/rup/index.jsp.
XP eXtreme Programming, an agile development method with 12 sub-techniques, proposed by Kent Beck [Beck99], www.extremeprogramming.org/.
Auxiliary Bibliography (to be deleted later; only for Torgrim Lauritsen, email@example.com) – more uniform set-up of author names?
[Beck99] Kent Beck, Extreme programming explained. Embrace change, ISBN: 0201616416, Addison-Wesley Professional, 1999.
[Becker86] Becker, ++??
[Boehm88] Boehm, B. W., “A spiral model of software development and enhancement”, IEEE Computer, May 1988, pp. 61-72.
[Boehm04] Boehm, B. and Turner, R., “Balancing agility and discipline”, ISBN: 0321186125, Pearson Education, 2004.
[Boehm and Papaccio88] Boehm, B., Papaccio, P., “Understanding and controlling software costs”, IEEE Transactions on Software Engineering 14(10):1462-1476, 1988.
[Bowles00] Bowles, John, “Software Failure Modes and Effects Analysis for a Small Embedded Control System”, Proceedings of the Annual Reliability and Maintainability Symposium, January 2000, pp. ??.
[Braude01] Braude, Eric J., “Software Engineering – An object oriented perspective”, ISBN: 0471322083, John Wiley & Sons, Inc., 2001.
[Creswell94] Creswell, J., “Research design, qualitative and quantitative approaches”, Sage Publications, 1994.
[Denne03] Denne, M., Cleland-Huang, J., “Software by Numbers: Low-Risk, High-Return Development”, Chapter 1, ISBN: 0131407287, Prentice Hall PTR, October 10, 2003.
[Denzin94] Denzin, N., Lincoln, Y., “Handbook of qualitative research”, Sage publications, London, UK, 1994.
[DNV07] Det Norske Veritas Consulting, “IT Risk Management – Business Critical Software”, web announcement, Oslo, 2007. http://www.dnv.com/consulting/systemsandsoftware/buscriticalss/index.asp
[Fowler00] Fowler, M., Scott, K., “UML Distilled, second edition”, ISBN-10: 0-201-65783-X, Addison-Wesley, 2000.
[Fowler04] Martin Fowler, UML Distilled, third edition, ISBN-10: 0-321-19368-7, Addison-Wesley, 2004.
[Grady99] Grady, Robert B., “An economic release decision model: Insights into software project management”, Proceedings of Conference on the Applications of Software Measurement, pp. 227-239, Orange Park, FL, 1999 (cited by Software quality engineering company). ??what is this
[Hollnagel06] Hollnagel, E., Woods, D. D., Leveson, N., “Resilience Engineering: Concepts and Precepts”, Chapter 15, ISBN: 0754646416, Ashgate Publishing, April 30, 2006
[Kroll03] Per Kroll and Philippe Kruchten, The Rational Unified Process – Made Easy, Addison-Wesley, 2003.
[Kruchten00] Kruchten, P., “The Rational Unified Process: An Introduction”, Chapter 1, ISBN: 0201707101, Addison-Wesley Professional, 2 edition (March 14, 2000).
[Kruchten01] Philippe Kruchten, “Tutorial on RUP”, http://www-128.ibm.com/developerworks/rational/library/content/RationalEdge/jan01/WhatIstheRationalUnifiedProcessJan01.pdf, Rational Software, 2001.
[Leffingwell97] Leffingwell, Dean, “Calculating the return on investment from more effective requirements management”, American Programmer 10(4):13-16, 1997.
[McGraw and Harbison97] McGraw, K., and Harbison, K., “User-centered requirements: the scenario-based engineering process”, Mahwah, NJ: Lawrence Erlbaum Associates, 1997.
[Phillips00] Phillips, Estelle, and Pugh, Derek S., “How to get a PhD”, ISBN: 033520550X, Open University Press, 2000.
[Redmill99] Redmill, F., Chudleigh, M., Catmur, J., System Safety: Hazop and Software Hazop, ISBN: 978-0-471-98280-7, Wiley, 1999.
[Reuvid05] Reuvid, Jonathan, “Managing business risk: a practical guide to protecting your business”, 2nd edition, ISBN: 074944228X, Kogan Page, 2005.
[Seaman99] Seaman, Carolyn B., "Qualitative methods in empirical studies of software engineering", IEEE Transactions on Software Engineering, 25(4):557-572, July/Aug. 1999.
[Towhidnejad03] Towhidnejad, M., Wallace, D., Gallo, A. M., “Fault tree analysis for software design”, Proceedings of the 27th annual NASA Goddard/IEEE software engineering workshop, 2003, pp. ??.
[Wiegers03] Wiegers, K. E., “Software Requirements, Second Edition”, Chapter x and 6, ISBN: 0735618798, Microsoft Press, February 26, 2003.
[Wohlin02] Wohlin, C., et al., “Experimentation in software engineering”, ISBN: 0792386825, Kluwer Academic Publishers, 2002.