Module 4

Content

  1. Introduction
  2. Objective
  3. Risk Information Sheet After Identify
  4. Method and Tool
  5. Brainstorming
  6. Taxonomy-Based Risk Identification
  7. Goal/Question/Metric Paradigm GQM
  8. NASA Software Checklist
  9. Mil Std 338 Design Checklist - Partial
  10. Table of Additional Methods and Tools
  11. Summary
  12. References

Introduction

In module 4, the risk information sheet after the Identify step is presented. The example risk information sheet, taken from the case study, shows five fields: the risk statement, the context statement, a unique identifier (ID), the date the risk was identified, and who identified it. Methods and tools for identifying risks are then presented with examples, along with a table of additional methods and tools.

Objective

Risk Information Sheet After Identify

This is an example risk information sheet from the case study showing the risk statement and the context statement. The sheet is for risk ID 11, which will be carried through the remaining modules on the risk management paradigm.

Notice that the risk statement is well formed, with one condition and one or more consequences. The condition component focuses on what is currently causing concern: something that is true or widely perceived to be true. This component provides information that is useful when determining how to mitigate a risk. The consequence component focuses on the intermediate and long-term impact of the risk. Understanding the depth and breadth of that impact helps determine how much time, how many resources, and how much effort should be allocated to the mitigation effort. The context statement captures the what, when, where, how, and why of the risk by describing the circumstances, contributing factors, and related issues (background and additional information not in the risk statement). The textual comments may include information on personnel, technical, or management issues, communication, or other pertinent aspects of the project.
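The fields of a risk information sheet can be sketched as a simple record. This is a minimal illustration: the class name, field names, and sample values are assumptions for the sketch, not the actual case-study content of risk ID 11.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskInformationSheet:
    """Fields captured during the Identify step (names are illustrative)."""
    risk_id: str         # unique identifier
    identified_on: date  # date the risk was identified
    identified_by: str   # who identified the risk
    condition: str       # what is currently causing concern
    consequence: str     # intermediate and long-term impact
    context: str         # circumstances, contributing factors, related issues

    @property
    def risk_statement(self) -> str:
        # A well-formed risk statement: one condition, one or more consequences.
        return f"{self.condition}; {self.consequence}"

# Hypothetical sample values, not the real content of risk ID 11.
sheet = RiskInformationSheet(
    risk_id="ID11",
    identified_on=date(2006, 1, 1),
    identified_by="Project engineer",
    condition="Response-time requirements have not been validated",
    consequence="the system may fail performance acceptance testing",
    context="No performance analysis has been done for the current design.",
)
print(sheet.risk_statement)
```

Separating the condition and consequence fields, rather than storing one free-text statement, keeps the statement well formed by construction.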

Method and Tool

In module 3 you spent time looking for the information to put in a risk statement and context statement. In this module you will look at the methods and tools used to identify risks. Not all of the methods and tools will be explained; a table at the end of the module presents the most widely used set. The following tools will be discussed:

  1. Brainstorming
  2. Taxonomy-Based Risk Identification
  3. Goal/Question/Metric Paradigm (GQM)
  4. NASA Software Checklist
  5. Mil Std 338 Design Checklist

Brainstorming

Brainstorming is a group process in which participants quickly generate ideas about a particular problem. Participants verbally identify ideas as they think of them, giving others the opportunity to build on or springboard from each other's ideas. The technique is an excellent way of drawing out a group's creative thinking. The emphasis is on the quantity of ideas, not the quality; criticism or evaluation of ideas is not performed at this time.

The brainstorming method can be used directly in the Identify step: the ideas the group generates are captured as candidate risk statements and context statements.

Taxonomy-Based Risk Identification

What is meant by taxonomy?

Webster’s dictionary:

  1. The study of the general principles of scientific classification.
  2. The orderly classification of plants and animals according to their presumed natural relationships.

The definition provided by the IEEE Software Engineering Standards Collection, Spring 1991 Edition is:

“A scheme that partitions a body of knowledge and defines the relationships among the pieces. It is used for classifying and understanding the body of knowledge.”

The Taxonomy-Based Questionnaire (TBQ) was developed by the Software Engineering Institute (SEI) and first documented in the technical report Taxonomy-Based Risk Identification [Carr 93]. The TBQ is a set of questions organized according to a taxonomy, or division into ordered groups or categories, of project development, used to identify risks by interviewing a group of one or more individuals (TBQ interviews).

For this course, use the following definition of the Taxonomy-Based Questionnaire (TBQ): a questionnaire organized according to the taxonomy of system and software development for the purpose of identifying risks by interviewing a group of one or more individuals.

The taxonomy partitions either a system or software development into components. The Figure SEI Taxonomy Structure is an example of how to partition these relationships.


Figure SEI Taxonomy Structure

The software development risk taxonomy has three classes:

  1. Product engineering: what you are trying to build
  2. Development environment: how you are building it
  3. Program constraints: what you can’t control

Under each of the three classes is a series of elements associated with that class; for example, the Product Engineering class contains elements such as Requirements and Design. Under each element are its associated attributes; for example, the Design element includes a Performance attribute, used in the sample questions that follow. This tree thus represents the orderly classification of system or software development according to its presumed natural relationships.
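The class/element/attribute partitioning can be sketched as a small tree. Only the one branch used in the sample questions (Product Engineering, Design, Performance) is filled in here; the rest of the full SEI taxonomy from [Carr 93] is deliberately left empty.

```python
# Sketch of the taxonomy tree: classes -> elements -> attributes.
# Only one branch is populated; the full taxonomy is elided.
taxonomy = {
    "A. Product Engineering": {        # what you are trying to build
        "2. Design": ["d. Performance"],
    },
    "B. Development Environment": {},  # how you are building it
    "C. Program Constraints": {},      # what you can't control
}

def walk(tree, depth=0):
    """Print the orderly classification as an indented outline."""
    for name, children in tree.items():
        print("  " * depth + name)
        if isinstance(children, dict):
            walk(children, depth + 1)
        else:
            for leaf in children:
                print("  " * (depth + 1) + leaf)

walk(taxonomy)
```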

Example TBQ Questions

This next table is a sample of some TBQ questions. Questions are listed under the attribute. For each attribute there is at least one question.


Class: A. Product Engineering
Element: 2. Design
Attribute: d. Performance (Are there stringent response time or throughput requirements?)

Starter
[22] Are there any problems with performance?

Cues

  • Throughput
  • Scheduling asynchronous real-time events
  • Real-time responses
  • Recovery timeliness
  • Response time
  • Database response, contention, or access

Starter
[23] Has a performance analysis been done?

Follow-up
(Yes) [23.a] What is your confidence in the performance analysis?
(Yes) [23.b] Do you have a model to track performance through design and implementation?

There are three types of questions:

  1. Question with cues: (Look at question [22] in the table.) The intent is to ask the question and if the participants have trouble answering then you’d provide some cues to see if they trigger any thoughts.
  2. Questions with follow-up question(s): (Look at question [23] in the table.) The intent is to ask the question and based on the response (yes or no) you ask the appropriate follow-up questions.
  3. Question only: These questions have no cues or explicit follow-up questions.

The questions are meant to stimulate conversation about possible risks and are not necessarily worded to indicate whether a risk exists or not. Project personnel will ultimately decide whether or not a risk statement should be captured.

For example: Are there any problems with performance? An answer of "yes, we have throughput problems" does not automatically mean there is a risk. Natural follow-up questions might be:

  1. What type of throughput problems?
  2. Are they a cause for concern?
  3. What is the cause or condition?
  4. What are the possible effects or consequences?
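The three question types above can be sketched as data, assuming a hypothetical TBQQuestion record; the cue and follow-up text is abridged from the sample table earlier in this module.

```python
from dataclasses import dataclass, field

@dataclass
class TBQQuestion:
    """One TBQ question; empty cues/follow_ups means a question-only entry."""
    number: str
    text: str
    cues: list = field(default_factory=list)        # offered only if the group is stuck
    follow_ups: dict = field(default_factory=dict)  # keyed by the answer ("yes"/"no")

# Question with cues (see [22] in the table).
q22 = TBQQuestion(
    number="22",
    text="Are there any problems with performance?",
    cues=["Throughput", "Real-time responses", "Response time"],
)

# Question with follow-up questions (see [23] in the table).
q23 = TBQQuestion(
    number="23",
    text="Has a performance analysis been done?",
    follow_ups={"yes": [
        "23.a What is your confidence in the performance analysis?",
        "23.b Do you have a model to track performance?",
    ]},
)

def next_prompts(question, answer=None):
    """What the interviewer says next: the question itself, or its follow-ups."""
    if answer is None:
        return [question.text]
    return question.follow_ups.get(answer, [])
```

Modeling cues and follow-ups as optional fields lets a single record type cover all three question types.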

Goal/Question/Metric Paradigm GQM

Goal/Question/Metric (GQM) is a paradigm for stating goals and deriving from them the questions that need to be answered. These questions provide a specification for the data needed to identify and track risks. The paradigm allows a project to develop or extend a metrics program to address its specific risks. GQM has the following three steps:

  1. Generate a set of goals based upon the needs of the organization.
  2. Derive a set of questions.
  3. Develop a set of metrics, which provide the information needed to answer the questions.

The first step is to develop goals for the project and the management of risks. What is it that concerns the project? This is where the risks are initially identified.

The second step is to derive a set of questions that, if answered, would provide the status of the goals. The questions help quantify the goals and may give further details on the risks.

The third step is to develop a set of metrics that will answer the questions. Metrics can be used for both process and product related questions. Metrics will be discussed in the module on tracking risks.

GQM can be used as an iterative process. Initial goals tend to be rather generic. As risks are identified, goals/questions home in on specific areas of risk.
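The three GQM steps can be sketched as a goal-to-questions-to-metrics structure. The goal, questions, and metrics shown are illustrative assumptions, not content from the course case study.

```python
# Sketch of one GQM derivation: a goal spawns questions, and each
# question spawns the metrics that would answer it.
gqm = {
    "goal": "Meet the system's response-time requirements",  # step 1
    "questions": [                                           # step 2
        {
            "question": "Is measured response time within the requirement?",
            "metrics": ["measured response time (ms)",       # step 3
                        "required response time (ms)"],
        },
        {
            "question": "Is performance tracked through design and implementation?",
            "metrics": ["performance-model prediction per milestone"],
        },
    ],
}

# The metrics under all questions form the specification of the data
# needed to identify and track this risk.
data_needed = [m for q in gqm["questions"] for m in q["metrics"]]
print(data_needed)
```

Iterating the process would refine the generic goal above into goals targeting specific areas of risk.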

NASA Software Checklist

The NASA software checklist is organized by project development phase, with emphasis on the software portion of the overall project lifecycle. NASA's checklist can be used in the same manner as the TBQ to identify risks: in interviews, as a checklist, as a periodic review list, or in conjunction with the GQM paradigm. The following table lists examples of generic risks that should be considered whenever a project contains software.

System Requirements Phase

RISK (answer: Yes/No/Partial; ACTION: Accept or Work)

  • Are system-level requirements documented? To what level?
  • Are they clear, unambiguous, verifiable?
  • Is there a project-wide method for dealing with future requirements changes?
  • Have software requirements been clearly delineated/allocated?
  • Have these system-level software requirements been reviewed and inspected with system engineers, hardware engineers, and the users to ensure clarity and completeness?
  • Have firmware and software been differentiated: who is in charge of what, and is there good coordination if the hardware group is doing "firmware"?
  • Are the effects on command latency and its ramifications on controllability known?
  • Is an impact analysis conducted for all changes to baseline requirements?

The table also contains practical questions that were gathered by experienced NASA engineers. The checklist is laid out with generic risks listed, followed by a column to indicate if it applies to the specific project. If it is partially a problem as stated, further clarification should be added. The last column is to indicate if the risk should be accepted or needs to be worked, i.e., the risk needs to be researched or mitigated.
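Working through such a checklist can be sketched as follows. The answers and actions are hypothetical; the point is that rows marked "work" are the candidates to research, mitigate, or capture as risk statements.

```python
# Each checklist row: (generic risk question, Yes/No/Partial, Accept/Work).
# Answers and actions below are made-up examples for one project.
checklist = [
    ("Are system-level requirements documented?", "partial", "work"),
    ("Are they clear, unambiguous, verifiable?", "yes", "accept"),
    ("Is an impact analysis conducted for all changes?", "no", "work"),
]

# Rows marked "work" need to be researched or mitigated; these are the
# items that may become risk statements on risk information sheets.
to_work = [item for item, answer, action in checklist if action == "work"]
for item in to_work:
    print("Needs work:", item)
```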

Mil Std 338 Design Checklist - Partial

Mil Std 338 is used in a fashion similar to the taxonomy and the software checklist, but contains questions on hardware design and reliability. The following is only a partial list from Mil Std Handbook 338, identifying typical questions for a design review.

  1. Is the design simple?
  2. Does it use the minimum number of parts?
  3. Are there adequate indicators to verify critical functions?
  4. Are reliability requirements established for critical items?
  5. Are standard high-reliability parts being used?
  6. Have parts been selected to meet reliability requirements?
  7. Are circuit safety margins ample?
  8. Has provision been made for the use of electronic failure prediction techniques, including marginal testing?
  9. Have normal modes of failure and the magnitude of each mode for each item or critical part been identified?
  10. Has redundancy been provided where needed to meet specified reliability?
  11. Does the design account for early failure, useful life, and wear-out?

Table of Additional Methods and Tools

The Table Identify Methods and Tools summarizes additional methods and tools for capturing statements of risk.

Table Identify Methods and Tools

Methods and Tools: Description

Brainstorming: Project personnel verbally identify risks as they think of them, thus providing the opportunity for participants to build on each other’s ideas.

Event Tree: A logic diagram that starts with a single event and explores all possible combinations of success and failure events leading to a (favorable or unfavorable) outcome.

Fault Tree Analysis: A fault tree is a model that logically and graphically represents the various combinations of possible events, both faulty and normal, occurring in a system that lead to the top undesired event.

Failure Modes & Effects Analysis (FMEA): A tool for analyzing as much information as possible on a given hardware item or system to identify all possible failure modes and assess the consequences or effects of each failure.

Goal/Question/Metric Paradigm (GQM): A paradigm for formalizing the characterization, planning, construction, analysis, learning, and feedback of tasks.

Hazard Analysis: A tabular inventory of nontrivial system hazards and a qualitative assessment of them after countermeasures have been imposed.

Mil Std 338 Design Checklist: Contains questions on hardware design and reliability; used in a similar fashion as the TBQ.

NASA Software Checklist: Organized by development phases of a project, with emphasis on the software portion of the overall project lifecycle; used in the same manner as the TBQ to identify risks.

Periodic Risk Reporting: Periodic (mandatory and scheduled) reporting of risks by project personnel.

Project Profile Questions: A description of how to tailor the taxonomy-based questionnaire based on project characteristics.

Risk Form: A form used to document new risks as they are identified.

Risk Information Sheet: A means of documenting information about a risk, much as a software trouble or problem report documents a problem in software. Information is added to the sheet as it is acquired or developed.

Short Taxonomy-Based Questionnaire (TBQ): A shortened version of the TBQ used in meetings, one-on-one interviews, and as a memory-jogger adjunct to voluntary or periodic risk reporting.

Taxonomy-Based Questionnaire: A listing of interview questions organized according to the software development risk taxonomy [Carr 93].

TBQ Interview: Structured peer-group interviews and structured interviews of individuals using the TBQ [Carr 93].

Voluntary Risk Reporting: Routine distribution and processing of risk forms, voluntarily submitted by project personnel as risks are identified.

Summary

There are many methods and approaches to identifying risk that use the knowledge and uncertainty of the project personnel as input. Regardless of the approach, it is important to capture, for each risk, a risk statement and its associated context.

Looking at a list of all the risks provides a global picture of the project's risks that may not be apparent when looking at each risk individually. (Note: this does not mean you necessarily have two different artifacts; both could be generated from a single set of data in a database.)
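The note above can be sketched concretely: one set of records can serve both the individual risk sheets and the global risk list. The record fields, sample data, and helper names here are illustrative assumptions.

```python
# One set of records standing in for the risk database (illustrative data).
risks = [
    {"id": "ID11", "statement": "Performance may not meet requirements"},
    {"id": "ID12", "statement": "Requirements changes lack impact analysis"},
]

def risk_list(records):
    """The global picture: one summary line per risk."""
    return [f"{r['id']}: {r['statement']}" for r in records]

def risk_sheet(records, risk_id):
    """The individual view: the full record for one risk."""
    return next(r for r in records if r["id"] == risk_id)

print("\n".join(risk_list(risks)))
print(risk_sheet(risks, "ID11"))
```

Both views are derived on demand, so neither artifact can fall out of sync with the other.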

References

  1. [Carr 93] Carr, Marvin; Konda, Suresh; Monarch, Ira; Ulrich, Carol; & Walker, Clay. Taxonomy-Based Risk Identification (CMU/SEI-93-TR-6, ADA266992). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1993. (Note: this reference is included in the guidebook reference list.)
  2. [Gluch 94a] Gluch, David P. A Construct for Describing Software Development Risk (CMU/SEI-94-TR-14, ADA284922). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1994.
  3. Risk Management Plan Rationale
  4. Risk Management Database
  5. The Software Engineering Information Repository (SEIR) is free, but you must be a registered member to access the information.

January 1, 2006 James C. Helm, PhD., P.E.