Track


Content

  1. Track Objectives
  2. Track Element
  3. Risk Information Sheet Before Track
  4. What Is Tracking?
  5. Tracking Definition
  6. Acquire
  7. Compile
  8. Report
  9. Summary
  10. References

Track Objectives

At the conclusion of Track, the student will be able to:

Track Element

Let's go back and visualize Roger L. Van Scoy's paradigm [Van Scoy 92, p. 9] and what he said about the Track element. To refresh your memory, Van Scoy said: "Risk tracking is required to ensure effective action plan implementation. This means that we must devise the risk metrics and triggering events needed to ensure that the planned risk actions are working. Tracking is the watchdog function of the risk action plan." The paradigm illustrates a set of functions that are identified as continuous activities throughout the life cycle of a project. The Track element is highlighted in Figure Track Risk Management Paradigm.
Track Risk Management Paradigm image

Figure Track Risk Management Paradigm

The risk management team will use the risk information sheet to indicate what information will be gathered during the track phase of the paradigm. The shaded areas in Figure Risk Information Sheet Before Track indicate the fields that the risk team members will complete and/or update during track:

Risk Information Sheet Before Track

ID: 11    Risk Information Sheet (Before Track)    Identified: 9/1/02

Priority: 7    Probability: M    Impact: M

Statement: It has recently been decided that the Infrared sensors will be developed in-house, and how they will communicate and how sensor data will be processed will be based on assumptions until the detailed design is baselined; the accuracy and completeness of those assumptions will determine the magnitude of change in the IR-SIP Instrument Controller CI and Infrared Sensing Unit CI interface requirements - it could be minor or catastrophic.

Timeframe: N    Originator:    Class: Requirements    Assigned To:

Context: The AA program is in the Systems Preliminary Design Phase and the IR-SIP project software is in the Software Specification Phase.
  • This is the first time these sensors will be used on a NASA mission. They will still be under design and definition during the IR-SIP Controller's software specification through implementation phases. Therefore, assumptions about the interface will have to be made in implementing the IR-SIP CSCI, and if those assumptions are incorrect, then software rewrites will be necessary. We do have access to a reasonable set of assumptions and information from a contractor who has developed very similar sensors, but again, we don't really feel 100% confident in those assumptions.
  • Problems were not anticipated in the current success-oriented schedule so there is no slack time if the impact of the changes is major. Schedule slips, cost overruns, and reduction in adequate testing time are all possible if the assumptions prove false.
  • System testing does not begin until very late in the development, so if problems are encountered there is usually no time to make changes in the hardware. Therefore, software must provide work-arounds for problems encountered.
Approach: Research / Accept / Watch / Mitigate

Mitigation goal/success measures:
Reduce the probability and impact of incorrect interface assumptions to a minimum: estimated low probability and low impact. Ideally, completion of prototype tests will show that the assumptions we got from EasySensor were correct and there is no impact at all.
  1. Build prototypes of the IR-SIP CSCI software primitives needed to control the interface with the Infrared Sensing Unit early in the software requirements phase.
    • Start by 1/10/02. Prototype should contain all the functionality defined by that date for the configuration of the Infrared Sensing Unit. Complete by 1/30/02.
  2. Have early interface tests with the Infrared Sensor Unit to confirm functionality and control issues. Allocate enough time for software work-arounds to be developed if problems arise.
    • Test of the interface between the two subsystems will be completed by 2/3/02.
    • Second prototype to command the transmission of sensor data from the Unit to the IR-SIP CSCI will be started by 2/12/02 and completed by 2/20/02.
    • All subsequent interface tests will be performed by 2/28/02.
  3. Feed information from the two prototype tests into updates to the Interface Requirements Specification and the associated sections of the schedule by 3/2/02.
  4. Determine the impact of the revised requirements by 3/6/02.
Contingency Plan and Trigger
  • Trigger: If the 2/12/02 or 2/28/02 dates cannot be met, put the contingency plan in place.
  • Contingency Plan: Elevate this as one of the top 10 project risks and request that project reserves be used to pay for additional contract support to get the two sets of requirements firmed up (i.e., configuration and data transfer). If additional contract resources are not available, slip the schedule for completion of the prototypes to March 20, and request that project reserves be used to pay for additional resources to be added to software design and implementation to make up the schedule slip.
Status                                        Status Date
Approval: __________________________

Closing Date: __/__/__

Closing Rationale:

Figure Risk Information Sheet Before Track

What Is Tracking?

Tracking is a process in which risk data are acquired, compiled, and reported by the person(s) responsible for tracking watched and mitigated risks. Project personnel define the data required in status reports during the Plan function of the paradigm. During tracking, the data are collected and the results are compiled and presented in the reports. The generated document or presentation is input to the Control function, which is described in the next chapter. The objective of the Track function is to collect accurate, timely, and relevant risk information and to present it in a clear and easily understood manner appropriate to the person or group who receives the status report. Project personnel use the status reports generated during tracking in the Control element to make decisions on how to manage risks. The Figure Track Function shows the inputs and outputs of the Track function.

track figure
Figure Track Function

The Table Track Data Items describes the data items the risk management team will use in the Track function.

Table Track Data Items

Data Item: Risk statement, context, impact, probability, timeframe, classification, rank, plan approach

Description: Prior to tracking, the risk information for each risk comprises the statement of risk, supporting context, impact, probability, timeframe, class, rank, and plan approach. This could be for all of the risks or for a small subset of risks targeted for risk tracking.

Data Item: Action plans

Description: Action plans describe what action will be taken to deal with the risk. Mitigation plans and tracking requirements for watched risks identify the measures, indicators, and triggers to track both the status of the risks and the mitigation progress.

Data Item: Risk and mitigation plan measures

Description: These consist of the current values for all watched-risk and mitigation-plan measures and indicators. These data can be used to determine the current status of the risk action plan and can be compiled and presented as part of a report.

Data Item: Resources

Description: These are the available resources for mitigation. In order to develop effective status reports, project personnel need to know the limits of the available resources to mitigate and watch risks.

Data Item: Project data

Description: Project information, such as schedule and budget variances, critical path changes, and project/performance indicators, can be used as triggers, thresholds, and risk- or plan-specific measures where appropriate. These data can be used to determine the current status of the project plan as it relates to risk management and can be compiled and presented as part of a status report.

Data Item: Status reports, risk plans, mitigation plans

Description: The output of tracking is a variety of status reports highlighting the current values of the risk indicators and the statuses of action plans. These reports can be verbal or written, covering the status of both individual risks and aggregated risk areas as appropriate.

Data Item: Risk statement, context, impact, probability, timeframe, classification, rank, plan approach, status

Description: In addition to the delivery of status reports, tracking updates the information associated with each risk to include the current status data for the risk (e.g., measure, indicator, and trigger values).

Coordination of Tracking and Control

Risk tracking and control should be closely coordinated, because decisions that are made about risks and action plans during control require the data that are collected during tracking. Example: The decision of whether to continue tracking a risk or to close it is made by project personnel during control, based on the data acquired during tracking.

Tracking and Control vs. Project Management

Risk tracking and control are closely related to standard project management monitoring techniques in which project data, such as schedule and cost data, are tracked. Project decisions are then based on the tracked data. When appropriate, the data used for risk management can be integrated and coordinated with existing project management activities for a project or organization. Standard project management techniques that are already being used on a project can also be employed to monitor the risk management processes (e.g., the number of risks opened and closed, changes to the risk management plan, etc.).

Sets of Related Risks

During risk identification and analysis, risks that are related can be grouped together for easier management; they can also be tracked as a set. If an overall plan has been developed for the set, then the set's mitigation plan is tracked, and risk and plan status data are reported as an aggregate. However, any individually critical risks can also be tracked separately from the set.

Approaches

There are not many tools specifically designed for tracking risks. Rather, there are approaches for tracking risks that utilize existing, general methods and tools. The Table Track Approach summarizes the approaches used to support each of the tracking activities. More details on the approaches can be found in subsequent sections of this chapter and in the appendix chapters.

Table Track Approach

Activity: Acquire

Approach:
  • Re-evaluate risk attributes (e.g., binary or tri-level attributes).
  • Interview knowledgeable project personnel.
  • Review technical documentation and engineering summary reports (e.g., PERT charts, schedules, budgets, requirements traces, etc.).
  • Review status reports or meeting minutes.
  • Collect data from project products using automation.

Method or Tool: Binary attribute evaluation, Tri-level attribute evaluation
Activity: Compile

Approach: Data are analyzed and compiled into status reports according to the project's reporting requirements. This is the step where trends are examined. Reporting approaches supported by the Compile activity may include any of the following:
  • Mitigation plan status summaries
  • Risk status summaries
  • Trend summaries

Method or Tool: Bar graph, Mitigation status report, Risk information sheet, Spreadsheet risk tracking, Stoplight chart, Time correlation chart, Time graph

Activity: Report

Approach:
  • Deliver verbal reports.
  • Deliver written reports.
  • Give formal presentations.
Note: Any of the above reports can show status for individual risks, aggregated areas of risks, trends, or a mixture.

Method or Tool: Mitigation status report, Risk information sheet, Spreadsheet risk tracking, Stoplight chart

Tracking Definition

This section defines terms and types of tracking data used in both the Track and Control chapters. A software metric defines a standard way of measuring some attribute of the software development process [Grady 87]. Likewise, a risk metric defines a standard way of measuring some attribute of the risk management process.

Risk Metrics Associated with The Track Element

Program and risk metrics provide the risk manager and risk team members with the information needed to make effective decisions. A program manager uses program metrics to assess the cost and schedule of a program as well as the performance and quality of a product. They can also be used to help Identify and Track risks. We have used metrics to:

During the Track phase, a program metric should be used to look at the rate of hardware or software module completion against the schedule. If this metric indicates that the rate of completion is lower than expected, then a schedule risk can be identified and a trigger raised. Risk metrics are also used to measure a risk's attributes and assess the progress of a mitigation or task plan.

We can also use the Figure Metrics by Life Cycle Phases to identify where to collect metrics during the life cycle phases. The darker borders in the figure are the phases where metrics can be used to track risks.


Figure Metrics by Life Cycle Phases

Metrics can be used to identify new risks within each phase of the life cycle as well as across all phases of project development, which includes maintenance and sustaining engineering. Program metrics collected during the development process require tracking of the schedule and the budget; the schedule covers both personnel and funding. The rate of component completion is another program metric that must be tracked. During the requirements phase, the text of the requirements document, the requirements changes, and the requirements-to-test mappings are examples of program metrics to be collected. The requirements-to-test mappings trace the requirements to the test cases that the testing personnel use to verify and validate that the requirements are met. During the testing phase, metrics are collected on when and where errors are detected; this includes unit testing, integration testing, subsystem testing, and installation testing. During maintenance, metrics are collected to track which modules are being changed and the change schedule.

It is important to evaluate the requirements as early as possible in the development process, since requirement problems not identified until testing can cost 100 times more to fix. The Case Study recognized requirement risks stating that the initial system requirements were insufficient and that unknowns would not be firmed up until late in the project. This is Risk ID 7, which stated that the science requirements are expected to have substantial TBDs (to be determined).

The following are examples of metrics to observe by life-cycle phase and to collect from the project documentation:

Tools are available to automatically collect metrics directly from documents. The Automated Requirements Measurement (ARM) tool automatically derives metrics from the text of a requirements document. The tool is available at no cost through the tools section of the SATC homepage: http://satc.gsfc.nasa.gov/tools/arm

During the requirements phase, the use of weak phrases (e.g., adequate, appropriate, etc.) and options (e.g., can, may, etc.) leads to requirements ambiguity. The completeness of requirements can be measured by counting the number of TBDs (to be determined), TBAs (to be added), and TBRs (to be resolved) contained in the requirements document.
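The weak-phrase and incompleteness counts described above can be computed with a simple text scan. Below is a minimal sketch in Python; the phrase lists and sample text are illustrative assumptions, and this is not the ARM tool itself.

```python
# Hypothetical sketch: count weak phrases, option words, and incompleteness
# markers (TBD/TBA/TBR) in requirements text. Phrase lists are assumptions.
import re
from collections import Counter

WEAK_PHRASES = ["adequate", "appropriate", "normal"]
OPTION_WORDS = ["can", "may", "optionally"]
INCOMPLETE_MARKERS = ["TBD", "TBA", "TBR"]

def requirements_text_metrics(text: str) -> Counter:
    counts = Counter()
    lowered = text.lower()
    for phrase in WEAK_PHRASES:
        counts[f"weak:{phrase}"] = len(re.findall(r"\b" + re.escape(phrase) + r"\b", lowered))
    for word in OPTION_WORDS:
        counts[f"option:{word}"] = len(re.findall(r"\b" + re.escape(word) + r"\b", lowered))
    for marker in INCOMPLETE_MARKERS:
        counts[f"incomplete:{marker}"] = len(re.findall(r"\b" + marker + r"\b", text))
    return counts

if __name__ == "__main__":
    sample = ("The controller may provide adequate buffering. "
              "Frame rate is TBD. Safety limits are TBR.")
    for name, value in requirements_text_metrics(sample).items():
        if value:
            print(f"{name}: {value}")
```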

Traceability is a term used to indicate the number of items that can be traced between the requirement specifications and the detailed design document. Tracing down ensures that all of the requirements in the specification are documented in the detailed design document. Tracing up ensures that all requirements that are in the detailed design document can be traced to requirements in the requirement specifications (i.e., no additional requirements were added in the detailed design document). The requirement specifications also should trace to the test documents and the verification and validation documents. Theoretically the software engineer should never leave the requirements phase and start the design phase with TBDs remaining.
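The up/down tracing just described reduces to a set comparison between requirement IDs in the specification and those referenced in the detailed design. The following is a minimal sketch; the function name and the IDs are illustrative assumptions.

```python
# Hypothetical sketch of the up/down tracing check: spec requirement IDs
# versus requirement IDs referenced in the detailed design.
def trace_requirements(spec_ids: set[str], design_ids: set[str]) -> dict:
    return {
        # Tracing down: spec requirements that never appear in the design
        "untraced_down": sorted(spec_ids - design_ids),
        # Tracing up: design items with no parent requirement in the spec
        "untraced_up": sorted(design_ids - spec_ids),
    }

if __name__ == "__main__":
    spec = {"REQ-001", "REQ-002", "REQ-003"}
    design = {"REQ-001", "REQ-003", "REQ-104"}   # REQ-104 was added in design only
    result = trace_requirements(spec, design)
    print("Missing from design:", result["untraced_down"])  # ['REQ-002']
    print("Added in design only:", result["untraced_up"])   # ['REQ-104']
```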


Figure Requirements Metrics Example - Text Analysis

Figure Requirements Metrics Example - Text Analysis is the result of a textual analysis of 56 software requirement documents. ARM has also been used to evaluate other project documents, such as design, code, and test documents. It can be used to identify requirements that may not be testable due to weak phrases and options, such as the words "normal" and "fast."

During Track, other metrics that could be collected relate to software or hardware testing. The risk statement in Risk ID 100 (Project resources and schedules were underestimated; schedule slips, cost overruns, and inadequate testing time) will be used as an example from which to obtain tracking indicators.

The test engineers must collect and track, at a minimum, the following two pieces of information about errors (also called bugs): the open/closed rate of errors and the number of errors per module of code. These trace back to schedule risks and to code reliability/maintainability.

Open/closed rate of errors

The rate at which errors (bugs) are found and resolved (fixed) is important to know for scheduling purposes. The rate at which errors are found should decrease as testing nears its conclusion. A continual rate of closure is necessary to ensure that the errors are being resolved in a timely fashion. This concept relates to risks concerned with testing resources and schedule, such as ID 6, ID 21, and ID 100.
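A minimal sketch of tracking the open/closed rate follows, assuming an illustrative weekly log of errors opened and closed; the data and the lagging-closure check are assumptions, not project values.

```python
# Sketch: weekly open/closed error counts and a simple closure-lag check.
weekly_errors = [
    # (week, opened, closed) -- assumed sample data
    (1, 12, 4),
    (2, 15, 9),
    (3, 10, 7),
    (4, 11, 6),
]

open_total = sum(o for _, o, _ in weekly_errors)
closed_total = sum(c for _, _, c in weekly_errors)
backlog = open_total - closed_total

for week, opened, closed in weekly_errors:
    flag = "WARN: closures lagging" if closed < opened else "ok"
    print(f"week {week}: opened={opened} closed={closed} ({flag})")

print(f"open backlog going into the final test months: {backlog}")
```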

Errors per module of code

It is important to correlate all errors to the modules that were changed. An excessive number of errors for a module indicates that there may have been multiple changes to the original structure and that the module's integrity may be compromised. In the Case Study, since requirement completeness is a risk, a high number of changes in a specific module may indicate an area of requirement instability. The changes made to a module should be indicated by comments in the code. This concept relates to risks such as #101, C++ and OOD usage.
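Errors per module can be tallied the same way. The sketch below assumes an illustrative error log and an assumed threshold for flagging a possibly unstable module.

```python
# Sketch: count errors per module and flag modules with excessive counts.
from collections import Counter

error_log = ["ir_driver", "ir_driver", "telemetry", "ir_driver",
             "display", "ir_driver", "telemetry"]     # assumed sample data
ERRORS_PER_MODULE_THRESHOLD = 3                        # assumed trigger value

per_module = Counter(error_log)
for module, count in per_module.most_common():
    note = " <-- possible requirement instability" if count > ERRORS_PER_MODULE_THRESHOLD else ""
    print(f"{module}: {count} errors{note}")
```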


Figure Testing Metrics Example - Tracking Errors/Faults/Changes

In the Figure Testing Metrics Example - Tracking Errors/Faults/Changes, the graph on the left shows that the rate of closure is inconsistent over time. With testing ending in three months, the number of errors being found should decrease and the closure rate should exceed the open rate. The graph on the right shows the number of changes per module.

A process metrics example can be obtained from Risk ID 6: Project software schedule and resources were underestimated; schedule slips and a reduction in adequate testing time are likely. The data the project manager should collect is the effort per activity; effort here means staff-hours. The project manager should establish a trigger that is raised when effort exceeds an expected percentage.
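A minimal sketch of such an effort-per-phase trigger follows. The expected percentage profile, the actual staff-hours, and the 10-point threshold are illustrative assumptions, not values from [Grady 92].

```python
# Sketch: compare actual effort share per phase against an expected profile
# and raise a trigger when a phase exceeds its expected share.
expected_pct = {"requirements": 18, "design": 19, "code": 34, "test": 29}       # assumed profile
actual_hours = {"requirements": 700, "design": 900, "code": 2100, "test": 300}  # assumed staff-hours
TRIGGER_PCT_OVER = 10   # assumed: flag a phase more than 10 points over its expected share

total = sum(actual_hours.values())
for phase, hours in actual_hours.items():
    actual_share = 100 * hours / total
    delta = actual_share - expected_pct[phase]
    flag = "TRIGGER" if delta > TRIGGER_PCT_OVER else "ok"
    print(f"{phase:12s} actual {actual_share:5.1f}% vs expected {expected_pct[phase]}% ({flag})")
```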


Figure Process Metrics Example Effort per Life Cycle Phase

The Figure Process Metrics Example Effort per Life Cycle Phase can be applied to track risks; this type of graph assists in evaluating schedule and testing risks. The graph on the left is an industry guideline for effort per phase [Grady 92]. In the project data on the right graph, notice that the development time far exceeds the projected development time, causing a substantial decrease in testing time. The current status line shows where the project is now, with the remaining information being projected. The risk to sufficient test time is high.

A risk measure (which is synonymous with metric [Baumert 92]) defines a standard way of measuring some attribute of the risk management process. Risk and mitigation plan measures can be qualitative or quantitative. Example: The values of the risk attributes, e.g., the impact of a risk and the probability of a risk occurring, are examples of risk measures.

Indicators are representations of measurement data that provide insight into a process or improvement activity [Baumert 92]. They can be used to show status and, in this document, are also called status indicators. Indicators may use one or more measures, and they can give a more complex measure of the risk and mitigation plan. In the following diagram, a measure from Risk A as well as two measures from tasks in the mitigation plan are used to create Status Indicator B.

Triggers are thresholds for indicators that specify when an action, such as implementing a contingency plan, may need to be taken. Triggers are generally used to:

Trigger Example 1

A given risk on a project is the following: Not all developers are trained in the new compiler; delivery of coded modules may be delayed. This example is related to the previous diagram. Measure M3 is the number of developers trained each week; M4 is the schedule of milestones indicating the beginning of development for each module; and M5 is the number of developers required for each module. The combination of M3, M4, and M5 yields Status Indicator B, which is the available number of trained developers for modules under development. Project personnel could define the trigger in this example to be the point at which the number of available trained developers is 10% below the required number (a sketch of this indicator appears after these examples).

Trigger Example 2

During Plan, indicators may be chosen to track risks and mitigation plans. Acceptable values or limits (i.e., thresholds/triggers) for the indicator values can also be determined. If the value of an indicator rises above the trigger value, this provides valuable information to the risk manager or project manager that control action should be considered. Triggers should provide the risk manager or project manager with meaningful information to enable more informed control decisions. An effective trigger will give risk personnel enough time to take appropriate action or to focus extra attention on a risk.
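The following is a minimal sketch of Trigger Example 1: it combines assumed values for M3, M4, and M5 into Status Indicator B and checks the 10% shortfall trigger. All data values are illustrative assumptions.

```python
# Sketch: derive Status Indicator B (trained developers available for modules
# under development) from measures M3, M4/M5 and check the 10% trigger.
trained_per_week = [3, 4, 2, 3]            # M3: developers trained each week (assumed)
modules_started = {"GNC": 4, "COMM": 3}    # M4/M5: modules in development -> developers required (assumed)
TRIGGER_SHORTFALL = 0.10                   # trigger: 10% below the required number

available_trained = sum(trained_per_week)  # trained developers so far
required = sum(modules_started.values())   # developers required right now
indicator_b = available_trained - required

if available_trained < required * (1 - TRIGGER_SHORTFALL):
    print(f"TRIGGER: only {available_trained} trained, {required} required")
else:
    print(f"ok: {available_trained} trained developers for {required} required (slack {indicator_b})")
```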


Figure Triggers Percent Within Budget
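The figure above plots percent within budget over time. The paragraph below discusses a trigger that fires when the project is 10% under or over budget; here is a minimal sketch of that check, with illustrative planned and actual spending figures as assumptions.

```python
# Sketch: budget-variance trigger, raised when spending is 10% under or over plan.
planned_spend = 500_000.0   # assumed planned spending to date
actual_spend = 430_000.0    # assumed actual spending to date
TRIGGER_FRACTION = 0.10     # trigger when under or over budget by 10%

variance = (actual_spend - planned_spend) / planned_spend
if abs(variance) > TRIGGER_FRACTION:
    direction = "over" if variance > 0 else "under"
    print(f"TRIGGER: project is {abs(variance):.0%} {direction} budget -- consider control action")
else:
    print(f"within budget tolerance ({variance:+.0%})")
```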

In Risk ID 100, project resources (personnel number and availability) and schedules were underestimated; schedule slips, cost overruns, and a reduction in the adequacy of development processes (especially testing time adequacy) are likely. In this example the trigger is raised when the project is under or over budget by 10%. Being under budget is a risk because, if the project manager doesn't spend the budget, the authorizing agent will want the money back. If the project is over budget, then an accounting of why the project is over budget will be needed. Project management should focus on the risk causing this and take steps to reduce spending. In the case study, risk 100 is one risk that relates to this data; other risks, such as 4, 6, and 15, also relate to budget issues.

Measure vs. Indicator

In general, a measure reflects a characteristic of a risk, while an indicator uses one or more risk measures to provide insight into or show the status of the management of a risk. Example: Risk exposure, which is the product of the probability and impact of a risk, can be used as a status indicator. Impact and probability are usually risk measures.

What Makes a Good Risk Indicator?

For an indicator to be categorized as "good," it needs to possess the following characteristics [Baumert 92]:

Both qualitative and quantitative data can be used to track risks and plans. While quantitative data are more precise and more likely to be accurate, it is not always feasible or an effective use of resources to refine data to a quantitative level. Qualitative, even instinctive, evaluations of status can be used to support decision making when quantitative data are unavailable.

Effective Indicators for Risk Tracking

Effective tracking indicators focus on the anticipatory aspects of the available data. The trend of a measure over time is often a good indicator. With historical information, trends in the data are more important than the values at any one time. Example: A useful status indicator may be the number of coding errors debugged per week, and the trend of this indicator can be used by project personnel for risk management as appropriate.

Stoplight or Fever Chart Indicator

Stoplight or Fever Chart Indicator Image

Figure Stoplight or Fever Chart Indicator

Stoplight charts provide a means of communicating the status of risk mitigation actions. They indicate to the decision-maker how well the current plans are doing and whether or not management action is required.

Not all printers are color printers, so there needs to be a black-and-white equivalent for stoplight colors. Dark colors can be used to attract attention; thus, black can be used in place of red if the chart doesn't have colors. Also, meaningful symbols (e.g., $) could be used to focus an executive's attention if colors are not an option.

What's an Effective Trigger?

Effective triggers:

Risk Example

Background: The following example presents a risk and a set of tracking measures, indicators, and triggers for the chosen risk.
Risk Statement: No simulation of the system's display performance has been done; we may not meet the performance requirements.

Context: During the initial phases of planning, a high-fidelity performance simulation of the system was defined but was cut due to budget considerations. Nothing was substituted, not even a limited low-fidelity simulation or an order-of-magnitude analysis. We have implemented 20% of the screen display code, and it already takes 30% of the total available frame-time for updating the sensor displays. No one is monitoring the performance.



Risk Example Data

In this example, attribute values are estimated based on the AFSC/AFLC Pamphlet 800-45 [Air Force 88]. From the risk's impact and probability attribute values, project personnel determine the level of risk exposure, which will be one of the indicators used to track the risk. Next, personnel determine the trigger value for risk exposure. For this particular risk, additional measures are used to calculate a second indicator, the "frametime used/code complete ratio," and project personnel determine a trigger for that indicator as well. The measures, indicators, and triggers and their values for this example are shown in Table Measures, Indicators, and Triggers.

Table Measures, Indicators, and Triggers

Data (Type): Value/Description

Probability (Measure): Probable

Impact (Measure): Critical

Risk exposure (Indicator): Moderate

Trigger value for risk exposure (Trigger): If the risk exposure value becomes "High," then project personnel will consider implementing a contingency plan.

% Frametime (Measure): 30%

% Code complete (Measure): 20%

Frametime used/code complete ratio (Indicator): 30% / 20% = 1.5

Trigger value for frametime used/code complete ratio (Trigger): The frametime used/code complete ratio must be no more than 0.75 when the code is 45% complete. If it exceeds this value, then a contingency plan will be implemented.
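The second indicator and its trigger can be checked mechanically. The sketch below uses the values from the table (30% frametime used, 20% code complete, and a trigger ratio of 0.75 once the code is 45% complete); how the trigger condition is coded is a straightforward assumption.

```python
# Sketch: compute the frametime used / code complete ratio and apply its trigger.
pct_frametime_used = 0.30     # measure from the example
pct_code_complete = 0.20      # measure from the example
TRIGGER_RATIO = 0.75          # allowed ratio once the code is 45% complete
TRIGGER_CODE_COMPLETE = 0.45

ratio = pct_frametime_used / pct_code_complete   # 0.30 / 0.20 = 1.5
print(f"frametime used / code complete = {ratio:.2f}")

if pct_code_complete >= TRIGGER_CODE_COMPLETE and ratio > TRIGGER_RATIO:
    print("TRIGGER: implement the contingency plan")
else:
    print("trigger not yet applicable or not exceeded")
```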


Acquire

The Acquire activity is a process that includes all of the steps associated with collecting information about, and updating the values of, risk measures and status indicators for watched and mitigated risks. The required data are defined by project personnel during planning and are used to track the progress of watched risks and risk mitigation plans. After the data are collected, the Compile activity organizes them. This section outlines the Acquire activity. The objective of the Acquire activity is to collect all relevant tracking data for a given risk. The Figure Acquire shows the inputs and outputs for acquiring risk data.

Figure Acquire

figure acquired

Risk data for watched risks, mitigation plan data, and other project data are collected during the Acquire activity. The frequency of data collection is defined in the risk action plans. Risk exposure is an indicator that is tracked over time; the risk exposure is given in the tool called the Mitigation Status Report. Risk exposure is derived by using two measures: the impact level of the risk and the probability of the risk occurring. Project personnel estimate the impact (e.g., on a scale of 1-5) and the probability (e.g., on a scale of 1-10). After the project personnel estimate these data, the measures are considered to be "acquired" for the risk (a sketch of this calculation appears just before the Table Methods and Tools below).

Related risk indicators can be grouped together and then tracked as a set. The impact, probability, and timeframe measures as well as set indicators can be estimated, and triggers, or even a set of them, can be established for the indicators. If an overall mitigation plan has been developed for the set, then it is tracked. Both set risk indicators and individual risk indicators could be acquired and reported, particularly if the set of risks includes one or more individually critical risks.

This is an example of how project personnel should group related risks into a set. There are several training-related risks associated with a project. Collectively, they represent a critical mass of potential problems that could cripple the project's schedule. The project manager has requested a weekly report on the status of the training effort. Individual measures are gathered for the types of training being provided, the personnel being trained, and the availability of self-training materials and tool documentation. A cumulative indicator is then derived from the individual measures. However, the most critical training issue is focused on compiler training. Its associated measure is the number of development programmers who have received training for the chosen compiler. That information is retained and reported as a separate indicator.

When acquiring tracking data, the risk personnel should keep the following considerations in mind:

The Table Methods and Tools summarizes the approaches, methods, and tools that can be used to acquire risk data. Detailed descriptions of the methods and tools are provided as separate selectable pages in the course outline.
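Before the summary table, here is a minimal sketch of acquiring the risk exposure indicator from the two measures mentioned above, impact on a 1-5 scale and probability on a 1-10 scale. Normalizing and multiplying the two measures is an assumed way to combine them, not a prescribed formula, and the example values are assumptions.

```python
# Sketch: derive a risk exposure indicator from estimated impact (1-5)
# and probability (1-10) measures for a watched risk.
def risk_exposure(impact_1_to_5: int, probability_1_to_10: int) -> float:
    impact = impact_1_to_5 / 5.0
    probability = probability_1_to_10 / 10.0
    return impact * probability   # 0.0 (negligible) .. 1.0 (maximum exposure)

# Example acquisition for one watched risk (values are assumptions).
exposure = risk_exposure(impact_1_to_5=4, probability_1_to_10=7)
print(f"risk exposure indicator: {exposure:.2f}")
```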

Table Methods and Tools

Approach

Description

Usefulness

Re-evaluate risk attributes

The individual responsible for the risk should periodically re-evaluate the risk attributes to determine changes in probability, impact, and timeframe. The following methods are designed to evaluate risk attributes:

  • Binary Attribute Evaluation
  • Tri-level Attribute Evaluation


Access to knowledgeable individuals or other data may be required.
This provides timely communication of potential new risk areas.



This provides status information for watched risk and mitigation plans.

Direct communication

This is informal communication with the personnel closest to the risk or risk mitigation activity. Often, the software engineers working on the project or other personnel directly responsible for actions on the risk or the plan are interviewed. In some cases, the individual who is interviewed may be the manager responsible for the risk or mitigation plan.

This provides timely communication of potential new risk areas. This provides status information for watched risk and mitigation plans.

Review of technical documentation or engineering summary reports

This involves looking at the technical aspects of the progress of the development effort.

These reviews can be useful for technical risks but can also provide insight into general project issues.



These can also be used to look for new risk information.

Review of status report or meeting minutes.

This involves a review of documentation available from the routine project status meetings.

These reviews can provide insight into general project issues.



They provide status information for watched risk and mitigation plans.

Automated data collection from project products

This involves using commercially available tools to track and collect progress and quality measures from the project's products and reports.

These tools provide consistent, often quantitative risk data.



The measures collected can be used as indicators to track risks and the progress of mitigation efforts.



Compile

The Compile activity is the process in which the data for a given risk are analyzed, combined, calculated, and organized for the tracking of the risk and its associated mitigation plan. The data are collected during the Acquire activity and are presented during the Report activity. The objective of the Compile activity is to organize the relevant tracking data for a given risk. The report can include a summary of the risk, its watch requirements or mitigation plan, and other key issues relevant to the risk or mitigation plan. The Figure Compile shows the inputs and outputs for compiling risk data.

figure compile

Figure Compile

While data are being analyzed and compiled into reports, project personnel must keep in mind the overall strategies and goals of the watch requirements or risk mitigation plan. Paying attention to the triggers for risk indicators is only one aspect of data analysis. Other factors to keep in mind are the mitigation goal, expected plan progress, broad-based trends, and specific milestones or events. The risk's tracking requirements and mitigation plan should identify what indicators need to be compiled. For a set of risks, individual related risk data are combined, calculated, and reformulated to present a cohesive picture of the current risk status. Databases or appropriate analysis and reporting forms can be used to aid the compilation of data for this activity. The project teams' reports can be either written or verbal and can be part of either formal or informal reporting processes. The following are the primary considerations of reporting:

The report content and format should be driven by the following factors: the tracking requirements of the risk and mitigation efforts, as well as the intended audience of the report (e.g., senior managers usually have limited time available and prefer abstracted, summarized reports). The project team members should be aware of data trends and patterns. Trends can be observed through the evaluation of successive reports. Persistent lateness in taking action, oscillating priority values, significant changes in the number of high-impact risks or risks of a particular type, and other trends should be identified, analyzed, and evaluated for additional negative or positive indicators. These may not be trends that are specifically examined at every opportunity, but patterns that are identified over time and investigated when appropriate. Analysis of trends and patterns can also lead to the identification of new risks to the project.

Data Trend Example

The following is a data trend example.

A technical lead notices an unusual increase in the number of testing-related risks in the top N project risks during the last three weeks. While it might be expected that as coding progresses more testing issues will surface, software coding for this project has not begun. Analysis of the testing-related risks showed that the test plans, which have been completed and distributed for review, are perceived to be inadequate. The technical lead identifies a new risk to the program, which focuses on the completeness of the test plans. The mitigation plan for the new risk calls for project personnel to receive more training in the area of software testing and in the development of test plans.
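A trend check like the one in this example can be automated once the weekly top-N lists are available. The sketch below assumes illustrative weekly counts of testing-related risks and flags a sustained increase; the counts and the three-week window are assumptions.

```python
# Sketch: flag a sustained increase in testing-related risks in the top-N list.
testing_risks_in_top_n = [1, 1, 2, 3, 4]   # count per weekly report (assumed sample data)

def rising_trend(series: list[int], weeks: int = 3) -> bool:
    """True if the count increased in each of the last `weeks` intervals."""
    recent = series[-(weeks + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

if rising_trend(testing_risks_in_top_n):
    print("Pattern detected: testing-related risks are increasing -- investigate the test plans")
```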

The Table Methods and Tools summarizes the approaches, methods, and tools used to compile data. Effective approaches include graphic and tabular summaries of the key measures and indicators for risks and their related mitigation actions. Effective summaries also include time history information, which facilitates the identification of trends and variations. Detailed descriptions of the methods and tools are provided as separate selectable pages in the course outline.

Table Methods and Tools

Approach

Description

Usefulness

Mitigation plan status summaries

Plan summaries are reports, which require compiled data showing mitigation plan progress. Mitigation Status Reports are designed to track plan status.

Mitigation status reports employ textual information and graphics (e.g., time graphs) to document detailed information on specific risk mitigation plans and are used to support decisions.

Risk status summaries

Summary tables are concise tabular compilations of key data items. The following methods and tools are designed to produce and use tabular formats:

Risk Information Sheet

Spreadsheet Risk Tracking

Stoplight Chart

The analysis of current status data can identify changes in priority or the need for outside help. It can also identify new risks to the project.

Risk information sheets are used to document detailed information on specific risks and to support decisions. Spreadsheet risk tracking reports are used to summarize the current status of all risks. They are best used to support routine project activities. Stoplight charts summarize the status of important risks and their mitigation efforts. They are effective tools for reporting risk information to senior management.

Trend summaries

Trend summaries are graphical representations of compiled risk data. The following are used to present risk data on graphs or charts:

Bar Graph

Time Correlation Chart

Time Graph

Bar graphs are graphical representations of data across distinct categories. They highlight changes in the number of risks in individual categories and can be used to identify trends.



Time correlation charts show the relationship of one indicator with respect to another over time. They are useful for identifying the trend over time in the relationship of two indicators.



Time graphs are graphical representations of data variations over time. They are useful for identifying the trend over time of an indicator for a risk. They are also used in Mitigation Status Reports.

Report

The Report activity is a process in which status information about risks and mitigation plans is communicated to decision makers and team members. The delivered reports summarize the data that were analyzed and organized in the Compile activity and are the input to the Control function. The Compile and Report activities are related. Reporting requirements drive how project personnel compile the data. The objective of reporting is to communicate risk status reports to support effective decision-making. The Figure Report shows the inputs and outputs for communicating risk data.

Figure Report

figure report

Reports are generally delivered as part of routine project management activities (e.g., as part of a weekly or monthly project status update). A critical event or condition may require exception reporting to management rather than waiting for the next report period. The frequency of reporting depends upon the following:

This is an example of a typical reporting schedule for a project that is several months or years in duration.  These time frequencies could vary depending on the criticality of the project.

On a given project, spreadsheet risk tracking reports are normally used as read-ahead material for weekly project meetings. They contain only the important risks. The important risks are those being watched and planned, as well as new risks. However, once a month, all risks are included in the report. This gives project personnel the opportunity to review the less important risks and determine whether any have become more critical.

Also, once a month, senior managers get a stoplight chart on the top N risks to the project. These charts indicate which risks may become critical and where senior management decisions are required.



Formal presentations of the important risks are made each quarter to all organizations at a site. This is done to keep other projects informed of the progress being made.

Spreadsheet Risk Tracking

Spreadsheet risk tracking is a method that monitors project risks by summarizing and periodically reviewing their statuses. The data for this method are documented in a spreadsheet format. The basic process involves a periodic (e.g., weekly or monthly) update and review of the risks, generally held in conjunction with regularly scheduled project status meetings.

Constraints

Benefits
The following is an example of a spreadsheet. The spreadsheet should be formatted to easily indicate risk status and information. In this example there are open risks, closed risks, and a watch list.
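A minimal sketch of generating such a spreadsheet-style report follows; the field names, sample records, and sorting rules are illustrative assumptions, not the case study data shown below.

```python
# Sketch: group risks into open / closed / watch sections and print a summary,
# open risks sorted by priority. Records are illustrative assumptions.
risks = [
    {"id": 22,  "priority": 1,    "state": "open",   "prob": "H", "impact": "H", "owner": "Helm"},
    {"id": 7,   "priority": 4,    "state": "open",   "prob": "M", "impact": "H", "owner": "Helm"},
    {"id": 2,   "priority": None, "state": "closed", "prob": "",  "impact": "",  "owner": "Ferris"},
    {"id": 101, "priority": None, "state": "watch",  "prob": "L", "impact": "L", "owner": "Lopez"},
]

def report(risks):
    for section in ("open", "closed", "watch"):
        rows = [r for r in risks if r["state"] == section]
        rows.sort(key=lambda r: (r["priority"] is None, r["priority"] or 0))
        print(section.upper())
        for r in rows:
            pr = r["priority"] if r["priority"] is not None else "-"
            print(f"  {pr}  ID {r['id']:>3}  P:{r['prob'] or '-'}  I:{r['impact'] or '-'}  {r['owner']}")

report(risks)
```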

Case Study

Spreadsheet Risk Tracking

Monthly Project Review


Risk Status Spreadsheet - October 4, 2012


Fields: Priority | Risk ID | Risk Statement | Status Comments | Probability | Impact | Assigned To

Priority 1 | Risk ID 22 | Probability: H | Impact: H | Assigned To: Helm
Risk Statement: A Satellite Simulator is being developed; impacts to the current project plan and other mitigation plans are unknown but could be significant - availability of resources to make use of the simulator is questionable.
Status Comments: New risk - resulted from closure of Risk 18.

Priority 2 | Risk ID 100 | Probability: H | Impact: H | Assigned To: Helm
Risk Statement: Project resources (personnel number and availability) and schedules were underestimated; schedule slips, cost overruns, reduction in adequacy of development processes (especially testing time adequacy) likely.
Status Comments: New risk 22 has made this worse. Key personnel had designated back-ups in case availability slips, but Simulator work negates that.

Priority 3 | Risk ID 23 | Probability: M | Impact: M | Assigned To: Ferris
Risk Statement: Metrics are being reported only on a quarterly basis; schedules may slip and recognition of their slip may be too late for effective replanning to take place.
Status Comments: New risk identified by C. Lopez.

Priority 4 | Risk ID 7 | Probability: M | Impact: H | Assigned To: Helm
Risk Statement: Science requirements have substantial TBDs; late completion of TBDs likely, with reduction in adequate testing time, possible science application software failure, incorrect science data being captured, hardware damage if incorrect safety limits were provided, extensive rework and substantial cost overruns, mission failure if problems not found before system is in operation.
Status Comments: TBDs are being analyzed and researched. Expect completion of first set next week.

Priority 5 | Risk ID 11 | Probability: L | Impact: M | Assigned To: Helm
Risk Statement: It has recently been decided that the Infrared sensors will be developed in-house and how they will communicate and how sensor data will be processed will be based on assumptions until the detailed design is baselined; the accuracy and completeness of those assumptions will determine the magnitude of change in the Instrument Controller CI and Infrared Sensing Unit CI interface requirements - it could be minor or catastrophic.
Status Comments: So far the assumptions we used continue to hold as we complete prototypes. Only very minor requirement changes have resulted so far and the ripple has been negligible.

Priority 7 | Risk ID 13 | Probability: L | Impact: L | Assigned To: Lopez
Risk Statement: Waterfall lifecycle model is being used to develop all software; it may cause serious integration problems between CI and IR sensor and/or between CI and AA platform, leading to a missed launch window, excessive cost to meet the window, or failure to successfully integrate the system.
Status Comments: Project plan revised for incremental life cycle. Recommendation to move to Watch negated by new risk 22. Revisit next month.

... (include the other Top N risks)

CLOSED | Risk ID 2 | Assigned To: Ferris
Risk Statement: Commercial parts suitability for space applications is unknown; parts failure may lead to system failure, and use of space-grade parts may cause schedule delays since space-qualified parts have a procurement lead time of at least 18 months.
Status Comments: Commercial parts appear to be working and have the same reliability as space-qualified parts.

CLOSED | Risk ID 18 | Assigned To: Helm
Risk Statement: There is no AA Satellite Simulator currently scheduled for development; probable that the CSCI will fail when initially integrated with the actual AA Satellite since prior interface testing will not have been possible, thus software fixes will be done very late in the project schedule and may cause the launch date to slip.
Status Comments: Helm authorized development of the simulator on an accelerated schedule. Project plan must be revisited to enable us to make use of the simulator. Recommendation to close risk and open a new risk 21, accepted.

WATCH LIST

W | Risk ID 101 | Probability: L | Impact: L | Assigned To: Lopez
Risk Statement: Use of C++, the selected compiler, and OOD are new for the software staff; decreased productivity due to an unexpected learning curve may cause design and coding schedules to slip.
Status Comments: Training appears to be effective. Only 2 people left to be trained. Calls to help desk reduced by 80%. Use of expert from ORB project has been successful. Recommend moving this risk to Watch.

W | Risk ID 15 | Probability: L | Impact: H | Assigned To: Helm
Risk Statement: The funding and development schedule for the AA satellite is subject to change and cancellation; IR-SIP schedule slips, cost overruns, and a reduction in adequate testing time are likely as unscheduled changes will have to be made to the software to match AA project changes.
Status Comments: No change.

And all other risks which are not on the top N list and have not been accepted or closed.










The Table Methods and Tools summarizes the approaches, methods, and tools for reporting status. Detailed descriptions of the methods and tools are provided as separate selectable pages in the course outline.

Table Methods and Tools
Approach

Description

Usefulness

Verbal reporting

Verbal reports are generally informal. The people responsible for the risks give verbal reports on the general status of their risks. They may also use this forum to inform management of critical issues as they arise (written status would usually be required as a follow-up).

Verbal reports are useful for informal reporting of status to management and immediate notification of critical issues or changes.

Written reports

Written reports may be either formal or informal memoranda (e.g., electronic mail, reports, etc.). They should be integrated into the normal status reporting mechanisms used by the organization. The following can be used for this activity:

Mitigation Status Report

Risk Information Sheet

Spreadsheet Risk Tracking

Stoplight Chart

Mitigation status reports employ graphics to document detailed information on specific risks and are used to support decisions.



Risk information sheets are used to document detailed information on specific risks and to support decisions.



Spreadsheet risk tracking reports are used to summarize the current status of all or selected risks. They are best used to support routine project activities.



Stoplight charts summarize the status of important risks and their mitigation efforts. They are effective tools for reporting risk information to senior management.

Formal presentations

Presentations use the media and format that are appropriate for the organization. Written reports are produced to support formal presentations.

Formal presentations usually contain material that explains risk management, the status of ongoing mitigation efforts, etc. This information might not be included in written reports.

Data Collection Exercise

In this data collection exercise, think about the type of data you would collect; don't just think about cost and schedule, but also about what to measure. The Table Risk contains the risks for which you are to identify the data to be collected.


Table Risk

Risk

Data to be Collected

#1: This is the first time that the software staff will use OOD; the staff may have a lower-than-expected productivity rate and schedules may slip because of the associated learning curve.



#20: A subset of IR Post Processing CSCI requirements is to be satisfied with COTS products; integration time and lifecycle costs may increase from original estimates, which assumed significant savings from COTS use, leading to schedule slips and cost overruns.



#12: Resource availability estimates were overly optimistic (the schedule shows all resources are available at the start of each WBS element); schedule slips, cost overruns, and a reduction in adequate testing time are likely.

 Answers

  1. Risk #1: Productivity rates, schedule information, lines of code, hours of training, etc.
  2. Risk #20: Lifecycle costs, integration time, schedule slips, number of requirements met, number of requirements that can't be met, etc.
  3. Risk #12: Schedule, cost data, testing time, etc.

Summary

The risk information should be openly available and kept in a database for the entire project team to review. The data should be presented in a clear and concise manner for the intended audience. The team should choose indicators that give insight into the important project risks by being predictive in nature. The team should choose trigger values that give the project team and personnel enough time to react to current conditions and to take appropriate actions in a timely manner.

References

[Air Force 88] Air Force Systems Command/Air Force Logistics Command Pamphlet 800-45. Software Risk Abatement, September 30, 1988.

[Baumert 92] Baumert, John H. & McWhinney, Mark S. Software Measures and the Capability Maturity Model (CMU/SEI-92-TR-25). Pittsburgh, Pa.: Software Engineering Institute, Carnegie Mellon University, 1992.

[Grady 87] Grady, Robert B. & Caswell, Deborah L. Software Metrics: Establishing a Company-Wide Program. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1987.

[Clark 95] Clark, Bill. "Technical Performance Measurement in the Risk Management of Systems." Presented at the Fourth SEI Conference on Software Risk, Monterey, CA, November 6-8, 1995. For information about how to obtain copies of this paper, contact SEI Customer Relations at (412) 268-5800 or customer-relations@sei.cmu.edu.

[Rosenau 92] Rosenau, Milton D. Successful Project Management: A Step-by-Step Approach with Practical Examples. New York: Van Nostrand Reinhold, 1992.