1. Programmatic Risk Management: A “not so simple” introduction to the complex but critical process of building a “credible” schedule. Workshop, Lewis & Fowler Team, Denver, Colorado, October 6 and October 14, 2008. 1/69 Programmatic Risk Management Work (Handbook)
3. When we say “Risk Management” What do we really mean? 3/69
4. Five Easy Pieces†: The Essentials of Managing Programmatic Risk. Managing the risk to cost, schedule, and technical performance is the basis of a successful project management method. † With apologies to Carole Eastman and Bob Rafelson for their 1970 film starring Jack Nicholson. Risk in Five Easy Pieces 4/69
5. Hope is Not a Strategy When General Custer was completely surrounded, his chief scout asked, “General what's our strategy?” Custer replied, “The first thing we need to do is make a note to ourselves – never get in this situation again.” Hope is not a strategy! A Strategy is the plan to successfully complete the project If the project’s success factors, the processes that deliver them, the alternatives when they fail, and the measurement of this success are not defined in meaningful ways for both the customer and managers of the project – Hope is the only strategy left. Risk in Five Easy Pieces 5/69
6. No Single Point Estimate can be correct without knowing the variance. Single point estimates use sample data to calculate a single value (a statistic) that serves as a "best guess" for an unknown (fixed or random) population parameter. Bayesian inference is a statistical inference in which evidence or observations are used to infer the probability that a hypothesis may be true. Identifying the underlying statistical behavior of the cost and schedule parameters of the project is the first step in forecasting future behavior. Without this information, and the model in which it is used, any statements about cost, schedule, and completion dates are 50/50 guesses. When estimating cost and duration for planning purposes, using point estimates yields the least likely result – one with, at best, a 50/50 chance of being true. Risk in Five Easy Pieces 6/69
7. Without Integrating $, Time, and TPM you’re driving in the rearview mirror Cost ($) Schedule (t) Technical Performance (TPM) Addressing customer satisfaction means incorporating product requirements and planned quality into the Performance Measurement Baseline to assure the true performance of the project is made visible. Risk in Five Easy Pieces 7/69
8. Without a model for risk management, you’re driving in the dark with the headlights turned off. The Risk Management process to the right is used by the US DOD and differs from the PMI approach in how the process areas are arranged. The key is to understand the relationships between these areas. Risk Management means using a proven risk management process, adapting it to the project environment, and using it for everyday decision making. Risk in Five Easy Pieces 8/69
9. Risk Communication is … An interactive process of exchange of information and opinion among individuals, groups, and institutions; often involving multiple messages about the nature of risk or expressing concerns, opinions, or reactions to risk messages or to legal or institutional arrangements for risk management. Bad news is not wine. It does not improve with age — Colin Powell Risk in Five Easy Pieces 9/69
10. Basic Statistics for Programmatic Risk Management Since all point estimates are wrong, statistical estimates will be needed to construct a credible cost and schedule model Basic Statistics 10/69
11. Uncertainty and Risk are not the same thing – don’t confuse them. Uncertainty stems from unknown probability distributions: requirements change impacts; budget perturbations; re–work and re–test phenomena; contractual arrangements (contract type, prime/sub relationships, etc.); potential for disaster (labor troubles, shuttle loss, satellite “falls over”, war, hurricanes, etc.); the probability that a discrete event, if it occurs, will invoke a project delay. Risk stems from known probability distributions: cost estimating methodology risk resulting from improper models of cost; cost factors such as inflation, labor rates, labor rate burdens, etc.; configuration risk (variation in the technical inputs); schedule and technical risk coupling; correlation between risk distributions. Basic Statistics 11/69
12. There are 2 types of Uncertainty encountered in cost and schedule Static uncertainty is natural variation and foreseen risks Uncertainty about the value of a parameter Dynamic uncertainty is unforeseen uncertainty and “chaos” Stochastic changes in the underlying environment System time delays, interactions between the network elements, positive and negative feedback loops Internal dependencies Basic Statistics 12/69
13. The Multiple Sources of Schedule Uncertainty and Sorting Them Out is the Role of Planning Unknown interactions drive uncertainty Dynamic uncertainty can be addressed by flexibility in the schedule On ramps Off ramps Alternative paths Schedule “crashing” opportunities Modeling of this dynamic uncertainty requires simulation rather than static PERT based path assessment Changes in critical path are dependent on time and state of the network The result is a stochastic network Basic Statistics 13/69
14. Statistics at a Glance Probability distribution – a function that describes the probabilities of possible outcomes in a “sample space.” Random variable – a function of the result of a statistical experiment in which each outcome has a definite probability of occurrence. Determinism – a theory that phenomena are causally determined by preceding events or natural laws. Standard deviation (sigma value) – an index that characterizes the dispersion among the values in a population. Bias – the expected deviation of the expected value of a statistical estimate from the quantity it estimates. Correlation – a measure of the joint impact of two variables upon each other that reflects the simultaneous variation of quantities. Percentile – a value on a scale of 100 indicating the percent of a distribution that is equal to or below it. Monte Carlo sampling – a modeling technique that employs random sampling to simulate a population being studied. Basic Statistics 14/69
15. Statistics Versus Probability In building a risk tolerant schedule, we’re interested in the probability of a successful outcome: “What is the probability of making a desired completion date?” But the underlying statistics of the tasks influence this probability. The statistics of the tasks, their arrangement in a network of tasks, and their correlation define how this probability-based estimate is developed. Basic Statistics 15/69
16. The independence or dependency of each task with others in the network greatly influences the outcome of the total project duration. Understanding this dependence is critical to assessing the credibility of the plan, as well as the total completion time of that plan. Basic Statistics 16/69
20. “The number of times a task duration appears in a Monte Carlo simulation” Basic Statistics 17/69
21. Statistics of a Triangle Distribution. 50% of all possible values lie under this area of the curve – this is the definition of the median. Triangle distributions are useful when only limited information about the characteristics of the random variables is available. This is common in project cost and schedule estimates. Minimum 1000 hrs, Maximum 6830 hrs, Mode = 2000 hrs, Mean = 3879 hrs, Median = 3415 hrs. Basic Statistics 18/69
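As a sketch, a triangular distribution can be sampled directly with Python's standard library. The parameterization below (minimum, mode, maximum taken from the slide) is an assumption about the slide's chart, so the simulated mean and median are estimates and may differ somewhat from the slide's quoted figures:

```python
import random

random.seed(1)
low, mode, high = 1000, 2000, 6830  # hours, from the slide

samples = sorted(random.triangular(low, high, mode) for _ in range(200_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
# Fraction of outcomes at or below the "most likely" (mode) value --
# for a right-skewed triangle this is well under 50%
p_below_mode = sum(s <= mode for s in samples) / len(samples)

print(f"mean ~ {mean:.0f} hrs, median ~ {median:.0f} hrs")
print(f"P(duration <= mode) ~ {p_below_mode:.2f}")
```

With these parameters, the mode is met or beaten only about 17% of the time (exactly (mode − min)/(max − min) for a triangle), which illustrates the larger point: the “most likely” single value is not the 50/50 value.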
22. Basics of Monte Carlo Simulation Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise. — John W. Tukey, 1962 Basics of Monte Carlo 19/69
31. Schedule logic can include branching – both probabilistic and conditional. When resource-loaded schedules are used, this provides an integrated cost and schedule probabilistic model. Basics of Monte Carlo 20/69
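A probabilistic branch is straightforward to express in a simulation loop. The sketch below is illustrative only – the 30% failure probability and the rework-path durations are invented for the example, not taken from the deck:

```python
import random

random.seed(2)

def simulate_once():
    # Primary task: triangular duration in days (illustrative values)
    duration = random.triangular(10, 20, 12)
    # Probabilistic branch: assume a 30% chance a test fails and a
    # rework path of 5-15 days (mode 8) is taken before completion
    if random.random() < 0.30:
        duration += random.triangular(5, 15, 8)
    return duration

runs = sorted(simulate_once() for _ in range(100_000))
p80 = runs[int(0.80 * len(runs))]
print(f"80th percentile completion ~ {p80:.1f} days")
```

A conditional branch works the same way, except the path taken depends on the simulated state (for example, whether an earlier task overran) rather than a fixed probability.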
33. First let’s be convinced that PERT has limited usefulness. The original paper (Malcolm 1959) states the method is “the best that could be done in a real situation within tight time constraints.” The time constraint was one month. The PERT formulation assumes the standard deviation is about 1/6 of the range (b–a), resulting in the PERT formula. It has been shown that the PERT mean and standard deviation formulas are poor approximations for most Beta distributions (Keefer 1983 and Keefer 1993): errors up to 40% are possible for the PERT mean, and errors up to 550% for the PERT standard deviation. Basics of Monte Carlo 21/69
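The formulas in question are easy to state. As a sketch (the three points are invented for illustration), compare the PERT approximations against the exact moments of a triangular distribution built from the same three points – one of many shapes consistent with them:

```python
a, m, b = 10, 12, 30  # optimistic, most likely, pessimistic (illustrative)

pert_mean = (a + 4 * m + b) / 6   # classic PERT mean
pert_sigma = (b - a) / 6          # assumes sigma is ~1/6 of the range

# Exact moments of a triangular distribution with the same three points
tri_mean = (a + m + b) / 3
tri_sigma = ((a*a + m*m + b*b - a*m - a*b - m*b) / 18) ** 0.5

print(round(pert_mean, 2), round(pert_sigma, 2))  # 14.67 3.33
print(round(tri_mean, 2), round(tri_sigma, 2))    # 17.33 4.5
```

Even this mild skew moves the mean by roughly 18% and the sigma by roughly 35%, in the direction Keefer's analysis describes.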
35. There is a likelihood that some durations will comprise a path that is off the critical path. The single number for the estimate – the “single point estimate” – is in fact a most likely estimate. The completion date is not the most likely date; it is a confidence interval in the probability distribution function resulting from the convolution of all the distributions along all the paths to the completion of the project. Basics of Monte Carlo 22/69
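The “merge bias” behind this can be shown with a toy network: two parallel paths of three tasks each, where the project finishes when the later path finishes. All durations here are invented for illustration:

```python
import random

random.seed(3)

def path_duration():
    # Three sequential tasks, each triangular(8, 20, 10) days (illustrative)
    return sum(random.triangular(8, 20, 10) for _ in range(3))

deterministic = 3 * 10  # sum of the most-likely durations along one path

# The project completes when the slower of two parallel paths completes
finishes = sorted(max(path_duration(), path_duration()) for _ in range(100_000))
p_on_time = sum(f <= deterministic for f in finishes) / len(finishes)

print(f"P(finish <= {deterministic} days) ~ {p_on_time:.3f}")  # far below 0.5
print(f"median finish ~ {finishes[len(finishes) // 2]:.1f} days")
```

With right-skewed task durations and a merge point, the sum of the most-likely values is almost never achieved, which is exactly why the completion date must be stated as a confidence level.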
45. Most likely is not the same as the average. Basics of Monte Carlo 23/69
46. Foundation of Monte Carlo Theory. Georges-Louis Leclerc, Comte de Buffon, asked what the probability was that a needle dropped on a ruled surface would fall across one of the lines, marked in green. That outcome occurs only if the distance from the needle’s center to the nearest line is no more than (l/2)·sin θ, where l is the needle’s length and θ is its angle to the lines. Basics of Monte Carlo 24/69
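Buffon's experiment is itself a Monte Carlo simulation. A minimal sketch of the standard formulation (needle no longer than the line spacing, crossing probability 2l/(πd)):

```python
import math
import random

random.seed(4)
l, d = 1.0, 2.0  # needle length and line spacing, with l <= d

def crosses():
    x = random.uniform(0, d / 2)            # center-to-nearest-line distance
    theta = random.uniform(0, math.pi / 2)  # needle angle to the lines
    return x <= (l / 2) * math.sin(theta)   # Buffon's crossing condition

n = 500_000
p = sum(crosses() for _ in range(n)) / n
print(f"crossing probability ~ {p:.4f} (theory 2l/(pi*d) = {2*l/(math.pi*d):.4f})")
print(f"pi estimate ~ {2*l/(p*d):.3f}")
```

Inverting the crossing frequency recovers an estimate of π, the classic demonstration that random sampling can evaluate a quantity with no closed-form experiment.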
47. Mechanics of Risk+ integrated with Microsoft Project Any credible schedule is a credible model of its dynamic behavior. This starts with a Monte Carlo model of the schedule’s network of tasks Mechanics of Risk+ 25/69
48. The Simplest Risk+ elements: Task to “watch” (Number3); Distribution (Number1); Optimistic (Duration1); Most Likely (Duration3); Pessimistic (Duration2). Mechanics of Risk+ 26/69
50. The S–Curve shows the cumulative probability of completing on or before a given date. The standard deviation of the completion date and the 95% confidence interval of the expected completion date are in the same units as the “most likely remaining duration” field in the schedule. Mechanics of Risk+ 27/69
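Reading confidence levels off the S-curve amounts to sorting the simulated completion values. The completion-time distribution below is invented for illustration:

```python
import random

random.seed(5)
# Simulated completion times in days (illustrative triangular model)
finishes = sorted(random.triangular(40, 90, 55) for _ in range(100_000))

def percentile(p):
    """Duration such that a fraction p of the runs finish on or before it."""
    return finishes[int(p * len(finishes))]

# Reading the S-curve at three common confidence levels
p50, p80, p95 = percentile(0.50), percentile(0.80), percentile(0.95)
print(f"50% <= {p50:.0f} d, 80% <= {p80:.0f} d, 95% <= {p95:.0f} d")
```

The sorted array *is* the S-curve: position divided by sample count gives the cumulative probability for each simulated finish.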
52. A Well Formed Risk+ Schedule. For Risk+ to provide useful information, the underlying schedule must be well formed in some simple ways. Mechanics of Risk+ 28/69
72. The standard deviation and confidence interval for cost at the total project level Mechanics of Risk+ 30/69
73. Programmatic Risk Ranking. The variance in task duration must be defined in some systematic way. Capturing three point values for each task is the least desirable approach. Programmatic Risk Ranking 31/69
74. Thinking about risk ranking These classifications can be used to avoid asking the “3 point” question for each task This information will be maintained in the IMS When updates are made the percentage change can be applied across all tasks Programmatic Risk Ranking 32/69
75. Steps in characterizing uncertainty: use an “envelope” method to characterize the minimum, maximum, and “most likely”; fit this data to a statistical distribution; use conservative assumptions; apply greater uncertainty to less mature technologies; confirm the analysis matches intuition. Remember Sir Francis Bacon’s observation about beginning with uncertainty and ending with certainty: if we start with what we think is a valid number, we will tend to continue with that number, when in fact we should speak only in terms of confidence intervals and probabilities of success. Programmatic Risk Ranking 33/69
76. Sobering observations about 3 point estimates when asking engineers. In 1979, Tversky and Kahneman proposed an alternative to Utility theory. Prospect theory asserts that people make predictably irrational decisions. The way a choice of decisions is presented can sway a person to choose the less rational option from a set. Once a problem is clearly and reasonably presented, rarely does a person think outside the bounds of the frame. Source: “The Causes of Risk Taking By Project Managers,” Proceedings of the Project Management Institute Annual Seminars & Symposium, November 1–10, 2001, Nashville, Tenn. Tversky, Amos, and Daniel Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice.” Science 211 (January 30): 453–458. Programmatic Risk Ranking 34/69
77. Building a Credible Schedule A credible schedule contains a well formed network, explicit risk mitigations, proper margin for these risks, and a clear and concise critical path(s). All of this is prologue to analyzing the schedule. Building a Credible Schedule 35/69
79. The schedule contingency is the amount of time added to (or subtracted from) the baseline schedule necessary to achieve the desired probability of an underrun or overrun.
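Sized this way, the contingency is simply the gap between the deterministic date and the chosen confidence level on the simulated distribution. The baseline and distribution below are invented for illustration:

```python
import random

random.seed(6)
deterministic_day = 120  # baseline finish: sum of most-likely durations (illustrative)

# Suppose simulation yields finish days distributed triangular(110, 170, 125)
finishes = sorted(random.triangular(110, 170, 125) for _ in range(100_000))
p80 = finishes[int(0.80 * len(finishes))]

contingency = p80 - deterministic_day  # margin needed for 80% confidence
print(f"80% confidence finish ~ day {p80:.0f}; contingency ~ {contingency:.0f} days")
```

In practice this margin is then distributed across the schedule in front of the high-risk milestones rather than lumped at the end.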
98. Activities that assess increasing compliance with the technical performance measures can be shown in the IMS. These can be Accomplishment Criteria. Building a Credible Schedule 38/69
100. The Monte Carlo Process starts with the 3 point estimates. Estimates of the task duration are still needed, just as they are in PERT. Three point estimates could be used, but risk ranking and algorithmic generation of the “spreads” is a better approach. Duration estimates must be parametric rather than numeric values; a geometric scale of parametric risk is one approach. Branching probabilities need to be defined; conditional paths through the schedule can be evaluated using Monte Carlo tools. This also demonstrates explicit risk mitigation planning to answer the question “what if this happens?” These three point estimates are not the PERT ones. They are derived from the ordinal risk ranking process, which allows them to be “calibrated” for the domain and correlated with the technical risk model. Building a Credible Schedule 39/69
101. Expert Judgment is required to build a Risk Management approach. Expert judgment is typically the basis of cost and schedule estimates, yet it is usually the weakest area of process and quantification. Translating from English (SOW) to mathematics (probabilistic risk model) is inconsistent at best and erroneous at worst. One approach: plan for the “best case” and preclude a self–fulfilling prophecy; budget for the “most likely” and recognize risks and uncertainties; protect for the “worst case” and acknowledge the conceivable in the risk mitigation plan. The credibility of the “best case” estimates is crucial to the success of this approach. Building the variance values for the ordinal risk rank is a technical process, requiring engineering judgment. Building a Credible Schedule 40/69
102. Guiding the Risk Factor Process requires careful weighting of each level of risk. For tasks marked “Low,” a reasonable approach is to score the maximum 10% greater than the minimum. The “Most Likely” for the remaining categories is then scored as a geometric progression with a common ratio of 1.5. Tasks marked “Very High” are bounded at 200% of the minimum; no viable project manager would let a task grow to three times its planned duration without intervention. The geometric progression is somewhat arbitrary, but it should be used instead of a linear progression. Building a Credible Schedule 41/69
103. Assume now we have a well formed schedule – now what? With all the “bonehead” elements removed, we can say we have a well formed schedule. But the real role of Planning is to forecast the future, provide alternative plans for this forecast, and actively engage all the participants in the project in the Planning Process. For the role of PP&C to move from “reporting past performance” to “forecasting future performance,” it must break the mold of using static models of cost and schedule. Building a Credible Schedule 42/69
109. Turn off the alternative path for a “success” path assessment. Turn off the primary path for a “failure” path assessment. Building a Credible Schedule 43/69
112. Risk margin is added to the IMS where risk alternatives are identified. Margin that is not used in the IMS for risk mitigation is moved to the next sequence of risk alternatives. This enables us to buy back schedule margin for activities further downstream, and to control the ripple effect of schedule shifts on margin activities. Building a Credible Schedule 44/69
116. Simulation Considerations Schedule logic and constraints Simplify logic – model only paths which, by inspection, may have a significant bearing on the final result Correlate similar activities No open ends Use only finish–to–start relationships with no lags Model relationships other than finish–to–start as activities with base durations equal to the lag value Eliminate all date constraints Consider using branching for known alternatives Building a Credible Schedule 45/69
117. The contents of the schedule Constraints Lead/Lag Task relationships Durations Network topology Building a Credible Schedule 46/69
118. Simulation Considerations: Selection of Probability Distributions. Develop schedule simulation inputs concurrently with the cost estimate. Early in the process, use the same subject matter experts. Convert confidence intervals into probability duration distributions. The number of available distributions varies depending on the software, and it is difficult to develop the inputs required for them. Beta and Lognormal are better than triangular; avoid exclusive use of the Normal distribution. Building a Credible Schedule 47/69
119. Sensitivity Analysis describes which tasks drive the completion times Concentrates on inputs most likely to improve quality (accuracy) Identifies most promising opportunities where additional work will help to narrow input ranges Methods Run multiple simulations Use criticality index “Tornado” or Pareto graph Building a Credible Schedule 48/69
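The criticality index is simply the fraction of iterations in which a given path (or task) drives the finish. A two-path sketch with invented durations:

```python
import random

random.seed(7)

n = 100_000
a_drives = 0
for _ in range(n):
    a = random.triangular(20, 40, 25)  # path A duration (illustrative)
    b = random.triangular(18, 50, 22)  # path B duration (illustrative)
    if a >= b:
        a_drives += 1

# Criticality index: how often each path determines the project finish
print(f"path A criticality ~ {a_drives / n:.2f}")
print(f"path B criticality ~ {1 - a_drives / n:.2f}")
```

A path that is rarely critical in the deterministic view can still carry a substantial criticality index, which is what flags it as a candidate for tightening its input ranges.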
120. What we get in the end is a Credible Model of the schedule All models are wrong. Some models are useful. – George Box (1919 – ) Concept generator from Ramon Lull’s Ars Magna (C. 1300) Building a Credible Schedule 49/69
121. Conclusion At this point there is too much information. Processing this information will take time, patience, and most of all practice with the tools and the results they produce. Conclusion 50/69
122. Conclusions Project schedule status must be assessed in terms of a critical path through the schedule network Because the actual durations of each task in the network are uncertain (they are random variables following a probability distribution function), the project schedule duration must be modeled statistically Conclusion 51/69
123. Conclusions Quality (accuracy) is measured at the end points of the achieved confidence interval (an 80% level is suggested). Simulation results depend on: the accuracy and care taken with the base schedule logic; the use of subject matter experts to establish inputs; the selection of appropriate distribution types; thorough analysis of multiple critical paths; and understanding which activities and paths have the greatest potential impact. Conclusion 52/69
124. Conclusions Cost and schedule estimates are made up of many independent elements. When each element is planned as best case – e.g., with a probability of achievement of 10% – the probability of achieving the best case for a two–element estimate is 1%; for three elements, 0.1%; for many elements, infinitesimal – in effect, zero. In the beginning no attempt should be made to distinguish between risk and uncertainty. Risk involves uncertainty, but it is indeed more; for initial purposes the distinction is unimportant. The effect is combined into one statistical factor called “risk,” which can be described by a single probability distribution function. Conclusion 53/69
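The arithmetic behind this conclusion is just repeated multiplication of independent probabilities:

```python
# Probability that every one of n independent elements hits a best case
# that each has only a 10% chance of achieving
for n in (1, 2, 3, 10):
    print(f"{n} element(s): {0.1 ** n:.0e}")
```

By ten elements the joint probability is one in ten billion, which is why an all-best-case plan is, in effect, impossible.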
129. Points to remember Good project management is good risk management Risk management is how adults manage projects The only thing we manage is project risk Risks impact objectives Risks come from the decisions we make while trying to achieve the objectives Risks require a factual condition and have potential negative consequences that must be mitigated in the schedule Conclusion 55/69
130. Usage is needed before understanding is acquired Here and elsewhere, we shall not obtain the best insights into things until we actually see them growing from the beginning. — Aristotle Conclusion 56/69
131. The End This is actually the beginning, since building a risk tolerant, credible, robust schedule requires constant “execution” of the plan. A planning algorithm from Aristotle’s De Motu Animalium, c. 350 BC. Conclusion 57/69
132. Resources “The Parameters of the Classical PERT: An Assessment of its Success,” Rafael Herrerias Pleguezuelo, http://www.cyta.com.ar/biblioteca/bddoc/bdlibros/pert_van/PARAMETROS.PDF “Advanced Quantitative Schedule Risk Analysis,” David T. Hulett, Hulett & Associates, http://www.projectrisk.com/index.html “Schedule Risk Analysis Simplified,” David T. Hulett, Hulett & Associates, http://www.projectrisk.com/index.html “Project Risk Management: A Combined Analytical Hierarchy Process and Decision Tree Approach,” Prasanta Kumar Dey, Cost Engineering, Vol. 44, No. 3, March 2002. “Adding Probability to Your ‘Swiss Army Knife’,” John C. Goodpasture, Proceedings of the 30th Annual Project Management Institute 1999 Seminars and Symposium, October, 1999. “Modeling Uncertainty in Project Scheduling,” Patrick Leach, Proceedings of the 2005 Crystal Ball User Conference “Near Critical Paths Create Violations in the PERT Assumptions of Normality,” Frank Pokladnik and Robert Hill, University of Houston, Clear Lake, http://www.sbaer.uca.edu/research/dsi/2003/procs/237–4203.pdf Resources 58/69
133. Resources “Teaching SuPERT,” Kenneth R. MacLeod and Paul F. Petersen, Proceedings of the Decision Sciences 2003 Annual Meeting, Washington DC, http://www.sbaer.uca.edu/research/dsi/2003/by_track_paper.html “The Beginning of the Monte Carlo Method,” N. Metropolis, Los Alamos Science, Special Issue, 1987. http://www.fas.org/sgp/othergov/doe/lanl/pubs/00326866.pdf “Defining a Beta Distribution Function for Construction Simulation,” Javier Fente, Kraig Knutson, Cliff Schexnayder, Proceedings of the 1999 Winter Simulation Conference. “The Basics of Monte Carlo Simulation: A Tutorial,” S. Kandaswamy, Proceedings of the Project Management Institute Annual Seminars & Symposium, November, 2001. “The Mother of All Guesses: A User Friendly Guide to Statistical Estimation,” Francois Melese and David Rose, Armed Forces Comptroller, 1998, http://www.nps.navy.mil/drmi/graphics/StatGuide–web.pdf “Inverse Statistical Estimation via Order Statistics: A Resolution of the Ill–Posed Inverse problem of PERT Scheduling,” William F. Pickard, Inverse Problems 20, pp. 1565–1581, 2004 Resources 59/69
134. Resources “Schedule Risk Analysis: Why It Is Important and How to Do It,” Stephen A. Book, Proceedings of the Ground Systems Architecture Workshop (GSAW 2002), Aerospace Corporation, March 2002, http://sunset.usc.edu/GSAW/gsaw2002/s11a/book.pdf “Evaluation of the Risk Analysis and Cost Management (RACM) Model,” Matthew S. Goldberg, Institute for Defense Analysis, 1998. http://www.thedacs.com/topics/earnedvalue/racm.pdf “PERT Completion Times Revisited,” Fred E. Williams, School of Management, University of Michigan–Flint, July 2005, http://som.umflint.edu/yener/PERT%20Completion%20Revisited.htm “Overcoming Project Risk: Lessons from the PERIL Database,” Tom Kendrick, Program Manager, Hewlett Packard, 2003, http://www.failureproofprojects.com/Risky.pdf “The Heart of Risk Management: Teaching Project Teams to Combat Risk,” Bruce Chadbourne, 30th Annual Project Management Institute 1999 Seminars and Symposium, October 1999, http://www.risksig.com/Articles/pmi1999/rkalt01.pdf Resources 60/69
135. Resources Project Risk Management Resource List, NASA Headquarters Library, http://www.hq.nasa.gov/office/hqlibrary/ppm/ppm22.htm#art “Quantify Risk to Manage Cost and Schedule,” Fred Raymond, Acquisition Quarterly, Spring 1999, http://www.dau.mil/pubs/arq/99arq/raymond.pdf “Continuous Risk Management,” Cost Analysis Symposium, April 2005, http://www1.jsc.nasa.gov/bu2/conferences/NCAS2005/papers/5C_–_Cockrell_CRM_v1_0.ppt “A Novel Extension of the Triangular Distribution and its Parameter Estimation,” J. Rene van Dorp and Samuel Kotz, The Statistician 51(1), pp. 63–79, 2002. http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/TheStatistician2002.pdf “A Distribution for Modeling Dependence Caused by Common Risk Factors,” J. Rene van Dorp, European Safety and Reliability 2003 Conference Proceedings, March 2003, http://www.seas.gwu.edu/~dorpjr/Publications/ConferenceProceedings/Esrel2003.pdf Resources 61/69
136. Resources “Improved Three Point Approximation To Distribution Functions For Application In Financial Decision Analysis,” Michele E. Pfund, Jennifer E. McNeill, John W. Fowler and Gerald T. Mackulak, Department of Industrial Engineering, Arizona State University, Tempe, Arizona, http://www.eas.asu.edu/ie/workingpaper/pdf/cdf_estimation_submission.pdf “Analysis Of Resource–constrained Stochastic Project Networks Using Discrete–event Simulation,” Sucharith Vanguri, Masters Thesis, Mississippi State University, May 2005, http://sun.library.msstate.edu/ETD–db/theses/available/etd–04072005–123743/restricted/SucharithVanguriThesis.pdf “Integrated Cost / Schedule Risk Analysis,” David T. Hulett and Bill Campbell, Fifth European Project Management Conference, June 2002. “Risk Interrelation Management – Controlling the Snowball Effect,” Olli Kuismanen, Tuomo Saari and Jussi Vähäkylä, Fifth European Project Management Conference, June 2002. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, David Salsburg, W. H. Freeman, 2001 Resources 62/69
137. Resources “Triangular Approximations for Continuous Random Variables in Risk Analysis,” David G. Johnson, The Business School, Loughborough University, Leicestershire. “Statistical Dependence through Common Risk Factors: With Applications in Uncertainty Analysis,” J. Rene van Dorp, European Journal of Operations Research, Volume 161(1), pp. 240–255. “Statistical Dependence in the Risk Analysis for Project Networks Using Monte Carlo Methods,” J. Rene van Dorp and M. R. Duffey, International Journal of Production Economics, 58, pp. 17–29, 1999. http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/Prodecon1999.pdf “Risk Analysis for Large Engineering Projects: Modeling Cost Uncertainty for Ship Production Activities,” M. R. Duffey and J. Rene van Dorp, Journal of Engineering Valuation and Cost Analysis, Volume 2, pp. 285–301, http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/EVCA1999.pdf “Risk Based Decision Support Techniques for Programs and Projects,” Barney Roberts and David Frost, Futron Risk Management Center of Excellence, http://www.futron.com/pdf/RBDSsupporttech.pdf Resources 63/69
138. Resources Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, Office of Safety and Mission Assurance, April 2002. http://www.hq.nasa.gov/office/codeq/doctree/praguide.pdf “Project Planning: Improved Approach Incorporating Uncertainty,” Vahid Khodakarami, Norman Fenton, and Martin Neil, Track 15 EURAM2005: “Reconciling Uncertainty and Responsibility,” European Academy of Management. http://www.dcs.qmw.ac.uk/~norman/papers/project_planning_khodakerami.pdf “A Distribution for Modeling Dependence Caused by Common Risk Factors,” J. Rene van Dorp, European Safety and Reliability 2003 Conference Proceedings, March 2003. “Probabilistic PERT,” Arthur Nadas, IBM Journal of Research and Development, 23(3), May 1979, pp. 339–347. “Ranked Nodes: A Simple and Effective Way to Model Qualitative Judgments in Large–Scale Bayesian Networks,” Norman Fenton and Martin Neil, Risk Assessment and Decision Analysis Research Group, Department of Computer Science, Queen Mary, University of London, February 21, 2005. Resources 64/69
139. Resources “Quantify Risk to Manage Cost and Schedule,” Fred Raymond, Acquisition Review Quarterly, Spring 1999, pp. 147–154. “The Causes of Risk Taking by Project Managers,” Michael Wakshull, Proceedings of the Project Management Institute Annual Seminars & Symposium, November 2001. “Stochastic Project Duration Analysis Using PERT–Beta Distributions,” Ron Davis. “Triangular Approximation for Continuous Random Variables in Risk Analysis,” David G. Johnson, Decision Sciences Institute Proceedings 1998. http://www.sbaer.uca.edu/research/dsi/1998/Pdffiles/Papers/1114.pdf “The Cause of Risk Taking by Managers,” Michael N. Wakshull, Proceedings of the Project Management Institute Annual Seminars & Symposium, November 1–10, 2001, Nashville, Tennessee, http://www.risksig.com/Articles/pmi2001/21261.pdf “The Framing of Decisions and the Psychology of Choice,” Tversky, Amos, and Daniel Kahneman, 1981, Science 211 (January 30): 453–458, http://www.cs.umu.se/kurser/TDBC12/HT99/Tversky.html Resources 65/69
140. Resources “Three Point Approximations for Continuous Random Variables,” Donald Keefer and Samuel Bodily, Management Science, 29(5), pp. 595 – 609. “Better Estimation of PERT Activity Time Parameters,” Donald Keefer and William Verdini, Management Science, 39(9), pp. 1086 – 1091. “The Benefits of Integrated, Quantitative Risk Management,” Barney B. Roberts, Futron Corporation, 12th Annual International Symposium of the International Council on Systems Engineering, July 1–5, 2001, http://www.futron.com/pdf/benefits_QuantIRM.pdf “Sources of Schedule Risk in Complex Systems Development,” Tyson R. Browning, INCOSE Systems Engineering Journal, Volume 2, Issue 3, pp. 129 – 142, 14 September 1999, http://sbufaculty.tcu.edu/tbrowning/Publications/Browning%20(1999)––SE%20Sch%20Risk%20Drivers.pdf “Sources of Performance Risk in Complex System Development,” Tyson R. Browning, 9th Annual International Symposium of INCOSE, June 1999, http://sbufaculty.tcu.edu/tbrowning/Publications/Browning%20(1999)––INCOSE%20Perf%20Risk%20Drivers.pdf Resources 66/69
141. Resources “Experiences in Improving Risk Management Processes Using the Concepts of the Riskit Method,” Jyrki Kontio, Gerhard Getto, and Dieter Landes, ACM SIGSOFT Software Engineering Notes, Proceedings of the 6th ACM SIGSOFT International Symposium on Foundations of Software Engineering SIGSOFT '98/FSE–6, Volume 23, Issue 6, November 1998. “Anchoring and Adjustment in Software Estimation,” Jorge Aranda and Steve Easterbrook, Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering ESEC/FSE–13. “The Monte Carlo Method,” W. F. Bauer, Journal of the Society of Industrial Mathematics, Volume 6, Number 4, December 1958, http://www.cs.fsu.edu/~mascagni/Bauer_1959_Journal_SIAM.pdf. “A Retrospective and Prospective Survey of the Monte Carlo Method,” John H. Halton, SIAM Review, Volume 12, Number 1, January 1970, http://www.cs.fsu.edu/~mascagni/Halton_SIAM_Review_1970.pdf. Resources 67/69
143. Lewis & Fowler 8310 South Valley Highway Suite 300 Englewood, Colorado 80112 www.lewisandfowler.com 303.524.1610 Deliverables Based Planningsm Integrated Master Plan Integrated Master Schedule Earned Value Risk Management Proposal Support Service Glen B. Alleman, VP, Program Planning and Controls galleman@lewisandfowler.com 303.437 5226 69/69
Editor's Notes
The statistical processes needed for probabilistic risk analysis vary according to the level of understanding needed. A top level view is that each task’s duration is a random variable. When these random variables are connected in a network (a schedule), the end date is a random variable as well. There are many more subtleties in this concept. What kind of random variables? From what distributions are they drawn? How are they independent for each run of the model? What couplings are there between the individual tasks and the end date of the project? What is the confidence interval on this probabilistic end date – that is, what is the variance in the variance? What are the counterintuitive aspects of the modeling process? For example, when a task duration is reduced, the probabilistic end date can get longer. Why? How can this be?
When we use the term uncertainty or risk it means at least 4 things.First let’s sort of “uncertainty”There are two classes of uncertainty in large complex programs.Static uncertainty emerges from the natural variations in the completion times of tasks. This is a Deming uncertainty. http://webserver.lemoyne.edu/~wright/deming.htm is an example of this type of uncertaintyThe dynamic uncertainty is about the unknowns and the unknowable
In deterministic PERT, the durations are defined as a three-point estimate, and the PERT formula is used to compute the mean and standard deviation for the program duration as well as the critical path. This is the algorithm used in Microsoft Project when the PERT toolbar is turned on and the three-point estimates are entered into the appropriate columns. It is billed as probabilistic, but in fact the three-point estimates work against a fixed probability distribution function with no way to adjust its shape, bounds, or moments. As well, there is no way to insert the correlations that naturally occur in the IMS.
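The deterministic PERT computation described here can be written out directly. This is a sketch of the classic formula, not Microsoft Project's implementation; the 8/10/18-day estimate is invented.

```python
def pert(optimistic, most_likely, pessimistic):
    """Classic PERT statistics for a three-point estimate.

    The mean is a weighted average (the mode weighted 4x), and the
    standard deviation assumes one-sixth of the range per sigma,
    i.e. a fixed Beta shape the estimator cannot adjust.
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, sd

# Example: a task estimated at 8 / 10 / 18 days
mean, sd = pert(8, 10, 18)
print(f"mean = {mean} days, standard deviation = {sd:.2f} days")
```

The fixed weights in the formula are precisely the limitation the note points out: no matter what the real distribution of the task looks like, the shape is baked in.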
The result is a cumulative distribution function and a probability distribution function. Interpreting this result is straightforward. The confidence for each date is shown in the table on the right: the probability of completing the task by that date.
When we use the term “credible” the first question is “what are the units of measure of credible?”
Thinking about schedule contingency is different in a probabilistic risk analysis context. For a simple project, a 15% contingency is assumed, but placing that contingency is the first problem. The process is: (1) run Risk+ and watch the final date; (2) compare the 80% confidence date against the deterministic date; this difference is the first cut at the needed margin; (3) assign this duration across the project in front of the critical (high-risk) milestones; (4) rerun Risk+ and add or subtract margin until the desired confidence date is achieved.
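The first cut at margin in step (2) can be computed directly from the simulation output. In this sketch the simulated end dates are synthetic (a skewed series standing in for a Risk+ export), and the deterministic date is invented; in practice both would come from the actual schedule model.

```python
# Hypothetical simulated project end dates (working days from start),
# standing in for a Monte Carlo export; the values are invented and
# deliberately skewed toward the late tail.
simulated_ends = sorted(55 + (i / 999) ** 2 * 30 for i in range(1000))
deterministic_end = 55  # single-point (critical-path) completion date

def percentile(sorted_samples, p):
    """p-th percentile of a pre-sorted sample (nearest-rank method)."""
    idx = min(len(sorted_samples) - 1, int(p * len(sorted_samples)))
    return sorted_samples[idx]

p80_end = percentile(simulated_ends, 0.80)
first_cut_margin = p80_end - deterministic_end  # place ahead of high-risk milestones
print(f"80% confidence date: day {p80_end:.1f}; first-cut margin: {first_cut_margin:.1f} days")
```

The margin is then distributed in front of the critical milestones and the model rerun, adjusting until the 80% confidence date lands where it is needed.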
The use of “expert judgment” itself needs to be calibrated. The unanswered question on this program, and many others, is “what does a good risk-tolerant IMS actually look like?” The “units of measure” for risk tolerance and the confidence in the probabilistic estimates need to be established before the estimating and modeling process can be “calibrated.”
The ranking of risks, or the ranking of anything, needs to be done in a structured manner. A geometric progression is a very useful approach, since it forces focus on the ranking.
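A geometric scale can be sketched in a few lines. The 1/3/9/27 steps, the risk names, and their scores below are all invented for illustration; the point is that tripling at each step forces real separation between rankings in a way a linear 1-to-5 scale does not.

```python
# Geometric ranking scale: each step is 3x the previous, so a "high"
# risk cannot hide next to a "moderate" one the way a 4 hides next to a 5.
SCALE = {"low": 1, "moderate": 3, "high": 9, "very high": 27}

# Hypothetical risk register entries and their assessed levels.
risks = {
    "late GFE delivery":       "high",
    "untested vendor library": "very high",
    "staff turnover":          "moderate",
}

ranked = sorted(risks, key=lambda r: SCALE[risks[r]], reverse=True)
print(ranked)  # highest-scored risk first
```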
The use of branching probabilities is important for the assessment of the “risk tolerance.”
Simulating the schedule requires an understanding of the underlying probability distributions. This can range from simple to sophisticated. The simple approach takes the standard out-of-the-box distributions from Risk+ (Low, Medium, and High confidence with a triangular distribution) and applies them to all the tasks. The sophisticated approach looks at groups of tasks (flight software, hardware development, testing, integration, operations, etc.) and develops actual probability distributions for these classes of activities. Risk+ provides only Triangular, Normal, and Beta distributions; @RISK for Project provides many others. Either way, the details of the distributions must be developed before any assessment can be done.
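The class-based approach amounts to a lookup from class of work to distribution parameters. This is a sketch, not Risk+: the class names and the (low, mode, high) multipliers are assumptions, chosen only to show that software-heavy work would carry a longer right tail than hardware.

```python
import random

# Hypothetical (low, mode, high) multipliers per class of work, applied
# to each task's baseline duration estimate; the numbers are invented.
CLASS_SPREAD = {
    "flight software": (0.95, 1.00, 1.60),  # strong right tail
    "hardware":        (0.90, 1.00, 1.30),
    "testing":         (0.85, 1.00, 1.50),
}

def sample_duration(baseline_days, work_class, rng):
    """Draw one duration for a task from its class's triangular spread."""
    lo, mode, hi = CLASS_SPREAD[work_class]
    return baseline_days * rng.triangular(lo, hi, mode)

rng = random.Random(7)
draw = sample_duration(20, "flight software", rng)
print(f"one sampled duration: {draw:.1f} days")
```

Each task in the IMS would reference its class, so refining a class's distribution refines every task of that kind at once.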
By this time you're into information overload. But this is the nature of detailed project schedule analysis; there is simply a lot of information. This is why there is usually one person responsible for modeling the schedule. It's a full-time job just to get your head around the topic.
Using Risk+ produces a probability distribution function (PDF) for the activity being “watched.” The confidence measure of the resulting date distribution needs to be in the 70% to 80% range to have any credibility. Getting the schedule “tuned” for this result requires effort and patience. The first runs will be very disappointing and will require more work to figure out what is going on.
The confidence intervals produced by the CDF can be assessed over time against targets. These targets can be Technical Performance Measures or any other style of metric connected with cost, schedule, and technical performance. As the program proceeds, the risks should be reduced, as should the tolerance for risk.
This whole topic is about risk management. But in the end risk management is about project management.
Managing risk is a project management practice. As a practice, it takes practice. And practice makes perfect, or as close to perfect as you can get in the project management business.
This is a list of books and papers that are the “tip of the iceberg” for programmatic risk management. Look through some of these to start to get a handle on the complexities of the topic. It may appear daunting but with a little work, the topic starts to reveal itself.