3. "AGAINST THE GODS"
• "The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved."
• — FROM THE INTRODUCTION TO "AGAINST THE GODS: THE REMARKABLE STORY OF RISK," BY PETER L. BERNSTEIN
4. Risk
Wei-ji:
Chinese for 'opportunity through danger'
As long as we wish for safety,
we will have difficulty pursuing what matters.
- Peter Block
Risk has a double-edged nature.
Risk can cut, risk can heal.
- James Neill
8. Value at Risk (VaR)
• David Einhorn, who founded Greenlight
Capital, a prominent hedge fund, wrote not
long ago that VaR was "relatively useless as a risk-management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs. This is like an air bag that works all the time, except when you have a car accident."
• New York Times Magazine, p. 27, 4 January 2009
9. Wall Street Journal
• "Any of these metrics that work in a typical oscillating market…are not working right now," Mr. Rueckert said.
• Among the other indicators that aren't working: 10-day, 50-day and 200-day moving averages, the put/call ratio and the idea of "capitulation."
• "Capitulation" is the concept that stocks require a purgative, high-volume plunge to mark the bottom of the bear market. Guess what: the stock market has seen little other than purgative and high-volume plunges since the failure of Lehman Brothers hit the tape on Sept. 15, and there's no sign of a bottom yet.
• In an attempt to debunk "the capitulation myth," Mr. Rueckert, of Birinyi Associates, found an item in the New York Times three days after the bottom of the 1982 bear market that promised the end would take the form of a "crushing…swift plunge." According to his analysis, bear markets usually end with a whimper rather than a bang.
• To date, the best predictor of a market turn was probably an email that
circulated among Wall Street traders on Oct. 27, the day of the interim bottom.
The analysis was based on lunar cycles, a cornerstone of astrology.
• Wall Street Journal, January 5, 2009
10. Probabilities/Risk
• "The major mistake that people make is that they are not very good at dealing with a lot of uncertainty. So, rather than a rational assessment of data and probabilities, they like stories and they make decisions based more on mental images than on a sober assessment of their portfolio and how a particular stock fits into it."
•James Scott is Managing Director of Global Public Markets for General Motors Asset Management and a member of its Management and Investment Committees. Before joining GMAM, he was President of Quantitative
Management Associates, a subsidiary of Prudential Financial. Prior to that, Mr. Scott was a Professor at Columbia Business School.
Mr. Scott holds a B.A. from Rice University and a Ph.D. in economics from Carnegie Mellon University. He serves as an Associate Editor of the Financial Analysts Journal and the Journal of Investment Management, as a
Director of the Institute for Quantitative Research in Finance and as Research Director of the Heilbrunn Center for Graham and Dodd Investing at Columbia Business School.
11. John Maynard Keynes
Risk vs Uncertainty
• By "uncertain" knowledge … I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty…. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention…. About these matters, there is no scientific basis on which to form any calculable probability whatever. We simply do not know!
12. Predictive Tools
• "Prediction is very difficult, especially about the future."
- Niels Bohr – Physicist (1885-1962)
“It is tough to make predictions,
especially about the future.”
- Yogi Berra, Baseball Savant
13. Predicting the Future
• Presidents, Movies, and Influenza
• Such markets have been created to predict the next president, Hollywood blockbusters, and
flu outbreaks. The newest prediction market, launched in February 2008, focuses on
predicting future events in the tech industry, such as whether Yahoo! will accept Microsoft's
acquisition. But Ho and his co-author, Kay-Yut Chen, a principal scientist at Hewlett-Packard
Laboratories, believe that prediction markets also are well-suited to forecasting demand for
new product innovations, particularly in the high-tech arena.
H-P tested prediction markets to forecast sales of several existing and new products and
found that six of eight prediction markets were more accurate than official forecasts.
"Prediction markets work because you get a lot of people and ask them to put their money
where their mouth is," Chen says.
Based on their analysis of several existing prediction markets, Ho and Chen provide a step-
by-step guide for firms on how to create a prediction market. They suggest recruiting at least
50 participants and providing a strong monetary incentive to promote active trading. Ho and
Chen recommend average compensation of at least $500 for each participant.
The firm then creates ten different forecasts – either according to sales or units sold – and
gives each participant a set number of shares and cash to trade, buy, and sell, according to
their beliefs about which forecast is most accurate.
After a product is launched and sales are observed, participants who own shares in the
prediction that matches actual sales receive $1 a share.
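To make the settlement mechanics above concrete, here is a minimal Python sketch; the bucket boundaries, trader names, and share counts are invented for illustration and are not from Ho and Chen's guide.

```python
# Toy settlement of a sales prediction market (illustrative only).
# Ten forecast "buckets"; holders of the bucket matching actual
# sales receive $1 per share, as described in the slide.

FORECAST_BUCKETS = [(lo, lo + 10_000) for lo in range(0, 100_000, 10_000)]

def winning_bucket(actual_sales: int) -> int:
    """Return the index of the forecast bucket containing actual sales."""
    for i, (lo, hi) in enumerate(FORECAST_BUCKETS):
        if lo <= actual_sales < hi:
            return i
    return len(FORECAST_BUCKETS) - 1  # clamp to the top bucket

def settle(holdings: dict[str, list[int]], actual_sales: int) -> dict[str, float]:
    """Pay $1 per share of the winning forecast to each participant."""
    w = winning_bucket(actual_sales)
    return {name: shares[w] * 1.0 for name, shares in holdings.items()}

# Example: two traders with shares spread across the ten buckets.
holdings = {"alice": [0]*4 + [120] + [0]*5, "bob": [0]*5 + [80] + [0]*4}
print(settle(holdings, actual_sales=47_500))  # {'alice': 120.0, 'bob': 0.0}
```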
14. Cal Berkeley
• One novel way to improve such forecasts is a
prediction market, says Teck-Hua Ho, the Haas
School's William Halford Jr. Family Professor of
Marketing. Ho recently coauthored an article titled "New
Product Blockbusters: The Magic and Science of
Prediction Markets" in the 50th anniversary issue of the
Haas School business journal, California Management
Review.
A prediction market is an exchange in which
participants vote on a possible outcome by buying and
selling shares that correspond to a particular forecast,
similar to trading in the stock market. Shares in a
forecast that participants believe is most likely trade for
a higher price than shares in a less likely scenario.
"The key idea behind a prediction market is pooling the
knowledge of many people within a company," Ho says.
"It's a very powerful tool for firms with many different
pockets of expertise or a widely dispersed or isolated
workforce."
15. Phony Forecasting
(or Nerds and Herds)
• Extremistan might not be so bad if you could predict when outliers
would occur and what their magnitude might be. But no one can
do this precisely.
• Consider hit movies: Screenwriter William Goldman is famous for describing the "secret" of Hollywood hits: "Nobody can predict one".
• Similarly, no one knew whether a book by a mother on welfare
about a boy magician with an odd birthmark would flop or make
the author a billionaire.
• Stock prices are the same way. Anyone who claims to be able to
predict the price of a stock or commodity years in the future is a
charlatan.
• Yet the magazines are filled with the latest "insider" advice about what the market will do. Ditto for technology.
• Do you know what the "next big thing" will be? No. No one does. Prognosticators generally miss the big important events – the black swans that impel history.
16. Astrology
Universum - C. Flammarion, Holzschnitt, Paris 1888, Kolorit: Heikenwaelder Hugo, Wien 1998
17. Physics Envy
• Social scientists have suffered from physics envy,
since physics has been very successful at creating
mathematical models with huge predictive value.
• In financial economics, particularly in a field called
risk management, the predictive value of the models
is no different from astrology. Indeed it resembles
astrology (without the elegance).
• They give you an ex-post ad-hoc explanation.
• Nassim Taleb
18. POLICY ANALYSIS MARKET
Strategic Insight
DARPA's Policy Analysis Market for Intelligence: Outside the Box or Off the Wall?
by Robert Looney
Sept 2003
• Although the Policy Analysis Market appears to be a dead issue, it did break
new ground in the country's search for better intelligence. The PAM idea
embodied a solid body of theory and proven empirical capability. While one can
quibble about how closely PAM markets would approximate the efficient market
hypothesis, there is no doubt trading on many future events would come close
enough to provide valuable intelligence. Thus, while it was a public relations
disaster, some version of the program will likely be introduced on a restricted
basis, perhaps along the lines suggested above, in an attempt to better tap the
country's dispersed knowledge base, human insight, and analytical expertise.
This solution is far from perfect, not allowing realization of the full potential of the
program.
• Lou Dobbs (2003) has perhaps best summed up this unfortunate episode:
"We will never know if the Policy Analysis Market would have been successful. But if there were even a small chance that it could have been a useful tool, there should be, at a minimum, further discussion of the idea. This is, after all, not a matter of just partisan politics but one of national security. And forcing the resignations of those involved with the planning is a strong deterrent to progressive thinking, of which we have no surplus."
19. POLICY ANALYSIS MARKET
• Poindexter also faced immense criticism from the media and
politicians about the Policy Analysis Market project, a prediction
market that would have rewarded participants for accurately
predicting geopolitical trends in the Middle East. This was portrayed
in the media as profiting from the assassination of heads of state
and acts of terrorism due to such events being mentioned on
illustrative sample screens showing the interface.
• The controversy over the futures market led to a Congressional
audit of the Information Awareness Office in general, which revealed
a fundamental lack of privacy protection for American citizens.
• Funding for the IAO was subsequently cut and Poindexter retired from DARPA on August 12, 2003. (Wikipedia)
20. Analysis of DOD Major Defense Acquisition Program Portfolios
(FY 2008 dollars)
Source: GAO analysis of DOD data.
                                                        FY 2000   FY 2005   FY 2007
Number of programs                                           95        75        91
Total planned commitments                                $790 B    $1.5 T    $1.6 T
Commitments outstanding                                  $380 B    $887 B    $858 B
Portfolio performance:
  Change in RDT&E costs from first estimate                 27%       33%       40%
  Change in acquisition cost from first estimate             6%       18%       26%
  Estimated total acquisition cost growth                 $42 B    $202 B    $295 B
  Programs with >=25% increase in Program
    Acquisition Unit Cost                                   37%       44%       44%
  Average schedule delay delivering initial capability   16 mos    17 mos    21 mos
21. DoD Risk Definition
"A measure of future uncertainties in achieving program goals and objectives within defined cost, schedule and performance constraints."
Each risk event has three components:
− A future root cause;
− The probability of the future root cause occurring;
and
− The consequence / impact if the root cause occurs.
22. Risk Identification
• After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are
about events that, when triggered, cause problems. Hence, risk identification can start with the source of
problems, or with the problem itself.
• Source analysis Risk sources may be internal or external to the system that is the target of risk
management. Examples of risk sources are: stakeholders of a project, employees of a company or the weather
over an airport.
• Problem analysis Risks are related to identified threats. For example: the threat of losing money, the threat of
abuse of privacy information or the threat of accidents and casualties. The threats may exist with various entities,
most important with shareholders, customers and legislative bodies such as the government.
• When either source or problem is known, the events that a source may trigger or the events that can lead to a
problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the
project; privacy information may be stolen by employees even within a closed network; lightning striking a Boeing
747 during takeoff may make all people onboard immediate casualties.
• The chosen method of identifying risks may depend on culture, industry practice and compliance. The
identification methods are formed by templates or the development of templates for identifying source, problem or
event. Common risk identification methods are:
– Objectives-based risk identification Organizations and project teams have objectives. Any event that may
endanger achieving an objective partly or completely is identified as risk.
– Scenario-based risk identification In scenario analysis different scenarios are created. The scenarios may
be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a
market or battle. Any event that triggers an undesired scenario alternative is identified as risk - see Futures
Studies for methodology used by Futurists.
– Taxonomy-based risk identification The taxonomy in taxonomy-based risk identification is a breakdown of
possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled.
The answers to the questions reveal risks. Taxonomy-based risk identification in the software industry can be found in CMU/SEI-93-TR-6.
• Common-risk Checking In several industries lists with known risks are available. Each risk in the list can be
checked for application to a particular situation. An example of known risks in the software industry is the Common
Vulnerability and Exposures list found at http://cve.mitre.org
• Risk Charting This method combines the above approaches by listing Resources at risk, Threats to those resources, Modifying Factors which may increase or reduce the risk, and Consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.
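As a minimal sketch of the risk-charting matrix just described, assuming invented resources, threats, modifying factors, and consequences (none of these names come from the slide):

```python
# Minimal risk-charting matrix: rows are resources at risk, columns are
# threats; each cell holds modifying factors and the consequence to avoid.
# All entries are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class Cell:
    consequence: str
    modifying_factors: list[str] = field(default_factory=list)

chart: dict[tuple[str, str], Cell] = {
    ("customer database", "insider theft"): Cell(
        consequence="privacy breach",
        modifying_factors=["closed network (-)", "no audit logging (+)"],
    ),
    ("project funding", "stakeholder withdrawal"): Cell(
        consequence="project cancellation",
        modifying_factors=["single sponsor (+)"],
    ),
}

# Start from a resource and enumerate its threats and consequences...
resource = "customer database"
for (res, threat), cell in chart.items():
    if res == resource:
        print(f"{res} <- {threat}: {cell.consequence} {cell.modifying_factors}")

# ...or start from a consequence and find contributing threat/resource pairs.
target = "project cancellation"
print([(r, t) for (r, t), c in chart.items() if c.consequence == target])
```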
23. Assessment
• Once risks have been identified, they must then be assessed as to their potential
severity of loss and to the probability of occurrence. These quantities can be either
simple to measure, in the case of the value of a lost building, or impossible to know
for sure in the case of the probability of an unlikely event occurring. Therefore, in
the assessment process it is critical to make the best educated guesses possible in
order to properly prioritize the implementation of the risk management plan.
• The fundamental difficulty in risk assessment is determining the rate of occurrence
since statistical information is not available on all kinds of past incidents.
• Furthermore, evaluating the severity of the consequences (impact) is often quite
difficult for immaterial assets. Asset valuation is another question that needs to be
addressed. Thus, best educated opinions and available statistics are the primary
sources of information.
• Nevertheless, risk assessment should produce such information for the
management of the organization that the primary risks are easy to understand and
that the risk management decisions may be prioritized. Thus, there have been
several theories and attempts to quantify risks. Numerous different risk formulae
exist, but perhaps the most widely accepted formula for risk quantification is:
• Rate of occurrence multiplied by the impact of the event equals risk:
frequency × impact = risk
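A small numeric illustration of the frequency × impact formula above; both risks and all numbers are invented:

```python
# Illustrative only: rank risks by frequency x impact.
risks = {
    # name: (events per year, loss per event in $)
    "laptop theft":      (4.0,    3_000),
    "data-center flood": (0.05, 500_000),
}

exposure = {name: freq * impact for name, (freq, impact) in risks.items()}
for name, exp in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${exp:,.0f} per year")
# data-center flood: $25,000 per year
# laptop theft: $12,000 per year
```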
24. Assessment
• Later research has shown that the financial benefits of risk
management are less dependent on the formula used but
are more dependent on the frequency and how risk
assessment is performed.
• In business it is imperative to be able to present the
findings of risk assessments in financial terms. Robert
Courtney Jr. (IBM, 1970) proposed a formula for presenting
risks in financial terms.
• The Courtney formula was accepted as the official risk analysis method for US government agencies. The
formula proposes calculation of ALE (annualized loss
expectancy) and compares the expected loss value to the
security control implementation costs (cost-benefit
analysis).
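As a hedged sketch of the ALE-based cost-benefit comparison described above (the annualized rates, loss values, and control cost are invented):

```python
# ALE (annualized loss expectancy) vs. cost of a security control.
# Numbers are invented for illustration.

def ale(annual_rate: float, single_loss: float) -> float:
    return annual_rate * single_loss

ale_before = ale(annual_rate=0.5, single_loss=200_000)  # $100,000 / yr
ale_after  = ale(annual_rate=0.1, single_loss=200_000)  # $20,000 / yr
control_cost_per_year = 30_000

net_benefit = (ale_before - ale_after) - control_cost_per_year
print(f"Net benefit of control: ${net_benefit:,.0f}/yr")  # $50,000/yr -> worth doing
```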
25. Potential risk treatments
• Once risks have been identified and assessed, all techniques to manage the risk
fall into one or more of these four major categories:
• Avoidance (elimination) / AVOID
• Reduction (mitigation) / CONTROL
• Retention (acceptance and budgeting) / ACCEPTANCE
• Transfer (insurance or hedging) / TRANSFER
• Ideal use of these strategies may not be possible. Some of them may involve
trade-offs that are not acceptable to the organization or person making the risk
management decisions.
26. Risk avoidance
• Includes not performing an activity that could carry risk.
– Examples:
• not buying a property or business in order to not take on the liability that
comes with it.
• not flying in order to avoid the risk that the airplane might be hijacked.
27. Risk reduction
– Involves methods that reduce the severity of the loss or the likelihood of the
loss from occurring. Examples include sprinklers designed to put out a fire
to reduce the risk of loss by fire. This method may cause a greater loss by
water damage and therefore may not be suitable. Halon fire suppression
systems may mitigate that risk but the cost may be prohibitive as a strategy.
– Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from
the fact that they only delivered software in the final phase of development;
any problems encountered in earlier phases meant costly rework and often
jeopardized the whole project. By developing in iterations, software projects
can limit effort wasted to a single iteration.
– Outsourcing could be an example of risk reduction if the outsourcer can
demonstrate higher capability at managing or reducing risks. In this case
companies outsource only some of their departmental needs. For example,
a company may outsource only its software development, the
manufacturing of hard goods, or customer support needs to another
company, while handling the business management itself. This way, the
company can concentrate more on business development without having to
worry as much about the manufacturing process, managing the
development team, or finding a physical location for a call center.
28. Risk retention
• Involves accepting the loss when it occurs. True self-insurance falls in this category.
• Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained.
– All risks that are not avoided or
transferred are retained by default.
– This includes risks that are so large
or catastrophic that they either
cannot be insured against or the
premiums would be infeasible
• War is an example since most property
and risks are not insured against war, so
the loss attributed by war is retained by the
insured.
• Also, any amount of potential loss (risk) over the amount insured is retained risk.
This may also be acceptable if the chance
of a very large loss is small or if the cost to
insure for greater coverage amounts is so
great it would hinder the goals of the
organization too much.
29. Risk transfer
• Means causing another party to accept the risk, typically by contract or
by hedging.
• Insurance is one type of risk transfer that uses contracts.
• Other times it may involve contract language that transfers a risk to another party
without the payment of an insurance premium.
– Liability among construction or other contractors is very often transferred this
way.
• On the other hand, taking offsetting positions in derivatives is
typically how firms use hedging to financially manage risk.
30. Sunk Cost
• In economics and business decision-making, sunk costs are costs that cannot be
recovered once they have been incurred. Sunk costs are sometimes contrasted with
variable costs, which are the costs that will change due to the proposed course of action,
and which are costs that will be incurred if an action is taken. In microeconomic theory, only
variable costs are relevant to a decision. Economics proposes that a rational actor does not
let sunk costs influence one's decisions, because doing so would not be assessing a
decision exclusively on its own merits. The decision-maker may make rational decisions according to their own incentives; these incentives may dictate different decisions than would be dictated by efficiency or profitability, and this is considered an incentive problem distinct from a sunk cost problem.
• For example, when one pre-orders a non-refundable and non-transferable movie ticket, the
price of the ticket becomes a sunk cost. Even if the ticket-buyer decides that he would rather
not go to the movie, there is no way to get back the money he originally paid. Therefore, the
sunk cost of the ticket should have no bearing on the decision of whether or not to actually
go to the movie. In other words, it is a fallacy to conclude that he should go to the movie so
as to avoid "wasting" the cost of the ticket.
• While sunk costs should not affect the rational decision maker's best choice, the sinking of a
cost can. Until you commit your resources, the cost remains an avoidable fixed cost and should be included in any decision-making process. If the cost is large
enough, it could potentially alter your next best choice, or opportunity cost. For example, if
you are considering pre-ordering movie tickets, but haven't actually purchased them yet, the
cost to you remains avoidable. If the price of the tickets rises to an amount that requires you
to pay more than the value you place on them, the cost should be figured into your decision-
making, and you should reallocate your resources to your next best choice.
31. Opportunity Lost?
– Avoidance may seem the answer to all risks, but avoiding risks also means
losing out on the potential gain that accepting (retaining) the risk may have
allowed.
– Not entering a business to avoid the risk of loss also avoids the possibility of
earning profits.
32. Portfolio Investment Management
• Large-scale Defense infrastructure modernization programs such as Global Combat Support have complex inter-dependencies and long time horizons that render fully informed investment decisions difficult to achieve before substantial, and unrecoverable, resources are committed (sunk cost).
– However complex these decisions, they can nonetheless be decomposed along three basic dimensions:
– Uncertainty
– Timing
– Irreversibility
• These primary parameters define the value of investment options available to a firm,
regardless of whether it is in the public or private sector.
Richard Suter, "Managing Uncertainty and Risk in Public Sector Investments," Information Technology Systems, Inc. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School.
33. Level of Activity over Life Cycle
[Figure: level of activity over the project life cycle – Initiate, Plan, Execute, and Close phases, with Monitoring and Control spanning them, plotted from start to finish over time.]
Average duty cycle for DOD systems is ten years.
34. System of Systems Engineering SoSe
• System of Systems Engineering (SoSE) methodology is heavily used in Department of
Defense applications, but is increasingly being applied to non-defense related problems
such as architectural design of problems in air and auto transportation, healthcare, global
communication networks, search and rescue, space exploration and many other System of
Systems application domains. SoSE is more than systems engineering of monolithic,
complex systems because design for System-of-Systems problems is performed under
some level of uncertainty in the requirements and the constituent systems, and it involves
considerations in multiple levels and domains (as per [1] and [2]). Whereas systems
engineering focuses on building the system right, SoSE focuses on choosing the right
system(s) and their interactions to satisfy the requirements.
• System-of-Systems Engineering and Systems Engineering are related but different fields of
study. Whereas systems engineering addresses the development and operations of
monolithic products, SoSE addresses the development and operations of evolving
programs. In other words, traditional systems engineering seeks to optimize an individual
system (i.e., the product), while SoSE seeks to optimize a network of various interacting
legacy and new systems brought together to satisfy multiple objectives of the program.
SoSE should enable the decision-makers to understand the implications of various choices
on technical performance, costs, extensibility and flexibility over time; thus, effective SoSE
methodology should prepare the decision-makers for informed architecting of System-of-
Systems problems.
• Due to varied methodology and domains of applications in existing literature, there does not
exist a single unified consensus for processes involved in System-of-Systems Engineering.
One of the proposed SoSE frameworks, by Dr. Daniel A. DeLaurentis, recommends a three-
phase method where a SoS problem is defined (understood), abstracted, modeled and
analyzed for behavioral patterns.
38. Complex system of systems
• Difficulty with systems of systems?
– The technical complexity
– The programmatic complexity of integrating software-intensive systems
– The absence of accurate cost information at the onset of major systems/software programs
39. Portfolio Investment Management-Uncertainty
Unfortunately, algorithms capable of modeling the effects of these variables are relatively few, especially for the uncertainty and irreversibility of investment decisions (Dixit & Pindyck, 1994, p. 211).
For large-scale Information Technology (IT) modernization programs, there are at least three sources of uncertainty—and, thus, risk:
– The technical complexity
– The programmatic complexity of integrating software-intensive systems
– The absence of accurate cost information at the onset of major systems/software programs
• Software-intensive systems are particularly sensitive to the systematic underestimation of risk, primarily because the level of complexity is hard to manage, let alone comprehend.
– Investment in software-intensive systems tends to be irreversible because it is spent on the labor required to develop the intellectual capital embedded in software.
– The outcome of software development is almost invariably unique, a one-of-a-kind artifact—despite the numerous efforts to develop reusable software.
– Unlike physical assets, the salvage value of software is zero because no benefit is realized until the system is deployed; and that labor, once invested, is unrecoverable.
– One result is an (implicit) incentive to continue projects that have little chance of success, despite significant cost overruns and schedule delays.
Richard Suter, "Managing Uncertainty and Risk in Public Sector Investments," Information Technology Systems, Inc. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School.
41. Uncertainty
For large-scale Information Technology (IT) modernization programs, there are at least three sources of uncertainty—and, thus, risk:
– The technical complexity
– The programmatic complexity of integrating software-intensive systems
– The absence of accurate cost information at the onset of major systems/software programs
44. Better Methods of Analyzing Cost Uncertainty Can Improve Acquisition Decision making
• Cost estimation is a process that attempts to forecast the future expenditures for some capital asset,
hardware, service, or capability. Despite being a highly quantitative field, cost estimation and the values it
predicts are uncertain. An estimate is a possible or likely outcome, but not necessarily the outcome that will
actually transpire. This uncertainty arises because estimators do not have perfect information about future
events and the validity of assumptions that underpin an estimate.
• Uncertainty may result from:
– an absence of critical technical information,
– the presence of new technologies or approaches that do not have historical analogues for comparison,
– the evolution of requirements, or
– changes in economic conditions.
• The Office of the Secretary of Defense and the military departments have historically underestimated and underfunded the cost of buying new weapon systems (e.g., by more than 40 percent at Milestone II).
• Much of this cost growth is thought to be the result of unforeseen (but knowable)
circumstances when the estimate was developed. In the interest of generating more
informative cost estimates, the Air Force Cost Analysis Agency and the Air Force cost
analysis community want to formulate and implement a cost uncertainty analysis policy.
• To help support this effort, RAND Project AIR FORCE (PAF) studied a variety of cost uncertainty
assessment methodologies, examined how these methods and policies relate to a total portfolio of
programs, and explored how risk information can be communicated to senior decision makers in a clear and
understandable way.
45. Project AIR FORCE (USAF RAND project) recommends that any cost uncertainty analysis policy reflect the following:
• A single uncertainty analysis method should not be stipulated for all circumstances and programs. It is not practical to prefer one specific cost uncertainty analysis methodology in all cases. Rather, the policy should offer the flexibility to use different assessment methods. These appropriate methods fall into three classes: historical, sensitivity, and probabilistic. Moreover, a combination of methods might be desirable and more effective in communicating risks to decision makers.
• A uniform communications format should be used. PAF suggests a basic three-point format consisting of low, base, and high values as a minimum basis for displaying risk analysis. The advantages of such a format are that it is independent of the method employed and that it can be easily communicated to decision makers.
• A record of cost estimate accuracy should be tracked and updated periodically. Comparing estimates with final costs will enable organizations to identify areas where they may have difficulty estimating and sources of uncertainty that were not adequately examined.
• Risk reserves should be an accepted acquisition and funding practice. Establishing reserves to cover unforeseen costs will involve a cultural change within the Department of Defense and Congress. The current approach of burying a reserve within the elements of the estimate makes it difficult to do a retrospective analysis of whether the appropriate level of reserve was set, and to move reserves, when needed, between elements of a large program.
• Effective cost uncertainty analysis will help decision makers understand the nature of potential risk and funding exposure and will aid in the development of more realistic cost estimates by critically evaluating program assumptions and identifying technical issues. (RAND)
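A minimal sketch of the three-point (low/base/high) display format suggested above; the use of P10/P50/P90 percentiles and the triangular cost distribution are illustrative assumptions, not RAND's prescription.

```python
# Summarize simulated program costs into a low/base/high three-point
# format. The 10th/50th/90th percentile convention is an assumption
# for illustration; use whatever the policy specifies.
import random
import statistics

def three_point(samples: list[float]) -> dict[str, float]:
    qs = statistics.quantiles(samples, n=10)  # nine decile cut points
    return {"low (P10)": qs[0],
            "base (P50)": statistics.median(samples),
            "high (P90)": qs[8]}

random.seed(1)
costs = [random.triangular(80, 200, 110) for _ in range(10_000)]  # $M, invented
print({k: round(v, 1) for k, v in three_point(costs).items()})
```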
47. COST ESTIMATING CHALLENGES
Developing a good cost estimate requires stable program requirements, access to
detailed documentation and historical data, well-trained and experienced cost analysts, a
risk and uncertainty analysis, the identification of a range of confidence levels, and
adequate contingency and management reserves.
Cost estimating is nonetheless difficult in the best of circumstances. It requires both
science and judgment. And, since answers are seldom—if ever—precise, the goal is to find a "reasonable" answer. However, the cost estimator typically faces many challenges in doing so. These challenges often lead to bad estimates, characterized by poorly defined assumptions.
OMB first issued the Capital Programming Guide as a Supplement to the 1997 version of
Circular A-11,
• Part 3, still available on OMB‘s Web site at
http://www.whitehouse.gov/omb/circulars/a11/cpgtoc.html.
• Our reference here is to the 2006 version, as we noted in the preface: Supplement to
Circular A-11, Part 7,
• available at http://www.whitehouse.gov/omb/circulars/index.html.
48. John Wilder Tukey
• "An appropriate answer to the right
problem is worth a good deal more than
an exact answer to an approximate
problem."
50. The absence of accurate cost information at the onset of major systems/
software Programs
Measures of uncertainty for cost/schedule estimates and the rate at which that uncertainty declines are a key concern—because they govern whether and to what extent confidence can be placed in cost and schedule estimates. The key to overcoming initial estimate uncertainty is the capability to harness and to
apply information as it becomes available, thus, enabling a Firm to capture
the time value of that information.
Indeed, where IT infrastructure modernization projects are supported by a strong quality-assurance, systems-engineering culture (e.g., one that has institutionalized best-practice regimes such as CMMI, Six Sigma, or Agile methods), they are likely to quickly reduce estimate errors incurred at project start-up. Firms without that culture tend to have limited information efficiency. (Drawing an analogy to thermodynamic systems, such firms constitute highly dissipative systems in that they exhibit a high degree of entropy, which takes the form of information disorganization.)
Unfortunately, traditional methods of discounting investment risk such as Net Present Value (NPV) do not account for
irreversibility and uncertainty. In part, this is due to the fact that NPV computes the value of a portfolio of investments as
the maximized mean of discounted cash flows on the assumption that the risk to underlying investment options can
be replicated by assets in a financial market.
NPV also implicitly assumes that the value of the underlying asset is known and
accurate at the time the investment decision is made.
These assumptions seldom apply for large-scale infrastructure modernization programs, in either the public or the private sector. In addition, NPV investment is undertaken when the
value of a unit of capital is at least as large as its purchase and installation costs. But, this
can be error prone since opportunity costs are highly sensitive to the uncertainty
surrounding the future value of the project due to factors such as the riskiness of future cash
flows. These considerations also extend to econometric models, which exclude
irreversibility, the incorporation of which transforms investment models into non-linear
equations (Dixit & Pindyck, 1994, p. 421). Nonetheless, irreversibility constitutes both a
negative opportunity cost and a lost-option value that must be included in the cost of
investment.
Richard Suter, "Managing Uncertainty and Risk in Public Sector Investments," Information Technology Systems, Inc. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School.
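To ground the critique, a minimal discounted-cash-flow NPV sketch (cash flows and discount rate invented); note that nothing in it prices irreversibility or the option to wait, which is exactly the text's objection.

```python
# Plain NPV: discount known cash flows at a fixed rate. The calculation
# assumes the cash flows and asset value are known up front, which is the
# assumption the slide criticizes for large modernization programs.

def npv(rate: float, cash_flows: list[float]) -> float:
    """cash_flows[0] is the (negative) up-front investment at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1_000_000, 250_000, 300_000, 350_000, 400_000]  # invented
print(f"NPV @ 10%: ${npv(0.10, flows):,.0f}")
# A positive NPV says "invest now", but it ignores the value of waiting
# for uncertainty to resolve (the real-options point in the text).
```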
52. Risk Assessment on Costs: A Cost Probability Distribution
[Figure: combined cost modeling and technical risk produce a cost-estimate probability distribution. A cost estimating relationship, Cost = a + b·X^c, is fitted to historical data points with standard percent error bounds (cost modeling uncertainty); technical risk enters through the input variable, the cost driver (e.g., weight).]
Jeff Kline, Naval Postgraduate School
53. COST ESTIMATING METHODOLOGY – TIME OF USE
[Figure: estimating methods mapped to the acquisition timeline. Gross estimates (expert opinion, analogy) dominate Pre-Systems Acquisition (Concept Refinement and Technology Development, Milestones A and B); parametric estimates apply around Program Initiation; detailed estimates (engineering, then actuals) apply through System Development & Demonstration, Production & Deployment, and Operations & Support (Milestone C, IOC, FOC). Decision points shown: Concept Decision, Design Readiness Review, LRIP/IOT&E, and FRP Decision Review, spanning Systems Acquisition and Sustainment.]
54. SOFTWARE DEVELOPMENT CONE OF UNCERTAINTY
All software projects are subject to inherent errors in early estimates. The Cone of Uncertainty represents the best-case reduction in estimation error and improvement in predictability over the course of a project. Skillful project leaders treat the cone as a fact of life and plan accordingly.
[Figure: remaining variability in project scope narrows from 4x/0.25x at Initial Concept, through 2x/0.5x at Approved Product Definition, 1.5x/0.67x at Marketing Requirements Complete, and 1.25x/0.8x at Detailed Requirements Complete, to 1.0x at Detailed Technical Design Complete and Project Complete.]
Project predictability and control are attainable only through active, skillful, and continuous efforts that force the cone to narrow. The cone represents the best case; results can easily be worse. Estimates are possible anywhere in the cone, but organizational commitments tied to project completion should not be made until roughly the point where detailed requirements are complete – and only if work has been done to narrow the cone.
Source: Construx, Bellevue, WA
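A small sketch applying the cone's multipliers from the figure to a point estimate; the 100-staff-month estimate is invented, and the milestone-to-multiplier mapping follows the figure above.

```python
# Apply Cone of Uncertainty multipliers to a point estimate to get the
# best-case estimate range at each milestone (multipliers from the figure).

CONE = {  # milestone: (low multiplier, high multiplier)
    "initial concept":             (0.25, 4.0),
    "approved product definition": (0.50, 2.0),
    "requirements complete":       (0.67, 1.5),
    "detailed design complete":    (0.80, 1.25),
}

point_estimate = 100  # staff-months, invented example
for milestone, (lo, hi) in CONE.items():
    print(f"{milestone}: {point_estimate*lo:.0f}-{point_estimate*hi:.0f} staff-months")
```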
55. Software Cost Estimating
• All commercial models (COCOMO II, SEER-SEM, and PRICE-S) are productivity-based models, basically built on the same equation: Cost = Labor Rate ($/hr) × Software Size / Productivity.
• Maximize use of actual data for labor rate, productivity, and size.
• Good source for productivity rates: http://www.stsc.hill.af.mil/CrossTalk/2002/03/reifer.html
• COCOMO II does not capture requirements analysis and government V&V.
• As effort increases, schedule performance and productivity decrease, while cost and the likelihood of rework increase.
• Schedule rule of thumb: Time ≈ 3.67 × Effort^(1/3)
• CAUTIONS:
– Code reuse lowers cost; modification increases cost.
• Per OSD/CAIG: modified code, with more than 25% of the lines changed or added, is considered new code (based on a NASA study).
• With SEER-SEM, the cost of 99% modified code < the cost of new code.
– Analogies: don't treat non-similar languages as equivalent.
Example in PLCCE: SLOC = C + C++ + IDL + JAVA + XML
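A hedged sketch of the productivity-based equation and the schedule rule of thumb above; the labor rate, size, productivity, and hours-per-staff-month conversion are invented assumptions, and the cube-root exponent is the common reading of the rule of thumb.

```python
# Productivity-based software cost model (same shape as COCOMO II /
# SEER-SEM / PRICE-S per the slide). Inputs are invented examples.

def software_cost(labor_rate_per_hr: float, size_sloc: float,
                  productivity_sloc_per_hr: float) -> float:
    return labor_rate_per_hr * size_sloc / productivity_sloc_per_hr

def schedule_months(effort_staff_months: float) -> float:
    # Rule of thumb from the slide: Time ~ 3.67 * Effort^(1/3)
    return 3.67 * effort_staff_months ** (1 / 3)

size = 150_000      # SLOC
productivity = 2.5  # SLOC per labor hour
rate = 120.0        # $ per labor hour

cost = software_cost(rate, size, productivity)
effort_sm = size / productivity / 152  # assume ~152 labor hours per staff-month
print(f"Cost ${cost:,.0f}, effort {effort_sm:.0f} SM, "
      f"schedule {schedule_months(effort_sm):.1f} months")
```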
56. Cost Risk Analysis
The process of quantifying uncertainty in a cost estimate.
• By definition, a point estimate is precisely wrong.
– Assessment of risk is not evident in a point estimate.
– The influence of variables may not be understood by the decision maker.
• Cost risk predicts cost growth.
• Cost risk = cost estimating risk + schedule risk + technical risk + change in requirements/threat
• Risk analysis adjusts the cost estimate to provide decision makers an understanding of funding risks.
[Figure: a probability density function (PDF) and the corresponding cumulative distribution function (CDF) of cost.]
57. Simplified Cost Risk Simulation Model
Methodology (if no actual data are available, perform the following steps):
• Basis of estimate – influenced by schedule, producibility, reliability, complexity, and technology status.
• Assign a risk rating to each element: none, low, medium, high, etc. (informed by the availability of actual data or expert opinion).
• Assess risk categories for data inputs by WBS.
• Assign risk limits to a statistical distribution (e.g., +X; -X to +Y, etc.).
• Select a model distribution and run the statistical model.
[Figure: input PDFs feed the simulation, which produces a total-cost PDF and CDF.]
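A minimal Monte Carlo sketch of the simulation flow above, assuming triangular distributions whose low/mode/high limits stand in for the assigned risk ranges; the WBS elements and numbers are invented.

```python
# Simplified cost risk simulation: assign a distribution to each WBS
# element, sample, sum, and read percentiles off the total-cost CDF.
# Element names and triangular (low, mode, high) limits are invented.
import random

WBS = {
    "airframe":    (90, 100, 140),  # $M: low, most likely, high
    "software":    (40,  60, 120),  # wide right tail = high risk
    "integration": (20,  25,  45),
}

def one_trial() -> float:
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in WBS.values())

random.seed(7)
totals = sorted(one_trial() for _ in range(20_000))
for p in (0.10, 0.50, 0.80):
    print(f"P{int(p*100)}: ${totals[int(p * len(totals))]:,.1f}M")
# The P50/P80 spread is the funding-risk message for the decision maker.
```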
59. Example (Cont'd) – Pre & Post Software Contract Data
[Figure: dollars in millions (left axis) and SLOC (right axis) plotted over dates from 2005 to 2007 (PDR through CDR). Series shown: the 01-6 ICE (4/05) new-SLOC estimate; the offeror's SLOC estimate (6/05) with 38% reuse code; the PM's SLOC estimate (4/05) with 76% reuse code; the contractor (Ktr) EAC; and the Software Metrics Report (SLOC).]
60. Example (Cont'd) – Schedule Risk
Software Development Schedule                  Months
01-6 ICE (4/05), COCOMO equation                   25
01-6 ICE (4/05), NCCA equation                     30
PM estimate (4/05)                                 18
Contract - initial (6/05)                          18
Contract - current (3/07) (82% complete)           31
01-6 ICE - current (3/07)                          35
61. The Refining of a Life Cycle Cost Estimate
[Figure: cost estimating uncertainty around the LCCE narrows across Milestones A, B, and C as the program/system evolves – from concept trades and the AoA, through contractor (Ktr) selection, the CARD, and design reviews, to test & evaluation/design mods, logistics, and production.]
63. DIFFERENTIATING COST ANALYSIS AND COST
ESTIMATING
Cost analysis, used to develop cost estimates for such things as hardware systems,
automated information systems, civil projects, manpower, and training, can be defined as
1. the effort to develop, analyze, and document cost estimates with analytical
approaches and techniques;
2. the process of analyzing and estimating the incremental and total resources
required to support past, present, and future systems—an integral step in selecting alternatives;
and
3. a tool for evaluating resource requirements at key milestones and decision points in the
acquisition process.
Cost estimating involves collecting and analyzing historical data and applying quantitative models, techniques, tools, and databases to predict a program's future cost.
More simply, cost estimating combines science and art to predict the future cost of something
based on known historical data that are adjusted to reflect new materials, technology,
software languages, and development teams.
Because cost estimating is complex, sophisticated cost analysts should combine concepts from
such disciplines as accounting, budgeting, computer science, economics, engineering,
mathematics, and statistics and should even employ concepts from marketing and public
affairs. And because cost estimating requires such a wide range of disciplines, it is important
that the cost analyst either be familiar with these disciplines or have access to an expert in
these fields.
65. Jackson Lears analyzed why the dominant American "culture of control" denies the importance of luck
• Drawing on a vast body of
research, Lears ranges
through the entire sweep of
American history as he
uncovers the hidden
influence of risk taking,
conjuring, soothsaying, and
sheer dumb luck on our
culture, politics, social lives,
and economy.
T.J. Jackson Lears, "Something for Nothing" (2003)
66. Illusion of Control
• In a series of experiments, Ellen Langer (1975) demonstrated
first the prevalence of the illusion of control and second, that
people were more likely to behave as if they could exercise
control in a chance situation where ―skill cues‖ were present. By
skill cues, Langer meant properties of the situation more
normally associated with the exercise of skill, in particular the
exercise of choice, competition, familiarity with the stimulus and
involvement in decisions.
• One simple form of this fallacy is found in casinos: when rolling
dice in craps, it has been shown that people tend to throw
harder for high numbers and softer for low numbers.
• Under some circumstances, experimental subjects have been
induced to believe that they could affect the outcome of a purely
random coin toss. Subjects who guessed a series of coin tosses
more successfully began to believe that they were actually
better guessers, and believed that their guessing performance
would be less accurate if they were distracted.
67. Critique of Taleb
• Taleb's point is rather that most specific forecasting is pointless,
as large, rare and unexpected events (which by definition could
not have been included in the forecast) will render the forecast
useless.
• However, as Black Swans can be both negative and positive,
we can try to structure our lives in order to minimize the effect of
the negative Black Swans and maximize the impact of the
positive ones.
I think this is excellent advice on how to live one's life and
seems to be equivalent, for example, to the focus on downside
protection (rather than upside potential) that has led to the
success of the 'value' approach to investing.
68. Risk = Variance
• Risk: Well, it certainly doesn't mean standard deviation.
People mainly think of risk in terms of downside risk. They
are concerned about the maximum they can lose. So that's
what risk means.
• In contrast, the professional view defines risk in terms of
variance, and doesn't discriminate gains from losses. There
is a great deal of miscommunication and misunderstanding
because of these very different views of risk. Beta does not
do it for most people, who are more concerned with the
possibility of loss.
• Daniel Kahneman
Daniel Kahneman is the Eugene Higgins Professor of Psychology at Princeton University and Professor of Public Affairs at the Woodrow Wilson School. Kahneman was born in Israel and educated at the Hebrew University in Jerusalem before taking his PhD at the University of California. He was the joint Nobel Prize winner for Economics in 2002 for his work on applying cognitive behavioural theories to decision making in economics.
69. Cicero
Born: January 3, 106 B.C.E., Arpinum, Latium
Died: December 7, 43 B.C.E., Formiae, Latium
Roman orator and writer
Marcus Tullius Cicero
"Probability is the very guide of life."
• p. 31, The Drunkard's Walk
70. Probability
• "In no other branch of mathematics is it so easy to blunder as in probability theory."
– Martin Gardner, "Mathematical Games," Scientific American, October 1959, pp. 180-182
71. The Monty Hall problem
• Probability theory: the Monty Hall problem, birthday pairings, counting principles, conditional probability and independence, Bayes' rule, random variables and their distributions, the Gambler's Ruin problem, random walks, and Markov chains.
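Since the Monty Hall problem is a classic place to blunder, here is a short simulation sketch confirming the standard result that switching wins about two-thirds of the time:

```python
# Monte Carlo check of the Monty Hall problem: the host always opens a
# non-chosen door hiding a goat; switching wins ~2/3 of the time.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
print("stay  :", sum(play(False) for _ in range(n)) / n)  # ~0.333
print("switch:", sum(play(True)  for _ in range(n)) / n)  # ~0.667
```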
74. Probability Theory
• Probability theory is the branch of mathematics concerned with analysis
of random phenomena. The central objects of probability theory are
random variables, stochastic processes, and events: mathematical
abstractions of non-deterministic events or measured quantities that may
either be single occurrences or evolve over time in an apparently random
fashion.
• Although an individual coin toss or the roll of a die is a random event,
if repeated many times the sequence of random events will exhibit certain
statistical patterns, which can be studied and predicted. Two
representative mathematical results describing such patterns are the law
of large numbers and the central limit theorem.
• As a mathematical foundation for statistics, probability theory is essential
to many human activities that involve quantitative analysis of large sets of
data. Methods of probability theory also apply to description of complex
systems given only partial knowledge of their state, as in statistical
mechanics. A great discovery of twentieth century physics was the
probabilistic nature of physical phenomena at atomic scales, described in
quantum mechanics.
76. Random Variables
[Figure: a probability density function (PDF).]
• In mathematics, random variables are used in the study of chance and probability. They were developed to assist
in the analysis of games of chance, stochastic events, and
the results of scientific experiments by capturing only the
mathematical properties necessary to answer probabilistic
questions. Further formalizations have firmly grounded the
entity in the theoretical domains of mathematics by
making use of measure theory.
• Fortunately, the language and structure of random
variables can be grasped at various levels of
mathematical fluency. Set theory and calculus are
fundamental.
• Broadly, there are two types of random variables —
discrete and continuous. Discrete random variables take
on one of a set of specific values, each with some
probability greater than zero. Continuous random
variables can be realized with any of a range of values
(e.g., a real number between zero and one), and so there
are several ranges (e.g. 0 to one half) that have a
probability greater than zero of occurring.
• A random variable has either an associated probability
distribution (discrete random variable) or probability
density function (continuous random variable).
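A small sketch of the discrete/continuous distinction above, using a fair die (positive probability on specific values) versus a uniform draw on [0, 1] (positive probability only on ranges); the setup is illustrative.

```python
# Discrete vs. continuous random variables, per the text above.
import random

random.seed(42)

# Discrete: a fair die puts probability 1/6 on each specific value.
die = random.randint(1, 6)
p_die_equals_3 = 1 / 6  # a specific value has positive probability

# Continuous: uniform on [0, 1]; any exact value has probability 0,
# but ranges (e.g., [0, 0.5]) have positive probability.
u = random.random()
p_u_equals_half = 0.0
p_u_in_lower_half = 0.5

print(die, p_die_equals_3, u, p_u_in_lower_half)
```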
77. Probability Density Function
NEED A BETTER DEFINITION
• It shows the probability density function (pdf) of a non-linear communications channel – i.e., the embedded output of a 2D system. It has been estimated using a characteristic-function estimator (the characteristic function is the Fourier transform of the pdf, so by estimating the characteristic function you can get an estimate of the pdf by an inverse FFT).
78. Game theory
Is a branch of applied mathematics that is used in the social sciences (most notably
economics), biology, engineering, political science, computer science (mainly for
artificial intelligence), and philosophy. Game theory attempts to mathematically
capture behavior in strategic situations, in which an individual's success in making
choices depends on the choices of others. While initially developed to analyze
competitions in which one individual does better at another's expense (zero sum
games), it has been expanded to treat a wide class of interactions, which are
classified according to several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational side of social science, where 'social' is interpreted broadly, to include human as well as non-human players (computers, animals, plants)" (Aumann 1987).
• Traditional applications of game theory attempt to find equilibria in these games—
sets of strategies in which individuals are unlikely to change
their behavior. Many equilibrium concepts have been developed (most famously
the Nash equilibrium) in an attempt to capture this idea. These equilibrium concepts
are motivated differently depending on the field of application, although they often
overlap or coincide. This methodology is not without criticism, and debates continue
over the appropriateness of particular equilibrium concepts, the appropriateness of
equilibria altogether, and the usefulness of mathematical models more generally.
• Although some developments occurred before it, the field of game theory came into
being with the 1944 book Theory of Games and Economic Behavior by John von
Neumann and Oskar Morgenstern. This theory was developed extensively in the
1950s by many scholars. Game theory was later explicitly applied to biology in the
1970s, although similar developments go back at least as far as the 1930s. Game
theory has been widely recognized as an important tool in many fields. Eight game
theorists have won Nobel prizes in economics, and John Maynard Smith was
awarded the Crafoord Prize for his application of game theory to biology.
79. Itō's lemma
• In mathematics, Itō's lemma is used in Itō stochastic calculus to find the
differential of a function of a particular type of stochastic process. It is the
stochastic calculus counterpart of the chain rule in ordinary calculus and is
best memorized using the Taylor series expansion and retaining the second
order term related to the stochastic component change. The lemma is widely
employed in mathematical finance.
• Itō's lemma is the version of the chain rule or change of variables formula
which applies to the Itō integral. It is one of the most powerful and
frequently used theorems in stochastic calculus. For a continuous d-dimensional semimartingale X = (X1,…,Xd) and a twice continuously differentiable function f from Rd to R, it states that f(X) is a semimartingale and satisfies the formula below.
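For reference, the standard statement of the lemma for this setting can be written as:

```latex
% Itō's lemma for a twice continuously differentiable f : R^d -> R and a
% continuous d-dimensional semimartingale X = (X^1, ..., X^d):
\[
f(X_t) = f(X_0)
  + \sum_{i=1}^{d} \int_0^t \frac{\partial f}{\partial x_i}(X_s)\, dX^i_s
  + \frac{1}{2} \sum_{i,j=1}^{d} \int_0^t
      \frac{\partial^2 f}{\partial x_i \partial x_j}(X_s)\, d[X^i, X^j]_s
\]
```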
• This differs from the chain rule used in standard calculus due to the term
involving the quadratic covariation [Xi,Xj ]. The formula can be generalized to
non-continuous semimartingales by adding a pure jump term to ensure that
the jumps of the left and right hand sides agree (see Itō's lemma).
80. EVENT
• In probability theory, an event is a set of outcomes (a subset of the sample space) to which a
probability is assigned. Typically, when the sample space is finite, any subset of the sample space
is an event (i.e. all elements of the power set of the sample space are defined as events).
However, this approach does not work well in cases where the sample space is infinite, most
notably when the outcome is a real number. So, when defining a probability space it is possible,
and often necessary, to exclude certain subsets of the sample space from being events (see §2,
below).
• A simple example
• If we assemble a deck of 52 playing cards and no jokers, and draw a single card from the deck,
then the sample space is a 52-element set, as each individual card is a possible outcome. An
event, however, is any subset of the sample space, including any single-element set (an
elementary event, of which there are 52, representing the 52 possible cards drawn from the deck),
the empty set (which is defined to have probability zero) and the entire set of 52 cards, the sample
space itself (which is defined to have probability one). Other events are proper subsets of the
sample space that contain multiple elements. So, for example, potential events include:
[Figure: a Venn diagram of an event. B is the sample space and A is an event; by the ratio of their areas, the probability of A is approximately 0.4.]
• "Red and black at the same time without being a joker" (0 elements),
• "The 5 of Hearts" (1 element),
• "A King" (4 elements),
• "A Face card" (12 elements),
• "A Spade" (13 elements),
• "A Face card or a red suit" (32 elements),
• "A card" (52 elements).
• Since all events are sets, they are usually written as sets (e.g. {1, 2, 3}), and represented
graphically using Venn diagrams. Venn diagrams are particularly useful for representing events
because the probability of the event can be identified with the ratio of the area of the event and the
area of the sample space. (Indeed, each of the axioms of probability, and the definition of
conditional probability can be represented in this fashion.)
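A tiny sketch computing the probabilities of several of the example events above as |event| / |sample space|, which is valid for a finite sample space of equally likely outcomes:

```python
# Probability of an event in a finite sample space = |event| / |sample space|.
from fractions import Fraction
from itertools import product

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "spades", "clubs"]
DECK = list(product(RANKS, SUITS))  # 52 outcomes

def prob(event) -> Fraction:
    return Fraction(sum(1 for card in DECK if event(card)), len(DECK))

print(prob(lambda c: c[0] == "K"))                        # "A King" -> 1/13
print(prob(lambda c: c[0] in ("J", "Q", "K")))            # "A Face card" -> 3/13
print(prob(lambda c: c[1] == "spades"))                   # "A Spade" -> 1/4
print(prob(lambda c: c[0] in ("J", "Q", "K")
                     or c[1] in ("hearts", "diamonds")))  # 32 elements -> 8/13
```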
81. EVENT (continued)
• Events in probability spaces
• Defining all subsets of the sample space as events works well when there are only finitely many
outcomes, but gives rise to problems when the sample space is infinite. For many standard
probability distributions, such as the normal distribution the sample space is the set of real
numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the
real numbers run into difficulties when one considers 'badly-behaved' sets, such as those which
are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets.
For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is
necessary to use a σ-algebra, that is, a family closed under countable unions and intersections.
The most natural choice is the Borel measurable sets derived from unions and intersections of
intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.
• In the general measure-theoretic description of probability spaces, an event may be defined as an
element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset
of the sample space that is not an element of the σ-algebra is not an event, and does not have a
probability. With a reasonable specification of the probability space, however, all events of interest
will be elements of the σ-algebra.
82. Law of Large Numbers
• The law was first described by Jacob Bernoulli. It took him over 20 years to develop a
sufficiently rigorous mathematical proof which was published in his Ars
Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden
Theorem" but it became generally known as "Bernoulli's Theorem" (not to be
confused with the Law in Physics with the same name.)
• In 1835, S.D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers").[3] Thereafter, it was known under both names, but the "law of large numbers" is most frequently used.
• After Bernoulli and Poisson published their efforts, other mathematicians also
contributed to refinement of the law, including Chebyshev, Markov, Borel,
Cantelli and Kolmogorov. These further studies have given rise to two prominent
forms of the LLN. One is called the "weak" law and the other the "strong" law.
These forms do not describe different laws but instead refer to different
ways of describing the mode of convergence of the cumulative sample
means to the expected value, and the strong form implies the weak.
82
83. Law of Large Numbers
• Both versions of the law state that the sample average
– X̄n = (X1 + X2 + ... + Xn) / n
converges to the expected value µ,
• where X1, X2, ... is an infinite sequence of i.i.d. random variables with finite expected value:
– E(X1) = E(X2) = ... = µ < ∞.
• An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary.
Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption
is often used because it makes the proofs easier and shorter.
• The difference between the strong and the weak version is concerned with the mode of convergence being
asserted.
• The weak law
• The weak law of large numbers states that the sample average converges in probability towards the expected
value: for every ε > 0,
– lim (n→∞) Pr( |X̄n − µ| > ε ) = 0.
• Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small,
with a sufficiently large sample there will be a very high probability that the average of the observations will be
close to the expected value, that is, within the margin.
• Convergence in probability is also called weak convergence of random variables. This version is called the weak
law because random variables may converge weakly (in probability) as above without converging strongly (almost
surely) as below.
• A consequence of the weak LLN is the asymptotic equipartition property.
• The strong law
• The strong law of large numbers states that the sample average converges almost surely to the expected value:
– Pr( lim (n→∞) X̄n = µ ) = 1.
• The proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the
expected value of a random variable as the "long-term average when sampling repeatedly".
• Almost sure convergence is also called strong convergence of random variables. This version is called the strong
law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in
probability). The strong law implies the weak law.
• The strong law of large numbers can itself be seen as a special case of the ergodic theorem.
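• A minimal simulation sketch of the law in Python, assuming fair six-sided die rolls (µ = 3.5) purely for illustration:

    # Watching the sample average of fair die rolls converge to mu = 3.5 as n grows.
    import random

    random.seed(1)                  # reproducible example
    running_sum = 0.0
    for n in range(1, 100_001):
        running_sum += random.randint(1, 6)
        if n in (10, 100, 1_000, 10_000, 100_000):
            print(f"n = {n:>6}: sample average = {running_sum / n:.4f}")
    # The printed averages drift toward 3.5, illustrating the convergence
    # (in probability / almost surely) asserted by the weak/strong laws.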
83
84. Bayesian Analysis
• Bayesian inference uses aspects of the scientific method, which involves
collecting evidence that is meant to be consistent or inconsistent with a given
hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought
to change. With enough evidence, it should become very high or very low. Thus,
proponents of Bayesian inference say that it can be used to discriminate
between conflicting hypotheses: hypotheses with very high support should be
accepted as true and those with very low support should be rejected as false.
However, detractors say that this inference method may be biased due to initial
beliefs that one holds before any evidence is ever collected. (This is a form of
inductive bias).
• Bayesian inference uses a numerical estimate of the degree of belief in a
hypothesis before evidence has been observed and calculates a numerical
estimate of the degree of belief in the hypothesis after evidence has been
observed. (This process is repeated when additional evidence is obtained.)
Bayesian inference usually relies on degrees of belief, or subjective probabilities,
in the induction process and does not necessarily claim to provide an objective
method of induction. Nonetheless, some Bayesian statisticians believe
probabilities can have an objective value, and therefore Bayesian inference can
provide an objective method of induction.
84
85. The Reverend Thomas Bayes, F.R.S. --- 1701?-1761
Bayes' Equation
To convert the Probability of event A given event B to
the Probability of event B given event A, we use Bayes’
theorem. We must know or estimate the Probabilities of
the two separate events.
Pr(B|A) = Pr(A|B) Pr(B) / Pr(A)
where, by the Law of Total Probability,
Pr(A) = Pr(A|B) Pr(B) + Pr(A|¬B) Pr(¬B)
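A short worked example with hypothetical numbers: suppose a test A for a
condition B has Pr(A|B) = 0.9, a false-positive rate Pr(A|¬B) = 0.1, and a base
rate Pr(B) = 0.2. Then Pr(A) = 0.9 × 0.2 + 0.1 × 0.8 = 0.26, and
Pr(B|A) = (0.9 × 0.2) / 0.26 ≈ 0.69.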
85
86. Bayesian Analysis
– Example of Bayesian search theory
• In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk,
Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard, but an extensive search
failed to discover the wreck. The US Navy's deep water expert, John Craven USN, believed that it was elsewhere, and he
organized a search southwest of the Azores based on a controversial approximate triangulation by hydrophones. He was
allocated only a single ship, the Mizar, and he took advice from a firm of consultant mathematicians in order to maximize
his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to
construct hypotheses about what could have caused the loss of the Scorpion.
• The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses,
to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall
probability grid. The probability attached to each square was then the probability that the wreck was in that square. A
second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that
square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result
of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of
the sea if it were to be searched.
• This sea grid was systematically searched in a manner which started with the high probability regions first and worked
down to the low probability regions last. Each time a grid square was searched and found to be empty its probability was
reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares to be reassessed
(upwards), also by Bayes' theorem. The use of this approach was a major computational challenge for the time but it was
eventually successful and the Scorpion was found about 740 kilometers southwest of the Azores in October of that year.
• Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the
wreck if it is there is q. If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of
the wreck being in the square is given by
– p' = p(1 − q) / (1 − pq)
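• A minimal Python sketch of the grid update just described; the four-square grid and its prior and detection probabilities are made-up values for illustration:

    # Bayesian search update over a grid of squares.
    # p[i] = prior probability the wreck is in square i (made-up numbers);
    # q[i] = probability of detecting the wreck if it is in square i.
    p = [0.40, 0.30, 0.20, 0.10]
    q = [0.80, 0.50, 0.90, 0.30]

    def search(square):
        """Search one square, find nothing, and update all squares by Bayes' theorem."""
        global p
        denom = 1 - p[square] * q[square]               # Pr(no detection on this search)
        new_p = []
        for i, pi in enumerate(p):
            if i == square:
                new_p.append(pi * (1 - q[i]) / denom)   # searched square: revised downward
            else:
                new_p.append(pi / denom)                # other squares: revised upward
        p = new_p

    search(0)                                       # search the highest-probability square first
    print([round(x, 3) for x in p])                 # [0.118, 0.441, 0.294, 0.147]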
86
87. Stochastic
• Stochastic is synonymous with
"random." The word is of Greek origin
and means "pertaining to chance" (Parzen
1962, p. 7).
• It is used to indicate that a particular
subject is seen from the point of view
of randomness.
• Stochastic is often used as counterpart
of the word "deterministic," which
means that random phenomena are not
involved.
• Therefore, stochastic models are based
on random trials, while deterministic
models always produce the same
output for a given starting condition.
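• A minimal contrast in Python (the growth recurrence is an arbitrary toy model chosen for this example):

    # A deterministic model always returns the same output for a given
    # starting condition; a stochastic model adds a random disturbance.
    import random

    def deterministic(x0, steps=5):
        x = x0
        for _ in range(steps):
            x = 1.05 * x                             # fixed growth rule
        return x

    def stochastic(x0, steps=5):
        x = x0
        for _ in range(steps):
            x = (1.05 + random.gauss(0, 0.02)) * x   # random trial each step
        return x

    print(deterministic(100), deterministic(100))    # identical outputs
    print(stochastic(100), stochastic(100))          # outputs differ run to run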
87
89. Stochastic modeling
• "Stochastic" means being or having a random variable.
A stochastic model is a tool for estimating probability
distributions of potential outcomes by allowing for random
variation in one or more inputs over time. The random
variation is usually based on fluctuations observed in
historical data for a selected period using standard time-
series techniques. Distributions of potential outcomes are
derived from a large number of simulations (stochastic
projections) which reflect the random variation in the
input(s).
• Stochastic modeling was first applied in physics
(where it is sometimes known as the Monte Carlo
method). It is now applied in engineering, the life
sciences, the social sciences, and finance.
89
90. Valuation
• Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the
insurance industry, however, assets and liabilities are not known entities. They depend on how many
policies result in claims, inflation from now until the claim, investment returns during that period, and so on.
• So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus
coming up with the best estimate for assets and liabilities, and therefore for the company's level of
solvency.
• Deterministic approach
• The simplest way of doing this, and indeed the primary method used, is to look at best estimates.
The projections in financial analysis usually use the most likely rate of claim, the most likely
investment return, the most likely rate of inflation, and so on. The projections in engineering
analysis usually use both the most likely rate and the most critical rate. The result is a point
estimate (the best single estimate of the company's current solvency position) or multiple point
estimates, depending on the problem definition. Selection and identification of parameter values
are frequently a challenge for less experienced analysts. The downside of this approach is that it
does not capture the fact that there is a whole range of possible outcomes, some more probable
and some less.
• Stochastic modeling
• A stochastic approach sets up a projection model which looks at a single policy, an entire portfolio
or an entire company. But rather than setting investment returns according to their most likely estimate, for
example, the model uses random variations to look at what investment conditions might be like.
• Based on a set of random outcomes, the experience of the policy/portfolio/company is projected, and the
outcome is noted. Then this is done again with a new set of random variables. In fact, this process is
repeated thousands of times.
• At the end, a distribution of outcomes is available which shows not only the
most likely estimate but also the ranges that are reasonable.
• This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per
annum. A deterministic simulation, with varying scenarios for future investment return, does not provide a
good way of estimating the cost of providing this guarantee. This is because it does not allow for the
volatility of investment returns in each future time period or the chance that an extreme event in a
particular time period leads to an investment return less than the guarantee. Stochastic modeling
builds volatility and variability (randomness) into the simulation and therefore provides a better
representation of real life from more angles.
90
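• A minimal Monte Carlo sketch of the guarantee example above; the normal return model (7% mean, 15% volatility) and all parameters are assumptions for illustration, not calibrated figures:

    # Cost of a 5%-per-annum minimum return guarantee, estimated by
    # stochastic simulation. The return distribution is assumed, not calibrated.
    import random

    random.seed(42)
    n_trials, years, premium, floor = 10_000, 10, 100.0, 0.05
    total_shortfall = 0.0

    for _ in range(n_trials):
        fund = guaranteed = premium
        for _ in range(years):
            annual_return = random.gauss(0.07, 0.15)    # assumed return model
            fund *= 1 + annual_return
            guaranteed *= 1 + floor                     # guaranteed account grows at 5%
        total_shortfall += max(0.0, guaranteed - fund)  # insurer pays any shortfall

    print(f"Average guarantee cost per policy: {total_shortfall / n_trials:.2f}")
    # A deterministic projection at the most likely 7% return would price this
    # guarantee at zero, since 7% exceeds 5% in every year.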
91. Monte Carlo Simulations
• Monte Carlo simulation methods are especially useful in studying systems with a large number of
coupled degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and
cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are
useful for modeling phenomena with significant uncertainty in inputs, such as
the calculation of risk in business (for its use in the insurance industry, see
stochastic modeling). A classic use is for the evaluation of definite integrals, particularly
multidimensional integrals with complicated boundary conditions (see the sketch after this list).
• Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate
investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo
method is intended for financial analysts who want to construct stochastic or probabilistic financial
models as opposed to the traditional static and deterministic models.
• Monte Carlo methods are very important in computational physics, physical chemistry, and related
applied fields, and have diverse applications from complicated quantum chromodynamics
calculations to designing heat shields and aerodynamic forms.
• Monte Carlo methods have also proven efficient in solving coupled integro-differential equations
of radiation fields and energy transport, and thus these methods have been used in global
illumination computations which produce photorealistic images of virtual 3D models, with
applications in video games, architecture, design, computer generated films, special effects in
cinema, business, economics and other fields.
• Monte Carlo methods are useful in many areas of computational mathematics, where a lucky
choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for
any n which is not prime, a random x has at least a 75% chance of proving that n is not prime.
Hence, if n is not prime, but x says that it might be, we have observed at most a 1-in-4 event. If 10
different random x say that "n is probably prime" when it is not, we have observed a one-in-a-
million event. In general, a Monte Carlo algorithm of this kind produces one answer with a
guarantee (n is composite, and x proves it so) and another one without a guarantee ("n is
probably prime"), but with a bound on how often the unguaranteed answer is wrong, in this
case at most 25% of the time. See
also Las Vegas algorithm for a related, but different, idea.
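• A minimal sketch of Monte Carlo integration, as referenced in the list above; the integrand is an arbitrary choice for the example:

    # Estimating a definite integral by Monte Carlo sampling:
    # integral of sqrt(1 - x^2) on [0, 1], whose exact value is pi/4.
    import math
    import random

    random.seed(0)
    n = 1_000_000
    total = sum(math.sqrt(1 - random.random() ** 2) for _ in range(n))
    estimate = total / n              # average value of f on [0, 1] = the integral
    print(estimate, math.pi / 4)      # estimate vs 0.78539...
    # The error shrinks like 1/sqrt(n) regardless of dimension, which is why
    # Monte Carlo wins for high-dimensional integrals with complicated boundaries.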
91
92. Fitting Lifetime Data to a Weibull Model
• This Demonstration shows how to analyze
lifetime test data by fitting it to a Weibull
distribution.
• The data are fitted on a log-log plot by a
least-squares method.
• The results are presented as Weibull
distribution CDF and PDF plots.
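• A minimal Python sketch of such a fit, using the common linearization ln(−ln(1 − F)) = k ln t − k ln λ with median ranks; the lifetime data are made-up values for illustration:

    # Fitting lifetimes to a Weibull CDF, F(t) = 1 - exp(-(t/lam)**k),
    # by least squares on the linearized (log-log) form.
    import math

    lifetimes = sorted([105, 160, 230, 310, 400, 520, 700, 950])   # made-up data
    n = len(lifetimes)

    # Median-rank estimate of the CDF at each ordered failure time.
    xs = [math.log(t) for t in lifetimes]
    ys = [math.log(-math.log(1 - (i + 0.7) / (n + 0.4))) for i in range(n)]

    # Ordinary least squares for slope and intercept.
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx

    k = slope                          # Weibull shape parameter
    lam = math.exp(-intercept / k)     # Weibull scale parameter
    print(f"shape k ~ {k:.2f}, scale lambda ~ {lam:.0f}")
    # The fitted CDF/PDF could then be plotted:
    # F(t) = 1 - exp(-(t/lam)**k);  f(t) = (k/lam)*(t/lam)**(k-1)*exp(-(t/lam)**k)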
92