Douglas W. Hubbard
A critique of Doug Hubbard's The Failure of Risk Management
 The book is divided into three parts:
 (1) the first part introduces the crisis in risk management;
 (2) the second deals with why some popular risk management practices are flawed;
 (3) the third discusses what needs to be done to fix these.
 Code of Hammurabi –
 compensation or indemnification for those
harmed by bandits or floods.
 Careful selection of debtors – called underwriting in insurance
 Development of probability theory and
statistics
 There are several risk management methodologies and
techniques in use; a quick search will reveal some of
them. Hubbard begins his book by asking the following
simple questions about these:
 Do these risk management methods work?
 Would any organization that uses these techniques
know if they didn’t work?
 What would be the consequences if they didn’t?
 His contention is that for most organizations the answers to the first two
questions are negative.
 To answer the third question, he gives the example of the crash of United
Flight 232 in 1989. The crash was attributed to the simultaneous failure
of three independent (and redundant) hydraulic systems. This happened
because the systems were located at the rear of the plane and debris
from a damaged turbine cut lines to all of them. This is an example of
common mode failure – a single event causing multiple systems to fail.
 The probability of such an event occurring was estimated to be less than
one in a billion. However, the reason the turbine broke up was that it
hadn’t been inspected properly (i.e. human error).
 The probability estimate hadn't considered human oversight, which is
far more likely than one in a billion. Hubbard uses this example to make
the point that a weak risk management methodology can have huge
consequences.
 Following a very brief history of risk management from historical times to the present, Hubbard
presents a list of common methods of risk management. These are:
 Expert intuition – essentially based on “gut feeling”
 Expert audit – based on expert intuition of independent consultants. Typically involves the
development of checklists and also uses stratification methods (see next point)
 Simple stratification methods – risk matrices are the canonical example of stratification
methods.
 Weighted scores – assigned scores for different criteria (scores usually assigned by expert
intuition), followed by weighting based on perceived importance of each criterion.
 Non-probabilistic financial analysis – techniques such as computing the financial consequences
of best and worst case scenarios
 Calculus of preferences – structured decision analysis techniques such as multi-attribute utility
and analytic hierarchy process. These techniques are based on expert judgements. However, in
cases where multiple judgements are involved these techniques ensure that the judgements are
logically consistent (i.e. do not contradict the principles of logic).
 Probabilistic models – involves building probabilistic models of risk events. Probabilities can be
based on historical data, empirical observation or even intuition. The book essentially builds a
case for evaluating risks using probabilistic models, and provides advice on how these should be
built
 The book also discusses the state of risk management practice (at
the end of 2008) as assessed by surveys carried out by
The Economist, Protiviti and Aon Corporation. Hubbard notes that
the surveys are based largely on self-assessments of risk
management effectiveness. One cannot place much confidence in
these because self-assessments of risk are subject to well known
psychological effects such as cognitive biases (tendencies to base
judgments on flawed perceptions) and the Dunning-Kruger effect
(overconfidence in one’s abilities).
 The acid test for any assessment is whether or not it uses sound
quantitative measures. Many of the firms surveyed fail on this
count: they do not quantify risks as well as they claim they do.
Assigning weighted scores to qualitative judgements does not
count as a sound quantitative technique – more on this later.
 The Dunning–Kruger effect is a cognitive bias in which unskilled people make
poor decisions and reach erroneous conclusions, but their incompetence
denies them the metacognitive ability to recognize their mistakes.[1]
 The unskilled therefore suffer from illusory superiority, rating their ability as
above average, much higher than it actually is, while the highly skilled
underrate their own abilities, suffering from illusory inferiority.
 Actual competence may weaken self-confidence, as competent individuals
may falsely assume that others have an equivalent understanding.
 As Kruger and Dunning conclude, "the miscalibration of the incompetent
stems from an error about the self, whereas the miscalibration of the highly
competent stems from an error about others" (p. 1127).[2] The effect is about
paradoxical defects in cognitive ability, both in oneself and as one compares
oneself to others.
 So, what are some good ways of measuring
the effectiveness of risk management?
Hubbard lists the following:
 Statistics based on large samples
 Direct evidence
 Component testing
 Check of completeness
 Statistics based on large samples – the use of this depends on the availability of historical or
other data that is similar to the situation at hand.
 Direct evidence – this is where the risk management technique actually finds some problem that
would not have been found otherwise. For example, an audit that unearths dubious financial
practices
 Component testing – even if one isn’t able to test the method end-to-end, it may be possible to
test specific components that make up the method. For example, if the method uses computer
simulations, it may be possible to validate the simulations by applying them to known situations.
 Check of completeness – organisations need to ensure that their risk management methods
cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the
probability of another. Further, as Hubbard states, “A risk that’s not even on the radar cannot be
managed at all.” As far as completeness is concerned, there are four perspectives that need to be
taken into account. These are:
 Internal completeness – covering all parts of the organisation
 External completeness – covering all external entities that the organisation interacts with.
 Historical completeness – this involves covering worst case scenarios and historical data.
 Combinatorial completeness – this involves considering combinations of events that may occur together;
those that may lead to the common-mode failures discussed earlier.
 Hubbard begins this section by identifying
the four major players in the risk
management game.
These are:
 Actuaries
 Physicists and Mathematicians
 Economists
 Management Consultants
 These are perhaps the first modern professional risk
managers. They use quantitative methods to manage
risks in the insurance and pension industry.
 Although the methods actuaries use are generally sound,
the profession is slow to pick up new techniques.
 Further, many investment decisions that insurance
companies make do not come under the purview of
actuaries.
 So, actuaries typically do not cover the entire spectrum of
organizational risks.
 Many rigorous risk management techniques came out of statistical
research done during the second world war. Hubbard therefore
calls this group War Quants.
 One of the notable techniques to come out of this effort is the
Monte Carlo Method – originally proposed by Nicholas Metropolis,
John von Neumann and Stanislaw Ulam as a technique to calculate the
averaged trajectories of neutrons in fissile material (see
this article by Nicholas Metropolis for a first-person account of how
the method was developed).
 Hubbard believes that Monte Carlo simulations offer a sound,
general technique for quantitative risk analysis. Consequently he
spends a fair few pages discussing these methods, albeit at a very
basic level. More about this later.
 Risk analysts in investment firms often use quantitative techniques from
economics. Popular techniques include modern portfolio theory and models
from options theory (such as the Black-Scholes model). The problem is that
these models are often based on questionable assumptions.
 For example, the Black-Scholes model assumes that the rate of return on a stock
is normally distributed (i.e. its value is lognormally distributed) – an assumption
that’s demonstrably incorrect, as witnessed by the events of the last few years.
 Another way in which economics plays a role in risk management is through
behavioural studies, in particular the recognition that decisions regarding future
events (be they risks or stock prices) are subject to cognitive biases. Hubbard
suggests that the role of cognitive biases in risk management has been
consistently overlooked.
 See my post entitled Cognitive biases as meta-risks and its follow-up for more on
this point.
 In Hubbard’s view, management consultants and
standards institutes are largely responsible for many of
the ad-hoc approaches to risk management.
 A particular favorite of these folks is ad-hoc scoring
methods that involve ordering risks based on subjective
criteria. The scores assigned to risks are thus subject to
cognitive bias.
 Even worse, some of the tools used in scoring can end up
ordering risks incorrectly.
 Bottom line: many of the risk analysis techniques used by
consultants and standards bodies have no sound justification.
 Following the discussion of the main players in the risk arena, Hubbard discusses
the confusion associated with the definition of risk.
 There are a plethora of definitions of risk, most of which originated in academia.
Hubbard shows how some of these contradict each other while others are
downright non-intuitive and incorrect.
 In doing so, he clarifies some of the academic and professional terminology
around risk.
 As an example, he takes exception to the notion of risk as a “good thing” – as in
the PMI definition, which views risk as “an uncertain event or condition that, if it
occurs, has a positive or negative effect on a project objective.”
 This definition contradicts common (dictionary) usage of the term risk (which
generally includes only bad stuff). Hubbard’s opinion on this may raise a few
eyebrows (and hackles!) in project management circles, but I reckon he has a
point.
 ‘The story that I have to tell is marked all the way through by a
persistent tension between those who assert that the best
decisions are based on quantification and numbers, determined
by the patterns of the past, and those who base their decisions
on more subjective degrees of belief about the uncertain future.
This is a controversy that has never been resolved.’
 — FROM THE INTRODUCTION TO ‘‘AGAINST THE GODS: THE REMARKABLE STORY OF RISK,’’ BY PETER L. BERNSTEIN
 http://www.mckinseyquarterly.com/Peter_L_Bernstein_on_risk_2211
Uncertainty
 Frank H. Knight was one of the founders of the so-called Chicago school of
economics, of which Milton Friedman and George Stigler were the leading
members from the 1950s to the 1980s.
 Knight made his reputation with his book Risk, Uncertainty, and Profit, which
was based on his Ph.D. dissertation. In it Knight set out to explain why “perfect
competition” would not necessarily eliminate profits.
 His explanation was “uncertainty,” which Knight distinguished from risk.
According to Knight, “risk” refers to a situation in which the probability of an
outcome can be determined, and therefore the outcome insured against.
“Uncertainty,” by contrast, refers to an event whose probability cannot be
known.
 Knight argued that even in long-run equilibrium, entrepreneurs would earn
profits as a return for putting up with uncertainty. Knight’s distinction between
risk and uncertainty is still taught in economics classes today.
 [To differentiate] the measurable uncertainty
and an unmeasurable one we may use the
term “risk” to designate the former and the
term uncertainty for the latter.
 Probability, then, is concerned with
professedly uncertain [emphasis added]
judgments.2
 The word risk has acquired no technical
meaning in economics, but signifies here as
elsewhere [emphasis added] chance of
damage or loss.
 If you wish to converse with me, define your terms
 Voltaire
 Uncertainty. The lack of complete certainty
—that is, the existence of more than one
possibility. The “true”
outcome/state/result/value is not known.
 Measurement – a set of probabilities assigned to a
set of possibilities. For example, there is a 60%
chance of rain tomorrow and a 40% chance it
won’t.
 By “uncertain” knowledge … I do not mean merely to distinguish what is
known for certain from what is only probable. The game of roulette is not
subject, in this sense, to uncertainty…. The sense in which I am using the
term is that in which the prospect of a European war is uncertain, or the
price of copper and the rate of interest twenty years hence, or the
obsolescence of a new invention…. About these matters, there is no
scientific basis on which to form any calculable probability whatever.
We simply do not know!
 A state of uncertainty where some of the
possibilities involve loss, injury, catastrophe, or
other undesirable outcome. (i.e. something bad
could happen) in the future. (if—then)
 Measurement of Risk
 A set of possibilities, each with quantifiable
probabilities and quantified losses. For example, “we
believe there is a 40% chance a proposed oil well will
be dry, with a loss of $12m in exploratory drilling costs.”
 Risk: Well, it certainly doesn't mean standard
deviation. People mainly think of risk in terms of
downside risk. They are concerned about the
maximum they can lose. So that's what risk means.
 In contrast, the professional view defines risk in terms
of variance, and doesn't discriminate gains from
losses. There is a great deal of miscommunication and
misunderstanding because of these very different
views of risk. Beta does not do it for most people, who
are more concerned with the possibility of loss
 Daniel Kahneman
 Measuring risks, especially important long-term ones, is
imprecise and difficult. Virtually none of the economic
statistics reported in the media measure risk.
 To fully comprehend risk, we must stretch our
imagination to think of all the different ways that things
can go wrong, including things that have not happened in
recent memory.
 We must protect ourselves against fallacies, such as
thinking that just because a risk has not proved damaging
for decades, it no longer exists.
 Yet another psychological barrier is a sort of ego
involvement in our own success.
 Our tendency to take full credit for our successes
discourages us from facing up to the possibility of loss or
failure, because considering such prospects calls into
question our self-satisfaction.
 Indeed, self-esteem is one of the most powerful human
needs: a view of our own success relative to others
provides us with a sense of meaning and well-being.
 So accepting the essential randomness of life is
terribly difficult, and contradicts our deep
psychological need for order and accountability.
 We often do not protect the things that we have -
such as our opportunities to earn income and
accumulate wealth - because we mistakenly
believe that our own natural superiority will do
that for us.
 Risk has to include some probability of loss—
this excludes Knight’s definition.
 Risk involves only losses (not gains)---this
excludes PMI’s definition
 Outside of finance, volatility may not
necessarily entail risk---this excludes
considering volatility alone as synonymous
with risk.
 Risk is not just the product of probability and loss.
Multiplying them together unnecessarily presumes
that the decision maker is risk neutral. Keep risk as a
vector quantity where probability and magnitude of
loss are separate until we compare it to the risk
aversion of the decision maker.
 Risk can be made of discrete or continuous losses and
associated probabilities. We do not need to make the
distinctions sometimes made in construction
engineering that risk is only discrete events.
 According to the peak-end rule, we judge our past experiences almost entirely
on how they were at their peak (pleasant or unpleasant) and how they ended.
Other information is not lost, but it is not used. This includes net pleasantness
or unpleasantness and how long the experience lasted.
 In one experiment, one group of people were subjected to loud, painful noises.
In a second group, subjects were exposed to the same loud, painful noises as
the first group, after which somewhat less painful noises were appended. This
second group rated the experience of listening to the noises as much less
unpleasant than the first group, despite having been subjected to more
discomfort than the first group, as they experienced the same initial duration,
and then an extended duration of reduced unpleasantness.
 This heuristic was first suggested by Daniel Kahneman and others. He argues
that because people seem to perceive not the sum of an experience but its
average, it may be an instance of the representativeness heuristic.
 Why we shouldn’t trust the numbers in our head.
 Peak end rule. We tend to remember extremes and
not the mundane.
 Misconceptions of chance
▪ Which is more likely (H = heads, T = tails): HHHTTT or HTHTTH?
▪ Actually they are equally likely – each specific sequence of six flips has
probability (1/2)^6 = 1/64. But because the first “appears” less random
than the second, people judge it to be less likely.
 In my opinion, the most important sections of
the book are chapters 6 and 7, where
Hubbard discusses why “expert knowledge
and opinions” (favoured by standards and
methodologies) are flawed, and why a very
popular scoring method (risk matrices) is
“worse than useless.” See my posts on
the limitations of scoring techniques and Cox
’s risk matrix theorem for detailed discussions
of these points.
 A major problem with expert estimates is overconfidence.
To overcome this, Hubbard advocates using calibrated
probability assessments to quantify analysts’ abilities to
make estimates. Calibration assessments involve getting
analysts to answer trivia questions and eliciting confidence
intervals for each answer. The confidence intervals are
then checked against the proportion of correct answers.
 Essentially, this assesses experts’ abilities to estimate by
tracking how often they are right. It has been found that
people can improve their ability to make subjective
estimates through calibration training – i.e. repeated
calibration testing followed by feedback. See this site for
more on probability calibration.
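 A calibration check of this kind is easy to automate. The following is a minimal sketch, not Hubbard's own tooling; the intervals and answers are hypothetical data for illustration:

```python
import numpy as np

# Hypothetical calibration test: for each trivia question the analyst gave a
# 90% confidence interval (lower, upper); `truth` holds the actual answers.
lower = np.array([ 5, 1200,  30, 0.2,  800, 15,  60, 3,  400,  9])
upper = np.array([20, 3000, 120, 1.5, 2500, 45, 150, 8, 1100, 25])
truth = np.array([18, 4200,  75, 0.9, 1000, 55,  95, 5,  650, 30])

hits = (truth >= lower) & (truth <= upper)
print(f"Stated confidence: 90%, actual hit rate: {hits.mean():.0%}")
# A well-calibrated estimator's hit rate should be close to the stated 90%;
# overconfident estimators typically land well below it.
```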
 Next Hubbard tackles several “red herring”
arguments that are commonly offered as
reasons not to manage risks using rigorous
quantitative methods. Among these are
arguments that quantitative risk analysis is
impossible because:
 Unexpected events cannot be predicted.
 Risks cannot be measured accurately.
 Hubbard states that the first objection is invalid: although some events
(such as spectacular stock market crashes) have been overlooked by models,
this does not prove that quantitative risk analysis as a whole is flawed.
 As he discusses later in the book, many models go wrong
by assuming Gaussian probability distributions where
fat-tailed ones would be more appropriate. Of course,
given limited data it is difficult to figure out which
distribution is the right one.
 So, although Hubbard’s argument is correct, it offers little
comfort to the analyst who has to model events before
they occur.
 As far as the second is concerned, Hubbard has written another book on how just about any business
variable (even an intangible one) can be measured.
 The book makes a persuasive case that most quantities of interest can be measured, but there are
difficulties.
 First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other
things, on the availability of reliable data, the analyst’s experience etc.
 Second, much depends on the judgement of the analyst, and such judgements are subject to bias.
 Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea
for all biases.
 Third, risk-related measurements generally involve events that are yet to occur.
 Consequently, such measurements are based on incomplete information. To make progress one often
has to make additional assumptions which may not be justifiable a priori.
Cost analysis, used to develop cost estimates for such things as hardware systems,
automated information systems, civil projects, manpower, and training, can be defined as
1. the effort to develop, analyze, and document cost estimates with analytical
approaches and techniques;
2. the process of analyzing and estimating the incremental and total resources
required to support past, present, and future systems—an integral step in selecting
alternatives; and
3. a tool for evaluating resource requirements at key milestones and decision points in the
acquisition process.
Cost estimating involves collecting and analyzing historical data and applying quantitative
models, techniques, tools, and databases to predict a program’s future cost.
More simply, cost estimating combines science and art to predict the future cost of
something based on known historical data that are adjusted to reflect new materials,
technology, software languages, and development teams.
Because cost estimating is complex, sophisticated cost analysts should combine concepts
from such disciplines as accounting, budgeting, computer science, economics,
engineering, mathematics, and statistics and should even employ concepts from
marketing and public affairs. And because cost estimating requires such a wide range of
disciplines, it is important that the cost analyst either be familiar with these disciplines
or have access to an expert in these fields.
 They are often used without empirical data or validation – i.e. their inputs and results are not
tested through observation.
 They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to
manage low-level, operational risks.
 They frequently focus on variables that are not important (because these are easier to measure)
rather than those that are important. Hubbard calls this perverse phenomenon measurement
inversion. He contends that analysts often exclude the most important variables because these
are considered to be “too uncertain.”
 They use inappropriate probability distributions. The Normal distribution (or bell curve) is not
always appropriate. For example, see my posts on the
inherent uncertainty of project task estimates for an intuitive discussion of the form of the
probability distribution for project task durations.
 They do not account for correlations between variables. Hubbard contends that many analysts
simply ignore correlations between risk variables (i.e. they treat variables as independent when
they actually aren’t). This almost always leads to an underestimation of risk because correlations
can cause feedback effects and common mode failures.
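 The last point is easy to demonstrate numerically. A minimal sketch (the loss figures and the correlation of 0.8 are hypothetical, and the losses are simply assumed lognormal) showing how ignoring correlation understates the tail of the total loss:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
mu, sigma = np.log(1.0), 0.5          # underlying normal params for each loss ($M)

def total_loss_p95(rho):
    # Two lognormal losses whose underlying normals have correlation rho
    cov = [[sigma**2, rho * sigma**2],
           [rho * sigma**2, sigma**2]]
    z = rng.multivariate_normal([mu, mu], cov, size=n)
    losses = np.exp(z)
    return np.percentile(losses.sum(axis=1), 95)

print("P95 total loss, independent  :", round(total_loss_p95(0.0), 2))
print("P95 total loss, rho = 0.8    :", round(total_loss_p95(0.8), 2))
# Treating the losses as independent gives a noticeably smaller 95th
# percentile, i.e. the risk is underestimated.
```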
 It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the
better known long-tailed distributions include lognormal and power law distributions.
 A quick, informal review of project management literature revealed that lognormal distributions are
more commonly used than power laws to model activity duration uncertainties.
 This may be because lognormal distributions have a finite mean and variance whereas power law
distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for
example). [An aside: if you're curious as to why infinities are possible in the latter, it is because power
laws decay more slowly than lognormal distributions – i.e. they have "fatter" tails, and hence enclose
larger (even infinite) areas.]
 In any case, regardless of the exact form of the distribution for activity durations, what’s important and
non-controversial is the short cutoff, the peak, and the long, decaying tail. These characteristics are true of all
probability distributions that describe activity durations.
 There’s one immediate consequence of the long tail: if you
want to be really, really sure of completing any activity, you
have to add a lot of “air” or safety because there’s a chance that
you may “slip in the shower” so to speak. Hence, many activity
estimators add large buffers to their estimates.
 Project managers who suffer the consequences of the resulting
inaccurate schedule are thus victims of the tail.
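 A small sketch makes the point. It assumes, purely for illustration, a lognormal duration with a median of 10 days; the percentiles show how much “air” a high-confidence estimate needs:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical task: median 10 days, sigma of ln(duration) = 0.5
durations = rng.lognormal(mean=np.log(10), sigma=0.5, size=100_000)

p50, p80, p95 = np.percentile(durations, [50, 80, 95])
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days, P95 = {p95:.1f} days")
# The long right tail means the buffer needed for 95% confidence is much
# larger than the "most likely" estimate suggests.
```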
CONTROL or lack thereof
 One can study randomness, at three levels: mathematical, empirical, and behavioral.
 Mathematical
The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem because we've pretty much
reached small returns in what we can develop in that branch.
 Empirical
The second one is the dynamics of the real world, the dynamics of history, what we can and cannot model, how we can get into the guts of
the mechanics of historical events, whether quantitative models can help us and how they can hurt us.
 Behavioral
And the third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not
see, even if our logic recommends otherwise.
▪ We tend to overestimate causal relationships
▪ When we meet someone who by playing Russian roulette became extremely influential, wealthy, and
powerful, we still act toward that person as if he gained that status just by skills, even when you know
there's been a lot of luck. Why?
 Because our behavior toward that person is going to be entirely determined by shallow heuristics and very
superficial matters related to his appearance.
Nassim Taleb
 Adopt the language, tools and philosophy of uncertain systems. To do this he
recommends:
 Using calibrated probabilities to express uncertainties. Hubbard believes that any person who
makes estimates that will be used in models should be calibrated. He offers some suggestions
on how people can improve their ability to estimate through calibration – discussed earlier and on
this web site.
 Employing quantitative modeling techniques to model risks. In particular, he advocates the
use of Monte Carlo methods to model risks. He also provides a list of commercially available
PC-based Monte Carlo tools. Hubbard makes the point that modeling forces analysts to
decompose the systems of interest and understand the relationships between their
components (see point 2 below).
 Developing an understanding of the basic rules of probability including independent events,
conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these
rules can help analysts extrapolate
 To this, I would also add that it is important to understand the idea that an
estimate isn’t a number, but a probability distribution – i.e. a range of numbers,
each with a probability attached to it.
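 For instance, a calibrated 90% confidence interval can be expanded into a full distribution. A minimal sketch, assuming (hypothetically) that the quantity is modelled as lognormal and that the interval bounds are $2M and $8M:

```python
import numpy as np

# Hypothetical calibrated 90% CI for a cost: between $2M and $8M.
lo, hi = 2.0, 8.0
z90 = 1.645                                   # z-score for a 90% interval
mu = (np.log(lo) + np.log(hi)) / 2            # mean of ln(cost)
sigma = (np.log(hi) - np.log(lo)) / (2 * z90) # std dev of ln(cost)

rng = np.random.default_rng(0)
cost = rng.lognormal(mu, sigma, size=100_000)
print("median:", np.median(cost).round(2), " P95:", np.percentile(cost, 95).round(2))
# The estimate is now a distribution: every value in the range has a probability,
# and tail percentiles can be read off directly.
```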
 Build, validate and test models using reality as the
ultimate arbiter. Models should be built iteratively,
testing each assumption against observation. Further,
models need to incorporate mechanisms (i.e. how and
why the observations are what they are), not just raw
observations. This is often hard to do, but at the very
least models should incorporate correlations between
variables. Note that correlations are often (but not
always!) indicative of an underlying mechanism. See
this post for an introductory example of Monte Carlo
simulation involving correlated variables.
 In the penultimate chapter of the book, Hubbard fleshes out the
characteristics or traits of good risk analysts. As he mentions several
times in the book, risk analysis is an empirical science – it arises from
experience.
 So, although the analytical and mathematical (modeling) aspects of risk
are important, a good analyst must, above all, be an empiricist – i.e.
believe that knowledge about risks can only come from observation of
reality.
 In particular, testing models by seeing how well they match historical
data and tracking model predictions are absolutely critical aspects of a
risk analyst’s job.
 Unfortunately, many analysts do not measure the performance of their
risk models. Hubbard offers some excellent suggestions on how analysts
can refine and improve their models via observation.
 Both versions of the law state that the sample average X̄n = (X1 + X2 + … + Xn)/n converges to the expected value,
 where X1, X2, ... is an infinite sequence of i.i.d. random variables with finite expected value
 E(X1) = E(X2) = ... = µ < ∞.
 An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary. Large or
infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often
used because it makes the proofs easier and shorter.
 The difference between the strong and the weak version is concerned with the mode of convergence being asserted.
 The weak law
 The weak law of large numbers states that the sample average converges in probability towards the expected value.
 Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a
sufficiently large sample there will be a very high probability that the average of the observations will be close to the
expected value, that is, within the margin.
 Convergence in probability is also called weak convergence of random variables. This version is called the weak law
because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as
below.
 A consequence of the weak LLN is the asymptotic equipartition property.
 The strong law
 The strong law of large numbers states that the sample average converges almost surely to the expected value
 That is, the probability that the sample average converges to µ as the number of samples grows is equal to 1. The proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the expected
value of a random variable as the "long-term average when sampling repeatedly".
 Almost sure convergence is also called strong convergence of random variables. This version is called the strong law
because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability).
The strong law implies the weak law.
 The strong law of large numbers can itself be seen as a special case of the ergodic theorem.
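 A quick simulation shows the law at work. This is a minimal sketch using hypothetical exponential draws with true mean 2.0:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)      # i.i.d. draws, true mean = 2.0
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (10, 100, 10_000, 100_000):
    print(f"n = {n:>7}: sample mean = {running_mean[n - 1]:.3f}")
# The sample mean drifts toward the expected value (2.0) as n grows.
```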
 Bayesian inference uses aspects of the scientific method, which involves collecting
evidence that is meant to be consistent or inconsistent with a given hypothesis. As
evidence accumulates, the degree of belief in a hypothesis ought to change. With enough
evidence, it should become very high or very low. Thus, proponents of Bayesian inference
say that it can be used to discriminate between conflicting hypotheses: hypotheses with
very high support should be accepted as true and those with very low support should be
rejected as false. However, detractors say that this inference method may be biased due to
initial beliefs that one holds before any evidence is ever collected. (This is a form of
inductive bias).
 Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before
evidence has been observed and calculates a numerical estimate of the degree of belief in
the hypothesis after evidence has been observed. (This process is repeated when additional
evidence is obtained.) Bayesian inference usually relies on degrees of belief, or subjective
probabilities, in the induction process and does not necessarily claim to provide an
objective method of induction. Nonetheless, some Bayesian statisticians believe
probabilities can have an objective value and therefore Bayesian inference can provide an
objective method of induction
To convert the probability of event A given event B to
the probability of event B given event A, we use Bayes’
theorem. We must know or estimate the probabilities
of the two separate events:

Pr(B|A) = Pr(A|B) Pr(B) / Pr(A)

where, by the law of total probability,

Pr(A) = Pr(A|B) Pr(B) + Pr(A|¬B) Pr(¬B)
The Reverend Thomas Bayes, F.R.S. --- 1701?-1761
▪ Example of Bayesian search theory
In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of
Norfolk Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard but an
extensive search failed to discover the wreck. The US Navy's deep water expert, John Craven, USN, believed
that it was elsewhere and he organized a search south west of the Azores based on a controversial approximate
triangulation by hydrophones. He was allocated only a single ship, the Mizar, and he took advice from a firm of
consultant mathematicians in order to maximize his resources. A Bayesian search methodology was adopted.
Experienced submarine commanders were interviewed to construct hypotheses about what could have caused
the loss of the Scorpion.
The sea area was divided up into grid squares and a probability assigned to each square, under each of the
hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to
produce an overall probability grid. The probability attached to each square was then the probability that the
wreck was in that square. A second grid was constructed with probabilities that represented the probability of
successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This
was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives
the probability of finding the wreck in each grid square of the sea if it were to be searched.
This sea grid was systematically searched in a manner which started with the high probability regions first and
worked down to the low probability regions last. Each time a grid square was searched and found to be empty its
probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares
to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational
challenge for the time but it was eventually successful and the Scorpion was found about 740 kilometers
southwest of the Azores in October of that year.
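 The reassessment step can be sketched in a few lines. This is only an illustrative toy, not the Navy's actual model: the grid, prior probabilities and detection probabilities below are all hypothetical:

```python
import numpy as np

# Hypothetical 5x5 sea grid: p[i, j] = prior probability the wreck is in cell (i, j),
# d[i, j] = probability of detecting the wreck if it is there and that cell is searched.
rng = np.random.default_rng(3)
p = rng.random((5, 5)); p /= p.sum()
d = rng.uniform(0.3, 0.9, size=(5, 5))

def search_and_miss(p, d, cell):
    """Search `cell`, find nothing, and update the whole grid with Bayes' theorem."""
    i, j = cell
    miss = 1.0 - p[i, j] * d[i, j]      # probability of finding nothing in that cell
    p = p.copy()
    p[i, j] *= (1.0 - d[i, j])          # wreck could still be there but undetected
    return p / miss                     # renormalise: every other cell's probability rises

best = np.unravel_index(np.argmax(p * d), p.shape)   # search where p*d is highest first
p = search_and_miss(p, d, best)
print("Updated probability of searched cell:", p[best].round(3))
```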
 Stochastic is synonymous with
"random." The word is of Greek origin
and means "pertaining to chance"
(Parzen 1962, p. 7).
 It is used to indicate that a particular
subject is seen from the point of view of
randomness.
 Stochastic is often used as
counterpart of the word
"deterministic," which means that
random phenomena are not involved.
 Therefore, stochastic models are
based on random trials, while
deterministic models always produce
the same output for a given starting
condition.
 "Stochastic" means being or having a random variable.
 A stochastic model is a tool for estimating probability
distributions of potential outcomes by allowing for random
variation in one or more inputs over time. The random
variation is usually based on fluctuations observed in historical
data for a selected period using standard time-series
techniques. Distributions of potential outcomes are derived
from a large number of simulations (stochastic projections)
which reflect the random variation in the input(s).
 Its application initially started in physics (sometimes known as
the Monte Carlo Method). It is now being applied in
engineering, life sciences, social sciences, and finance.
 Valuation
 Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry,
however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now
until the claim, investment returns during that period, and so on.
 So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with
the best estimate for assets and liabilities, and therefore for the company's level of solvency.
 Deterministic approach The simplest way of doing this, and indeed the primary method used,
is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely
investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the
most likely rate and the most critical rate. The result is a point estimate – the best single estimate
of the company's current solvency position – or multiple point estimates, depending on how the problem is defined.
Selection and identification of parameter values are frequently a challenge for less experienced analysts. The downside of
this approach is that it does not capture the fact that there is a whole range of possible outcomes, some
more probable and some less.
 Stochastic modeling
 A stochastic model would be to set up a projection model which looks at a single policy, an entire portfolio or an entire
company. But rather than setting investment returns according to their most likely estimate, for example, the model uses
random variations to look at what investment conditions might be like.
 Based on a set of random outcomes, the experience of the policy/portfolio/company is projected, and the outcome is noted.
Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times.
 At the end, a distribution of outcomes is available, which shows not only the most likely estimate but
also what ranges are reasonable.
 This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A
deterministic simulation, with varying scenarios for future investment return, does not provide a good way of estimating the
cost of providing this guarantee. This is because it does not allow for the volatility of investment returns in each future time
period or the chance that an extreme event in a particular time period leads to an investment return less than the
guarantee. Stochastic modeling builds volatility and variability (randomness) into the simulation and therefore provides
a better representation of real life from more angles.
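 A minimal sketch of such a stochastic projection for the 5% guarantee. The return assumptions (normal, 7% mean, 12% volatility, independent years) are hypothetical and only serve to show how the probability and expected cost of the guarantee can be read off the simulation:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, years = 100_000, 10
guarantee = 0.05                         # guaranteed minimum return of 5% per annum

# Hypothetical fund: annual returns ~ Normal(7%, 12%), independent across years
returns = rng.normal(0.07, 0.12, size=(n_sims, years))
fund = np.prod(1 + returns, axis=1)      # fund value per $1 invested after 10 years
floor = (1 + guarantee) ** years         # guaranteed value per $1 invested

shortfall = np.maximum(floor - fund, 0.0)
print("P(guarantee bites):", (shortfall > 0).mean().round(3))
print("Expected cost per $1 invested:", shortfall.mean().round(4))
# A deterministic "most likely return" projection would price this guarantee at zero.
```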
 Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled
degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular structures
(see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling
phenomena with significant uncertainty in inputs, such as the calculation of risk in
business (for its use in the insurance industry, see stochastic modeling). A classic use is for the
evaluation of definite integrals, particularly multidimensional integrals with complicated boundary
conditions.
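 As a concrete illustration of that classic use, here is a minimal sketch estimating a two-dimensional definite integral by random sampling (the integrand is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000

# Estimate the integral of exp(-x^2 - y^2) over the unit square [0,1] x [0,1]
x, y = rng.random(n), rng.random(n)
f = np.exp(-x**2 - y**2)
estimate = f.mean()                      # area of the domain is 1, so mean = integral
std_err = f.std(ddof=1) / np.sqrt(n)
print(f"integral ≈ {estimate:.4f} ± {std_err:.4f}")
```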
 Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate
investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method is
intended for financial analysts who want to construct stochastic or probabilistic financial models as
opposed to the traditional static and deterministic models.
 Monte Carlo methods are very important in computational physics, physical chemistry, and related applied
fields, and have diverse applications from complicated quantum chromo dynamics calculations to designing
heat shields and aerodynamic forms.
 Monte Carlo methods have also proven efficient in solving coupled integral differential equations of
radiation fields and energy transport, and thus these methods have been used in global illumination
computations which produce photorealistic images of virtual 3D models, with applications in video games,
architecture, design, computer generated films, special effects in cinema, business, economics and other
fields.
 Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find
the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a
random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime, but x says that it
might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime"
when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind
produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a
guarantee, but with a bound on how often that unguaranteed answer is wrong – in this case at most 25% of the
time. See also Las Vegas algorithm for a related, but different, idea.
 This Demonstration shows how to analyze
lifetime test data from data-fitting to a Weibull
distribution function plot.
 The data fit is on a log-log plot by a least squares
fitting method.
 The results are presented as Weibull distribution
CDF and PDF plots.
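 The fitting idea can be sketched in a few lines. This is not the Wolfram Demonstration's code, just a minimal illustration with hypothetical failure-time data, using Bernard's median-rank approximation for the plotting positions:

```python
import numpy as np

# Hypothetical lifetime test data (hours to failure)
data = np.sort(np.array([105., 160., 240., 320., 390., 480., 610., 750.]))
n = data.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions

# Weibull CDF: F(t) = 1 - exp(-(t/lam)**k)  =>  ln(-ln(1 - F)) = k*ln(t) - k*ln(lam)
x = np.log(data)
y = np.log(-np.log(1.0 - F))
k, c = np.polyfit(x, y, 1)                    # least-squares line on the Weibull plot
lam = np.exp(-c / k)
print(f"shape k ≈ {k:.2f}, scale lambda ≈ {lam:.0f} hours")
# k and lam can then be plugged into the Weibull PDF and CDF for plotting.
```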
 The probability density function (PDF - upper plot) is the
derivative of the cumulative density function (CDF - lower
plot). This elegant relationship is illustrated here. The default plot of
the PDF answers the question, "How much of the distribution of a
random variable is found in the filled area; that is, how much
probability mass is there between observation values equal to or
more than 64 and equal to or fewer than 70?"
 The CDF is more helpful. By reading the y axis you can estimate the
probability of a particular observation within that range: take the
difference between 90.8%, the probability of values below 70, and
25.2%, the probability of values below 63, to get 65.6%.
 http://demonstrations.wolfram.com/ConnectingTheCDFAndThePDF/
 An example of a statistical macroscopic relation is the distribution of the magnitude of earthquakes. If N(E) is the annual mean
number of earthquakes (in a zone or worldwide) of size E (energy released), then empirically one finds N(E) ∝ E^(-b) over a wide range,
with b a constant. This relation is called the Gutenberg-Richter law and is obviously a statistical relation for observables - it
does not specify when an earthquake of some magnitude will occur but only what the mean distribution in their magnitude is.
The Gutenberg-Richter law is a power law and is therefore scale-invariant - a change of scale in E can be absorbed in a
normalization constant, leaving the form of the law invariant. The scale-invariance of the law implies a scale-invariance in the
phenomena itself: earthquakes happen on all scales and there is no typical or mean magnitude! There are many other natural
phenomena which exhibit power laws over a wide range of the parameters: Volcanic activity, solar-flares, charge released
during lightning events, length of streams in river networks, forest fires, and even the extinction rate of biological species!
Some of these power laws refer to spatial scale-free structures, or fractals, while some others refer to temporal events and are
examples of the ubiquitous "one-over-f " phenomena (see chapter 2). Can the frequent appearance of such power laws in
complex systems be explained in a simple way? Note that the systems mentioned above are examples of dissipative
structures, with a slow but constant inflow of energy and its eventual dissipation. The systems are clearly out of equilibrium,
since we know that equilibrium systems tend towards uniformity rather than complexity. On the other hand the
abovementioned systems display scale-free behaviour similar to that exhibited by equilibrium systems near a critical point of
a second-order phase transition. However while the critical point in equilibrium systems is reached only for some specific
value of an external parameter, such as temperature, for the dissipative structures above the scale free behaviour appears to
be robust and does not seem to require any fine-tuning. Bak and collaborators proposed that many dissipative complex
systems naturally self-organise to a critical state, with the consequent scale-free fluctuations giving rise to power laws. In
short, the proposal is that self-organised criticality is the natural state of large complex dissipative systems, relatively
independent of initial conditions. It is important to note that while the critical state in an equilibrium second-order phase
transition is unstable (slight perturbations move the system away from it), the critical state of self-organised systems is
stable: systems are continually attracted to it! The idea that many complex systems are in a self-organised critical state is
intuitively appealing because it is natural to associate complexity with a state that is balanced at the edge between total
order and total disorder (sometimes loosely referred to as the "edge of chaos"). Far from the critical point, one typically has a
very ordered phase on one side and a greatly disordered phase on the other side. It is only at the critical point that one has
large correlations among the different parts of a large system, thus making it possible to have novel emergent properties, and
in particular scale-free phenomena. In addition to the examples mentioned above, self-organised criticality has also been
proposed to apply to economics, traffic jams, forest fires and even the brain!
 An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail, to the left are the few that dominate
(also known as the 80-20 rule).
 A power law is any polynomial relationship that exhibits the property of scale invariance. The most common power laws relate two variables and
have the form f(x) = a x^k + o(x^k),
 where a and k are constants, and o(x^k) is an asymptotically small function of x. Here, k is typically called the scaling exponent, denoting the fact
that a power-law function (or, more generally, a kth-order homogeneous polynomial) satisfies the criterion f(cx) = c^k f(x) ∝ f(x), where c is a constant. That is, scaling
the function's argument changes the constant of proportionality as a function of the scale change, but preserves the shape of the function itself.
This relationship becomes clearer if we take the logarithm of both sides (or, graphically, plot on a log-log graph): log f(x) = k log x + log a.
 Notice that this expression has the form of a linear relationship with slope k, and scaling the argument induces a linear shift (up or down) of the
function, and leaves both the form and slope k unchanged.
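 A quick numerical check of the log-log linearization, using arbitrary hypothetical values for a and k:

```python
import numpy as np

a, k = 3.0, -1.5
x = np.logspace(0, 3, 50)        # x from 1 to 1000
f = a * x**k                     # pure power law

slope, intercept = np.polyfit(np.log10(x), np.log10(f), 1)
print(f"slope ≈ {slope:.2f} (the exponent k), intercept ≈ {intercept:.2f} (log10 of a)")
# On a log-log plot the power law is exactly a straight line with slope k.
```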
 Power-law relations characterize a staggering number of natural patterns, and it is primarily in this context that the term power law is used rather
than polynomial function. For instance, inverse-square laws, such as gravitation and the Coulomb force are power laws, as are many common
mathematical formulae such as the quadratic law of area of the circle. Also, many probability distributions have tails that asymptotically follow
power-law relations, a topic that connects tightly with the theory of large deviations (also called
extreme value theory), which considers the frequency of extremely
rare events like stock market crashes, and large natural disasters.
 Scientific interest in power law relations, whether functions or distributions, comes primarily from the ease with which certain general classes of
mechanisms can generate them. That is, the observation of a power-law relation in data often points to specific kinds of mechanisms that underlie
the natural phenomenon in question, and can often indicate a deep connection with other, seemingly unrelated systems (for instance, see both
the reference by Simon and the subsection on universality below). The ubiquity of power-law relations in physics is partly due to
dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy and robustness. A few
notable examples of power laws are the Gutenberg-Richter law for earthquake sizes, Pareto's law of income distribution, or structural self-
similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and
validate them in the real world, is extremely active in many fields of modern science, including physics, computer science, linguistics,
 http://www.panix.com/~kts/Thesis/extreme/extreme1.html
 When NASA missions are under tight time
and budget constraints, they tend to cut
component tests more than anything else.
And less testing means more failures.
 United Airlines Flight 232 was a scheduled flight from
Stapleton International Airport in Denver, Colorado, to
O'Hare International Airport in Chicago, with
continuing service to Philadelphia International Airport.
 On July 19, 1989, the DC-10 (Registration N1819U)
operating the route crash-landed in Sioux City, Iowa,
after suffering catastrophic failure of its tail-mounted
engine, which led to the loss of all flight controls.
 111 people died in the accident while 185 survived
 Investigators were able to recover the aircraft's tailcone as well as half of the fan
containment ring. Also found were fan blade fragments and parts of the
hydraulic lines. Three months after the accident, two pieces of the engine fan
disk were found in the fields near where the first pieces were located. Together
the pieces made up nearly the entire fan disk assembly.
 Two large fractures were found in the disk, indicating overstress failure.
Metallurgical examination showed that the primary fracture had resulted from a
fatigued section on the inside diameter of the disk.
 Further examination showed that the fatiguing had originated from a small cavity on
the surface of the disk, apparently a defect in manufacturing.
 The 17-year-old disk had undergone routine maintenance and had six times been
subjected to fluorescent penetrant inspections. Investigators concluded that
human error was responsible: the fatigued area was not properly identified before
the accident.
 In 1971 a Pan American 747 struck approach light structures for the reciprocal runway as it lifted off the runway at San Francisco Airport. Major
damage to the belly and landing gear resulted, which caused the loss of hydraulic fluid from three of its four flight control systems. The fluid which
remained in the fourth system gave the captain very limited control of some of the spoilers, ailerons, and one inboard elevator. That was sufficient
to circle the plane while fuel was dumped and then to make a hard landing. There were no fatalities, but there were some injuries.[31]
 In 1981, Eastern Airlines Flight 935, operated by a Lockheed L-1011 suffered a similar kind of massive failure of its tail mounted number two engine.
The shrapnel from that engine inflicted damage on all four of its hydraulic systems, which were also close together in the tail structure. Fluid was
lost in three of the four systems. While the fourth hydraulic system was impacted with shrapnel too, it was not punctured. The hydraulic pressure
remaining in that fourth system enabled the captain to land the plane safely with some limited use of the outboard spoilers, the inboard ailerons,
and the horizontal stabilizer, plus differential engine power of the remaining two engines. There were no injuries.[32]
 In 1985 Japan Airlines flight 123, a Boeing 747, suffered a rupture of the pressure bulkhead in its tail section. The damage was extensive and caused
the loss of fluid in all four of its hydraulic control systems. The pilots were able to keep the plane airborne for almost 30 minutes using differential
engine power, but eventually control was lost, and the plane crashed in mountainous terrain. There were only 4 survivors among the 524 on board.
This accident is the deadliest single-aircraft accident in history.[33]
 In 1994, RA85656, a Tupolev Tu-154 operating as Baikal Airlines Flight 130, crashed near Irkutsk shortly after departing from Irkutsk Airport, Russia.
Damage to the starter caused a fire in engine number two (located in the rear of fuselage). High temperatures during the fire destroyed the tanks
and pipes of all three hydraulic systems. The crew lost control of the aircraft. The unmanageable plane, at a speed of 275 knots, hit the ground at a
dairy farm and burned. All passengers and crew, as well as a dairyman on the ground, died.[34]
 In 2003, OO-DLL, a DHL Airbus A300 was struck by a surface-to-air missile shortly after departing from Baghdad International Airport, Iraq. The
missile struck the port side wing, rupturing a fuel tank and causing the loss of all three hydraulic systems. With the flight controls disabled, the crew
was able to use differential thrust to execute a safe landing at Baghdad. This is the first and only documented time anyone has managed to land a
transport aircraft safely without working flight controls.[35]
 The disintegration of a turbine disc, leading to loss of control, was a direct cause of two major aircraft disasters in Poland:
 On March 14, 1980, LOT Polish Airlines Flight 007, an Ilyushin Il-62, attempted a go-around when the crew experienced trouble with a landing gear
indicator. When thrust was applied, the low-pressure turbine disc in engine number 2 disintegrated because of material fatigue; parts of the disc
damaged engines number 1 and 3 and severed control pushers for both horizontal and vertical stabilizers. After 26 seconds of uncontrolled
descent, the aircraft crashed, killing all 87 people on board.[36]
 On May 9, 1987, improperly assembled bearings in engine number 2 on LOT Polish Airlines Flight 5055 overheated and exploded during cruise over
Lipniki village, causing the shaft to break in two; this caused the low pressure turbine disc to spin to enormous speeds and disintegrate, damaging
engine number 1 and cutting the control pushers. The crew managed to return to Warsaw, using nothing but trim tabs to control the Il-62M, but on
the final approach, the trim controlling links burned and the crew completely lost control over the aircraft. Soon after, it crashed on the outskirts of
Warsaw; all 183 on board perished. Had the plane stayed airborne for 40 seconds more, it would have been able to reach the runway.[37]
 It was featured in an episode of Seconds From Disaster on the National Geographic Channel and in MSNBC Investigates on the MSNBC news channel.
 The History Channel distributed a
documentary named Shockwave; a portion of
Episode 7 (originally aired January 25, 2008)
detailed the events of the crash.
 Bent Flyvbjerg
 Nils Bruzelius
 Werner Rothengatter
 Transparency
 "sunlight is said to be the
best of disinfectants”
 Louis Dembitz Brandeis was an Associate Justice on the
Supreme Court of the United States from 1916 to 1939.
 Brandeis made his famous statement that "sunlight is said to be the best of
disinfectants" in a 1913 Harper's Weekly article, entitled "What Publicity Can Do."
But it was an image that had been in his mind for decades.
 Twenty years earlier, in a letter to his fiancée, Brandeis had expressed an interest in writing "a sort of companion piece" to his influential article on "The Right to Privacy," but this time he would focus on "The Duty of Publicity."
 He had been thinking, he wrote, "about the wickedness of people shielding
wrongdoers & passing them off (or at least allowing them to pass themselves off) as
honest men."
 He then proposed a remedy: "If the broad light of day could be let in upon men's actions, it would purify them as the sun disinfects." Interestingly, at that time the word "publicity" referred both to something like what we think of as "public relations" and to the practice of making information widely available to the public (Stoker and Rawlins, 2005).
 That latter definition sounds a lot like what we now mean by transparency.
 All documents be made available to the public
 Public hearings
 Independent peer reviews
 The decision to go ahead with a project should, wherever possible, be made contingent on the willingness of private financiers to participate without a sovereign guarantee.
 Infrastructure grants will let local officials
spend the funds at their discretion but every
dollar they spend on one type of
infrastructure reduces their ability to fund
another.
 Forecasts should be made subject to
 “In no other branch of mathematics is it so easy to blunder as in probability theory.”
 Martin Gardner, “Mathematical Games,” Scientific American, October 1959, pp. 180-182
 Monte Carlo simulation methods are especially useful in studying systems with a large number of
coupled degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular
structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for
modeling phenomena with significant uncertainty in inputs, such as the calculation of
risk in business (for its use in the insurance industry, see stochastic modeling). A classic
use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated
boundary conditions.
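To make the classic use mentioned above concrete, here is a minimal Python sketch (the integrand and sample size are illustrative assumptions, not from the source): the integral of f(x) = x² over [0, 1], whose exact value is 1/3, is estimated by averaging f at uniformly random points.

```python
# Minimal Monte Carlo integration sketch (illustrative values only).
import random

samples = 100_000
# Estimate the integral of f(x) = x**2 over [0, 1] by averaging f at
# uniformly random points; the exact value is 1/3.
estimate = sum(random.random() ** 2 for _ in range(samples)) / samples
print(f"Monte Carlo estimate: {estimate:.4f}   (exact: {1/3:.4f})")
```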
 Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate
investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method
is intended for financial analysts who want to construct stochastic or probabilistic financial models as
opposed to the traditional static and deterministic models.
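To make the contrast with static, deterministic models concrete, the sketch below builds a toy probabilistic cost model: total project cost is the sum of three uncertain line items, each drawn from a triangular distribution. The line items, dollar ranges and budget figure are assumptions invented for illustration; they are not taken from Hubbard's book.

```python
# Toy probabilistic cost model (all line items and ranges are assumed values).
import random

def simulate_total_cost(trials: int = 100_000) -> list:
    """Draw total project cost (in $M) as the sum of three uncertain items."""
    totals = []
    for _ in range(trials):
        design = random.triangular(0.8, 2.0, 1.0)   # (low, high, most likely)
        build = random.triangular(3.0, 8.0, 4.5)
        deploy = random.triangular(0.5, 3.0, 1.0)
        totals.append(design + build + deploy)
    return totals

if __name__ == "__main__":
    totals = sorted(simulate_total_cost())
    deterministic = 1.0 + 4.5 + 1.0            # sum of "most likely" values
    p90 = totals[int(0.9 * len(totals))]
    prob_over_budget = sum(t > 8.0 for t in totals) / len(totals)
    print(f"Deterministic point estimate: ${deterministic:.1f}M")
    print(f"Mean of simulated totals:     ${sum(totals) / len(totals):.1f}M")
    print(f"90th percentile:              ${p90:.1f}M")
    print(f"P(total cost > $8M budget):   {prob_over_budget:.1%}")
```

Unlike the single point estimate, the simulation yields a full distribution of outcomes, so a decision maker can read off percentiles or the probability of exceeding a budget.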
 Monte Carlo methods are very important in computational physics, physical chemistry, and related
applied fields, and have diverse applications from complicated quantum chromodynamics calculations
to designing heat shields and aerodynamic forms.
 Monte Carlo methods have also proven efficient in solving coupled integral differential equations of
radiation fields and energy transport, and thus these methods have been used in global illumination
computations which produce photorealistic images of virtual 3D models, with applications in video
games, architecture, design, computer generated films, special effects in cinema, business, economics
and other fields.
 Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can
find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not
prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime, but x
says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee, but with a bound on how often that answer is wrong (in this case, at most 25% of the time). See also the Las Vegas algorithm for a related, but different, idea.
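A minimal sketch of such a Monte Carlo primality test (a standard Miller-Rabin style check, written here for illustration rather than taken from the text): each random witness x has at least a 75% chance of exposing a composite n, so k witnesses that all fail to do so bound the error probability by (1/4)^k.

```python
# Monte Carlo primality test in the spirit of Rabin's algorithm (illustrative sketch).
import random

def is_probably_prime(n: int, k: int = 10) -> bool:
    """Return False if n is certainly composite, True if n is probably prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # quick trial division for tiny factors
        if n % p == 0:
            return n == p
    r, d = 0, n - 1                        # write n - 1 as 2**r * d with d odd
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):                     # k independent random witnesses
        x = random.randrange(2, n - 1)
        y = pow(x, d, n)
        if y in (1, n - 1):
            continue                       # this witness proves nothing
        for _ in range(r - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return False                   # x proves that n is composite
    return True                            # error probability at most (1/4)**k

if __name__ == "__main__":
    print(is_probably_prime(589))          # 19 * 31 -> False
    print(is_probably_prime(2**61 - 1))    # a Mersenne prime -> True
```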
 The Senate committee hearings that Pecora led probed the causes of the Wall Street Crash of 1929 and launched a major reform of the American financial system.
 “Pitch darkness was among the bankers' strongest allies.”
 “Economists for decades have shown that
transparency lowers margins, leads to
greater liquidity and more competition in the
marketplace…Transparent pricing is also a
critical feature of lowering the risk at banks,
and at the derivatives clearinghouses as
well.” Gary Gensler, Chairman, Commodity Futures Trading Commission, New York Times, 27 November 2011
 Spurred by these revelations, the United
States Congress enacted the Glass–Steagall
Act, the Securities Act of 1933 and the
Securities Exchange Act of 1934.
 Judgment Under Uncertainty:
Heuristics and Biases. Amos Tversky
and Daniel Kahneman
 Science, Volume 185, 1974
 Research for DARPA N00014-73C-
0438 monitored by ONR and Research
and Development Authority of
Hebrew University, Jerusalem, Israel.
 Biases in the evaluation of compound events
are particularly significant in the context of
planning. The successful completion of an
undertaking, such as the development of a
new product, typically has a conjunctive
character: for the undertaking to succeed,
each of a series of events must occur. Even
when each of these events is very likely, the
overall probability of success can be quite low
if the number of events is large.
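A quick arithmetic illustration of this conjunctive effect (the numbers are assumptions chosen for illustration): if an undertaking requires 20 steps and each succeeds with probability 0.95, the chance that all of them succeed is only about 36%.

```python
# Conjunctive events: even highly likely individual steps compound to a low
# overall probability of success (numbers are illustrative assumptions).
p_each, n_steps = 0.95, 20
print(f"P(all {n_steps} steps succeed) = {p_each ** n_steps:.2f}")   # ~0.36
```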
 “The new program baseline projects total acquisition costs of $395.7 billion, an
increase of $117.2 billion (42%) from the prior 2007 baseline. Full rate production
is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit costs per
aircraft have doubled since start of development in 2001…. Since 2002, the total
quantity through 2017 has been reduced by three-fourths, from 1,591 to 365.
Affordability is a key challenge…. Overall performance in 2011 was mixed as the
program achieved 6 of 11 important objectives…. Late software releases and
concurrent work on multiple software blocks have delayed testing and training.
Development of critical mission systems providing core combat capabilities
remains behind schedule and risky…. Most of the instability in the program has
been and continues to be the result of highly concurrent development, testing,
and production activities. Cost overruns on the first four annual procurement
contracts total more than $1 billion and aircraft deliveries are on average more
than 1 year late. Program officials said the government’s share of the cost growth
is $672 million; this adds about $11 million to the price of each of the 63 aircraft
under those contracts.”
 In well-run firms in the private sector,
occasional problems are reluctantly
tolerated, but not disclosing them to
management is a crime.
 "Unless you can point the finger at
the man who is responsible when
something goes wrong, then you
never had anyone really
responsible."
▪ Hyman G. Rickover, Admiral, USN
▪ Director of Naval Reactors
 Fought in 406 BC during the Peloponnesian War just east of the
island of Lesbos. In the battle, an Athenian fleet commanded by eight
strategoi defeated a Spartan fleet under Callicratidas. The battle was
precipitated by a Spartan victory which led to the Athenian fleet under
Conon being blockaded at Mytilene; to relieve Conon, the Athenians
assembled a scratch force composed largely of newly constructed
ships manned by inexperienced crews.
 This inexperienced fleet was thus tactically inferior to the Spartans,
but its commanders were able to circumvent this problem by
employing new and unorthodox tactics, which allowed the Athenians
to secure a dramatic and unexpected victory.
 The news of the victory itself was met with jubilation at Athens, and
the grateful Athenian public voted to bestow citizenship on the slaves
and metics who had fought in the battle. Their joy was tempered,
however, by the aftermath of the battle, in which a storm prevented
the ships assigned to rescue the survivors of the 25 disabled or sunken
Athenian triremes from performing their duties, and a great number of
sailors drowned. A fury erupted at Athens when the public learned of
this, and after a bitter struggle in the assembly six of the eight
generals who had commanded the fleet were tried as a group and
executed.
 Generals were frequently subject to impeachment and prosecution in the courts. Penalties ranged from fines and banishment to execution. The fines imposed might be truly
monumental, figures that could swallow up the estates of the
very richest Athenians.
 In 430 BC Pericles himself was removed summarily from office
by the assembly and fined.
 After the victorious naval battle of Arginusae in 406 BC, all
eight generals in command on the day were tried and sentenced
to death for failing to rescue survivors, though not all came
home to accept the penalty.
 A storm had prevented the victorious admirals from
picking up the crews of sunken ships. Many of them
drowned, and for this, the admirals were held
responsible.
 The alignment of
interests and
incentives is
elusive because
today’s
acquisition
culture lacks
meaningful
consequences for
failure.
 “Dans ce pays-ci, il est bon de tuer de temps en temps un amiral pour encourager les autres.”
 The king did not exercise his royal prerogative of mercy, and John Byng was shot on 14 March 1757 in the Solent on the forecastle of HMS Monarch by a platoon of musketeers.
 Byng's execution was satirized by Voltaire in his novel Candide.
 In Portsmouth, Candide witnesses the execution of an officer by firing squad and is told that
 "in this country, it is wise to kill an admiral from time to time to encourage the others."
 "What is surprising is not the magnitude of our
forecast errors," observes Mr. Taleb, "but our
absence of awareness of it."
 We tend to fail, miserably, at predicting the future, but such failure is little noted and not long
remembered. It seems to be of remarkably little
professional consequence.
 "Black swans" are highly consequential but unlikely events that are easily
explainable – but only in retrospect.
• Black swans have shaped the history of technology, science, business and
culture.
 • As the world gets more connected, black swans are becoming more
consequential.
 • The human mind is subject to numerous blind spots, illusions and biases.
 • One of the most pernicious biases is misusing standard statistical tools, such as
the “bell curve,” that ignore black swans.
 • Other statistical tools, such as the "power-law distribution," are far better at modeling many important phenomena (see the tail-probability sketch after this list).
 • Expert advice is often useless.
 • Most forecasting is pseudoscience.
• You can retrain yourself to overcome your cognitive biases and to appreciate randomness, but it's not easy.
 • You can hedge against negative black swans while benefiting from positive ones.
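The sketch below (all parameters are assumed purely for illustration) shows why the choice of distribution matters so much in the tails: a ten-standard-deviation event is essentially impossible under a normal ("bell curve") model, but quite thinkable under a power-law (Pareto) model.

```python
# Tail probabilities under a normal model versus a power-law model
# (the threshold of 10 and the Pareto parameters are illustrative assumptions).
from math import erfc, sqrt

def normal_tail(z: float) -> float:
    """P(X > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2.0))

def pareto_tail(x: float, alpha: float = 2.0, x_min: float = 1.0) -> float:
    """P(X > x) for a Pareto (power-law) variable with tail index alpha."""
    return (x_min / x) ** alpha

print(f"Normal model:    P(X > 10 sigma) = {normal_tail(10):.1e}")   # ~7.6e-24
print(f"Power-law model: P(X > 10)       = {pareto_tail(10):.1e}")   # 1.0e-02
```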
 "Much of what happens in history
comes from 'Black Swan dynamics',
very large, sudden, and totally
unpredictable 'outliers', while much of
what we usually talk about is almost
pure noise.
• Our track record in predicting those
events is dismal; yet by some
mechanism called the hindsight bias
we think that we understand them.
We have a bad habit of finding 'laws'
in history (by fitting stories to events
and detecting false patterns); we are
drivers looking through the rear view
mirror while convinced we are
looking ahead."
 The term Black–Scholes refers to three closely related concepts:
 The Black–Scholes model is a mathematical model of the market for an equity, in which the equity's
price is a stochastic process.
 The Black–Scholes PDE is a partial differential equation which (in the model) must be satisfied by the
price of a derivative on the equity.
 The Black–Scholes formula is the result obtained by solving the Black-Scholes PDE for European put
and call options.
 Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the
options pricing model and coined the term "Black-Scholes options pricing model," building on work published by Fischer Black and Myron Scholes. The paper was first published in 1973. The
foundation for their research relied on work developed by scholars such as Louis Bachelier, Edward O.
Thorp, and Paul Samuelson. The fundamental insight of Black-Scholes is that the option is implicitly
priced if the stock is traded.
 Merton and Scholes received the 1997 Nobel Prize in Economics for this and related work. Though
ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the
Swedish academy.
 http://www.pbs.org/wgbh/nova/stockmarket/
 In 1973, the options-pricing model developed by Fischer Black and Myron Scholes, and expanded on by Robert C. Merton, was published. The new model enabled more-effective pricing and mitigation
of risk. It could calculate the value of an option to buy a security as
long as the user could supply five pieces of data: the risk-free rate of
return (usually defined as the return on a three-month U.S. Treasury
bill), the price at which the security would be purchased (usually
given), the current price at which the security was traded (to be
observed in the market), the remaining time during which the option
could be exercised (given), and the security’s price volatility (which
could be estimated from historical data and is now more commonly
inferred from the prices of options themselves if they are traded).
 The equations in the model assume that the underlying security’s price
mimics the random way in which air molecules move in space, familiar
to engineers as Brownian motion.
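For reference, here is a minimal sketch of the Black-Scholes formula for a European call option, written in terms of the five inputs listed above; the numbers in the example at the bottom are illustrative assumptions, not values from the text.

```python
# Black-Scholes value of a European call option (sketch; example values assumed).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """
    S: current market price of the security
    K: price at which the security can be purchased (strike price)
    T: remaining time during which the option can be exercised, in years
    r: risk-free rate of return (e.g. a Treasury-bill yield)
    sigma: annualized volatility of the security's price
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

if __name__ == "__main__":
    # Illustrative inputs: one-year option struck at $105 on a $100 stock,
    # 2% risk-free rate, 20% volatility.
    print(f"Call value: ${black_scholes_call(100, 105, 1.0, 0.02, 0.20):.2f}")
```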
 “But this long run is a misleading guide to current
affairs. In the long run we are all dead.”
John Maynard Keynes
Keynes identified three domains of probability:
Frequency probability;
Subjective or Bayesian probability; and
Events lying outside the possibility of any description in terms of probability (special causes).
He based a theory of probability thereon.
"It ain't over till it's over”
Yogi Berra
 The Harken deal was a smaller-scale version of the accounting scandals at WorldCom, Enron and other firms. Bush's purchase and sale of the Texas Rangers baseball team reveals other characteristic features of the past several decades of American capitalism: the plundering of public assets for private gain, the confluence of political and economic power, and the defrauding of the American people.
 By the time he cashed out in 1998, Bush’s return
on his original $600,000 investment in the Rangers
was 2,400 percent.
 Where did all of this money come from and what did Bush do to get it? Much of the story was first reported nationally by Joe Conason in a February 2000 article for Harper's Magazine. A report from the public interest group, Center for Public Integrity, and recent columns on July 16 in the New York Times by Paul Krugman and Nicholas Kristof have filled in some of the details.

A free stadium, and some choice land on the side

The same factors that propelled Bush virtually overnight from failed oil man to wealthy corporate executive—family connections and the desire of rich Texas businessmen to exploit the Bush name—opened the way for him to buy a stake in the professional baseball team. Bill DeWitt, part owner of Spectrum 7, which had bought Bush's own company several years earlier and then later sold out to Harken, offered the son of the then-US president a chance to join in a bid for the Rangers. In 1989 a deal was reached in which Richard Rainwater, a wealthy Texas financier, joined Bush and several other investors in buying the team.

Bush himself did not have a large fortune at the time, and only bought a two percent share, financed with a $500,000 loan from a bank on whose board of directors he had once served. Bush used the proceeds from his questionable sale of Harken stock to repay this loan. Bush's formal title was “managing partner.” He served essentially as a public face, whose main responsibility was to attend the home baseball games. Edward Rose, another wealthy Texas investor and Rainwater's associate, was responsible for the actual business operations of the team.

The top priority for the new Rangers owners in increasing the value of their holdings was to acquire a new stadium. They had no intention of paying for the stadium themselves, so they threatened to move the team if the city of Arlington did not foot the bill. The city government readily agreed to a generous deal. Reached in the fall of 1990, it guaranteed that the city would pay $135 million of an estimated cost of $190 million. The remainder was raised through a ticket surcharge. Thus, local taxpayers and baseball fans financed the entire cost of the stadium.

Moreover, the owners were allowed to buy back the stadium for a mere $60 million, which was deducted from ticket revenues at a rate of no more than $5 million per year. The Rangers syndicate was also given a property tax exemption and a sales tax exemption on products purchased for use in the stadium. City residents ended up subsidizing these tax breaks for the Rangers owners by paying higher local rates. This plan was sold to Arlington voters with Bush's help. At the end of the day, the owners of the Rangers, including Bush, got a stadium worth nearly $200 million without putting down a penny of their own money.

But the boondoggle did not end there. As part of the deal, the Rangers syndicate got a sizable chunk of land in addition to the stadium. This land naturally increased in value as a result of the stadium's construction. To oblige the owners, Ann Richards, the Democratic Governor of Texas at the time, signed into law an extraordinary measure that set up the Arlington Sports Facilities Development Authority (ASFDA), which was granted the power to seize privately owned land
A critique of doug hubbards the failure of risk management

  • 3.  divided into three parts:  (1) the first part introduces the crisis in risk management;  (2) the second deals with why some popular risk management practices are flawed;  (3) the third discusses what needs to be done to fix these.
  • 4.  Code of Hammurabi –  compensation or indemnification for those harmed by bandits or floods.  Careful selection of debtors- Called underwriting in insurance  Development of probability theory and statistics
  • 5.  There are several risk management methodologies and techniques in use ; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:  Do these risk management methods work?  Would any organization that uses these techniques know if they didn’t work?  What would be the consequences if they didn’t?
  • 6.  His contention is that for most organizations the answers to the first two questions are negative.  To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut lines to all them. This is an example of common mode failure – a single event causing multiple systems to fail.  The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error).  The probability estimate hadn’t considered human oversight, which is way more likely than one-in-billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
  • 7.  Following a very brief history of risk management from historical times to the present, Hubbard presents a list of common methods of risk management. These are:  Expert intuition – essentially based on “gut feeling”  Expert audit – based on expert intuition of independent consultants. Typically involves the development of checklists and also uses stratification methods (see next point)  Simple stratification methods – risk matrices are the canonical example of stratification methods.  Weighted scores – assigned scores for different criteria (scores usually assigned by expert intuition), followed by weighting based on perceived importance of each criterion.  Non-probabilistic financial analysis –techniques such as computing the financial consequences of best and worst case scenarios  Calculus of preferences – structured decision analysis techniques such as multi-attribute utility and analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved these techniques ensure that the judgements are logically consistent (i.e. do not contradict the principles of logic).  Probabilistic models – involves building probabilistic models of risk events. Probabilities can be based on historical data, empirical observation or even intuition. The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built
  • 8.  The book also discusses the state of risk management practice (at the end of 2008) as assessed by surveys carried out by The Economist, Protiviti and Aon Corporation. Hubbard notes that the surveys are based largely on self-assessments of risk management effectiveness. One cannot place much confidence in these because self-assessments of risk are subject to well known psychological effects such as cognitive biases (tendencies to base judgments on flawed perceptions) and the Dunning-Kruger effect (overconfidence in one’s abilities).  The acid test for any assessment is whether or not it use sound quantitative measures. Many of the firms surveyed fail on this count: they do not quantify risks as well as they claim they do. Assigning weighted scores to qualitative judgements does not count as a sound quantitative technique – more on this later.
  • 9.  The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to recognize their mistakes.[1]  The unskilled therefore suffer from illusory superiority, rating their ability as above average, much higher than it actually is, while the highly skilled underrate their own abilities, suffering from illusory inferiority.  Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding.  As Kruger and Dunning conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others" (p. 1127).[2] The effect is about paradoxical defects in cognitive ability, both in oneself and as one compares oneself to others.
  • 10.  So, what are some good ways of measuring the effectiveness of risk management? Hubbard lists the following:  Statistics based on large samples  Direct evidence  Component testing  Check of completeness
  • 11.  Statistics based on large samples – the use of this depends on the availability of historical or other data that is similar to the situation at hand.  Direct evidence – this is where the risk management technique actually finds some problem that would not have been found otherwise. For example, an audit that unearths dubious financial practices  Component testing – even if one isn’t able to test the method end-to-end, it may be possible to test specific components that make up the method. For example, if the method uses computer simulations, it may be possible to validate the simulations by applying them to known situations.  Check of completeness – organisations need to ensure that their risk management methods cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the probability of another. Further, as Hubbard states, “A risk that’s not even on the radar cannot be managed at all.” As far as completeness is concerned, there are four perspectives that need to be taken into account. These are:  Internal completeness – covering all parts of the organisation  External completeness – covering all external entities that the organisation interacts with.  Historical completeness – this involves covering worst case scenarios and historical data.  Combinatorial completeness – this involves considering combinations of events that may occur together; those that may lead to common-mode failure discussed earlier.
  • 12.  Hubbard begins this section by identifying the four major players in the risk management game. These are:  Actuaries  Physicists and Mathematicians  Economists  Management Consultants
  • 13.  These are perhaps the first modern professional risk managers. They use quantitative methods to manage risks in the insurance and pension industry.  Although the methods actuaries use are generally sound, the profession is slow to pick up new techniques.  Further, many investment decisions that insurance companies make do not come under the purview of actuaries.  So, actuaries typically do not cover the entire spectrum of organizational risks.
  • 14.  Many rigorous risk management techniques came out of statistical research done during the second world war. Hubbard therefore calls this group War Quants.  One of the notable techniques to come out of this effort is the Monte Carlo Method – originally proposed by Nick Metropolis, John Neumann and Stanislaw Ulam as a technique to calculate the averaged trajectories of neutrons in fissile material (see this article by Nick Metropolis for a first-person account of how the method was developed).  Hubbard believes that Monte Carlo simulations offer a sound, general technique for quantitative risk analysis. Consequently he spends a fair few pages discussing these methods, albeit at a very basic level. More about this later.
  • 15.  Risk analysts in investment firms often use quantitative techniques from economics. Popular techniques include modern portfolio theory and models from options theory (such as the Black-Scholes model) . The problem is that these models are often based on questionable assumptions.  For example, the Black-Scholes model assumes that the rate of return on a stock is normally distributed (i.e. its value is lognormally distributed) – an assumption that’s demonstrably incorrect as witnessed by the events of the last few years .  Another way in which economics plays a role in risk management is through behavioural studies, in particular the recognition that decisions regarding future events (be they risks or stock prices) are subject to cognitive biases. Hubbard suggests that the role of cognitive biases in risk management has been consistently overlooked.  See my post entitled Cognitive biases as meta-risks and its follow-up for more on this point.
  • 16.  In Hubbard’s view, management consultants and standards institutes are largely responsible for many of the ad-hoc approaches to risk management.  A particular favorite of these folks are ad-hoc scoring methods that involve ordering of risks based on subjective criteria. The scores assigned to risks are thus subject to cognitive bias.  Even worse, some of the tools used in scoring can end up ordering risks incorrectly.  Bottom line: many of the risk analysis techniques used by consultants and standards have no justification.
  • 18.  Following the discussion of the main players in the risk arena, Hubbard discusses the confusion associated with the definition of risk.  There are a plethora of definitions of risk, most of which originated in academia. Hubbard shows how some of these contradict each other while others are downright non-intuitive and incorrect.  In doing so, he clarifies some of the academic and professional terminology around risk.  As an example, he takes exception to the notion of risk as a “good thing” – as in the PMI definition, which views risk as “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project objective.”  This definition contradicts common (dictionary) usage of the term risk (which generally includes only bad stuff). Hubbard’s opinion on this may raise a few eyebrows (and hackles!) in project management circles, but I reckon he has a point.
  • 19.  ‘The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved.’  — FROM THE INTRODUCTION TO ‘‘AGAINST THE GODS: THE REMARKABLE STORY OF RISK,’’ BY PETER L. BERNSTEIN  http://www.mckinseyquarterly.com/Peter_L_Bernstein_on_risk_2211
  • 21.  Frank H. Knight was one of the founders of the so-called Chicago school of economics, of which Milton Friedman and George Stigler were the leading members from the 1950s to the 1980s.  Knight made his reputation with his book Risk, Uncertainty, and Profit, which was based on his Ph.D. dissertation. In it Knight set out to explain why “perfect competition” would not necessarily eliminate profits.  His explanation was “uncertainty,” which Knight distinguished from risk. According to Knight, “risk” refers to a situation in which the probability of an outcome can be determined, and therefore the outcome insured against. “Uncertainty,” by contrast, refers to an event whose probability cannot be known.  Knight argued that even in long-run equilibrium, entrepreneurs would earn profits as a return for putting up with uncertainty. Knight’s distinction between risk and uncertainty is still taught in economics classes today.
  • 22.  [To differentiate] the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term uncertainty for the latter.
  • 23.  Probability, then, is concerned with professedly uncertain [emphasis added] judgments.2  The word risk has acquired no technical meaning in economics, but signifies here as elsewhere [emphasis added] chance of damage or loss.
  • 24.  If you wish to converse with me, define your terms  Voltaire
  • 25.  Uncertainty – the lack of complete certainty, that is, the existence of more than one possibility. The “true” outcome/state/result/value is not known.  Measurement – a set of probabilities assigned to a set of possibilities. For example, there is a 60% chance of rain tomorrow and a 40% chance it won’t rain.
  • 26.  By “uncertain” knowledge … I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty…. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention…. About these matters, there is no scientific basis on which to form any calculable probability whatever. We simply do not know!
  • 27.  A state of uncertainty where some of the possibilities involve loss, injury, catastrophe, or other undesirable outcome (i.e. something bad could happen in the future; if–then).  Measurement of risk – a set of possibilities, each with quantified probabilities and quantified losses. For example, “we believe there is a 40% chance a proposed oil well will be dry, with a loss of $12m in exploratory drilling costs.”
  • 28.  Risk: Well, it certainly doesn't mean standard deviation. People mainly think of risk in terms of downside risk. They are concerned about the maximum they can lose. So that's what risk means.  In contrast, the professional view defines risk in terms of variance, and doesn't discriminate gains from losses. There is a great deal of miscommunication and misunderstanding because of these very different views of risk. Beta does not do it for most people, who are more concerned with the possibility of loss  Daniel Kahneman
  • 29.  Measuring risks, especially important long-term ones, is imprecise and difficult. Virtually none of the economic statistics reported in the media measure risk.  To fully comprehend risk, we must stretch our imagination to think of all the different ways that things can go wrong, including things that have not happened in recent memory.  We must protect ourselves against fallacies, such as thinking that just because a risk has not proved damaging for decades, it no longer exists.
  • 30.  Yet another psychological barrier is a sort of ego involvement in our own success.  Our tendency to take full credit for our successes discourages us from facing up to the possibility of loss or failure, because considering such prospects calls into question our self-satisfaction.  Indeed, self-esteem is one of the most powerful human needs: a view of our own success relative to others provides us with a sense of meaning and well-being.
  • 31.  So accepting the essential randomness of life is terribly difficult, and contradicts our deep psychological need for order and accountability.  We often do not protect the things that we have - such as our opportunities to earn income and accumulate wealth - because we mistakenly believe that our own natural superiority will do that for us.
  • 32.  Risk has to include some probability of loss— this excludes Knight’s definition.  Risk involves only losses (not gains)---this excludes PMI’s definition  Outside of finance, volatility may not necessarily entail risk---this excludes considering volatility alone as synonymous with risk.
  • 33.  Risk is not just the product of probability and loss. Multiplying them together unnecessarily presumes that the decision maker is risk neutral. Keep risk as a vector quantity, where probability and magnitude of loss are kept separate until we compare it to the risk aversion of the decision maker.  Risk can be made up of discrete or continuous losses and associated probabilities. We do not need to make the distinction sometimes made in construction engineering that risk covers only discrete events.
  • 34.  According to the peak-end rule, we judge our past experiences almost entirely on how they were at their peak (pleasant or unpleasant) and how they ended. Other information is not lost, but it is not used. This includes net pleasantness or unpleasantness and how long the experience lasted.  In one experiment, one group of people was subjected to loud, painful noises. Subjects in a second group were exposed to the same loud, painful noises as the first group, after which somewhat less painful noises were appended. This second group rated the experience of listening to the noises as much less unpleasant than the first group, despite having been subjected to more total discomfort: they experienced the same initial noises, followed by an extended period of reduced unpleasantness.  This heuristic was first suggested by Daniel Kahneman and others. He argues that because people seem to perceive not the sum of an experience but its average, it may be an instance of the representativeness heuristic.
  • 35.  Why we shouldn’t trust the numbers in our head.  Peak end rule. We tend to remember extremes and not the mundane.  Misconceptions of chance ▪ (H=heads, T=Tails): HHHTTT or HTHTTH? ▪ Actually they are equally likely. But since the first “appears” to be less random than the second, it must be less likely.
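A quick numerical check of the coin-flip point above: any specific sequence of six fair flips has probability (1/2)^6, however patterned it looks. Below is a minimal sketch in Python (the simulation size is arbitrary) confirming that HHHTTT and HTHTTH occur at essentially the same rate.

```python
import random

# Any specific sequence of 6 fair flips has probability (1/2)**6 = 1/64.
print("exact probability of any given 6-flip sequence:", 0.5 ** 6)

targets = {"HHHTTT": 0, "HTHTTH": 0}
trials = 200_000
for _ in range(trials):
    seq = "".join(random.choice("HT") for _ in range(6))
    if seq in targets:
        targets[seq] += 1

for seq, count in targets.items():
    print(seq, "observed frequency:", count / trials)   # both near 1/64 ≈ 0.0156
```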
  • 36.  In my opinion, the most important sections of the book are chapters 6 and 7, where Hubbard discusses why “expert knowledge and opinions” (favoured by standards and methodologies) are flawed and why a very popular scoring method (risk matrices) is “worse than useless.” See my posts on the limitations of scoring techniques and Cox’s risk matrix theorem for detailed discussions of these points.
  • 37.  A major problem with expert estimates is overconfidence. To overcome this, Hubbard advocates using calibrated probability assessments to quantify analysts’ abilities to make estimates. Calibration assessments involve getting analysts to answer trivia questions and eliciting confidence intervals for each answer. The confidence intervals are then checked against the proportion of correct answers.  Essentially, this assesses experts’ abilities to estimate by tracking how often they are right. It has been found that people can improve their ability to make subjective estimates through calibration training – i.e. repeated calibration testing followed by feedback. See this site for more on probability calibration.
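To make the calibration idea concrete, here is a minimal sketch of how a calibration exercise is scored. The interval and answer values are hypothetical; the point is simply that an analyst who claims 90% confidence should capture the true value roughly 90% of the time, and a much lower hit rate signals overconfidence.

```python
# Hypothetical calibration test: an analyst supplied 90% confidence intervals
# (low, high) for ten trivia questions; 'truths' holds the actual answers.
intervals = [(1200, 2500), (5, 20), (1850, 1910), (30, 70), (100, 400),
             (2, 8), (1000, 5000), (50, 90), (10, 25), (300, 900)]
truths = [2100, 12, 1886, 55, 650, 6, 3500, 81, 30, 450]

hits = sum(low <= truth <= high for (low, high), truth in zip(intervals, truths))
hit_rate = hits / len(truths)
print(f"stated confidence: 90%, observed hit rate: {hit_rate:.0%}")
# A hit rate well below 90% is the usual signature of overconfidence;
# repeated testing with feedback is what calibration training tries to fix.
```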
  • 38.  Next Hubbard tackles several “red herring” arguments that are commonly offered as reasons not to manage risks using rigorous quantitative methods. Among these are arguments that quantitative risk analysis is impossible because:  Unexpected events cannot be predicted.  Risks cannot be measured accurately.
  • 39.  Hubbard states that the first objection is invalid: although some events (such as spectacular stockmarket crashes) may have been overlooked by models, this doesn’t prove that quantitative risk analysis as a whole is flawed.  As he discusses later in the book, many models go wrong by assuming Gaussian probability distributions where fat-tailed ones would be more appropriate. Of course, given limited data it is difficult to figure out which distribution is the right one.  So, although Hubbard’s argument is correct, it offers little comfort to the analyst who has to model events before they occur.
  • 41.  As far as the second is concerned, Hubbard has written another book on how just about any business variable (even intangible ones) can be measured.  That book makes a persuasive case that most quantities of interest can be measured, but there are difficulties.  First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other things, on the availability of reliable data and the analyst’s experience.  Second, much depends on the judgement of the analyst, and such judgements are subject to bias.  Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea for all biases.  Third, risk-related measurements generally involve events that are yet to occur.  Consequently, such measurements are based on incomplete information. To make progress one often has to make additional assumptions which may not be justifiable a priori.
  • 43. Cost analysis, used to develop cost estimates for such things as hardware systems, automated information systems, civil projects, manpower, and training, can be defined as 1. the effort to develop, analyze, and document cost estimates with analytical approaches and techniques; 2. the process of analyzing and estimating the incremental and total resources required to support past, present, and future systems—an integral step in selecting alternatives; and 3. a tool for evaluating resource requirements at key milestones and decision points in the acquisition process. Cost estimating involves collecting and analyzing historical data and applying quantitative models, techniques, tools, and databases to predict a program’s future cost. More simply, cost estimating combines science and art to predict the future cost of something based on known historical data that are adjusted to reflect new materials, technology, software languages, and development teams. Because cost estimating is complex, sophisticated cost analysts should combine concepts from such disciplines as accounting, budgeting, computer science, economics, engineering, mathematics, and statistics and should even employ concepts from marketing and public affairs. And because cost estimating requires such a wide range of disciplines, it is important that the cost analyst either be familiar with these disciplines or have access to an expert in these fields.
  • 44.  They are often used without empirical data or validation – i.e. their inputs and results are not tested through observation.  Are generally used piecemeal – i.e. used in some parts of an organisation only, and often to manage low-level, operational risks.  They frequently focus on variables that are not important (because these are easier to measure) rather than those that are important. Hubbard calls this perverse occurrence measurement inversion. He contends that analysts often exclude the most important variables because these are considered to be “too uncertain.”  They use inappropriate probability distributions. The Normal distribution (or bell curve) is not always appropriate. For example, see my posts on the inherent uncertainty of project task estimates for an intuitive discussion of the form of the probability distribution for project task durations.  They do not account for correlations between variables. Hubbard contends that many analysts simply ignore correlations between risk variables (i.e. they treat variables as independent when they actually aren’t). This almost always leads to an underestimation of risk because correlations can cause feedback effects and common mode failures.
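The correlation point in the last bullet is easy to demonstrate. The sketch below (hypothetical lognormal loss drivers, an assumed correlation of 0.7) shows that two correlated risks have roughly the same mean total loss as two independent ones, but a noticeably fatter right tail, which is exactly what gets missed when correlations are ignored.

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 100_000, 0.7          # rho is the assumed correlation between drivers

# Independent case: the two lognormal loss drivers share nothing.
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
independent_total = np.exp(z1) + np.exp(z2)

# Correlated case: the underlying normals have correlation rho.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
correlated_total = np.exp(z[:, 0]) + np.exp(z[:, 1])

for label, total in [("independent", independent_total),
                     ("correlated ", correlated_total)]:
    print(label, "mean:", round(total.mean(), 2),
          " 95th percentile:", round(np.percentile(total, 95), 2),
          " 99th percentile:", round(np.percentile(total, 99), 2))
# Means are nearly identical, but the correlated totals have larger
# high percentiles: ignoring the correlation understates the risk.
```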
  • 45.  It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the better known long-tailed distributions include lognormal and power law distributions.  A quick, informal review of project management literature revealed that lognormal distributions are more commonly used than power laws to model activity duration uncertainties.  This may be because lognormal distributions have a finite mean and variance whereas power law distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for example). [An aside: if you're curious as to why infinities are possible in the latter, it is because power laws decay more slowly than lognormal distributions – i.e. they have "fatter" tails, and hence enclose larger (even infinite) areas.]  In any case, regardless of the exact form of the distribution for activity durations, what’s important and non-controversial is the short cutoff, the peak and the long, decaying tail. These characteristics are true of all probability distributions that describe activity durations.
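The difference between a lognormal and a power-law (Pareto) tail can be seen with a couple of lines of scipy. The parameters below are illustrative only: both distributions are scaled to have median 1, and the Pareto shape of 1.5 is chosen so that its variance is infinite.

```python
from scipy import stats

lognormal = stats.lognorm(s=1.0, scale=1.0)               # median 1, finite variance
alpha = 1.5
pareto = stats.pareto(b=alpha, scale=2 ** (-1 / alpha))   # median 1, infinite variance

for name, dist in [("lognormal", lognormal), ("pareto   ", pareto)]:
    print(name,
          " median:", round(float(dist.median()), 3),
          " P(X > 100):", f"{float(dist.sf(100)):.2e}",
          " variance:", float(dist.var()))
# The Pareto's P(X > 100) is orders of magnitude larger than the lognormal's,
# and its variance is infinite: that is what a genuinely fat tail looks like.
```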
  • 46.  There’s one immediate consequence of the long tail: if you want to be really, really sure of completing any activity, you have to add a lot of “air” or safety because there’s a chance that you may “slip in the shower” so to speak. Hence, many activity estimators add large buffers to their estimates.  Project managers who suffer the consequences of the resulting inaccurate schedule are thus victims of the tail.
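To put a number on that "air", here is a small sketch using an assumed lognormal activity duration with a median of 10 days. The gap between the median and the 95th percentile is the buffer an estimator has to add to be really, really sure.

```python
from scipy import stats

# Assumed activity duration: lognormal, median 10 days, log-scale sigma 0.5.
duration = stats.lognorm(s=0.5, scale=10)

p50, p80, p95 = (duration.ppf(q) for q in (0.50, 0.80, 0.95))
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days, P95 = {p95:.1f} days")
print(f"buffer needed for 95% confidence: {p95 / p50 - 1:.0%} above the median")
```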
  • 49. CONTROL or lack thereof
  • 50.  One can study randomness, at three levels: mathematical, empirical, and behavioral.  Mathematical The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem because we've pretty much reached small returns in what we can develop in that branch.  Empirical The second one is the dynamics of the real world, the dynamics of history, what we can and cannot model, how we can get into the guts of the mechanics of historical events, whether quantitative models can help us and how they can hurt us.  Behavioral And the third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not see, even if our logic recommends otherwise. ▪ We tend to overestimate causal relationships ▪ When we meet someone who by playing Russian roulette became extremely influential, wealthy, and powerful, we still act toward that person as if he gained that status just by skills, even when you know there's been a lot of luck. Why?  Because our behavior toward that person is going to be entirely determined by shallow heuristics and very superficial matters related to his appearance. Nassim Taleb
  • 51.  Following a very brief history of risk management from historical times to the present, Hubbard presents a list of common methods of risk management. These are:  Expert intuition – essentially based on “gut feeling”  Expert audit – based on expert intuition of independent consultants. Typically involves the development of checklists and also uses stratification methods (see next point)  Simple stratification methods – risk matrices are the canonical example of stratification methods.  Weighted scores – assigned scores for different criteria (scores usually assigned by expert intuition), followed by weighting based on perceived importance of each criterion.  Non-probabilistic financial analysis –techniques such as computing the financial consequences of best and worst case scenarios  Calculus of preferences – structured decision analysis techniques such as multi-attribute utility theory and analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved these techniques ensure that the judgements are logically consistent (i.e. do not contradict the principles of logic).  Probabilistic models – involves building probabilistic models of risk events. Probabilities can be based on historical data, empirical observation or even intuition. The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built
  • 52.  Adopt the language, tools and philosophy of uncertain systems. To do this he recommends:  Using calibrated probabilities to express uncertainties. Hubbard believes that any person who makes estimates that will be used in models should be calibrated. He offers some suggestions on how people can improve their ability to estimate through calibration – discussed earlier and on this web site.  Employing quantitative modeling techniques to model risks. In particular, he advocates the use of Monte Carlo methods to model risks. He also provides a list of commercially available PC-based Monte Carlo tools. Hubbard makes the point that modeling forces analysts to decompose the systems of interest and understand the relationships between their components (see point 2 below).  Developing an understanding of the basic rules of probability, including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate.  To this, I would also add that it is important to understand the idea that an estimate isn’t a number, but a probability distribution – i.e. a range of numbers, each with a probability attached to it.
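A minimal sketch of what this looks like in practice is below, in the spirit of Hubbard's approach rather than a transcription of it. Three hypothetical cost drivers are described by calibrated 90% intervals, each interval is mapped onto a normal distribution (90% of a normal lies within about ±1.645 standard deviations of the mean), and a Monte Carlo run turns them into a distribution of total cost rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

def samples_from_90ci(low, high):
    """Draw n samples from a normal whose 90% confidence interval is (low, high)."""
    mu = (low + high) / 2
    sigma = (high - low) / (2 * 1.645)   # 90% of a normal lies within ±1.645 sigma
    return rng.normal(mu, sigma, n)

# Hypothetical calibrated 90% intervals for a project's cost drivers.
labour_hours = samples_from_90ci(2_000, 6_000)
hourly_rate  = samples_from_90ci(80, 140)
licence_cost = samples_from_90ci(50_000, 250_000)

total_cost = labour_hours * hourly_rate + licence_cost

# The answer is a distribution, not a point estimate.
print("mean cost:       ", round(float(total_cost.mean())))
print("90% interval:    ", np.percentile(total_cost, [5, 95]).round())
print("P(cost > $800k): ", round(float((total_cost > 800_000).mean()), 3))
```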
  • 53.  Build, validate and test models using reality as the ultimate arbiter. Models should be built iteratively, testing each assumption against observation. Further, models need to incorporate mechanisms (i.e. how and why the observations are what they are), not just raw observations. This is often hard to do, but at the very least models should incorporate correlations between variables. Note that correlations are often (but not always!) indicative of an underlying mechanism. See this post for an introductory example of Monte Carlo simulation involving correlated variables.
  • 54.  In the penultimate chapter of the book, Hubbard fleshes out the characteristics or traits of good risk analysts. As he mentions several times in the book, risk analysis is an empirical science – it arises from experience.  So, although the analytical and mathematical (modeling) aspects of risk are important, a good analyst must, above all, be an empiricist – i.e. believe that knowledge about risks can only come from observation of reality.  In particular, testing models by seeing how well they match historical data and tracking model predictions are absolutely critical aspects of a risk analyst’s job.  Unfortunately, many analysts do not measure the performance of their risk models. Hubbard offers some excellent suggestions on how analysts can refine and improve their models via observation.
  • 55.  Developing an understanding of the basic rules of probability including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate
  • 57.  Both versions of the law state that the sample average (X1 + ... + Xn)/n converges to the expected value µ, where X1, X2, ... is an infinite sequence of i.i.d. random variables with finite expected value E(X1) = E(X2) = ... = µ < ∞.  An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often used because it makes the proofs easier and shorter.  The difference between the strong and the weak version is the mode of convergence being asserted.  The weak law  The weak law of large numbers states that the sample average converges in probability towards the expected value: (X1 + ... + Xn)/n → µ in probability as n → ∞.  Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value, that is, within the margin.  Convergence in probability is also called weak convergence of random variables. This version is called the weak law because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as below.  A consequence of the weak LLN is the asymptotic equipartition property.  The strong law  The strong law of large numbers states that the sample average converges almost surely to the expected value: (X1 + ... + Xn)/n → µ almost surely as n → ∞.  Its proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the expected value of a random variable as the "long-term average when sampling repeatedly".  Almost sure convergence is also called strong convergence of random variables. This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). The strong law implies the weak law.  The strong law of large numbers can itself be seen as a special case of the ergodic theorem.
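The law is easy to see numerically. The sketch below simulates i.i.d. rolls of a fair die (expected value 3.5) and prints the running sample average at a few sample sizes; the drift toward 3.5 is exactly what the weak and strong laws guarantee, in their respective senses of convergence.

```python
import numpy as np

rng = np.random.default_rng(1)

rolls = rng.integers(1, 7, size=1_000_000)        # i.i.d. fair die, E[X] = 3.5
running_mean = np.cumsum(rolls) / np.arange(1, rolls.size + 1)

for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: sample average = {running_mean[n - 1]:.4f}")
# The sample average settles toward 3.5 as n grows.
```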
  • 58.  Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. Thus, proponents of Bayesian inference say that it can be used to discriminate between conflicting hypotheses: hypotheses with very high support should be accepted as true and those with very low support should be rejected as false. However, detractors say that this inference method may be biased due to initial beliefs that one holds before any evidence is ever collected. (This is a form of inductive bias).  Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed. (This process is repeated when additional evidence is obtained.) Bayesian inference usually relies on degrees of belief, or subjective probabilities, in the induction process and does not necessarily claim to provide an objective method of induction. Nonetheless, some Bayesian statisticians believe probabilities can have an objective value and therefore Bayesian inference can provide an objective method of induction
  • 59.  To convert the probability of event A given event B into the probability of event B given event A, we use Bayes’ theorem. We must know or estimate the probabilities of the two separate events.  Bayes’ theorem: Pr(B|A) = Pr(A|B) Pr(B) / Pr(A)  Law of total probability: Pr(A) = Pr(A|B) Pr(B) + Pr(A|not B) Pr(not B)  The Reverend Thomas Bayes, F.R.S. (1701?–1761)
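A small worked example of the two formulas above, with hypothetical numbers: a failure event B has a prior probability of 1%, and a warning indicator A fires 90% of the time when B is present and 5% of the time when it is not.

```python
p_B = 0.01             # prior probability of the failure event
p_A_given_B = 0.90     # indicator fires when the event is present
p_A_given_notB = 0.05  # false-alarm rate

# Law of total probability: Pr(A) = Pr(A|B)Pr(B) + Pr(A|not B)Pr(not B)
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Bayes' theorem: Pr(B|A) = Pr(A|B)Pr(B) / Pr(A)
p_B_given_A = p_A_given_B * p_B / p_A
print(f"Pr(A) = {p_A:.4f}, Pr(B|A) = {p_B_given_A:.3f}")
# The posterior is only about 0.15: even a 90%-sensitive indicator is
# dominated by the low base rate of the event.
```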
  • 60. ▪ Example of Bayesian search theory In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard but an extensive search failed to discover the wreck. The US Navy's deep water expert, John Craven, USN, believed that it was elsewhere and he organized a search south west of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, the Mizar, and he took advice from a firm of consultant mathematicians in order to maximize his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of the Scorpion. The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched. This sea grid was systematically searched in a manner which started with the high probability regions first and worked down to the low probability regions last. Each time a grid square was searched and found to be empty its probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational challenge for the time but it was eventually successful and the Scorpion was found about 740 kilometers southwest of the Azores in October of that year.
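The grid logic described above can be sketched in a few lines. The numbers here are made up; the structure is the one in the account: a prior probability for each square, a detection probability for each square, search the square with the best chance of a find, and when the search comes up empty, update every square with Bayes' theorem and repeat.

```python
import numpy as np

# Toy Bayesian search over four sea-grid squares (hypothetical numbers).
prior = np.array([0.40, 0.30, 0.20, 0.10])     # P(wreck is in square i)
p_detect = np.array([0.80, 0.50, 0.70, 0.90])  # P(find it | it is there and we search)

find_prob = prior * p_detect
i = int(np.argmax(find_prob))
print("search square", i, "first; P(successful find) =", round(float(find_prob[i]), 3))

# The search of square i comes up empty: apply Bayes' theorem.
posterior = prior.copy()
posterior[i] *= (1 - p_detect[i])   # it could still be there but was missed
posterior /= posterior.sum()        # renormalise over all squares
print("posterior after an empty search:", posterior.round(3))
# Square i's probability falls, every other square's rises, and the next
# search is aimed at whichever square now offers the best chance of a find.
```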
  • 61.  Stochastic is synonymous with "random." The word is of Greek origin and means "pertaining to chance" (Parzen 1962, p. 7).  It is used to indicate that a particular subject is seen from the point of view of randomness.  Stochastic is often used as the counterpart of the word "deterministic," which means that random phenomena are not involved.  Therefore, stochastic models are based on random trials, while deterministic models always produce the same output for a given starting condition.
  • 64.  "Stochastic" means being or having a random variable.  A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s).  Its application initially started in physics (sometimes known as the Monte Carlo Method). It is now being applied in engineering, life sciences, social sciences, and finance.
  • 65.  Valuation  Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on.  So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency.  Deterministic approach  The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a point estimate – the best single estimate of the company's current solvency position – or multiple point estimates, depending on the problem definition. Selection and identification of parameter values are frequently a challenge to less experienced analysts. The downside of this approach is that it does not capture the fact that there is a whole range of possible outcomes, some more probable than others.  Stochastic modeling  A stochastic approach sets up a projection model which looks at a single policy, an entire portfolio or an entire company. But rather than setting investment returns according to their most likely estimate, for example, the model uses random variations to look at what investment conditions might be like.  Based on a set of random outcomes, the experience of the policy/portfolio/company is projected, and the outcome is noted. Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times.  At the end, a distribution of outcomes is available which shows not only the most likely estimate but also what ranges are reasonable.  This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A deterministic simulation, with varying scenarios for future investment return, does not provide a good way of estimating the cost of providing this guarantee. This is because it does not allow for the volatility of investment returns in each future time period or the chance that an extreme event in a particular time period leads to an investment return less than the guarantee. Stochastic modeling builds volatility and variability (randomness) into the simulation and therefore provides a better representation of real life from more angles.
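The guarantee example in the last paragraph is worth sketching, with assumed numbers: a premium of 100, a 5% per annum guarantee over 10 years, and annual investment returns drawn from a normal distribution with mean 7% and standard deviation 12%. The deterministic best-estimate projection says the guarantee never bites; the stochastic projection shows how often it does and what it costs on average.

```python
import numpy as np

rng = np.random.default_rng(7)
years, sims, premium = 10, 100_000, 100.0
guaranteed = premium * 1.05 ** years             # 5% per annum minimum

# Deterministic projection at the most likely return of 7% per year.
print("deterministic fund value:", round(premium * 1.07 ** years, 1),
      " guaranteed minimum:", round(guaranteed, 1))

# Stochastic projection: returns vary randomly year by year (assumed normal).
annual_returns = rng.normal(0.07, 0.12, size=(sims, years))
fund = premium * np.prod(1 + annual_returns, axis=1)

shortfall = np.maximum(guaranteed - fund, 0.0)
print("P(guarantee bites):", round(float((shortfall > 0).mean()), 3))
print("expected cost of the guarantee:", round(float(shortfall.mean()), 2))
```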
  • 66.  Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business (for its use in the insurance industry, see stochastic modeling). A classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions.  Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method is intended for financial analysts who want to construct stochastic or probabilistic financial models as opposed to the traditional static and deterministic models.  Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms.  Monte Carlo methods have also proven efficient in solving coupled integro-differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations which produce photorealistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, special effects in cinema, business, economics and other fields.  Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime but x says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee, but with a bound on how often the unguaranteed answer is wrong – in this case at most 25% of the time. See also the Las Vegas algorithm for a related, but different, idea.
  • 67.  This Demonstration shows how to analyze lifetime test data from data-fitting to a Weibull distribution function plot.  The data fit is on a log-log plot by a least squares fitting method.  The results are presented as Weibull distribution CDF and PDF plots.
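The fitting approach the Demonstration describes (a least-squares straight line on a log-log style plot) can be reproduced outside Mathematica. The sketch below uses hypothetical failure times and the usual linearisation of the Weibull CDF, ln(-ln(1-F)) = k·ln(t) - k·ln(lambda), with median-rank plotting positions standing in for the empirical CDF.

```python
import numpy as np

# Hypothetical time-to-failure data, in hours.
t = np.sort(np.array([105., 160., 220., 280., 340., 420., 510., 640., 800., 1100.]))

# Median-rank plotting positions approximate the empirical CDF.
n = len(t)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Weibull CDF: F(t) = 1 - exp(-(t/lam)**k); linearised, y = k*x - k*ln(lam).
x = np.log(t)
y = np.log(-np.log(1 - F))

k, intercept = np.polyfit(x, y, 1)        # least-squares straight line
lam = np.exp(-intercept / k)
print(f"fitted shape k ≈ {k:.2f}, scale lambda ≈ {lam:.0f} hours")
```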
  • 69.  The probability density function (PDF – upper plot) is the derivative of the cumulative distribution function (CDF – lower plot). This elegant relationship is illustrated here. The default plot of the PDF answers the question, "How much of the distribution of a random variable is found in the filled area; that is, how much probability mass is there between observation values equal to or more than 64 and equal to or fewer than 70?"  The CDF is more helpful. By reading the y axis you can estimate the probability of a particular observation within that range: take the difference between 90.8%, the probability of values below 70, and 25.2%, the probability of values below 63, to get 65.6%.  http://demonstrations.wolfram.com/ConnectingTheCDFAndThePDF/
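The same calculation for any distribution is just a difference of CDF values. The snippet below uses an assumed normal distribution (the Demonstration uses its own), and also checks numerically that the PDF is the derivative of the CDF.

```python
from scipy import stats

dist = stats.norm(loc=66, scale=3)        # assumed distribution for illustration

a, b = 63, 70
print(f"P({a} <= X <= {b}) = {dist.cdf(b):.1%} - {dist.cdf(a):.1%}"
      f" = {dist.cdf(b) - dist.cdf(a):.1%}")

# The PDF is the derivative of the CDF: a centred difference of the CDF
# at x = 66 should match dist.pdf(66).
h = 1e-5
print((dist.cdf(66 + h) - dist.cdf(66 - h)) / (2 * h), "vs", dist.pdf(66))
```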
  • 71.  An example of a statistical macroscopic relation is the distribution of the magnitude of earthquakes. If N(E) is the annual mean number of earthquakes (in a zone or worldwide) of size E (energy released), then empirically one finds N(E) ∝ E^(-B) over a wide range, with B a constant. This relation is called the Gutenberg-Richter law and is obviously a statistical relation for observables - it does not specify when an earthquake of some magnitude will occur but only what the mean distribution in their magnitude is. The Gutenberg-Richter law is a power law and is therefore scale-invariant - a change of scale in E can be absorbed in a normalization constant, leaving the form of the law invariant. The scale-invariance of the law implies a scale-invariance in the phenomena itself: earthquakes happen on all scales and there is no typical or mean magnitude! There are many other natural phenomena which exhibit power laws over a wide range of the parameters: volcanic activity, solar flares, charge released during lightning events, length of streams in river networks, forest fires, and even the extinction rate of biological species! Some of these power laws refer to spatial scale-free structures, or fractals, while some others refer to temporal events and are examples of the ubiquitous "one-over-f" phenomena. Can the frequent appearance of such power laws in complex systems be explained in a simple way? Note that the systems mentioned above are examples of dissipative structures, with a slow but constant inflow of energy and its eventual dissipation. The systems are clearly out of equilibrium, since we know that equilibrium systems tend towards uniformity rather than complexity. On the other hand the above-mentioned systems display scale-free behaviour similar to that exhibited by equilibrium systems near a critical point of a second-order phase transition. However, while the critical point in equilibrium systems is reached only for some specific value of an external parameter, such as temperature, for the dissipative structures above the scale-free behaviour appears to be robust and does not seem to require any fine-tuning. Bak and collaborators proposed that many dissipative complex systems naturally self-organise to a critical state, with the consequent scale-free fluctuations giving rise to power laws. In short, the proposal is that self-organised criticality is the natural state of large complex dissipative systems, relatively independent of initial conditions. It is important to note that while the critical state in an equilibrium second-order phase transition is unstable (slight perturbations move the system away from it), the critical state of self-organised systems is stable: systems are continually attracted to it! The idea that many complex systems are in a self-organised critical state is intuitively appealing because it is natural to associate complexity with a state that is balanced at the edge between total order and total disorder (sometimes loosely referred to as the "edge of chaos"). Far from the critical point, one typically has a very ordered phase on one side and a greatly disordered phase on the other side. It is only at the critical point that one has large correlations among the different parts of a large system, thus making it possible to have novel emergent properties, and in particular scale-free phenomena.
In addition to the examples mentioned above, self-organised criticality has also been proposed to apply to economics, traffic jams, forest fires and even the brain!
  • 72.  An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail, to the left are the few that dominate (also known as the 80-20 rule).  A power law is any polynomial relationship that exhibits the property of scale invariance. The most common power laws relate two variables and have the form f(x) = a x^k + o(x^k), where a and k are constants, and o(x^k) is an asymptotically small function of x. Here, k is typically called the scaling exponent, denoting the fact that a power-law function (or, more generally, a kth-order homogeneous polynomial) satisfies the criterion f(cx) = c^k f(x) ∝ f(x), where c is a constant. That is, scaling the function's argument changes the constant of proportionality as a function of the scale change, but preserves the shape of the function itself. This relationship becomes clearer if we take the logarithm of both sides (or, graphically, plot on a log-log graph): log f(x) = k log x + log a.  Notice that this expression has the form of a linear relationship with slope k; scaling the argument induces a linear shift (up or down) of the function, and leaves both the form and the slope k unchanged.  Power-law relations characterize a staggering number of natural patterns, and it is primarily in this context that the term power law is used rather than polynomial function. For instance, inverse-square laws, such as gravitation and the Coulomb force, are power laws, as are many common mathematical formulae such as the quadratic law for the area of a circle. Also, many probability distributions have tails that asymptotically follow power-law relations, a topic that connects tightly with the theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters.  Scientific interest in power law relations, whether functions or distributions, comes primarily from the ease with which certain general classes of mechanisms can generate them. That is, the observation of a power-law relation in data often points to specific kinds of mechanisms that underlie the natural phenomenon in question, and can often indicate a deep connection with other, seemingly unrelated systems. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy and robustness. A few notable examples of power laws are the Gutenberg-Richter law for earthquake sizes, Pareto's law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is extremely active in many fields of modern science, including physics, computer science, and linguistics.
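Both properties quoted above (scale invariance and linearity on a log-log plot) are easy to verify numerically. The sketch below uses an arbitrary power law f(x) = a·x^k and checks that f(cx) = c^k·f(x), and that a straight-line fit of log f against log x recovers the exponent k as the slope.

```python
import numpy as np

a, k = 2.0, -1.5
f = lambda x: a * x ** k                   # an assumed power law

x = np.logspace(0, 3, 50)                  # x from 1 to 1000

# Scale invariance: f(c*x) = c**k * f(x) for any constant c.
c = 10.0
print("max |f(c*x) - c**k f(x)| =", float(np.max(np.abs(f(c * x) - c ** k * f(x)))))

# Log-log linearity: log f(x) = k log x + log a.
slope, intercept = np.polyfit(np.log(x), np.log(f(x)), 1)
print("fitted slope:", round(float(slope), 3), " true exponent:", k)
```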
  • 74.  When NASA missions are under tight time and budget constraints, they tend to cut component tests more than anything else. And less testing means more failures.
  • 76.  United Airlines Flight 232 was a scheduled flight from Stapleton International Airport in Denver, Colorado, to O'Hare International Airport in Chicago, with continuing service to Philadelphia International Airport.  On July 19, 1989, the DC-10 (Registration N1819U) operating the route crash-landed in Sioux City, Iowa, after suffering catastrophic failure of its tail-mounted engine, which led to the loss of all flight controls.  111 people died in the accident while 185 survived
  • 79.  Investigators were able to recover the aircraft's tailcone as well as half of the fan containment ring. Also found were fan blade fragments and parts of the hydraulic lines. Three months after the accident, two pieces of the engine fan disk were found in the fields near where the first pieces were located. Together the pieces made up nearly the entire fan disk assembly.  Two large fractures were found in the disk, indicating overstress failure. Metallurgical examination showed that the primary fracture had resulted from a fatigued section on the inside diameter of the disk.  Further examination showed that the fatigue had originated at a small cavity on the surface of the disk, apparently a defect in manufacturing.  The 17-year-old disk had undergone routine maintenance and had been subjected to fluorescent penetrant inspections six times. Investigators concluded that human error was responsible for the failure to identify the fatigued area before the accident.
  • 81.  In 1971 a Pan American 747 struck approach light structures for the reciprocal runway as it lifted off the runway at San Francisco Airport. Major damage to the belly and landing gear resulted, which caused the loss of hydraulic fluid from three of its four flight control systems. The fluid which remained in the fourth system gave the captain very limited control of some of the spoilers, ailerons, and one inboard elevator. That was sufficient to circle the plane while fuel was dumped and then to make a hard landing. There were no fatalities, but there were some injuries.[31]  In 1981, Eastern Airlines Flight 935, operated by a Lockheed L-1011 suffered a similar kind of massive failure of its tail mounted number two engine. The shrapnel from that engine inflicted damage on all four of its hydraulic systems, which were also close together in the tail structure. Fluid was lost in three of the four systems. While the fourth hydraulic system was impacted with shrapnel too, it was not punctured. The hydraulic pressure remaining in that fourth system enabled the captain to land the plane safely with some limited use of the outboard spoilers, the inboard ailerons, and the horizontal stabilizer, plus differential engine power of the remaining two engines. There were no injuries.[32]  In 1985 Japan Airlines flight 123, a Boeing 747, suffered a rupture of the pressure bulkhead in its tail section. The damage was extensive and caused the loss of fluid in all four of its hydraulic control systems. The pilots were able to keep the plane airborne for almost 30 minutes using differential engine power, but eventually control was lost, and the plane crashed in mountainous terrain. There were only 4 survivors among the 524 on board. This accident is the deadliest single-aircraft accident in history.[33]  In 1994, RA85656, a Tupolev Tu-154 operating as Baikal Airlines Flight 130, crashed near Irkutsk shortly after departing from Irkutsk Airport, Russia. Damage to the starter caused a fire in engine number two (located in the rear of fuselage). High temperatures during the fire destroyed the tanks and pipes of all three hydraulic systems. The crew lost control of the aircraft. The unmanageable plane, at a speed of 275 knots, hit the ground at a dairy farm and burned. All passengers and crew, as well as a dairyman on the ground, died.[34]  In 2003, OO-DLL, a DHL Airbus A300 was struck by a surface-to-air missile shortly after departing from Baghdad International Airport, Iraq. The missile struck the port side wing, rupturing a fuel tank and causing the loss of all three hydraulic systems. With the flight controls disabled, the crew was able to use differential thrust to execute a safe landing at Baghdad. This is the first and only documented time anyone has managed to land a transport aircraft safely without working flight controls.[35]  The disintegration of a turbine disc, leading to loss of control, was a direct cause of two major aircraft disasters in Poland:  On March 14, 1980, LOT Polish Airlines Flight 007, an Ilyushin Il-62, attempted a go-around when the crew experienced troubles with a gear indicator. When thrust was applied, low pressure turbine disc in engine number 2 disintegrated because of material fatigue; parts of the disc damaged engines number 1 and 3 and severed control pushers for both horizontal and vertical stabilizers. 
After 26 seconds of uncontrolled descent, the aircraft crashed, killing all 87 people on board.[36]  On May 9, 1987, improperly assembled bearings in engine number 2 on LOT Polish Airlines Flight 5055 overheated and exploded during cruise over Lipniki village, causing the shaft to break in two; this caused the low pressure turbine disc to spin to enormous speeds and disintegrate, damaging engine number 1 and cutting the control pushers. The crew managed to return to Warsaw, using nothing but trim tabs to control the Il-62M, but on the final approach, the trim controlling links burned and the crew completely lost control over the aircraft. Soon after, it crashed on the outskirts of Warsaw; all 183 on board perished. Had the plane stayed airborne for 40 seconds more, it would have been able to reach the runway.[37]
  • 82.  It was featured in an episode of Seconds From Disaster on the National Geographic Channel and MSNBC Investigates on the MSNBC news channel.  The History Channel distributed a documentary named Shockwave; a portion of Episode 7 (originally aired January 25, 2008) detailed the events of the crash.
  • 83.  Bent Flyvbjerg  Nils Bruzelius  Werner Rothengatter
  • 84.  Transparency  "sunlight is said to be the best of disinfectants”  Louis Dembitz Brandeis was an Associate Justice on the Supreme Court of the United States from 1916 to 1939.
  • 85.  Brandeis made his famous statement that "sunlight is said to be the best of disinfectants" in a 1913 Harper's Weekly article, entitled "What Publicity Can Do." But it was an image that had been in his mind for decades.  Twenty years earlier, in a letter to his fiancée, Brandeis had expressed an interest in writing "a sort of companion piece" to his influential article on "The Right to Privacy," but this time he would focus on "The Duty of Publicity."  He had been thinking, he wrote, "about the wickedness of people shielding wrongdoers & passing them off (or at least allowing them to pass themselves off) as honest men."  He then proposed a remedy: "If the broad light of day could be let in upon men's actions, it would purify them as the sun disinfects."  Interestingly, at that time the word "publicity" referred both to something like what we think of as "public relations" as well as to the practice of making information widely available to the public (Stoker and Rawlins, 2005).  That latter definition sounds a lot like what we now mean by transparency.
  • 86.  All documents be made available to the public  Public hearings  Independent peer reviews
  • 87.  The decision to go ahead with a project should, where at all possible, be made contingent on the willingness of private financiers to participate without a sovereign guarantee.
  • 88.  Infrastructure grants will let local officials spend the funds at their discretion but every dollar they spend on one type of infrastructure reduces their ability to fund another.
  • 89.  Forecasts should be made subject to
  • 90.  "In no other branch of mathematics is it so easy to blunder as in probability theory."  Martin Gardner, "Mathematical Games," Scientific American, October 1959, pp. 180–182
  • 93.  The Senate committee hearings that Pecora led probed the causes of the Wall Street Crash of 1929 and launched a major reform of the American financial system.  “Pitch darkness was among the bankers’ strongest allies.”
  • 94.  “Economists for decades have shown that transparency lowers margins, leads to greater liquidity and more competition in the marketplace…Transparent pricing is also a critical feature of lowering the risk at banks, and at the derivatives clearinghouses as well.” Gary Gensler, Commodity Futures Trading Commission Chairman NY Times 27 November 2011
  • 95.  Spurred by these revelations, the United States Congress enacted the Glass–Steagall Act, the Securities Act of 1933 and the Securities Exchange Act of 1934.
  • 96.  Judgment Under Uncertainty: Heuristics and Biases. Amos Tversky and Daniel Kahneman  Science, Volume 185, 1974  Research for DARPA N00014-73C- 0438 monitored by ONR and Research and Development Authority of Hebrew University, Jerusalem, Israel.
  • 97.  Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of an undertaking, such as the development of a new product, typically has a conjunctive character: for the undertaking to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large.
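The arithmetic behind this bias is stark and worth spelling out: even when each step is very likely, the conjunction of many steps is not.

```python
# Each of n independent steps must succeed for the undertaking to succeed.
p_step, n = 0.95, 20
print(f"P(single step succeeds) = {p_step}")
print(f"P(all {n} steps succeed) = {p_step ** n:.2f}")   # about 0.36
```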
  • 103.  “The new program baseline projects total acquisition costs of $395.7 billion, an increase of $117.2 billion (42%) from the prior 2007 baseline. Full rate production is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit costs per aircraft have doubled since start of development in 2001…. Since 2002, the total quantity through 2017 has been reduced by three-fourths, from 1,591 to 365. Affordability is a key challenge…. Overall performance in 2011 was mixed as the program achieved 6 of 11 important objectives…. Late software releases and concurrent work on multiple software blocks have delayed testing and training. Development of critical mission systems providing core combat capabilities remains behind schedule and risky…. Most of the instability in the program has been and continues to be the result of highly concurrent development, testing, and production activities. Cost overruns on the first four annual procurement contracts total more than $1 billion and aircraft deliveries are on average more than 1 year late. Program officials said the government’s share of the cost growth is $672 million; this adds about $11 million to the price of each of the 63 aircraft under those contracts.”
  • 106.  In well-run firms in the private sector, occasional problems are reluctantly tolerated, but not disclosing them to management is a crime.
  • 107.  "Unless you can point the finger at the man who is responsible when something goes wrong, then you never had anyone really responsible." ▪ Hyman G. Rickover, Admiral, USN ▪ Director of Naval Reactors
  • 108.  Fought in 406 BC during the Peloponnesian War just east of the island of Lesbos. In the battle, an Athenian fleet commanded by eight strategoi defeated a Spartan fleet under Callicratidas. The battle was precipitated by a Spartan victory which led to the Athenian fleet under Conon being blockaded at Mytilene; to relieve Conon, the Athenians assembled a scratch force composed largely of newly constructed ships manned by inexperienced crews.  This inexperienced fleet was thus tactically inferior to the Spartans, but its commanders were able to circumvent this problem by employing new and unorthodox tactics, which allowed the Athenians to secure a dramatic and unexpected victory.  The news of the victory itself was met with jubilation at Athens, and the grateful Athenian public voted to bestow citizenship on the slaves and metics who had fought in the battle. Their joy was tempered, however, by the aftermath of the battle, in which a storm prevented the ships assigned to rescue the survivors of the 25 disabled or sunken Athenian triremes from performing their duties, and a great number of sailors drowned. A fury erupted at Athens when the public learned of this, and after a bitter struggle in the assembly six of the eight generals who had commanded the fleet were tried as a group and executed.
  • 109.  Generals were frequently subject to impeachment and prosecution in the courts. Penalties included execution, banishment and fines. The fines imposed could be truly monumental, figures that could swallow up the estates of the very richest Athenians.  In 430 BC Pericles himself was removed summarily from office by the assembly and fined.  After the victorious naval battle of Arginusae in 406 BC, all eight generals in command on the day were tried and sentenced to death for failing to rescue survivors, though not all came home to accept the penalty.
  • 110.  A storm had prevented the victorious admirals from picking up the crews of sunken ships. Many of them drowned, and for this the admirals were held responsible.
  • 111.  The alignment of interests and incentives is elusive because today’s acquisition culture lacks meaningful consequences for failure. 111
  • 112.  "Dans ce pays-ci, il est bon de tuer de temps en temps un amiral pour encourager les autres."  The king did not exercise royal prerogative, and John Byng was shot on 14 March 1757 in the Solent on the forecastle of HMS Monarch by a platoon of musketeers.  Byng's execution was satirized by Voltaire in his novel Candide.  In Portsmouth, Candide witnesses the execution of an officer by firing squad and is told that "in this country, it is wise to kill an admiral from time to time to encourage the others".
  • 113.  "What is surprising is not the magnitude of our forecast errors," observes Mr. Taleb, "but our absence of awareness of it."  We tend to fail--miserably--at predicting the future, but such failure is little noted nor long remembered. It seems to be of remarkably little professional consequence.
  • 114.  "Black swans" are highly consequential but unlikely events that are easily explainable – but only in retrospect. • Black swans have shaped the history of technology, science, business and culture.  • As the world gets more connected, black swans are becoming more consequential.  • The human mind is subject to numerous blind spots, illusions and biases.  • One of the most pernicious biases is misusing standard statistical tools, such as the "bell curve," that ignore black swans.  • Other statistical tools, such as the "power-law distribution," are far better at modeling many important phenomena.  • Expert advice is often useless.  • Most forecasting is pseudoscience. • You can retrain yourself to overcome your cognitive biases and to appreciate randomness, but it's not easy.  • You can hedge against negative black swans while benefiting from positive ones.
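The contrast between the "bell curve" and a "power law" alluded to above can be made concrete with a small sketch. The comparison below uses assumed parameters (a standard normal versus a Pareto distribution with exponent 2) and is only a rough illustration of why bell-curve tails understate extreme events; it is not an analysis taken from Taleb's book:

```python
# Rough illustration (assumed parameters): tail probabilities of a normal
# ("bell curve") distribution versus a Pareto (power-law) distribution.
# Large deviations are astronomically unlikely under the bell curve but
# remain plausible under a heavy-tailed power law.
import math

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x: float, x_min: float = 1.0, alpha: float = 2.0) -> float:
    """P(X > x) for a Pareto distribution with scale x_min and exponent alpha."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

for k in (3, 5, 10):
    print(f"deviation of size {k}: normal {normal_tail(k):.2e}  vs  power law {pareto_tail(k):.2e}")
```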
  • 115.  "Much of what happens in history comes from 'Black Swan dynamics', very large, sudden, and totally unpredictable 'outliers', while much of what we usually talk about is almost pure noise. • Our track record in predicting those events is dismal; yet by some mechanism called the hindsight bias we think that we understand them. We have a bad habit of finding 'laws' in history (by fitting stories to events and detecting false patterns); we are drivers looking through the rear view mirror while convinced we are looking ahead."
  • 116.  The term Black–Scholes refers to three closely related concepts:  The Black–Scholes model is a mathematical model of the market for an equity, in which the equity's price is a stochastic process.  The Black–Scholes PDE is a partial differential equation which (in the model) must be satisfied by the price of a derivative on the equity.  The Black–Scholes formula is the result obtained by solving the Black–Scholes PDE for European put and call options.  Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options-pricing model, and he coined the term "Black–Scholes options pricing model", building on work published by Fischer Black and Myron Scholes. The paper was first published in 1973. The foundation for their research relied on work developed by scholars such as Louis Bachelier, Edward O. Thorp, and Paul Samuelson. The fundamental insight of Black–Scholes is that the option is implicitly priced if the stock is traded.  Merton and Scholes received the 1997 Nobel Prize in Economics for this and related work. Though ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish academy.  http://www.pbs.org/wgbh/nova/stockmarket/
  • 117.  In 1973, the options-pricing model developed by Fischer Black and Myron Scholes, and expanded on by Robert C. Merton, was published. The new model enabled more effective pricing and mitigation of risk. It could calculate the value of an option to buy a security as long as the user could supply five pieces of data: the risk-free rate of return (usually defined as the return on a three-month U.S. Treasury bill), the price at which the security would be purchased (usually given), the current price at which the security was traded (to be observed in the market), the remaining time during which the option could be exercised (given), and the security's price volatility (which could be estimated from historical data and is now more commonly inferred from the prices of options themselves if they are traded).  The equations in the model assume that the underlying security's price mimics the random way in which air molecules move in space, familiar to engineers as Brownian motion.
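As a rough illustration of how those five inputs combine, here is a minimal sketch of the standard Black–Scholes formula for a European call option. The function name and the example numbers are assumptions chosen only to show the calculation, not figures taken from the slides:

```python
# Minimal sketch of the Black-Scholes price for a European call option.
# The inputs mirror the five pieces of data listed above; example values are assumed.
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """S: current price, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Example: stock at 100, strike 100, 5% risk-free rate, 20% volatility, 1 year to expiry.
print(round(black_scholes_call(S=100, K=100, r=0.05, sigma=0.20, T=1.0), 2))  # about 10.45
```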
  • 118.  "But this long run is a misleading guide to current affairs. In the long run we are all dead."  John Maynard Keynes identified three domains of probability: frequency probability; subjective or Bayesian probability; and events lying outside the possibility of any description in terms of probability (special causes). He based his theory of probability on this classification.  "It ain't over till it's over." Yogi Berra
  • 119.  The Harken deal was a smaller-scale version of the accounting scandals at WorldCom, Enron and other firms. Bush's purchase and sale of the Texas Rangers baseball team reveals other characteristic features of the past several decades of American capitalism: the plundering of public assets for private gain, the confluence of political and economic power, and the defrauding of the American people.  By the time he cashed out in 1998, Bush's return on his original $600,000 investment in the Rangers was 2,400 percent.
  • 120.  Where did all of this money come from and what did Bush do to get it? Much of the story was first reported nationally by Joe Conason in a February 2000 article for Harper's Magazine. A report from the public interest group, the Center for Public Integrity, and recent columns on July 16 in the New York Times by Paul Krugman and Nicholas Kristof have filled in some of the details.

A free stadium, and some choice land on the side. The same factors that propelled Bush virtually overnight from failed oil man to wealthy corporate executive—family connections and the desire of rich Texas businessmen to exploit the Bush name—opened the way for him to buy a stake in the professional baseball team. Bill DeWitt, part owner of Spectrum 7, which had bought Bush's own company several years earlier and then later sold out to Harken, offered the son of the then-US president a chance to join in a bid for the Rangers. In 1989 a deal was reached in which Richard Rainwater, a wealthy Texas financier, joined Bush and several other investors in buying the team. Bush himself did not have a large fortune at the time, and only bought a two percent share, financed with a $500,000 loan from a bank on whose board of directors he had once served. Bush used the proceeds from his questionable sale of Harken stock to repay this loan. Bush's formal title was "managing partner." He served essentially as a public face, whose main responsibility was to attend the home baseball games. Edward Rose, another wealthy Texas investor and Rainwater's associate, was responsible for the actual business operations of the team.

The top priority for the new Rangers owners in increasing the value of their holdings was to acquire a new stadium. They had no intention of paying for the stadium themselves, so they threatened to move the team if the city of Arlington did not foot the bill. The city government readily agreed to a generous deal. Reached in the fall of 1990, it guaranteed that the city would pay $135 million of an estimated cost of $190 million. The remainder was raised through a ticket surcharge. Thus, local taxpayers and baseball fans financed the entire cost of the stadium. Moreover, the owners were allowed to buy back the stadium for a mere $60 million, which was deducted from ticket revenues at a rate of no more than $5 million per year. The Rangers syndicate was also given a property tax exemption and a sales tax exemption on products purchased for use in the stadium. City residents ended up subsidizing these tax breaks for the Rangers owners by paying higher local rates. This plan was sold to Arlington voters with Bush's help. At the end of the day, the owners of the Rangers, including Bush, got a stadium worth nearly $200 million without putting down a penny of their own money.

But the boondoggle did not end there. As part of the deal, the Rangers syndicate got a sizable chunk of land in addition to the stadium. This land naturally increased in value as a result of the stadium's construction. To oblige the owners, Ann Richards, the Democratic Governor of Texas at the time, signed into law an extraordinary measure that set up the Arlington Sports Facilities Development Authority (ASFDA), which was granted the power to seize privately owned land deemed necessary for stadium construction. According to documents obtained by the Center for Public Integrity, the Rangers owners would locate a piece of land they wanted, offer a price far below the market value, and if the owners of the land parcel refused, bring in the ASFDA to condemn the land.

Editor's notes

  1. His contention is that for most organisations the answers to the first two questions are negative.  To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut lines to all them.  This is an example of common mode failure – a single event causing multiple systems to fail.  The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error).  The probability estimate hadn’t considered human oversight, which is way more likely than one-in-billion.  Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
  2. Daniel Kahneman is the Eugene Higgins Professor of Psychology at Princeton University and Professor of Public Affairs at the Woodrow Wilson School. Kahneman was born in Israel and educated at the Hebrew University in Jerusalem before taking his PhD at the University of California. He was the joint winner of the Nobel Prize in Economics in 2002 for his work on applying cognitive and behavioural theories to decision making in economics.
  3. Kahneman, D. and Tversky, A., "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology 3 (1972), 430-454.
  4. Brandeis And The History Of Transparency, Sunlight Intern, May 26, 2009, 10:47 a.m.