5. My Mission
Arm my teams (and yours) with the tools and techniques to solve these problems
7. 2 Minutes About Larry
• Larry is a Pisces who enjoys skiing, reading, and wine (red, or white in an outdoor setting)
• We have a lot in common… over to Larry!
9. Why measure?
• Feedback
• Diagnostics
• Forecasting
• Lever
10. When to NOT take a shot
Good players?
• Monta Ellis
– 9th highest scorer (8th last season)
• Carmelo Anthony (Melo)
– 8th highest scorer (3rd last season)
15. You will be wrong by…
• 3x-10x when assuming Normal distribution
• 2.5x-5x when assuming Poisson distribution
• 7x-20x if you use Shewhart’s method
Heavy tail phenomena are not incomprehensible… but they cannot be understood with traditional statistical tools. Using the wrong tools is incomprehensible.
~ Roger Cooke and Daan Nieboer
16. Bad application of control chart
Control is an illusion, you infantile egomaniac. Nobody knows what's gonna happen next: not on a freeway, not in an airplane, not inside our own bodies and certainly not on a racetrack with 40 other infantile egomaniacs.
~ Days of Thunder
17. Time in Process (TIP) Chart
A good alternative to the control chart
18. Collection
• Perceived cost is high
• Little need for explicit collection activities
• Use a 1-question NPS survey for customer and employee satisfaction
• Plenty to learn in passive data from ALM and other tools
• How you use the tools will drive your use of metrics from them
19. Summary of how to make good metric choices
• Start with outcomes and use ODIM to make metrics choices.
• Make sure your metrics are balanced so you don’t over-emphasize one at the cost of others.
• Be careful in your analysis. The TIP chart is a good alternative to the control chart. Troy’s approach is excellent for forecasting. We’ve shown that there are many out there that are not so good.
• Consider collection costs. Get maximal value out of passively gathered data.

Data visualization is like photography. Impact is a function of perspective, illumination, and focus.
~ Larry Maccherone
22. A model is a tool used to mimic a real-world process
A tool for low-cost experimentation
23. Monte Carlo Simulation?
Performing a simulation of a model multiple times using random input conditions and recording the frequency of each result occurrence
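That definition can be sketched in a few lines. A minimal illustration using a toy two-dice model (an assumption for demonstration, not the SimML tooling shown later):

```python
import random
from collections import Counter

def simulate_once():
    # One run of a toy model: the total shown by two six-sided dice.
    return random.randint(1, 6) + random.randint(1, 6)

def monte_carlo(runs=10000):
    # Simulate the model many times with random inputs and
    # record the frequency of each result occurrence.
    return Counter(simulate_once() for _ in range(runs))

frequencies = monte_carlo()
```

The `frequencies` counter is exactly the "frequency of each result occurrence" the definition describes.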
24. Scrum
[Model: Backlog → This Iteration → Deployed]

Run | Sim Total Iterations
1 | 3
2 | 2
3 | 5
4 | 3
5 | 4
6 | 2
… | …
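The Scrum run table can be reproduced with a small simulation: draw a random velocity each iteration and count iterations until the backlog empties. The backlog size and velocity samples below are hypothetical; in practice, use your team's history:

```python
import random

def iterations_to_empty_backlog(backlog_points=40, rng=random):
    # Hypothetical velocity samples; substitute your team's
    # historical iteration throughputs.
    velocities = [8, 10, 12, 14, 16]
    remaining, iterations = backlog_points, 0
    while remaining > 0:
        remaining -= rng.choice(velocities)  # this iteration's draw
        iterations += 1
    return iterations

# Each run yields one possible "Sim Total Iterations" value.
results = [iterations_to_empty_backlog() for _ in range(1000)]
```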
25. Kanban
[Model: Backlog → Design (1–2 days) → Develop (1–5 days) → Test (1–2 days) → Deployed]

Run | Total Time
1 | 5
2 | 4
3 | 3
4 | 9
5 | 5
6 | 6
… | …
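The Kanban run table can be simulated the same way, treating the stage durations on the slide (Design 1–2 days, Develop 1–5 days, Test 1–2 days) as uniform random draws; the uniform-integer assumption is mine:

```python
import random

def kanban_item_time(rng=random):
    # Uniform integer draws for each stage, per the slide's ranges.
    design = rng.randint(1, 2)    # Design: 1-2 days
    develop = rng.randint(1, 5)   # Develop: 1-5 days
    test = rng.randint(1, 2)      # Test: 1-2 days
    return design + develop + test

# Each run yields one possible "Total Time" value (3-9 days).
times = [kanban_item_time() for _ in range(1000)]
```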
26. Result versus Frequency (50 runs)
[Histogram: Result Values (for example, Days) on the x-axis vs. Frequency of Result on the y-axis]
27. Result versus Frequency (250 runs)
[Histogram: Result Values (for example, Days) vs. Frequency of Result]
28. Result versus Frequency (1000+ runs)
[Histogram: Result Values (for example, Days) vs. Frequency of Result]
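The three histograms above illustrate the "Law of Large Numbers": as run counts grow, the estimated frequencies settle toward the true likelihoods. A sketch using a hypothetical two-dice model:

```python
import random
from collections import Counter

def result():
    # Toy model result: total of two six-sided dice.
    return random.randint(1, 6) + random.randint(1, 6)

def estimated_probability_of(value, runs):
    # Relative frequency of one result value over `runs` simulations.
    counts = Counter(result() for _ in range(runs))
    return counts[value] / runs

# More runs -> the estimate settles toward the true likelihood
# of a total of 7, which is 1/6 (about 0.167).
estimates = {runs: estimated_probability_of(7, runs)
             for runs in (50, 250, 1000)}
```

The 50-run estimate bounces around; the 1000-run estimate is already close, which is why the third histogram looks smooth.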
29. Key Point
There is NO single
forecast result
There will always be many
possible results, some more likely
30. [Chart: Likelihood vs. Time to Complete Backlog, with 50% of possible outcomes on either side of the average]
When pressed for a single number, we often give the average.
31. [Chart: Likelihood vs. Time to Complete Backlog, with 95% of outcomes before the quoted date and 5% after]
Monte Carlo Simulation Yields More Information – 95% Common.
32. Key Point
“Average” is
NEVER an option
WARNING: Regression lines
are most often “average”
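Instead of quoting the average, quote a percentile of the simulated outcomes. A minimal sketch with hypothetical right-skewed completion times (not the SimML tooling; the nearest-rank percentile definition is my choice):

```python
import random

def percentile(samples, pct):
    # Nearest-rank style: smallest value with `pct` percent of
    # the samples at or below it.
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

# Hypothetical right-skewed completion times, in days.
random.seed(7)
days = [random.gauss(15, 3) + random.expovariate(0.5)
        for _ in range(10000)]

p50 = percentile(days, 50)  # as likely to finish earlier as later
p95 = percentile(days, 95)  # "we are 95% certain of hitting this date"
```

For skewed data, the 95th percentile lands noticeably later than the median, which is exactly the information an average hides.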
40. In this demo
• Basic Scrum and Kanban Modeling
• How to build a simple model
– SimML Modeling Language
– Visual checking of models
– Forecasting Date and Cost
– The “Law of Large Numbers”
43. Staff Skill Impact Report
Explore what staff
changes have the
greatest impact
44. Key Point
Modeling helps
find what matters
Fewer estimates required
45. In this demo
• Finding what matters most
– Manual experiments
– Sensitivity Testing
• Finding the next best 3 staff skill hires
• Minimizing and simplifying estimation
– Grouping backlog
– Range Estimates
– Deleting un-important model elements
47. Outsourcing Cost & Benefits
• Outsourcing is often controversial
– Often fails when pursued for cost savings alone
– Doesn’t always reduce local employment
– An important tool to remain competitive
– I.Q. has no geographic boundaries
• Many models
– Entire project
– Augmentation of local team
48. Build Date & Cost Matrix
[Matrix: staff multiplier (1x, 1.5x, 2x) vs. estimate multiplier (1x, 1.5x, 2x); 1x/1x is the Best Case, 1.5x/1.5x the Midpoint, 2x/2x the Worst Case]
Benefit = (Baseline Dev Cost – New Dev Cost) – Cost of Delay + Local Staff Cost Savings
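The benefit formula translates directly to code. The dollar figures below are hypothetical, purely to show the arithmetic:

```python
def outsourcing_benefit(baseline_dev_cost, new_dev_cost,
                        cost_of_delay, local_staff_cost_savings):
    # Benefit = (Baseline Dev Cost - New Dev Cost)
    #           - Cost of Delay + Local Staff Cost Savings
    return ((baseline_dev_cost - new_dev_cost)
            - cost_of_delay + local_staff_cost_savings)

# Hypothetical figures for one cell of the matrix:
benefit = outsourcing_benefit(
    baseline_dev_cost=500_000,
    new_dev_cost=350_000,
    cost_of_delay=80_000,
    local_staff_cost_savings=20_000,
)
# (500k - 350k) - 80k + 20k = 90k
```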
49. NOT LINEAR & NOT YOUR PROJECT
[Chart: benefit from $(150,000) to $150,000 on the y-axis vs. staff multiplier (1, 1.5, 2) on the x-axis, with one line per estimate multiplier (1x, 1.5x, 2x)]
50. In this demo
• Model the impact of various outsourcing models
51. New Project Rules of Thumb…
• Cost of Delay plays a significant role
– High cost-of-delay projects are poor candidates
– Increased staffing provides some compensation
• Knowledge transfer and ramp-up time are critical
– Complex products are poor candidates
– Captive teams are better choices for these projects
• NEVER as simple as direct lower costs!
53. Speaking Risk To Executives
• Buy them a copy of “Flaw of Averages”
• Show them you are tracking & managing risk
• Do
– “We are 95% certain of hitting date x”
– “With 1 week of analysis, that may drop to date y”
– “We identified risk x, y & z that we will track weekly”
• Don’t
– Give them a date without likelihood
• “February 29th 2013”
– Give them a date without risk factors considered
• “To do the backlog of features, February 29th, 2013”
54. Major risk events play the predominant role in deciding when delivery actually occurs
[Timeline: (1) Plan – “We spend all our time estimating here”, (2) Performance Issues, (3) External Vendor Delay]
59. Key Points
• There is no single release date forecast
• Never use Average as a quoted forecast
• Risk factors play a major role (not just backlog)
• Data has shape: beware of Non-Normal data
• Measurement → Insight → Decisions → Outcomes: Work Backwards!
• Communicate Risk early with executive peers
60. Call to action
• Read these books
• Download the software at FocusedObjective.com
• Follow @AgileSimulation
• Follow @LMaccherone
64. The Model Creation Cycle
[Cycle: Model (a little) → Visually Test → Monte-Carlo Test → Sensitivity Test → repeat]
65. The Experiment Cycle
[Cycle: Baseline → Make Single Change → Compare Results → Make Informed Decision(s) → repeat]
66. Best Practice 1
Start simple and add ONE input condition at a time. Visually / Monte-Carlo test each input to verify it works.
67. Best Practice 2
Find the likelihood of major events and estimate the delay. E.g. vendor dependencies, performance/memory issues, third-party component failures.
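One way to apply this practice: give each major event a likelihood and a delay, and add the delay whenever the event fires in a run. The risks and numbers below are hypothetical:

```python
import random

def duration_with_risks(base_days, risks, rng=random):
    # Each risk is a (likelihood, delay_in_days) pair; when a
    # risk fires in this run, its delay is added to the base.
    total = base_days
    for likelihood, delay in risks:
        if rng.random() < likelihood:
            total += delay
    return total

# Hypothetical risks: vendor dependency (30%, +10 days),
# performance/memory issue (10%, +15 days).
risks = [(0.30, 10), (0.10, 15)]
results = [duration_with_risks(20, risks) for _ in range(1000)]
```

Across many runs, the result distribution grows a long tail driven by the risk events rather than by the backlog itself.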
68. Best Practice 3
Only obtain and add detailed estimates and opinion to a model if Sensitivity Analysis says that input is material.
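A crude way to check whether an input is material: sweep it across its range while holding the other inputs at defaults, and compare the output spread. A toy sketch (hypothetical stage-time model, not SimML's automated sensitivity analysis):

```python
def model(design_days, develop_days, test_days):
    # Toy model: total cycle time is the sum of the stages.
    return design_days + develop_days + test_days

def output_spread(sweep_name, sweep_values, defaults):
    # Sweep one input with the others held at defaults; a large
    # output spread suggests the input is material.
    outputs = [model(**dict(defaults, **{sweep_name: value}))
               for value in sweep_values]
    return max(outputs) - min(outputs)

defaults = {"design_days": 1.5, "develop_days": 3, "test_days": 1.5}
develop_spread = output_spread("develop_days", [1, 2, 3, 4, 5], defaults)
test_spread = output_spread("test_days", [1, 2], defaults)
# develop_days swings the output far more than test_days here,
# so only develop_days would merit detailed estimates.
```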
69. Best Practice 4
Use a uniform random input distribution UNTIL sensitivity analysis says that input is influencing the output.
70. Best Practice 5
Educate your managers about risk. They will still want a “single” date for planning, but let them decide on the 75th or 95th confidence level (average is NEVER an option).
77. Focused Objective
• Risk Tools for Software Dev
• Scrum/Agile Simulation
• Kanban/Lean Simulation
• Forecasting Staff, Date & Cost
• Automated Sensitivity Analysis
• Data Reverse Engineering
• Consulting / Training
• Book
78. We Use & Recommend: EasyFit
• MathWave.com
• Invaluable for
– Analyzing data
– Fitting distributions
– Generating random numbers
– Determining percentiles