4. "Do not build up obstacles in your imagination."
– Norman Vincent Peale, The Power of Positive Thinking
5. Three Key Challenges for Probabilistic Methods
1. Where do the probabilities come from?
2. What if the probabilities are wrong?
3. What if uncertainty masks faulty behavior?
8. Quantifying Uncertainty
self-inflicted uncertainty: randomization deliberately introduced into a system for some desirable effect
  coin flipping, symmetry breaking (Leader Election), …
epistemic uncertainty: reducible systematic uncertainty that is too difficult to resolve or quantify more precisely
  choice of node address (Zeroconf), user's input commands, …
aleatoric uncertainty: irreducible statistical noise that varies from execution to execution
  latency of a message delivery, collision on an Ethernet link, …
… in both a model and its properties
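Self-inflicted uncertainty is easiest to see in symmetry breaking. As a minimal sketch (a hypothetical protocol for illustration, not one discussed in the talk), the following randomized leader election has every node flip a fair coin each round and elects the node that is the unique "heads" — something no deterministic, symmetric protocol could achieve among identical nodes:

```python
import random

def elect_leader(node_ids, rng):
    """Randomized symmetry breaking: each round, every node flips a fair
    coin; if exactly one node flips heads, that node becomes the leader,
    otherwise all nodes flip again in the next round."""
    while True:
        heads = [n for n in node_ids if rng.random() < 0.5]
        if len(heads) == 1:
            return heads[0]

# The randomization is deliberately introduced for a desirable effect:
# it breaks the symmetry among otherwise indistinguishable nodes.
leader = elect_leader(["n1", "n2", "n3"], random.Random(0))
```

Each round succeeds with positive probability, so the loop terminates with probability 1; the expected number of rounds is constant for a fixed number of nodes.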
9. Assigning Probabilities
Requirements Engineering
Partial KAOS Goal Model for IEF Financial Transaction Fraud Detection System
Letícia Duboc, Emmanuel Letier and David S. Rosenblum, "Systematic Elaboration of Scalability Requirements through Goal-Obstacle Analysis", IEEE Transactions on Software Engineering, Jan. 2013.
10. Assigning Probabilities
Requirements Engineering
Example KAOS Goal Specification for IEF
Letícia Duboc, Emmanuel Letier and David S. Rosenblum, "Systematic Elaboration of Scalability Requirements through Goal-Obstacle Analysis", IEEE Transactions on Software Engineering, Jan. 2013.
11. Assigning Probabilities
Distributions Matter Too!
[Figure: utility of two hypothetical designs as a function of number of IEF entities, shown as two probability density functions, pdf 1 and pdf 2.]
Ana Letícia de Cerqueira Leite Duboc, "A Framework for the Characterisation and Analysis of Software Systems Scalability", PhD thesis, University College London, 2010.
12. Assigning Probabilities
Runtime Sampling
Guoxin Su, David S. Rosenblum and Giordano Tamburrelli, "Reliability of Run-Time Quality-of-Service Evaluation using Parametric Model Checking", Proc. ICSE 2016.
14. Probabilistic Model Checking
Changing the Model Probabilities
[Figure: a probabilistic state machine model (transition probabilities 0.4 and 0.6) and a probabilistic temporal property P≥0.95 [ □((¬p → ◊q) ∧ …) ], derived from the system requirements, are fed to a model checker; the check succeeds (✓) with the quantitative result 0.9732.]
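The model behind the 0.9732 result is not shown on the slide, but the computation a probabilistic model checker performs for a reachability query can be sketched on a small hypothetical DTMC: restrict the transition matrix to the transient states and solve a linear system (the model and all numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical 5-state DTMC: state 0 is the start, state 3 the goal,
# state 4 an absorbing failure state. P[i][j] is the one-step
# probability of moving from state i to state j.
P = np.array([
    [0.0, 0.4, 0.6, 0.00, 0.00],
    [0.0, 0.0, 0.0, 0.90, 0.10],
    [0.0, 0.0, 0.0, 0.95, 0.05],
    [0.0, 0.0, 0.0, 1.00, 0.00],   # goal is absorbing
    [0.0, 0.0, 0.0, 0.00, 1.00],   # failure is absorbing
])

def reach_probability(P, goal, fail):
    """Probability of eventually reaching `goal` from state 0:
    solve (I - A) v = b, where A restricts P to the transient states
    and b holds the one-step probabilities into the goal."""
    n = P.shape[0]
    trans = [i for i in range(n) if i not in (goal, fail)]
    A = P[np.ix_(trans, trans)]
    b = P[trans, goal]
    v = np.linalg.solve(np.eye(len(trans)) - A, b)
    return v[trans.index(0)]

prob = reach_probability(P, goal=3, fail=4)   # 0.4*0.9 + 0.6*0.95 = 0.93
```

A checker like PRISM would then compare this number against the bound in the property (e.g., P≥0.95) to produce the ✓ or ✕ verdict.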
15. Probabilistic Model Checking
Changing the Model Probabilities
[Figure: the same workflow with the transition probabilities changed to 0.41 and 0.59; the check now fails (✕), the quantitative result drops to 0.6211, and the model checker returns a counterexample trace.]
16. Perturbed Probabilistic Systems
Current Research
Overall approach
• Compute asymptotic bounds for efficiency
• Apply the bounds for non-asymptotic estimation
Starting Point
• Parametric Discrete-Time Markov Chains
• "Small" perturbations of probability parameters
• Reachability properties P≤p [ S? U S! ]
• Linear bounds for estimating verification impact
17. Example
The Zeroconf Protocol
[Figure: DTMC model of Zeroconf from the PRISM group (Kwiatkowska et al.): states s0–s8, with the initial state labeled {start} and absorbing states labeled {ok} and {error}; the start state branches with probabilities q = 0.5 and 1−q, and each probe step takes the failure branch with probability p = 0.1 and the other branch with 1−p = 0.9. The figure also marks the uncertain states S? and the target states S!.]
P=? [ true U error ]
18. Perturbation Distance
• Perturbation is captured in distribution parameters, a vector x of probability parameters xi
• The total variation norm measures the amount of perturbation: ‖v‖ = Σi |vi|
• Perturbation distance is computed with respect to reference values r: ‖x − r‖ ≤ Δ
Guoxin Su and David S. Rosenblum, "Asymptotic Bounds for Quantitative Verification of Perturbed Probabilistic Systems", Proc. ICFEM 2013.
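The distance itself is a one-liner. A sketch with hypothetical reference and perturbed parameter vectors:

```python
def perturbation_distance(x, r):
    """Sum-of-absolute-differences norm between the perturbed
    parameter vector x and the reference values r."""
    return sum(abs(xi - ri) for xi, ri in zip(x, r))

# Hypothetical reference and perturbed parameter vectors.
r = [0.5, 0.5, 0.1, 0.9]
x = [0.48, 0.52, 0.11, 0.89]
d = perturbation_distance(x, r)   # ~0.06, within a budget of Delta = 0.1
```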
19. Example
The Zeroconf Protocol
[Figure: the same Zeroconf DTMC, shown with its concrete reference probabilities (q = 0.5, p = 0.1).]
DTMC model from the PRISM group (Kwiatkowska et al.)
20. Example
The Zeroconf Protocol
[Figure: the same Zeroconf DTMC, now with the concrete probabilities replaced by perturbed parameters x1/1−x1, x2/1−x2, x3/1−x3, x4/1−x4.]
DTMC model from the PRISM group (Kwiatkowska et al.)
21. Asymptotic Perturbation Bounds on Verification Impact
• Perturbation Function captures effect of perturbation on verification result p
  ρ(x) = ι? · Σ_{i=0}^{∞} ( A(x)^i · b(x) − A(r)^i · b(r) ) ≈ h · (x − r)
  where ι? is the initial state distribution,
  A is the transition probability sub-matrix for S?,
  b is the vector of one-step probabilities from S? to S!,
  and h · (x − r) is the linear approximation of ρ near r
• Condition Number provides asymptotic bound of ρ
  κ = lim_{Δ→0} sup { ρ(x)/δ : ‖x − r‖ ≤ Δ, 0 < δ ≤ Δ } ≈ (1/2) ( max(h) − min(h) )
• Predicted variation to verification result p due to perturbation Δ
  p̂ = p ± κΔ
Guoxin Su and David S. Rosenblum, "Asymptotic Bounds for Quantitative Verification of Perturbed Probabilistic Systems", Proc. ICFEM 2013.
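Once the coefficient vector h of the linear approximation is in hand, both the condition number and the predicted interval are direct arithmetic. A sketch with hypothetical values for h, p, and Δ (in the paper, h comes from the derivative of ρ at the reference r):

```python
# Hypothetical coefficients of the linear approximation h . (x - r).
h = [1.2, -0.4, 0.7, -0.9]

# Asymptotic condition number: kappa ~ (1/2) (max(h) - min(h)).
kappa = 0.5 * (max(h) - min(h))          # 0.5 * (1.2 - (-0.9)) = 1.05

# Predicted variation of the verification result p under budget Delta.
p, Delta = 0.9732, 0.01
p_hat = (p - kappa * Delta, p + kappa * Delta)
```

The interval p̂ widens linearly in Δ, which is exactly what the linear bounds promise for small perturbations.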
23. Case Study Results
Noisy Zeroconf (35000 Hosts)
Probability of Reaching error State (change relative to the reference value at x = 0.100)

  x       Actual (PRISM)   Predicted (via κ)
  0.095   −19.8%           −21.5%
  0.096   −16.9%           −17.2%
  0.097   −12.3%           −12.9%
  0.098   −8.33%           −8.61%
  0.099   −4.23%           −4.30%
  0.100   1.8567 × 10⁻⁴    —
  0.101   +4.38%           +4.30%
  0.102   +8.91%           +8.61%
  0.103   +13.6%           +12.9%
  0.104   +18.4%           +17.2%
  0.105   +23.4%           +21.5%

Guoxin Su and David S. Rosenblum, "Perturbation Analysis of Stochastic Systems with Empirical Distribution Parameters", Proc. ICSE 2014.
24. Additional Results
[ICSE 2014] ω-regular properties and quadratic perturbation bounds
[CONCUR 2014] Asymptotic and non-asymptotic bounds for three additional perturbation distance norms
[ATVA 2014] Interval approximations for reachability properties with nested P operators
[FSE 2014 Doctoral Symposium] Heuristics for Markov Decision Processes (MDPs) (Yamilet Serrano's PhD)
[FASE 2016] Application to decision making in self-adaptive systems
[ICSE 2016] Application to runtime QoS evaluation
[IEEE TSE 2016] Integration of previous results and new case studies
[ICSE 2017] Continuous-Time Markov Chains (CTMCs)
[ESEC/FSE 2017] Markov Decision Processes (MDPs)
26. The Challenge for MDPs
Identifying Relevant Adversaries
[Figure: the interval [0, 1] of probabilities, with Pmin and Pmax marked as the values induced by the optimal adversaries.]
27. The Challenge for MDPs
Identifying Relevant Adversaries
[Figure: the same interval [0, 1], with Pmin and Pmax and the optimal adversaries marked.]
Optimal adversaries may not induce the minimum and maximum probabilities in the presence of perturbation
28. Approaches
Identifying Relevant Adversaries
✓ Approximate: ignore the problem, find the maximum condition number over the optimal adversaries
✓ Heuristic: apply a range of brute-force adversary enumeration schemes to find the maximum condition number
✓ Algorithmic: efficiently find the maximum condition number over the most relevant adversaries
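Why the choice of adversary matters can be seen by brute force on a toy model. The sketch below (a hypothetical two-state MDP, not the cloud-migration model of the case study) enumerates every memoryless adversary and computes the reachability probability each one induces; since different adversaries attain the extremes, a perturbation of the transition probabilities can change which adversary is relevant for Pmin and Pmax:

```python
import itertools
import numpy as np

# Hypothetical MDP: in states 0 and 1 the adversary picks one of two
# actions, each inducing a distribution over {0, 1, goal=2, fail=3}.
actions = {
    0: [{1: 0.8, 3: 0.2}, {1: 0.5, 2: 0.5}],
    1: [{2: 0.9, 3: 0.1}, {0: 0.3, 2: 0.7}],
}

def reach_goal(choice):
    """Probability of reaching state 2 from state 0 under a memoryless
    adversary, given as a tuple of action indices for states 0 and 1."""
    P = np.zeros((4, 4))
    for s, a in enumerate(choice):
        for t, pr in actions[s][a].items():
            P[s, t] = pr
    A = P[np.ix_([0, 1], [0, 1])]     # transient-to-transient block
    b = P[[0, 1], 2]                  # one-step probabilities into goal
    v = np.linalg.solve(np.eye(2) - A, b)
    return v[0]

# Brute-force enumeration over all memoryless adversaries.
probs = {c: reach_goal(c) for c in itertools.product([0, 1], repeat=2)}
p_min, p_max = min(probs.values()), max(probs.values())
```

Exhaustive enumeration is exponential in the number of nondeterministic states, which is what motivates the algorithmic approach above.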
29. Case Study Results
Cloud Migration (5 & 8 Rings, 3 Properties)

  Model    States  Transitions  Adversaries  Property  Max Condition Number  Time, Exhaustive (s)  Time, Our Algorithm (s)
  5 Rings  9       17           32           P1        0.1111                3.28                  0.15
                                             P2        0.5                   4.02                  0.21
                                             P3        0.5                   2.95                  0.10
  8 Rings  15      32           2048         P1        0.0102                197.10                141.06
                                             P2        0.5                   288.97                202.42
                                             P3        0.1111                199.05                139.75

Yamilet R. Serrano Llerena, Guoxin Su and David S. Rosenblum, "Probabilistic Model Checking of Perturbed MDPs with Applications to Cloud Computing", Proc. ESEC/FSE 2017.
32. Challenge
Pinpointing the Root Cause of Uncertainty
"There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know."
— Donald Rumsfeld
33. Known Unknowns in Modern Systems
✓ Autonomous Vehicles
✓ Cyber-Physical Systems
✓ Internet of Things
✓ Extensive reliance on machine learning
see Deep Learning and Understandability versus Software Engineering and Verification, by Peter Norvig, Director of Research at Google
http://www.youtube.com/watch?v=X769cyzBNVw
35. Uncertainty in Testing
Current Research
[Figure: test execution runs the System Under Test; result interpretation classifies each outcome as Acceptable (✓) or Unacceptable (✕).]
36. Uncertainty in Testing
Current Research
[Figure: the same workflow, but some outcomes classified as Acceptable are in fact failures (✕).]
Acceptable misbehaviors can mask real faults!
37. One Possible Solution
Distribution Fitting
[Figure: training data from the System Under Test is fed to WEKA to fit a distribution.]
Sebastian Elbaum and David S. Rosenblum, "Known Unknowns: Testing in the Presence of Uncertainty", Proc. FSE 2014.
38. One Possible Solution
Distribution Fitting
[Figure: WEKA's fitted distribution now characterizes the System Under Test's expected behavior.]
Sebastian Elbaum and David S. Rosenblum, "Known Unknowns: Testing in the Presence of Uncertainty", Proc. FSE 2014.
40. One Possible Solution
Distribution Fitting
[Figure: test execution runs the System Under Test, and result interpretation consults the fitted distribution.]
Sebastian Elbaum and David S. Rosenblum, "Known Unknowns: Testing in the Presence of Uncertainty", Proc. FSE 2014.
41. One Possible Solution
Distribution Fitting
[Figure: result interpretation classifies each test outcome as Acceptable, Inconclusive, or Unacceptable according to p-value thresholds (p < 0.99, p < 0.37, p < 0.0027).]
Sebastian Elbaum and David S. Rosenblum, "Known Unknowns: Testing in the Presence of Uncertainty", Proc. FSE 2014.
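A minimal sketch of the idea, assuming a normal distribution fitted to the training data (WEKA supports much richer families) and using the slide's p-value thresholds as illustrative cut-offs between verdicts — the exact mapping of thresholds to verdicts here is an assumption:

```python
import math
import statistics

def fit_normal(training_outputs):
    """Fit a normal distribution to observed training outputs."""
    return statistics.mean(training_outputs), statistics.stdev(training_outputs)

def classify(observation, mu, sigma):
    """Two-sided tail probability of the observation under the fitted
    normal, graded into a three-way verdict: instead of a binary
    pass/fail, outcomes far in the tails are inconclusive or rejected."""
    z = abs(observation - mu) / sigma
    p = math.erfc(z / math.sqrt(2))   # P(|X - mu| >= |observation - mu|)
    if p >= 0.37:
        return "acceptable"
    if p >= 0.0027:
        return "inconclusive"
    return "unacceptable"

# Hypothetical training outputs of a numeric-valued System Under Test.
training = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.95, 10.05]
mu, sigma = fit_normal(training)
verdict_near = classify(10.0, mu, sigma)   # close to the fitted mean
verdict_far = classify(25.0, mu, sigma)    # far outlier
```

The middle "inconclusive" band is what keeps acceptable misbehaviors from silently masking real faults: those runs are flagged for further scrutiny rather than marked as passing.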