In failed error propagation (FEP), an erroneous program state that occurs during testing does not lead to an observed failure. FEP is known to hamper software testing, yet it remains poorly understood. If we could, for example, assess where FEP is likely to occur, then we might be able to produce more effective notions of test coverage that incorporate an element of semantics. This talk will describe an information-theoretic formulation of FEP based on conditional entropy. This formulation considers the situation in which we are interested in the potential for an incorrect program state at statement s to fail to propagate to incorrect output. The talk will explore the underlying idea, the metrics that have been defined to estimate the probability of FEP, and their experimental evaluation.
19. Extreme cases
• Recall
• If all inputs are mapped to the same output
– The entropy of the output is zero.
– All information lost.
– The output tells us nothing about the input.
• The function f is a bijection
– Squeeziness is 0: no loss of information.
– The output uniquely identifies the input.
UCM, 17th March 2017
Sq(f, I) = H(I) − H(O)
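The two extreme cases can be checked numerically. The following is a minimal sketch (not from the talk) that computes Sq(f, I) = H(I) − H(O) using empirical Shannon entropies over a small uniform input domain; the function names and the domain are illustrative assumptions.

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution over values."""
    counts = Counter(values)
    total = len(values)
    return -sum(c / total * log2(c / total) for c in counts.values())

def squeeziness(f, inputs):
    """Sq(f, I) = H(I) - H(O): information lost when f maps inputs I to outputs O."""
    return entropy(inputs) - entropy([f(x) for x in inputs])

inputs = list(range(8))  # uniform domain of 8 values: H(I) = 3 bits

constant = lambda x: 0       # extreme case 1: all inputs map to one output
bijection = lambda x: x ^ 5  # extreme case 2: a permutation of the domain

print(squeeziness(constant, inputs))   # 3.0 bits: all information lost
print(squeeziness(bijection, inputs))  # 0.0 bits: no loss of information
```

The constant function destroys all 3 bits of input entropy, so the output tells us nothing about the input; the bijection preserves the distribution exactly, so the output uniquely identifies the input.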
25. Possible measures
• M1: Squeeziness of Q (on the states at pp’)
• M2: M1 + Squeeziness of R (code before)
• M3: Squeeziness of Q on states reachable via a given upper path πu
• M4: M3 + Squeeziness of (upper/initial) path πu
• M5: Squeeziness of (lower/final) path πl
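A measure like M1 can be estimated empirically by sampling program states at the program point and comparing entropies before and after the fragment. The sketch below is a rough illustration only: the fragment Q and the state sample are hypothetical stand-ins, not the measures evaluated in the talk.

```python
import random
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution over values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

def estimate_sq(fragment, states):
    """Estimate the squeeziness of a code fragment over a sample of states."""
    return entropy(states) - entropy([fragment(s) for s in states])

# Hypothetical fragment Q at program point pp': discards sign information,
# so distinct (erroneous) states can collapse to the same output state.
Q = lambda s: abs(s)

random.seed(0)
states = [random.randint(-100, 100) for _ in range(10_000)]
print(estimate_sq(Q, states))  # positive: Q loses (sign) information
```

A fragment with high estimated squeeziness collapses many distinct states together, which is exactly the situation in which an erroneous state can fail to propagate to incorrect output.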