Every programmer knows the situation: a program does not behave as it should. I present techniques that automatically
(a) find the causes of a failure, by isolating exactly those aspects that make the error come about;
(b) find program defects, by learning "normal" instruction sequences from code and searching for deviations; and
(c) predict where defects will occur in the future, by machine-learning which code and process properties have correlated with defects so far.
Case studies on real programs with real bugs, from AspectJ through Firefox to Windows, demonstrate the practicality of the presented techniques.
Andreas Zeller is a professor of software engineering at Saarland University in Saarbrücken. His research area is the analysis of large software systems and their defects. His book "Why Programs Fail: A Guide to Systematic Debugging" won the Jolt Software Development Productivity Award in 2006.
Talk at the GI Regionalgruppe Rhein-Neckar, 2008-06-19
Models—abstract and simple descriptions of some artifact—are the backbone of all software engineering activities. While writing models is hard, existing code can serve as a source for abstract descriptions of how software behaves. To infer correct usage, code analysis needs usage examples, though; the more, the better.
We have built a lightweight parser that efficiently extracts API usage models from source code—models that can then be used to detect anomalies. Applied to the 200 million lines of code of the Gentoo Linux distribution, it extracts more than 15 million API constraints. On the web site checkmycode.org, anyone can check their code against the “wisdom of Linux”.
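The anomaly-detection idea can be sketched in a few lines (a toy illustration, not the actual mining tool; the function names, usage data, and support threshold are all made up for this example):

```python
from collections import Counter

def mine_patterns(usages, min_support=3):
    """Count how often each set of API facts occurs across code locations;
    frequent sets become patterns."""
    counts = Counter()
    for facts in usages:
        counts[frozenset(facts)] += 1
    return {fs for fs, n in counts.items() if n >= min_support}

def find_anomalies(usages, patterns):
    """A usage is anomalous if it is a proper subset of a common pattern:
    it does *almost* what many other locations do, but misses a fact."""
    anomalies = []
    for facts in usages:
        fs = frozenset(facts)
        if any(fs < p for p in patterns):
            anomalies.append(fs)
    return anomalies

# Hypothetical mined facts: which Iterator methods each location calls.
usages = [{"hasNext", "next"}] * 5 + [{"next"}]  # one location forgets hasNext
patterns = mine_patterns(usages)
print(find_anomalies(usages, patterns))  # flags the {"next"}-only location
```

Real tools mine far richer facts (call order, argument construction) over millions of locations, but the pattern/deviation principle is the same.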
Language-Based Testing and Debugging
Andreas Zeller, CISPA Helmholtz Center for Information Security
Joint work with Dominic Steinhöfel and the CISPA team
If one knows the input language of a software system, one can use it for generating test inputs or for determining input patterns that cause specific behavior. But how does one specify an input language such that all properties of inputs are covered? In this talk, I introduce the ISLa specification language, combining grammars and constraints over nonterminals to capture even the most complex input languages. ISLa serves as a fuzzer, allowing one to systematically test programs with a large variety of inputs, but also as a parser, decomposing inputs and outputs into individual elements. This opens up new directions for debugging, determining the exact conditions under which a program fails: "The program fails if the <check-temperature-option> is given and <temperature> < 0 holds." Includes live demos!
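The combination of grammars and constraints can be illustrated with a minimal generate-and-filter sketch. This is not the ISLa API: ISLa solves constraints far more efficiently, while this toy version merely generates from a made-up grammar and discards inputs that violate the constraint:

```python
import random
import re

# A toy grammar: a command-line option plus a temperature value.
GRAMMAR = {
    "<start>": ["<option> <temperature>"],
    "<option>": ["--check-temperature", "--help"],
    "<temperature>": ["<sign><digits>"],
    "<sign>": ["", "-"],
    "<digits>": ["<digit>", "<digit><digits>"],
    "<digit>": list("0123456789"),
}
TOKEN = re.compile(r"<[^<> ]+>|[^<>]+")

def expand(symbol):
    """Randomly expand a nonterminal into a concrete input string."""
    if symbol not in GRAMMAR:
        return symbol  # terminal text
    expansion = random.choice(GRAMMAR[symbol])
    return "".join(expand(tok) for tok in TOKEN.findall(expansion))

def generate(constraint, tries=10_000):
    """Generate grammar-valid inputs until one satisfies the constraint."""
    for _ in range(tries):
        candidate = expand("<start>")
        if constraint(candidate):
            return candidate
    return None

# The failure condition from the abstract: the check-temperature option
# is given and the temperature is negative.
failing = generate(lambda s: s.startswith("--check-temperature ")
                   and int(s.split()[1]) < 0)
print(failing)
```

Where this sketch filters after the fact, ISLa instead steers generation so that every produced input satisfies the constraint by construction.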
Andreas Zeller is faculty at the CISPA Helmholtz Center for Information Security and professor for Software Engineering at Saarland University. His research on automated debugging, mining software archives, specification mining, and security testing has proven highly influential. Zeller is one of the few researchers to have received two ERC Advanced Grants, most recently for his S3 project. Zeller is an ACM Fellow and holds an ACM SIGSOFT Outstanding Research Award.
For a young aspiring scientist, social media is one of the outlets to disseminate your work and connect with the community. This talk gives hints on the benefits and risks of science on social media. Talk at the ICSE 2022 New Faculty Symposium.
Andreas Zeller's keynote at the 1st Intl Fuzzing workshop 2022 at NDSS: https://fuzzingworkshop.github.io/program.html
Do you fuzz your own program, or do you fuzz someone else's program? The answer to this question has vast consequences for your view on fuzzing. Fuzzing someone else's program is the typical adversarial "security tester" perspective, where you want your fuzzer to be as automatic and versatile as possible. Fuzzing your own code, however, is more like a traditional tester perspective, where you may assume some knowledge about the program and its context, but may also want to _exploit_ this knowledge - say, to direct the fuzzer to critical locations.
In this talk, I detail these differences in perspectives and assumptions, and highlight their consequences for fuzzer design and research. I also highlight cultural differences in the research communities, and what happens if you submit a paper to the wrong community. I close with an outlook into our newest frameworks, set to reconcile these perspectives by giving users unprecedented control over fuzzing, yet staying fully automatic if need be.
Bio: Andreas Zeller is faculty at the CISPA Helmholtz Center for Information Security and a professor for Software Engineering at Saarland University, both in Saarbrücken, Germany. His research on automated debugging, mining software archives, specification mining, and security testing has won several awards for its impact in academia and industry. Zeller is an ACM Fellow, an IFIP Fellow, an ERC Advanced Grant Awardee, and holds an ACM SIGSOFT Outstanding Research Award.
Illustrated Code: Building Software in a Literate Way
Andreas Zeller, CISPA Helmholtz Center for Information Security
Notebooks – rich, interactive documents that join together code, documentation, and outputs – are all the rage with data scientists. But can they be used for actual software development? In this talk, I share experiences from authoring two interactive textbooks – fuzzingbook.org and debuggingbook.org – and show how notebooks not only serve for exploring and explaining code and data, but also how they can be used as software modules, integrating self-checking documentation, tests, and tutorials all in one place. The resulting software focuses on the essential, is well-documented, highly maintainable, easily extensible, and has a much longer shelf life than the "duct tape and wire" prototypes frequently found in research and beyond.
At my talk "On Impact in Software Engineering Research", I present a number of lessons from (and for!) high-impact research:
* Work on a real problem
* Assume as little as possible
* Keep things simple
* Have a sound model
* Keep on learning
* Keep on moving
* Build prototypes
Video at https://youtu.be/md4Fp3Pro0o
Andreas Zeller is faculty at the CISPA Helmholtz Center for Information Security and professor for Software Engineering at Saarland University, both in Saarbrücken, Germany. His research on automated debugging, mining software archives, specification mining, and security testing has won several awards for its impact in academia and industry. Zeller is an ACM Fellow, an IFIP Fellow, an ERC Advanced Grant Awardee, and holds an ACM SIGSOFT Outstanding Research Award.
How do you make your research impactful? How do you get towards your book, your tool, your algorithm? Andreas Zeller shares lessons learned in his career.
Generating Software Tests Automatically: Fresh Approaches for Research, Practice, and Teaching
Andreas Zeller, CISPA Helmholtz Center for Information Security, Saarbrücken
Automatically generated software tests can find many bugs with little human effort. In this talk, I present current test-generation techniques that, for a given program, fully automatically derive its input language and use the resulting grammars to derive large numbers of valid test inputs. Our grammar-based LangFuzz test generator has found thousands of bugs this way in the JavaScript interpreters of Firefox, Chrome, and Edge. The techniques are collected in the interactive book "Generating Software Tests" (www.fuzzingbook.org). In a mixture of text and program code, readers can experiment with the programs directly in the browser and extend the code live.
Andreas Zeller is a researcher at the CISPA Helmholtz Center for Information Security and a professor of software engineering at Saarland University, both in Saarbrücken. His research focuses on the analysis of large software systems and their development history. In 2010, Zeller was named an ACM Fellow for his contributions to automated debugging and the analysis of software archives, for which he received the ACM SIGSOFT Outstanding Research Award in 2018, the first German to do so. His current projects combine test generation and specification mining, funded by the European Research Council (ERC).
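The core of grammar-based test generation can be sketched in a few lines (a simplified illustration in the spirit of fuzzingbook.org, not the LangFuzz implementation; the toy expression grammar is made up):

```python
import random
import re

# Toy grammar for arithmetic expressions.
EXPR_GRAMMAR = {
    "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"],
    "<term>": ["<factor> * <term>", "<factor>"],
    "<factor>": ["(<expr>)", "<int>"],
    "<int>": ["<digit>", "<digit><int>"],
    "<digit>": list("0123456789"),
}
TOKEN = re.compile(r"<[^<> ]+>|[^<>]+")

def produce(symbol="<expr>", depth=0, max_depth=8):
    """Expand nonterminals randomly; force short expansions when deep,
    so that generation always terminates."""
    if symbol not in EXPR_GRAMMAR:
        return symbol  # terminal text
    choices = EXPR_GRAMMAR[symbol]
    if depth >= max_depth:
        choices = [min(choices, key=len)]  # shortest expansion only
    expansion = random.choice(choices)
    return "".join(produce(t, depth + 1, max_depth)
                   for t in TOKEN.findall(expansion))

# Every generated string is a syntactically valid test input:
for _ in range(3):
    print(produce())
```

LangFuzz additionally recombines fragments of previously failing inputs, which is what made it so effective against JavaScript engines.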
- Andreas Zeller gave a talk on impact in software engineering research.
- He discussed different perspectives on what constitutes impactful research, including intellectual challenge, elegance and reusability, usefulness, and innovation.
- Zeller described his own path to impact, starting with his PhD work on configuration management using feature logic, through developing the DDD debugger and introducing delta debugging. His work then expanded to mining software repositories and more recently mining app descriptions and behaviors.
What does it take to have high impact in software engineering research? Andreas Zeller, a "high impact" SE researcher, shares his personal story and perspective.
The document provides 12 tips for preparing a successful grant proposal for the European Research Council (ERC). The tips include understanding the ERC process and guidelines; starting the proposal process many months before the deadline; reserving several weeks solely for writing; getting feedback from multiple reviewers outside your specialty; leveraging local expertise in EU funding; emphasizing your unique achievements and qualifications; proposing a high-risk, high-reward project with a clear title and structure; getting straight to the point without jargon; polishing the proposal extensively; and recognizing that following the tips does not guarantee success but prevents misunderstandings. The overall aim is to craft a proposal that clearly communicates your message to reviewers in a manner that stands out against competing proposals.
You are a young researcher on your first independent position. What can you do to get your research work funded? How do you frame your work, find the right partners, address the funding body?
Slides from Andreas Zeller's presentation at the New Faculty Symposium at ICSE 2017, Buenos Aires, Argentina.
Delta debugging is a technique for isolating the cause of failures in programs. It works by systematically removing parts of changes made between a version where the program worked and a version where it fails. The algorithm efficiently determines the minimal set of changes responsible for the failure. The technique was able to isolate a single change causing a failure in GDB from 178,000 changed lines within a few hours. Automating debugging through techniques like delta debugging can help developers quickly find and fix bugs.
The document discusses using mutation testing to evaluate test quality by seeding programs with faults and seeing if tests can find them. It outlines challenges with efficiency and determining equivalent mutants. It proposes using dynamic invariants learned through execution to characterize the impact of mutations and identify equivalent mutants, aiming to make mutation testing more practical.
I need to change some piece of code. What do I need to consider? Is there anything else I need to change? We show what to learn from software repositories.
The document discusses various myths and misconceptions in software engineering. It summarizes research that analyzed large codebases to determine factors correlated with defects. Metrics like lines of code and complexity metrics correlated with defects, but the relationship depended on the project. Defect-prone modules could be predicted by combining metrics. Other factors like developer experience, prior defects, language features, and dependencies also influenced bugs. While some properties were project-specific, others like the role of requirements, design, coding and testing were universal defect sources.
6. Airport opened 16 months later; software firm (BEA) went bankrupt. Overall damage: 1.8 bln.
7. // Get #years, #days since 1980
days = ...;
year = 1980;
while (days > 365) {
    if (IsLeapYear(year)) {
        if (days > 366) {
            days -= 366; year += 1;
        }
    }
    else {
        days -= 365; year += 1;
    }
}
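The loop above contains a defect: on the last day of a leap year (days == 366), neither branch decrements days, so the loop never terminates; this pattern later became famous as the Zune leap-year bug. A fixed sketch (transcribed to Python for illustration; function and helper names are mine):

```python
def year_and_days_since_1980(days):
    """Convert a 1-based day count (counting from Jan 1, 1980) into
    (year, remaining days). Fixed variant of the looping logic above:
    the leap-year case is handled in the loop condition itself."""
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    year = 1980
    while days > (366 if is_leap_year(year) else 365):
        days -= 366 if is_leap_year(year) else 365
        year += 1
    return year, days

print(year_and_days_since_1980(366))  # Dec 31, 1980 -> (1980, 366)
print(year_and_days_since_1980(367))  # Jan 1, 1981  -> (1981, 1)
```

With the original code, the call with days == 366 would loop forever, since 366 > 365 holds but days > 366 does not.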
11. bug.aj
@interface A {}
aspect Test {
    declare @field : @A int var* : @A;
    declare @field : int var* : @A;
    interface Subject {}
    public int Subject.vara;
    public int Subject.varb;
}
class X implements Test.Subject {}
15. Diagnosis · Detection · Prevention
Where Do Software Bugs Come From? • Andreas Zeller, Saarland University
18. ajc Stack Trace
java.util.NoSuchElementException
at java.util.AbstractList$Itr
.next(AbstractList.java:427)
at org.aspectj.weaver.bcel.BcelClassWeaver
.weaveAtFieldRepeatedly
(BcelClassWeaver.java:1016)
22. Simplify
• Proceed by binary search: throw away half the input and see if the output is still wrong.
• If not, go back to the previous state and discard the other half of the input.
[Illustration: mozilla.csv being bisected; ✔ = test passes, ✘ = test fails]
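The procedure on this slide can be sketched as a loop over halves (a simplified cousin of the full ddmin algorithm, which additionally increases granularity when both halves are needed; the failure predicate here is hypothetical):

```python
def simplify(inp, test):
    """Repeatedly discard halves of the input while the failure persists.
    `test(s)` returns True if input s still triggers the failure."""
    while len(inp) > 1:
        mid = len(inp) // 2
        first, second = inp[:mid], inp[mid:]
        if test(first):        # failure persists without the second half
            inp = first
        elif test(second):     # failure persists without the first half
            inp = second
        else:
            break              # both halves needed; plain bisection stops here
    return inp

# Hypothetical failure: the input triggers the bug iff it contains "<SELECT>".
failing_input = 'foo bar <SELECT> baz qux quux corge'
result = simplify(failing_input, lambda s: "<SELECT>" in s)
print(repr(result))
```

On this made-up example, bisection shrinks the failing input to the small fragment containing "<SELECT>", exactly the kind of reduction the slide illustrates for mozilla.csv.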
26. Delta Debugging
Delta debugging automatically isolates failure causes:
Inputs: 1 of 436 contacts in Columba
Changes: 1 of 8,721 changes in GDB
Threads: 1 of 3.8 billion thread switches in Scene.java
➔ Automated debugging
35. getPreferredEmail
public String getPreferredEmail() {
    Iterator it = getEmailIterator();
    // get first item
    IEmailModel model = (IEmailModel) it.next();
    // backwards compatibility
    // -> it is no longer possible to create a
    //    contact model without an email address
    if (model == null)
        return null;
    return model.getAddress();
}
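The defect here: it.next() is called before checking whether the iterator has an element at all, so a contact without an email address raises NoSuchElementException, and the null check afterwards never fires. A guarded sketch (in Python for consistency, with a hypothetical email list):

```python
def get_preferred_email(emails):
    """Return the first email address, or None if there is none.
    Guarded variant of getPreferredEmail(): check *before* fetching."""
    it = iter(emails)
    first = next(it, None)  # original code called next() unconditionally,
    return first            # which raises for contacts without email

print(get_preferred_email(["alice@example.com", "bob@example.com"]))
print(get_preferred_email([]))  # no exception for empty contacts
```

In Java, the equivalent guard is to call it.hasNext() before it.next(), which is precisely the usage pattern the mining tools later in this deck learn automatically.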
36. Delta Debugging
Delta debugging automatically isolates failure causes:
Inputs: 1 of 436 contacts in Columba
Changes: 1 of 8,721 changes in GDB
Threads: 1 of 3.8 billion thread switches in Scene.java
Calls: 2 of 18,738 method calls (a 99.8% reduction of the search space)
➔ Automated debugging
42. OP-Miner
Usage models yield temporal properties: from the fragment iter.hasNext(); iter.next(), OP-Miner derives
hasNext ≺ next, hasNext ≺ hasNext, next ≺ hasNext, next ≺ next.
Frequent property combinations become patterns (✓, e.g. hasNext ≺ next together with hasNext ≺ hasNext);
usages missing part of a pattern, e.g. hasNext ≺ hasNext without hasNext ≺ next, are anomalies (✗).
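Such ≺ ("precedes") properties can be mined mechanically from call sequences. A toy sketch (made-up trace, not the OP-Miner implementation):

```python
def temporal_properties(trace):
    """Return the set of pairs (a, b) such that some call to `a` occurs
    before some call to `b` in the trace: the property a ≺ b."""
    props = set()
    for i, a in enumerate(trace):
        for b in trace[i + 1:]:
            props.add((a, b))
    return props

# A typical iterator usage: hasNext() guards every next().
trace = ["hasNext", "next", "hasNext", "next", "hasNext"]
props = temporal_properties(trace)
print(sorted(props))
# yields exactly the four properties from the slide:
# hasNext ≺ next, hasNext ≺ hasNext, next ≺ hasNext, next ≺ next
```

A trace that calls next() without hasNext() lacks the hasNext ≺ next property; if that property is part of a frequent pattern, the trace is flagged as an anomaly.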
44. Method models
public Stack createStack () {
    Random r = new Random ();
    int n = r.nextInt ();
    Stack s = new Stack ();
    int i = 0;
    while (i < n) {
        s.push (rand (r));
        i++;
    }
    s.push (-1);
    return s;
}
45.–49. Method models
[Animation frames of slide 44: the statements of createStack() are extracted step by step into a model, ending with Random r = new Random(); int n = r.nextInt(); Stack s = new Stack(); int i = 0; the loop condition i < n; i++; s.push(rand(r)); and s.push(-1);]
50. Usage models
[Control-flow model of createStack(): Random r = new Random(); int n = r.nextInt(); Stack s = new Stack(); int i = 0; a loop on i < n executing s.push(rand(r)); i++; and, on exit, s.push(-1);]
51. Usage models
Stack s = new Stack ();
s.push (-1); s.push (rand (r));
54. Usage models
Random r = new Random ();
int n = r.nextInt ();
s.push (rand (r));
59. A Defect
for (Iterator iter = itdFields.iterator();
     iter.hasNext();) {
    ...
    for (Iterator iter2 = worthRetrying.iterator();
         iter.hasNext();) {          // should be iter2
        ...
    }
}
60. Another Defect
public void visitNEWARRAY (NEWARRAY o) {
    byte t = o.getTypecode ();
    if (!((t == Constants.T_BOOLEAN) ||
          (t == Constants.T_CHAR) ||
          ...
          (t == Constants.T_LONG))) {
        constraintViolated (o, "(...) '+t+' (...)");  // should be "+t+"
    }
}
61. A False Alarm
Name internalNewName (String[] identifiers)
{
    ...
    for (int i = 1; i < count; i++) {
        SimpleName name = new SimpleName(this);
        name.internalSetIdentifier(identifiers[i]);
        ...
    }                                // should remain unchanged
    ...
}
62. A Code Smell
public String getRetentionPolicy ()
{
    ...
    for (Iterator it = ...; it.hasNext();)
    {
        ... = it.next();
        ...
        return retentionPolicy;      // should be repaired
    }
    ...
}
64. Finding Defects
Table 2: Summary of the results for the experiment subjects. (See Section 5.2 for a discussion.)

Program                 Total   Investigated   Defects   Code smells   False positives   Efficiency
Act-Rbot 0.8.2             25             25         2            13                10          60%
Apache Tomcat 6.0.16       55             55         0             9                46          16%
ArgoUML 0.24              305             28         0            12                16          43%
AspectJ 1.5.3             300            300        16            42               242          19%
Azureus 2.5.0.0           315             85         1            26                58          32%
Columba 1.2                57             57         4            15                38          33%
jEdit 4.2                  11             11         0             4                 7          36%
Total                   1,068            562        23           121               417          26%

[Figure 17: Another defect in Columba: a missing call to hasNext() in getPreferredEmail(). The surrounding paper text discusses validating the JADET, Colibri, and OP-Miner implementations.]
65. OP-Miner
OP-Miner learns operational preconditions: how arguments are usually constructed.
It learns from common usage, both general and project-specific.
Fully automatic.
Dozens of confirmed defects found.
69. A Defect
conspire-0.20 (IRC client)
static int dcc_listen_init (…) {
    dcc->sok = socket (…);
    if (…) {
        while (…) {
            … = bind (dcc->sok, …);
        }
        /* with a small port range, reUseAddr is needed */
        setsockopt (dcc->sok, …, SO_REUSEADDR, …);  /* must come before bind() */
    }
    listen (dcc->sok, …);
}
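The order matters because SO_REUSEADDR only affects subsequent bind() calls; setting it afterwards has no effect. A minimal correct sequence, sketched with Python's socket API (loopback address and port 0 are just for illustration):

```python
import socket

# Correct order: socket() -> setsockopt(SO_REUSEADDR) -> bind() -> listen().
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # before bind()!
sock.bind(("127.0.0.1", 0))   # port 0: let the OS choose a free port
sock.listen(5)
addr = sock.getsockname()     # the address actually bound
print(addr)
sock.close()
```

In the conspire code above, setsockopt() runs after bind() has already been attempted in the loop, so the option cannot help with the small port range it was meant for.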
93. Studies
Make this actionable!
Rosenberg, L. and Hyatt, L.: "Developing An Effective Metrics Program". European Space Agency Software Assurance Symposium, Netherlands, March 1996.
99. Diagnosis · Detection · Prevention
Why Programs Fail: A Guide to Systematic Debugging (Andreas Zeller)