Argumentation technology is a rich interdisciplinary area of research that, over the last two decades, has emerged as one of the most promising paradigms for commonsense reasoning and conflict resolution in a wide variety of domains.
In this tutorial we aim to provide PhD students, early-stage researchers, and experts from different fields of AI with a clear understanding of argumentation in AI, and with a set of tools they can start using to advance the field.
Part 1 of 2
Argumentation in Artificial Intelligence: From Theory to Practice
1. argumentation in artificial intelligence
From Theory to Practice
Federico Cerutti† and Mauro Vallati‡
xxi • viii • mmxvii
†Cardiff University • ‡University of Huddersfield
6. EARLY REPORT
Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children
A J Wakefield, S H Murch, A Anthony, J Linnell, D M Casson, M Malik, M Berelowitz, A P Dhillon, M A Thomson, P Harvey, A Valentine, S E Davies, J A Walker-Smith
(Scanned first page of the paper; the summary and introduction are not legible in this reproduction.)
6
7. Support
What else should
be true if the
causal link is true?
Alternative explanation
7
From Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children by Wakefield et al., The Lancet, 1998
9. Argument from Verification
Major Premise: If A (a hypothesis) is true, then B (a proposition reporting an
event) will be observed to be true.
Minor Premise: B has been observed to be true, in a given instance.
Conclusion: Therefore, A is true.
Critical Questions
CQ1: Is it the case that if A is true, then B is true?
CQ2: Has B been observed to be true (false)?
CQ3: Could there be some reason why B is true, other than its being because of A
being true?
Connection between critical questions, objectivity, and burden of proof
Unclear connection on uncertainty assessment
9
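Walton-style schemes such as the one above are often represented in software as a simple data structure pairing premises and conclusion with their critical questions. A minimal sketch (the class name and fields are illustrative, not from any particular library):

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentScheme:
    """A Walton-style argumentation scheme: a defeasible inference pattern
    together with the critical questions that can challenge it."""
    name: str
    premises: list
    conclusion: str
    critical_questions: list = field(default_factory=list)

# The Argument from Verification scheme, as stated on the slide.
verification = ArgumentScheme(
    name="Argument from Verification",
    premises=[
        "If A (a hypothesis) is true, then B (a proposition reporting an event) "
        "will be observed to be true",
        "B has been observed to be true, in a given instance",
    ],
    conclusion="A is true",
    critical_questions=[
        "CQ1: Is it the case that if A is true, then B is true?",
        "CQ2: Has B been observed to be true (false)?",
        "CQ3: Could there be some reason why B is true, other than its being "
        "because of A being true?",
    ],
)
```

Posing a critical question then amounts to attaching an attacking argument to an instance of the scheme, as in the vaccine example above.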
11. Support
11
From A Population-based Study of Measles, Mumps, and Rubella Vaccination and Autism by Madsen et al., The New England Journal of Medicine, 2002
14. Argument and Computation, 2014, Taylor & Francis
Vol. 5, No. 1, 1–4, http://dx.doi.org/10.1080/19462166.2013.869764
Introduction to structured argumentation
Philippe Besnard, Alejandro García, Anthony Hunter, Sanjay Modgil, Henry Prakken, Guillermo Simari and Francesca Toni
[Bes+14]
14
15. Argument and Computation, 2014, Taylor & Francis
Vol. 5, No. 1, 5–30, http://dx.doi.org/10.1080/19462166.2013.869765
Constructing argument graphs with deductive arguments: a tutorial
Philippe Besnard and Anthony Hunter
[BH14]
15
16. base logic
Let L be a language for a logic, and let ⊢i be the consequence relation for that logic.
If α is an atom in L, then α is a positive literal in L and ¬α is a negative literal in L.
For a literal β, the complement of β is defined as follows:
∙ If β is a positive literal, i.e. it is of the form α, then the complement of β is the
negative literal ¬α,
∙ if β is a negative literal, i.e. it is of the form ¬α, then the complement of β is the
positive literal α.
16
17. deductive argument
A deductive argument is an ordered pair ⟨Φ, α⟩ where Φ ⊢i α.
Φ is the support, or premises, or assumptions of the argument, and α is the claim, or
conclusion, of the argument.
For an argument A = ⟨Φ, α⟩, the function Support(A) returns Φ and the function Claim(A)
returns α.
⟨{report(rain), report(rain) → carry(umbrella)}, carry(umbrella)⟩
17
18. Here we focus on simple logic, but other options include non-monotonic logics,
conditional logics, temporal logics, description logics, and paraconsistent logics.
18
19. simple logic
Simple logic is based on a language of literals and simple rules where each simple rule is
of the form α1 ∧ . . . ∧ αk → β where α1 to αk and β are literals.
The consequence relation is modus ponens (i.e. implication elimination):
∆ ⊢s β iff there is an α1 ∧ · · · ∧ αn → β ∈ ∆
and for each αi ∈ {α1, . . . , αn}
either αi ∈ ∆ or ∆ ⊢s αi
Let ∆ = {a, b, a ∧ b → c, c → d}. Hence, ∆ ⊢s c and ∆ ⊢s d. However, ∆ ̸⊢s a and ∆ ̸⊢s b.
19
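The consequence relation ⊢s above can be sketched directly as a recursive check. In this sketch (an illustration, not code from the tutorial) literals are strings, negation is written with a "~" prefix, and a simple rule α1 ∧ · · · ∧ αn → β is a pair of its body literals and its head:

```python
# A knowledgebase Delta is a set whose elements are either literals (plain
# strings, "~" marking negation) or simple rules (frozenset_of_body, head).

def entails(delta, goal, _seen=frozenset()):
    """Delta |-s goal: goal is the head of some rule in Delta whose body
    literals are each either in Delta or themselves derivable from Delta."""
    if goal in _seen:                       # guard against cyclic rule sets
        return False
    for item in delta:
        if isinstance(item, tuple):         # a rule (body, head)
            body, head = item
            if head == goal and all(
                b in delta or entails(delta, b, _seen | {goal}) for b in body
            ):
                return True
    return False

# The slide's example: Delta = {a, b, a ∧ b → c, c → d}
delta = {"a", "b", (frozenset({"a", "b"}), "c"), (frozenset({"c"}), "d")}
```

As on the slide, `entails(delta, "c")` and `entails(delta, "d")` hold, while `entails(delta, "a")` does not: a is a fact in ∆, but ⊢s only derives heads of rules.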
20. arguments based on simple logic
Let ∆ be a simple logic knowledgebase. For Φ ⊆ ∆, and a literal α, ⟨Φ, α⟩ is a simple
argument iff Φ ⊢s α and there is no proper subset Φ′ of Φ such that Φ′ ⊢s α.
Let p1, p2, and p3 be the following formulae.
p1 = oilCompany(BP)
p2 = goodPerformer(BP)
p3 = oilCompany(BP) ∧ goodPerformer(BP) → goodInvestment(BP)
Then ⟨{p1, p2, p3}, goodInvestment(BP)⟩ is a simple argument.
20
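The minimality condition of the definition can be checked by brute force: enumerate subsets of ∆ in increasing size and keep those that entail the claim without a smaller entailing subset. A possible sketch under the same string/pair encoding as before (exponential, fine only for tiny knowledgebases):

```python
from itertools import combinations

def closure(phi):
    """All literals derivable from Phi under |-s (forward chaining)."""
    facts = {x for x in phi if isinstance(x, str)}
    rules = [x for x in phi if isinstance(x, tuple)]
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= facts | derived:
                derived.add(head)
                changed = True
    return derived

def simple_arguments(delta, alpha):
    """All simple arguments <Phi, alpha>: minimal Phi ⊆ Delta, Phi |-s alpha."""
    delta = list(delta)
    minimal = []
    for k in range(1, len(delta) + 1):
        for phi in combinations(delta, k):
            s = set(phi)
            # keep s only if it entails alpha and no smaller support was found
            if alpha in closure(s) and not any(m < s for m in minimal):
                minimal.append(frozenset(s))
    return minimal

# The slide's example: p1, p2 and the rule p3 together support the claim.
p1 = "oilCompany(BP)"
p2 = "goodPerformer(BP)"
p3 = (frozenset({p1, p2}), "goodInvestment(BP)")
```

Running `simple_arguments({p1, p2, p3}, "goodInvestment(BP)")` yields the single support {p1, p2, p3}, matching the simple argument on the slide.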
21. rebut and undercut for simple logic
For simple arguments A and B, we consider the following type of simple attack:
∙ A is a simple undercut of B if there is a simple rule α1 ∧ · · · ∧ αn → β in Support(B)
and there is an αi ∈ {α1, . . . , αn} such that Claim(A) is the complement of αi
∙ A is a simple rebut of B if Claim(A) is the complement of Claim(B)
A1 = ⟨{efficientMetro, efficientMetro → useMetro}, useMetro⟩
A2 = ⟨{strikeMetro, strikeMetro → ¬efficientMetro}, ¬efficientMetro⟩
A3 = ⟨{govDeficit, govDeficit → cutGovSpending}, cutGovSpending⟩
A4 = ⟨{weakEconomy, weakEconomy → ¬cutGovSpending}, ¬cutGovSpending⟩
21
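Both attack types only inspect the support and claim of the arguments involved, so they can be sketched as plain functions over the pair encoding used above (arguments as `(support, claim)` pairs; the "~" negation prefix is an encoding assumption):

```python
def complement(lit):
    """Map a to ~a and ~a to a, with "~" as the negation prefix."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def simple_undercut(a, b):
    """A undercuts B: Claim(A) is the complement of a body literal of some
    simple rule occurring in Support(B)."""
    support_b = b[0]
    return any(isinstance(item, tuple) and complement(a[1]) in item[0]
               for item in support_b)

def simple_rebut(a, b):
    """A rebuts B: Claim(A) is the complement of Claim(B)."""
    return a[1] == complement(b[1])

# The four arguments from the slide.
A1 = ({"efficientMetro", (frozenset({"efficientMetro"}), "useMetro")}, "useMetro")
A2 = ({"strikeMetro", (frozenset({"strikeMetro"}), "~efficientMetro")}, "~efficientMetro")
A3 = ({"govDeficit", (frozenset({"govDeficit"}), "cutGovSpending")}, "cutGovSpending")
A4 = ({"weakEconomy", (frozenset({"weakEconomy"}), "~cutGovSpending")}, "~cutGovSpending")
```

Here A2 undercuts A1 (its claim negates the body of A1's rule), while A3 and A4 rebut each other (their claims are complementary).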
26. argument graphs
The flight is low cost and luxury, therefore it is a good flight
A flight cannot be both low cost and luxury
A1 = ⟨{lowCostFly, luxuryFly, lowCostFly ∧ luxuryFly → goodFly}, goodFly⟩
A2 = ⟨{¬(lowCostFly ∧ luxuryFly)}, ¬lowCostFly ∨ ¬luxuryFly⟩
26
28. Artificial Intelligence 77 (1995) 321–357
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games
Phan Minh Dung
[Dun95]
28
29. Definition 1
A Dung argumentation framework AF is a pair
⟨A, → ⟩
where A is a set of arguments, and → is a binary relation on A i.e. →⊆ A × A.
29
30. A semantics is a way to identify sets of arguments (i.e. extensions)
“surviving the conflict together”
30
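With arguments as strings and the attack relation as a set of pairs, the most basic requirement on such sets, conflict-freeness, is a one-line check. A minimal sketch (the example framework is hypothetical):

```python
def conflict_free(s, attacks):
    """S is conflict-free iff no argument in S attacks an argument in S."""
    return not any((x, y) in attacks for x in s for y in s)

# Hypothetical AF: a and b attack each other, b attacks c.
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
```

For this framework, {a, c} is conflict-free while {a, b} is not.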
35. (some) semantics properties
∙ Conflict-freeness (Def. 2)
∙ Admissibility (Def. 5)
∙ Strong-Admissibility (Def. 7)
∙ Reinstatement (Def. 8)
if you defend some argument you should take it on board (∅ satisfies the principle
only if there are no unattacked arguments)
∙ I-Maximality (Def. 9)
∙ Directionality (Def. 12)
35
36. (some) semantics properties
∙ Conflict-freeness (Def. 2)
∙ Admissibility (Def. 5)
∙ Strong-Admissibility (Def. 7)
∙ Reinstatement (Def. 8)
∙ I-Maximality (Def. 9)
no extension is a proper subset of another one
∙ Directionality (Def. 12)
36
37. (some) semantics properties
∙ Conflict-freeness (Def. 2)
∙ Admissibility (Def. 5)
∙ Strong-Admissibility (Def. 7)
∙ Reinstatement (Def. 8)
∙ I-Maximality (Def. 9)
∙ Directionality (Def. 12)
a (set of) argument(s) is affected only by its ancestors in the attack relation
37
38. complete extension (def. 15)
Admissibility and reinstatement
Set of conflict-free arguments s.t. each defended argument is included
(Example argumentation framework with arguments a–h; attacks shown graphically in the original slide.)
{a, c, d, e, g},
{a, b, c, e, g},
{a, c, e, g}
38
39. grounded extension (def. 16)
Strong Admissibility
Minimum complete extension
(Example argumentation framework with arguments a–h; attacks shown graphically in the original slide.)
{a, c, e, g}
39
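The grounded extension can be computed as the least fixpoint of the characteristic function F(S) = {a | S defends a}: start from the empty set and repeatedly add every argument all of whose attackers are attacked by the current set. Since the slide's attack relation is only shown graphically, the sketch below uses a hypothetical chain a → b → c instead:

```python
def grounded(args, attacks):
    """Grounded extension as the least fixpoint of the characteristic
    function F(S) = {a | every attacker of a is attacked by some d in S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    s = set()
    while True:
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in s) for b in attackers[a])}
        if nxt == s:          # fixpoint reached
            return s
        s = nxt

# Hypothetical chain: a attacks b, b attacks c.
chain = {("a", "b"), ("b", "c")}
```

On the chain, the first iteration accepts the unattacked a, and the second reinstates c (its attacker b is attacked by a), giving {a, c}.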
40. preferred extension (def. 17)
Admissibility and maximality
Maximum complete extensions
(Example argumentation framework with arguments a–h; attacks shown graphically in the original slide.)
{a, c, d, e, g},
{a, b, c, e, g}
40
41. stable extension (def. 17)
“horror vacui”: the absence of odd-length cycles is a sufficient condition for the
existence of stable extensions
Complete extensions attacking all the arguments outside
(Example argumentation framework with arguments a–h; attacks shown graphically in the original slide.)
{a, c, d, e, g},
{a, b, c, e, g}
41
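The stability condition is easy to test directly, and a hypothetical three-argument odd cycle shows why stable extensions may fail to exist at all (the "horror vacui" remark above). A brute-force sketch:

```python
from itertools import chain, combinations

def is_stable(s, args, attacks):
    """S is stable iff S is conflict-free and attacks every argument not in S."""
    cf = not any((x, y) in attacks for x in s for y in s)
    return cf and all(any((x, a) in attacks for x in s) for a in args - s)

# Hypothetical odd cycle: a -> b -> c -> a.
args = {"a", "b", "c"}
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
subsets = list(chain.from_iterable(combinations(args, k)
                                   for k in range(len(args) + 1)))
```

No subset of the odd cycle is stable, whereas an even cycle a ⇄ b has two stable extensions, {a} and {b}.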
42. complete labellings (def. 20)
An argument is IN if all its attackers are OUT
An argument is OUT if at least one of its attackers is IN
Otherwise it is UNDEC
42
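The three conditions above can be checked exhaustively: try every assignment of IN/OUT/UNDEC to the arguments and keep the legal ones. A brute-force sketch (exponential, for small frameworks only; the mutual-attack example is hypothetical):

```python
from itertools import product

def complete_labellings(args, attacks):
    """All complete labellings of AF = (args, attacks), by brute force."""
    args = sorted(args)
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    result = []
    for combo in product(("IN", "OUT", "UNDEC"), repeat=len(args)):
        lab = dict(zip(args, combo))
        legal = True
        for a in args:
            atts = attackers[a]
            if lab[a] == "IN" and not all(lab[x] == "OUT" for x in atts):
                legal = False          # IN requires all attackers OUT
            elif lab[a] == "OUT" and not any(lab[x] == "IN" for x in atts):
                legal = False          # OUT requires some attacker IN
            elif lab[a] == "UNDEC" and (all(lab[x] == "OUT" for x in atts)
                                        or any(lab[x] == "IN" for x in atts)):
                legal = False          # UNDEC excludes the two cases above
            if not legal:
                break
        if legal:
            result.append(lab)
    return result

# Hypothetical mutual attack a <-> b.
labs = complete_labellings({"a", "b"}, {("a", "b"), ("b", "a")})
```

For the mutual attack there are three complete labellings: a IN / b OUT, a OUT / b IN, and both UNDEC; the all-UNDEC one corresponds to the grounded extension.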
48. properties of semantics
CO GR PR ST
D-conflict-free Yes Yes Yes Yes
D-admissibility Yes Yes Yes Yes
D-strongly admissibility No Yes No No
D-reinstatement Yes Yes Yes Yes
D-I-maximality No Yes Yes Yes
D-directionality Yes Yes Yes No
48
52. an exercise
(Exercise argumentation framework with arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p; attacks shown graphically in the original slide.)
ECO(∆) =
{a, c},
{a, c, f},
{a, c, m},
{a, c, f, m},
{a, c, f, l},
{a, c, g, m}
52
53. an exercise
(Exercise argumentation framework with arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p; attacks shown graphically in the original slide.)
EGR(∆) =
{a, c}
53
54. an exercise
(Exercise argumentation framework with arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p; attacks shown graphically in the original slide.)
EPR(∆) =
{a, c, f, m},
{a, c, f, l},
{a, c, g, m}
54
55. an exercise
(Exercise argumentation framework with arguments a, b, c, d, e, f, g, h, i, l, m, n, o, p; attacks shown graphically in the original slide.)
EST (∆) = ∅
55
56. decomposability
Artificial Intelligence 217 (2014) 144–197
On the Input/Output behavior of argumentation frameworks
Pietro Baroni, Guido Boella, Federico Cerutti, Massimiliano Giacomin, Leendert van der Torre, Serena Villata
[Bar+14]
56
58. decomposability
A semantics is:
∙ Fully decomposable (Def. 29):
∙ any combination of “local” labellings gives rise to a global labelling;
∙ any global labelling arises from a set of “local” labellings
∙ Top-Down decomposable (Def. 28):
combining “local” labellings you get all global labellings, possibly more
∙ Bottom-Up decomposable (Def. 27):
combining “local” labellings you get only global labellings, possibly less
58
61. decomposability
CO ST GR PR
Full decomposability Yes Yes No No
Top-down decomposability Yes Yes Yes Yes
Bottom-up decomposability Yes Yes No No
61
65. Implementing the Argument Web
by Floris Bex, John Lawrence, Mark Snaith, and Chris Reed
Communications of the ACM, October 2013, Vol. 56, No. 10, p. 56
[Bex+13]
65
69. Supporting Reasoning with Different Types of Evidence in Intelligence Analysis
Alice Toniolo, Anthony Etuk, Robin Wentao Ouyang, Timothy J. Norman, Federico Cerutti, Mani Srivastava, Nir Oren, Timothy Dropps, John A. Allen, Paul Sullivan
Dept. of Computing Science, University of Aberdeen, UK; University of California, Los Angeles, CA, USA; Honeywell, USA; INTELPOINT Incorporated, Pennsylvania, USA
Appears in: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), Bordini, Elkind, Weiss, Yolum (eds.), May 4–8, 2015, Istanbul, Turkey
[Ton+15]
69
70. Research question: Evaluate the Jupiter intervention on a conflict ongoing on Mars
Research hypothesis: Is the Jupiter intervention on Mars humanitarian or strategic?
Data gathering: beyond the scope of this work
Justification of possible hypotheses based on data and logic
The example considered in this version of the slides clarifies some misunderstandings raised during the
presentation and hopefully reduces the elements of controversy
70
72. sensemaking agent and walton’s argumentation schemes
Argument from Cause to Effect
Major Premise: Generally, if A occurs, then B will (might) occur.
Minor Premise: In this case, A occurs (might occur).
Conclusion: Therefore, in this case, B will (might) occur.
Critical questions
CQ1: How strong is the causal generalisation?
CQ2: Is the evidence cited (if there is any) strong enough to warrant the causal
generalisation?
CQ3: Are there other causal factors that could interfere with the production of the
effect in the given case?
72
73. (Argument map)
∙ Hypothesis 1: “Jupiter intervention on Mars is humanitarian” — supported (PRO) by “Jupiter troops deliver aid to Martians”
∙ Hypothesis 2: “Jupiter intervention on Mars aims at protecting strategic assets” — supported (PRO) by “Agreement to exchange crude oil for refined petroleum”
∙ The two hypotheses attack each other (CON in both directions)
73
74. (Argument map, extended)
As in slide 73, plus a CON argument against the humanitarian hypothesis: “Civilian casualties caused by Jupiter forces”, derived via an argument from cause to effect (LCE) from “Use of old Jupiter military doctrine causes civilian casualties” and “Large use of old Jupiter military techniques on Mars”.
74
75. (Argument map, extended)
As in slide 74, plus an attack instantiating critical question CQ2 against the causal argument: “There is no evidence to show that the cause occurred”.
75
76. (Argument map, extended)
As in slide 75, plus a CON counter-attack on the CQ2 objection: “Use of massive aerial and artillery strikes”.
76
80. (Argument map)
Same argument map as in slide 76.
80
85. belief revision and argumentation
Potential cross-fertilisation
Argumentation in Belief Revision
∙ Justification-based truth maintenance
system
∙ Assumption-based truth maintenance
system
Some conceptual differences:
in revision, external beliefs are
compared with internal beliefs and,
after a selection process, some
sentences are discarded, other
ones are accepted. [FKS09]
Belief Revision in Argumentation
∙ Changing by adding or deleting an
argument.
∙ Changing by adding or deleting a set of
arguments.
∙ Changing the attack (and/or defeat)
relation among arguments.
∙ Changing the status of beliefs (as
conclusions of arguments).
∙ Changing the type of an argument (from
strict to defeasible, or vice versa).
85
86. argument mining
Computational Models of Argument, B. Verheij et al. (Eds.), IOS Press, 2012, doi:10.3233/978-1-61499-111-3-454
Generating Abstract Arguments: a Natural Language Approach
Elena Cabrio and Serena Villata
[CV12]
Computational Models of Argument, S. Parsons et al. (Eds.), IOS Press, 2014, doi:10.3233/978-1-61499-436-7-185
Towards Argument Mining from Dialogue
Katarzyna Budzynska, Mathilde Janier, Juyeon Kang, Chris Reed, Patrick Saint-Dizier, Manfred Stede, and Olena Yaskorska
[Bud+14]
86
88. argumentation and humans
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence
Providing Arguments in Discussions Based on the Prediction of Human Argumentative Behavior
Ariel Rosenfeld and Sarit Kraus
Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
rosenfa5@cs.biu.ac.il, sarit@cs.biu.ac.il
[RK15]
Providing Arguments in Discussions on the Basis of the Prediction of Human Argumentative Behavior
Ariel Rosenfeld and Sarit Kraus, Bar-Ilan University
ACM Transactions on Interactive Intelligent Systems, Vol. 6, No. 4, Article 30, Publication date: December 2016.
[RK16]
88
90. natural language interfaces
ECAI 2014 207
T. Schmtb et at. {Eris}
2014 The Amhors and 105 Press.
This arrirrle is pub1'ishecI rmline with Open Ar:(:e.s.' by 10.5‘ Prams and,’ dr'.':rr'bute'r1 under the terms
qf'.'he Creatiw Comirmnts Arriibminn Non—C0mmercim' License.
a'r>.":i0.3233/978-I-61499-419-0-207
Formal Arguments, Preferences, and Natural Language
Interfaces to Humans: an Empirical Evaluation
Federico Cerutti and Nava Tintarev and NirOren1
[CTO14]
90
How can we create a human-understandable interface to defeasible reasoning, so as to
guarantee that human users will agree with the result of the automated reasoning
procedures?
91
92. a1 : σA ⇒ γ
a2 : σB ⇒ ¬γ
a3 : ⇒ a1 ≺ a2
First Scenario
a1: Alice suggests moving in together with Jane
a2: Stacy suggests otherwise because Jane might have a hidden agenda
a3: Stacy is your best friend
Agreement (%): a1 12.5 · a2 68.8 · don’t know 18.8
Second Scenario
a1: TV1 suggests that it will rain tomorrow
a2: TV2 suggests that tomorrow will be cloudy but it will not rain
a3: TV2 is generally more accurate than TV1
Agreement (%): a1 5.0 · a2 50.0 · don’t know 45.0
92