A New Paradigm for Alignment Extraction
Christian Meilicke & Heiner Stuckenschmidt
University of Mannheim
Research Group Data and Web Science
Ontology Matching (surely not a complete picture)
Analyse labels
• Normalize and split the labels attached to concepts and properties
• Aggregate token-specific results to derive similarities for labels
Generate Candidates
• Interpret label similarities as confidence scores of mapping hypotheses
Refine Candidates
• Use the structure of the ontologies to refine the confidence scores of the hypotheses (e.g., similarity flooding)
Select Final Alignment
• Apply a threshold to select the final alignment from the hypotheses
• Use logical reasoning to filter out correspondences that result in incoherence
Proposed Approach
• Generate hypotheses about both
  • mappings between ontological entities
  • equivalence assumptions about linguistic entities
• Define a joint optimization problem (with the help of Markov Logic) in which the linguistic equivalence assumptions and the mappings between ontological entities are kept mutually consistent, i.e., mapping sets like the following are not allowed:
map(1#AcceptedContribution, 2#AcceptedContribution)
map(1#ReviewedContribution, 2#ReviewedPaper)
map(1#Contribution, 2#Paper)
Markov Logic (simplified)
• A probabilistic formalism for attaching weights (=> probabilities) to first-order formulas
• Given a set of weighted (soft) formulas and a set of hard formulas, the MAP state is the most probable subset of the weighted formulas that
  • satisfies the hard formulas
  • maximizes the weights attached to the soft formulas
• Due to the underlying log-linear model, the MAP state S is the subset that is optimal with respect to the sum of the weights of those formulas that are true in S
• The problem can be transformed into an ILP (Integer Linear Program); RockIt uses this approach to compute the MAP state efficiently (a brute-force sketch follows below)
Three types of entities
• Linguistic entities
• Tokens: 2:Acceptedt, 2:Rejectedt, 2:Contributiont
• Labels: 2:AcceptedContribution
• (Onto)logical entities (concepts, roles, attributes):
• 2#AcceptedContribution
• A label can consist of several tokens
• A logical entity can have several labels
• Or several labels can be generated from one label
[Figure: logical entities carry labels, and labels consist of tokens; labels and tokens together form the linguistic entities]
Token equivalences as weighted atoms
• Specify weights between -1.0 and 0.0; the higher the weight, the more likely it is that the two tokens are equivalent
• Example:
equivt(1:Documentt, 2:Documentt), 0.0
equivt(1:Contributiont, 2:Contributiont), 0.0
equivt(1:Reviewedt, 2:Reviewedt), 0.0
equivt(1:Acceptedt, 2:Acceptedt), 0.0
equivt(1:Contributiont, 2:Papert), -0.9
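How such weights are obtained is not part of the formalism. As a hedged sketch (the similarity measure actually used by the approach is not specified here), any string similarity in [0, 1] can be shifted into the interval [-1.0, 0.0]:

```python
import difflib

def token_weight(t1: str, t2: str) -> float:
    """Map a string similarity in [0, 1] to a weight in [-1.0, 0.0]."""
    sim = difflib.SequenceMatcher(None, t1.lower(), t2.lower()).ratio()
    return sim - 1.0  # identical tokens -> 0.0, dissimilar tokens -> close to -1.0

print(token_weight("Document", "Document"))             # 0.0
print(round(token_weight("Contribution", "Paper"), 2))  # -0.88, unlikely equivalent
```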
From Tokens to Labels (hard formulas)
• Use hard formulas to describe which tokens occur in which labels at
which position
• Example:
• has2Token(2:AcceptedContribution)
• pos1(2:AcceptedContribution, 2:Acceptedt)
• pos2(2:AcceptedContribution, 2:Contributiont)
[Figure: relation between a label and its tokens]
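These ground atoms can be generated mechanically from the labels. A small illustrative helper (the function name and the camel-case splitting heuristic are assumptions, not the authors' code):

```python
import re

def label_atoms(onto: str, label: str) -> list[str]:
    """Split a camel-case label into tokens and emit the hard ground atoms."""
    tokens = re.findall(r"[A-Z][a-z]*|[a-z]+", label)
    atoms = [f"has{len(tokens)}Token({onto}:{label})"]
    atoms += [f"pos{i}({onto}:{label}, {onto}:{tok}t)"
              for i, tok in enumerate(tokens, start=1)]
    return atoms

print("\n".join(label_atoms("2", "AcceptedContribution")))
# has2Token(2:AcceptedContribution)
# pos1(2:AcceptedContribution, 2:Acceptedt)
# pos2(2:AcceptedContribution, 2:Contributiont)
```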
From Labels to Logical Entities
• Use hard formulas to make explicit which labels are used to describe which entities
• Example:
• hasLabel(2#AcceptedContribution, 2:AcceptedContribution)
• Several labels might be given or generated within a preprocessing step
• E.g., if the range restriction is used as part of the original label, add a reduced label
• hasLabel(2#writesPaper, 2:writesPaper) // original
• hasLabel(2#writesPaper, 2:writes) // added
• E.g., remove "Of" and reverse the order of the tokens
• hasLabel(2#AuthorOfPaper, 2:AuthorOfPaper) // original
• hasLabel(2#AuthorOfPaper, 2:PaperAuthor) // added
[Figure: relation between a logical entity and its labels]
Main rules (I/II)
• Iff logical entities are matched, they need to have (some) equivalent labels:
  map(e1, e2) ↔ ∃l1 ∃l2 (hasLabel(e1, l1) ∧ hasLabel(e2, l2) ∧ equiv(l1, l2))
• Iff labels are equivalent, all of their tokens have to be equivalent (needs to be specified for all types of labels):
  has2Token(l1) ∧ has2Token(l2) ∧ pos1(l1, t11) ∧ pos2(l1, t12) ∧ pos1(l2, t21) ∧ pos2(l2, t22) → (equiv(l1, l2) ↔ (equiv(t11, t21) ∧ equiv(t12, t22)))
Main rules (II/II)
• 1:1 rules for tokens:
  equivt(t1, t2) ∧ equivt(t1, t3) → t2 = t3
• Positive reward for generated mappings (soft constraint; the reward is added for each instantiation):
  0.5 map(e1, e2)
Example
Is this outcome consistent with our rule set?
map(1#AcceptedContribution, 2#AcceptedContribution)
map(1#ReviewedContribution, 2#ReviewedPaper)
map(1#Contribution, 2#Paper)
No, it is not! The first mapping forces equivt(1:Contributiont, 2:Contributiont), while the other two force equivt(1:Contributiont, 2:Papert); together this violates the 1:1 rule for tokens.
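The violation can also be checked mechanically. A minimal sketch, with the token equivalences forced by each mapping written out by hand:

```python
# Token equivalences forced by the three mappings of the example above.
forced = [
    ("1:Acceptedt", "2:Acceptedt"),          # from the first mapping
    ("1:Contributiont", "2:Contributiont"),
    ("1:Reviewedt", "2:Reviewedt"),          # from the second mapping
    ("1:Contributiont", "2:Papert"),
    ("1:Contributiont", "2:Papert"),         # from the third mapping
]

partners: dict[str, set[str]] = {}
for t1, t2 in forced:
    partners.setdefault(t1, set()).add(t2)

# 1:1 rule: every token may have at most one equivalence partner
for t1, ts in sorted(partners.items()):
    if len(ts) > 1:
        print(f"violation: {t1} ~ {sorted(ts)}")
# violation: 1:Contributiont ~ ['2:Contributiont', '2:Papert']
```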
Matching n tokens on n+1 tokens
• The rule set is too strict; a mapping such as the following one can never be generated:
  equiv(1:ConferencePaper, 2:Paper)
• Allow matching 2-token labels on 1-token labels iff the modifier (the token that is not the head noun) of the 2-token label is ignored
• Ignoring a token results in a penalty; add
  -0.9 ignore(t)
• and weaken the previously mentioned rules by adding a disjunct:
  "a two-token label needs to be matched on a two-token label OR on a one-token label if the modifier is ignored"
Example
• Ontology 1 uses these concepts:
  • 1#ConferencePaper
  • 1#ConferenceFee
  • 1#ConferenceParticipant
• Ontology 2 uses these concepts:
  • 2#Paper
  • 2#Fee
  • 2#Participant
• Only the black mapping from the slide figure (a single candidate):
  • Do not ignore 1:Conferencet as modifier: no mappings are possible, score = 0.0
  • Ignore 1:Conferencet: 0.0 - 0.9 + 1 × 0.5 = -0.4
• The grey and the black mappings (all three candidates):
  • Do not ignore 1:Conferencet as modifier: no mappings are possible, score = 0.0
  • Ignore 1:Conferencet: 0.0 + 0.0 + 0.0 - 0.9 + 3 × 0.5 = 0.6
• Since ignore(1:Conferencet) is a single ground atom, the -0.9 penalty is paid only once; ignoring the modifier pays off only once enough mappings benefit from it
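The arithmetic above, spelled out as a tiny sketch (weights as on the slides; the function and its parameters are illustrative only):

```python
REWARD_PER_MAPPING = 0.5   # soft reward, added for each generated mapping
IGNORE_PENALTY = -0.9      # paid once per ignored token (single ground atom)

def score(label_weights: list[float], n_ignored_tokens: int) -> float:
    """One label-equivalence weight per generated mapping."""
    return (sum(label_weights)
            + n_ignored_tokens * IGNORE_PENALTY
            + len(label_weights) * REWARD_PER_MAPPING)

print(score([0.0], 1))            # -0.4: a single mapping does not pay off
print(score([0.0, 0.0, 0.0], 1))  #  0.6: three mappings share one ignore penalty
```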
Integrating logical reasoning
• By adding the rule set used by CODI (for example), the coherence of the generated alignment can be ensured*
  • E.g.: map(e1, e2) ∧ map(d1, d2) ∧ sub(e1, d1) → ¬dis(e2, d2)
• This can have an impact on the equivalences on the linguistic layer, which can in turn have an impact on parts of the mapping that were not directly affected by the logical constraint!
* ... not quite correct: many logical conflicts are taken into account; however, the rule set is not complete.
Some more adjustments ...
• Generate multiple labels out of one (see the sketch after this list)
  • E.g., if the range of 1#writesPaper is 1#Paper, assume that 1:writesPaper and 1:writes are labels of 1#writesPaper
  • For 1#AuthorOfPaper, also add the label 1:PaperAuthor
• Allow matching 3-token labels on 2-token labels with some penalty, if all of the tokens from the 2-token label match
• Only match properties that have a domain and a range if their domain and range are matched
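A hedged sketch of the label-generation step from the first bullet (function names and the splitting heuristic are assumptions, not the authors' implementation):

```python
import re

def split_camel(label: str) -> list[str]:
    return re.findall(r"[A-Z][a-z]*|[a-z]+", label)

def expand_labels(label: str, range_token: str | None = None) -> list[str]:
    """Generate additional labels out of one, mirroring the adjustments above."""
    tokens = split_camel(label)
    labels = [label]
    # drop a trailing token that repeats the range: writesPaper -> writes
    if range_token and tokens[-1] == range_token:
        labels.append("".join(tokens[:-1]))
    # remove "Of" and reverse the token order: AuthorOfPaper -> PaperAuthor
    if "Of" in tokens:
        i = tokens.index("Of")
        labels.append("".join(tokens[i + 1:] + tokens[:i]))
    return labels

print(expand_labels("writesPaper", range_token="Paper"))  # ['writesPaper', 'writes']
print(expand_labels("AuthorOfPaper"))                     # ['AuthorOfPaper', 'PaperAuthor']
```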
Experimental Setup
• Applied to the OAEI conference track
• Why not to the other tracks?
  • Exponential runtime: the approach will not terminate for ontologies with more than 1000 logical entities (... this also depends on some other factors)
  • It would be applicable to some of the benchmark tracks; however, due to their automated generation, tokens that appear as parts of labels are not replaced by synonyms (and are not suppressed)
• MAMBA @ OAEI 2015 = this approach
  • However, there is lots of room for improvement when going from an experimental prototype to a robust matching system
  • Sorry for the painful installation that some OAEI organizers had to experience
Conclusions
• Proposed a new method for lexical ontology matching, but is it a new
paradigm?
• Good results (given the fact that the input similarity is rather weak)
• Achieves "consistent" results
  • Consistent w.r.t. the relevant underlying assumptions
• Behaves (sometimes) like a human
• Is in a certain way very simple
• Is very hard to use in practice
• Uses a bunch of parameters
• Horrible runtimes for larger problems (exponential)
• At least, it is worth thinking about