Understanding 
Figurative-Language 
from the Inside-Out 
Tony Veale, 2014
In this age of the Internet Of Things, in which even everyday objects can be accessed and inter-linked via the Web, it is all too easy to forget that words are the original inter-linkable things. For words too are objects, with their own forms and prescribed uses, public identities, private associations and preferred contexts of use. The Belgian surrealist René Magritte famously exploited the duality of words – to refer to objects and to be objects in their own right – in his provocative inter-plays of word and image. In his surrealist manifesto on the use of words in pictures, entitled Les Mots et Les Images (1929), Magritte identified a magician’s box of tricks for creatively exploiting the tenuous connection between words and what they describe.
An object is not so wedded to its name that we cannot find a more suitable one for it. 
The first claim in Magritte’s manifesto stands out as being particularly relevant to the study of figurative language. Words are so useful precisely because they allow us to refer to objects and to ideas, but this mapping is neither inevitable nor immutable. As Magritte showed, the interface between words, objects and images is a malleable one, especially in the hands of a creative thinker. In a given context the most conventional word for an object may be the last word we wish to use. In this context a word for an altogether different object may actually serve our purposes, and communicate our meanings, with far greater force, clarity and resonance.
The philosopher Ludwig Wittgenstein attributed many problems in philosophy to words: philosophers, he said, unconsciously create ambiguity and confusion by taking words on holiday, by taking them from their home contexts where they make intuitive sense and transplanting them to new and exotic contexts where the local customs are different and where their meanings are subtly stretched. But this is what we consciously do when we use language creatively: we deliberately take our words on holiday, allowing them to take leave of their conventional senses so that they can show us a new and exciting side of themselves.
Figurative Devices, like Metaphor, offer a fast and flexible (if often unconventional) means of conveying meaning from speaker to speaker, or from context to context … 
… when it goes right! 
When it goes wrong, our meanings are like lost luggage: mangled, misplaced, irretrievable, their contents often dangerously misunderstood.
We use words figuratively to evoke other words and to bring other meanings into play: Magritte captured this insight in his surrealist manifesto, Les Mots Et Les Images, thusly: “A word can replace an object in reality” and “An object makes one suppose there are other objects behind it.” 
Consider this evocative example of figurative language from the Guardian newspaper, describing the movie director Sam Mendes: “Appearance: like the painting in George Clooney’s attic.” 
Clearly, we shall need to look behind the words, and past their associated objects, to see what is being said about the target individual Sam Mendes in this figurative comparison. 
Sam Mendes
George Clooney 
Dorian Gray? 
Picture 
Attic 
What pictures might George Clooney keep in his attic? Does he even have an attic? 
The question is pointless because the comparison does not refer to a real picture or a real attic. 
Rather, the mention of “picture” and “attic” in the same sentence will put one in mind of Oscar Wilde’s famous morality tale, The Picture of Dorian Gray. 
Gray, a gilded youth, stays eternally young and unblemished while his portrait, hidden in the attic, accrues the ravages of time and debauchery in his sinful stead.
If George Clooney is Dorian Gray, then the painting in his attic 
is a painting of a time-ravished Mr. Clooney, not one of Mr. Gray 
We must invent a new version of the tale to understand this figure. 
In the theory of researchers Mark Turner & Gilles Fauconnier, this comparison prompts a conceptual blend of past knowledge (the Dorian Gray tale) with topical gossip about Clooney. 
Ultimately, Mendes is not being compared to Clooney as we know him, 
but to one part of the conceptual blend of Clooney and Dorian Gray, and the unflattering part at that! 
In other words, Mendes looks the way Clooney deserves to look (but does not). 
The added implication is that Mendes once looked like Clooney, before his looks fell prey to the depredations of time. 
What is truly remarkable about this blend is how effortlessly we process it. 
A throw-away piece of snark in a gossip column mobilizes the full machinery of our intellect and we hardly even notice.
Quick … 
Alert the media! 
It is convenient to think of metaphor as performing a unidirectional information transfer, 
in which knowledge from a source concept is transferred to, and projected onto, a target concept. 
But metaphors can also make information flow in both directions simultaneously. Consider this old remark about media guru Arianna Huffington (founder of the news blog The Huffington Post): 
“She is the greatest social climber since Icarus.” 
Notice how Huffington and Icarus are each described as social climbers in this metaphor, forcing us to update our perceptions of both. We must now view Icarus as a social climber of sorts too. 
Arianna Huffington & Icarus are effectively blended into one ambitious person in this metaphor.
Figurative language exploits the way our conceptual structures are wired together, and even permits us to rewire our conceptual systems, by allowing us to connect seemingly unrelated ideas in persuasive new ways. As with computer cables, which come in different sizes and bandwidths, a figurative connection can carry a single piece of significant information or a large amount of related information in parallel. A scientific analogy, for instance, will establish a whole system of coherent mappings between a source and a target domain, while a humorous simile may build a wonderfully detailed source picture to convey just one small piece of knowledge.
Consider Shakespeare’s oft-quoted metaphor from Romeo and Juliet: “Juliet is the sun.” This metaphor conveys more than the mere radiance of Juliet’s beauty (as perceived by Romeo). It underpins an altogether grander system of metaphoric mappings that runs throughout the play. In this metaphorical solar system, Juliet is perceived as the gravitational center around which all the other characters are destined to orbit. 
We can choose to view this metaphor as a thin-pipe that conveys a single piece of information, or a fat-pipe that conveys the systematic richness of a solar-system metaphor.
Structure-Mapping Accounts of Analogy are computationally well-understood 
(Falkenhainer, Forbus & Gentner, 1989; Holyoak & Thagard; Hofstadter & FARG) 
The Bohr/Rutherford Atom as Solar-System Analogy
Many creative similes and humorous pseudo-analogies use a thin-pipe to convey a message. These figures typically require us to build a complex image of the source-domain, to infer some significant quality from this mental image, and to project this single quality into the target domain. 
Consider this pseudo-analogical simile from George Carlin: “Having a smoking section in a restaurant is like having a peeing section in a swimming pool”. Now, we can construct a fat-pipe between the domains of restaurants and pools, to see the similarities between how smoke spreads through air and pee spreads through water. But the only significant quality we need to project to appreciate Carlin’s simile is this: the latter is clearly stupid, and so the former is too!
Elaborate similes often use additional content to create mood, where mapping is optional: so we can choose to use a fat pipe to interpret such a simile, or savor its effect through a thin pipe. 
Consider this artful simile from a master of the form, Raymond Chandler: “Even on Central Avenue … he looked about as inconspicuous as a tarantula on a slice of angel food.” We can choose to map “he” (meaning Moose Molloy, a hulking white man) to “tarantula” and “Central Avenue” (a main thoroughfare in a black neighborhood) to “angel food”, but notice how the color scheme is playfully inverted (white on black → black on white). Chandler aims to shock with this simile, to convey just how out-of-place Molloy must seem to the wary residents of Central Avenue.
Knowledge is knowing that a tomato is a fruit 
BUT 
Wisdom is knowing not to put it in a fruit salad 
Where does all the relevant knowledge come from? And of what kind is it? 
Metaphor, Simile, Analogy and other figurative forms are knowledge-hungry devices. In fact, creative language exploits heterogeneous types of knowledge: 
Propositional knowledge (actions, events, behaviors, tendencies, norms) 
Property-level knowledge (category membership criteria, expectations) 
Semantic knowledge (e.g. dictionaries) vs Pragmatic knowledge (e.g. corpora)
In 1888, in their house at Arles, Vincent Van Gogh and Paul Gauguin painted the same topic: 
a chair. The resulting paintings are very different, both in composition and in chair-ness. 
Van Gogh’s chair is humble, unpretentious and brightly lit, while Gauguin’s is somber and ornate. Do these differing visions of a simple everyday object reveal deep differences in psychology? 
Each chair is very clearly a chair, yet each differs significantly from the other. What is the minimal set of qualities any chair should possess? What is the most prototypical chair? And shouldn’t we know what to expect from everyday categories before we can employ them creatively? 
Chair 
Ear
Though it is often convenient to think of a category as simply a set of all of its members, 
the mathematical idea of a set is not nuanced enough to capture the textures of human categories. 
For some members of a category will stand out as being more obvious, or typical, or representative members of the category than others. Just think of the category BIRD. 
Some members of the category are more representative of bird-ness than others. What are the first birds that come to your mind? Birds that are small, pretty, gaily feathered, singing their songs in the branches of a nearby tree? Well, not all birds are like these birds. 
Cognitive Psychologists (such as Eleanor Rosch) and Cognitive Linguists (such as George Lakoff) argue that human categories are radial in nature, with the most representative members sitting at the centre, and with less typical members arranged at varying distances along the radius.
E. Rosch, G. Lakoff 
“Radial” Categories 
See e.g. George Lakoff’s (1987) book, “Women, Fire and Dangerous Things”. 
A prototypical bird (for many this will be a robin, a sparrow, a thrush, a cuckoo etc.)
Have you ever seen the game show Family Fortunes (called Family Feud in the US)? 
Teams of family members are asked to provide examples of a given question category, such as A KIND OF BIRD, and are scored on the representativeness of their answers. 
How does the game assess representativeness? By posing the same question to 100 members of the public, and by counting the number of times the same answers are given in response. The most representative answers will be the ones that are provided most often in these public surveys. 
So Family Fortunes was really testing our knowledge of radial categories, and our ability to explore the center region of each category. The center of a radial category really is the bull’s-eye!
Less Representative … 
… More Peripheral
A more popular, and challenging, variant of Family Fortunes is a game called Pointless. 
Teams are still asked to provide exemplars of a given category, such as A KIND OF BIRD, but players are now scored on the obscurity, or anti-representativeness, of their answers. 
How can we assess anti-representativeness? We can again pose the same question to 100 members of the public, and again count the number of times an answer is given in response. The least representative answers will be the ones that are given least often, but at least once. 
So Pointless is really testing our knowledge of the outer reaches of our radial categories, where the most obscure, the least typical and the most creative and surprising members reside.
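The two scoring rules can be sketched side by side. The survey counts below are hypothetical stand-ins for the 100-person polls the shows actually use:

```python
from collections import Counter

# Hypothetical survey: 100 people asked to name A KIND OF BIRD.
survey = Counter({"robin": 34, "sparrow": 28, "eagle": 15,
                  "penguin": 12, "ostrich": 8, "kakapo": 2, "cassowary": 1})

def family_fortunes_score(answer):
    """Representativeness: how many respondents gave this answer."""
    return survey.get(answer, 0)

def pointless_score(answer):
    """Anti-representativeness: the lowest count wins, but the answer must
    still be valid (given at least once); invalid answers score the max."""
    count = survey.get(answer, 0)
    return count if count > 0 else 100

best_fortunes = max(survey, key=family_fortunes_score)   # the category's centre
best_pointless = min(survey, key=pointless_score)        # the category's periphery
print(best_fortunes, best_pointless)   # robin cassowary
```

The same count table thus drives both games: Family Fortunes rewards the centre of the radial category, Pointless rewards its outermost valid rim.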
Atypical Problem Cases
If Pointless generates more laughs per category than Family Fortunes/Feuds, then 
a good deal of the humour arises from the category members themselves. For the most peripheral members of a category are usually the last ones to come to mind, and reside in the grey areas of our common-sense knowledge where our safest generalizations easily break down. 
We all know that bats aren’t birds, but ask someone to describe the color and size of a bat’s egg and – for a moment at least – the question seems a perfectly natural one. 
Words that name categories evoke strong expectations that the members we have in mind will resemble the most prototypical members, but jokes thrive on violating such expectations.
Appropriate Incongruity 
Essential to Humor Theories: see Victor Raskin (1985), Elliott Oring (2003) 
State bird of Transylvania? 
State bird of Minnesota?
Accessing the Web in real-time to acquire texts dynamically, or to test linguistic hypotheses on the fly, can be a time-consuming business that is frowned upon by most Web search engines. 
The Google n-grams corpus is a vast collection of text snippets from the Web – an n-gram is a contiguous sequence of n words or tokens – that can be quickly searched in a local database. 
Each Google n-gram is between 1 and 5 tokens long (1 ≤ n ≤ 5) and has a minimum frequency of 40 occurrences (case-sensitive counts apply) on the World-Wide Web. 
Large bodies of text – called text corpora – are a rich (if mostly implicit) source of practical real-world knowledge. 
The texts of the Web are a constantly growing source of both trending topics and tacit common-sense knowledge. 
Web corpora have many advantages as a source of knowledge for a computer.
A Web n-gram corpus can be viewed as a Lexicalized Idea Space 
Style! 
The Google N-Grams is a vast database of recurring Web-text fragments 
n-gram 
Web n-grams attest to the viability of many combinations of words & ideas
I’m Salt 
“salt and pepper” (724,197 hits) 
The linguist J.R. Firth once remarked that “You shall know a word by the company it keeps.” 
Words are found in the company of many other words in large text corpora, but the strongest associations produce observable patterns of co-occurrence. Thus, “salt” and “pepper” denote very similar kinds of things, and are very frequently found in each other’s company, not least because they share a common category (condiment) and are often found together in the real world. 
Coordination patterns – like “salt and pepper” or “angels and demons” or “knives and forks” – offer valuable insights into the category structures that motivate these co-occurrence patterns. 
So we can use the Google n-grams to look at coordinations of proper names, substances, and bare plural nouns, to determine how words and ideas come together to form radial categories. 
and I’m Pepper
I’ll take “disasters” for 12, please Alex. 
How can we turn co-ordination patterns into Radial Categories? 
Coordinations like “doctors and lawyers” or “tables and chairs” tell us that two associated ideas will, in some contexts, be co-resident in the same category. The set of words/ideas that co-occur with “tables” in coordination 3-grams will collectively form an implicit furniture & furnishings category, just as the words/ideas that co-ordinate with “disasters” will form an implicit category of catastrophic events. 
For a given term T we can gather the coordinated terms with which it co-occurs in Google’s “and” 3-grams. We can sort these other terms by their similarity to T, using a WordNet-based similarity metric. The resulting set will have a radial structure, with the most T-like ideas clustering closest to T, and the least T-like keeping their distance.
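The sorting step can be sketched as follows. Both the coordination partners and the mini-taxonomy are hypothetical toy stand-ins for the Google 3-grams and a WordNet-based similarity metric:

```python
# Toy hypernym table standing in for WordNet's noun hierarchy.
HYPERNYMS = {
    "table": ["furniture", "artifact"], "chair": ["furniture", "artifact"],
    "desk": ["furniture", "artifact"], "lamp": ["fixture", "artifact"],
    "graph": ["diagram", "representation"],
}

def similarity(x, y):
    """Toy Wu-Palmer-style score: shared ancestors over total ancestors."""
    a, b = set(HYPERNYMS.get(x, [])), set(HYPERNYMS.get(y, []))
    return len(a & b) / len(a | b) if a | b else 0.0

def radial_category(t, partners):
    """Order T's coordination partners from centre (most T-like) outward."""
    return sorted(partners, key=lambda p: similarity(t, p), reverse=True)

# Terms that co-occur with "table" in "... and ..." 3-grams:
print(radial_category("table", ["graph", "lamp", "chair", "desk"]))
# ['chair', 'desk', 'lamp', 'graph']
```

The most T-like partners (chair, desk) land at the centre of the implicit furniture category, while weakly related partners (graph) drift to its periphery.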
My sunglasses are a burqa for my eyes. A burqa for a man. 
Dictionaries tell us how words should be used, while corpora 
tell us how we actually use them. Likewise, a hierarchical system of word-senses such as WordNet enumerates the senses that words conventionally possess, as well as the categories they conventionally reside in, while corpora show us how speakers actually use them. 
A corollary of Magritte’s manifesto is that no word is so wedded to its sense or its reference that it cannot be given new ones in the right context. Consider Karl Lagerfeld’s description of his sunglasses as a burqa for his eyes – a privacy guard to shield against other people. Lagerfeld is here using the word “burqa” not in its specific Islamic sense, but as a prototype of the broader category of dark, privacy-protecting accessories. Sunglasses are in this category too, he says. 
Karl Lagerfeld 
Fashion Designer, 
Zoolander-esque fashion supervillain
Aristotle (in his Poetics) saw metaphor as a question of categories and taxonomies: a metaphor applied the name of one category of things to another 
Computer scientists have found this a very attractive view, to apply to IS-A hierarchies and ontologies (e.g. Wilks 1978; Way 1991; Fass 1991; Veale 2003)
Eileen Cornell Way (1991) argues metaphor needs a Dynamic Type Hierarchy (DTH) 
Psychologists also champion this view: 
see Glucksberg (2008) 
Category Inclusion
Psychologist Sam Glucksberg sees metaphor as a way of creating and expanding categories. 
A metaphoric turn of the form “X is Y” is not an identity statement but a category inclusion statement. Y stands in place of a category Y’ in which Y is a prominent exemplar, and X is asserted to be a member of this category Y’ as well. The challenge for a computer scientist is to determine, given X and Y, the implied category Y’ that unites X and Y under a meaningful category umbrella. 
We are unlikely to find this category Y’ to be an existing category in a conventional category system like that provided by WordNet or other standard lexical ontologies. As Eileen Cornell Way has argued, Y’ is likely a dynamic category that is created on the fly for the metaphor X is Y. 
Nonetheless, a system like WordNet can be used as a comprehensive foundation of static categories, on which new dynamic categories can be imposed as needed/created.
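Glucksberg's category-inclusion view can be sketched in miniature. The salient-property table below is a hypothetical toy stand-in for a real knowledge-base; the dynamic category Y’ is built from the salient properties of Y that X can plausibly share:

```python
# Hypothetical salient-property table (a toy stand-in for real knowledge).
SALIENT = {"shark": {"predatory", "aggressive", "finned"},
           "lawyer": {"aggressive", "argumentative", "litigious"}}

def dynamic_category(x, y):
    """For the metaphor 'X is Y', the implied dynamic category Y' is built
    from the salient properties of Y that X can plausibly inherit."""
    return SALIENT.get(y, set()) & SALIENT.get(x, set())

print(dynamic_category("lawyer", "shark"))   # {'aggressive'}
```

For “lawyers are sharks”, Y’ is the on-the-fly category of aggressive things, of which the shark is a prominent exemplar and the lawyer a newly asserted member.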
Taxonomic Ideal 
[Diagram: {LETTER} with isa-children {ALPHA}, {BETA}, {GAMMA}, {DELTA} (Greek) and {ALEPH}, {BETH}, {GIMEL}, {DALETH} (Hebrew)] 
E.g. if Alpha is “The 1st letter of the Greek Alphabet”, what is the Hebrew Alpha? 
We could certainly use a DTH here, as WordNet often lacks category structure where it counts. 
What new dynamic categories can we add to facilitate the mapping of Greek → Hebrew letters?
Taxonomic Ideal 
[Diagram: the same hierarchy, now with a new dynamic category {1ST_LETTER} whose isa-children are {ALPHA} (“The 1st letter of the Greek alphabet”) and {ALEPH} (“The 1st letter of the Hebrew alphabet”)] 
WordNet glosses: WordNet provides a dictionary-like text definition (or gloss) for each of its word-sense entries. We can lift salient terms out of these glosses to create new dynamic categories for 2 or more senses. 
Here the term “1st” is salient because of its position in each gloss, and because it is shared by another WordNet sense at the same depth under {letter}.
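The gloss-lifting step can be sketched with a regex over sense glosses. The gloss table below is a hypothetical stand-in for WordNet's actual entries:

```python
import re
from collections import defaultdict

# Hypothetical gloss table standing in for WordNet sense glosses.
GLOSSES = {
    "alpha": "the 1st letter of the Greek alphabet",
    "aleph": "the 1st letter of the Hebrew alphabet",
    "beta":  "the 2nd letter of the Greek alphabet",
    "beth":  "the 2nd letter of the Hebrew alphabet",
}

def dynamic_letter_categories(glosses):
    """Lift the salient ordinal term out of each gloss to build dynamic
    categories like {1ST_LETTER}, each uniting 2 or more senses."""
    cats = defaultdict(set)
    for sense, gloss in glosses.items():
        m = re.search(r"\b(\d+(?:st|nd|rd|th)) letter\b", gloss)
        if m:
            cats[m.group(1).upper() + "_LETTER"].add(sense)
    # Keep only categories shared by 2+ senses: a lone member is not pivotal.
    return {c: members for c, members in cats.items() if len(members) >= 2}

print(dynamic_letter_categories(GLOSSES))
```

Each resulting category, such as {1ST_LETTER}, cross-links a Greek sense with its Hebrew counterpart and so licenses the Alpha ↔ Aleph mapping.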
Taxonomic Ideal 
[Diagram: {LETTER} now dominates the dynamic categories {GREEK_LETTER} and {HEBREW_LETTER}, while {1ST_LETTER}, {2ND_LETTER} and {3RD_LETTER} cross-link {ALPHA} with {ALEPH}, {BETA} with {BETH}, and {GAMMA} with {GIMEL}] 
The categories {Greek_letter} and {Hebrew_letter} are created in the same way 
Veale (2003), The Analogical Thesaurus
[Diagram: {DEITY, GOD} dominates {GREEK_DEITY} (with isa-children {ZEUS}, {ATHENA}, {GAEA}, …) and {HINDU_DEITY} (with isa-children {VARUNA}, {GANESH}, {SHIVA}, …); a new dynamic category {WISDOM_DEITY} unites {ATHENA} (gloss: “goddess of wisdom and …”) and {GANESH} (gloss: “god of wisdom or prophecy”) – same depth, common parent, hence pivotal] 
Dynamic categories can be inferred in many areas of the WordNet category system 
Analogical Thesaurus, Veale (2003)
Can we measure the effectiveness of dynamic categories? 
Consider the challenge of mapping from the Greek alphabet to the Hebrew alphabet (or vice versa). Clearly we would hope that the 1st letter of the Greek alphabet is mapped to the 1st letter of the Hebrew alphabet, and not the 5th or 19th. 
Likewise, consider a mapping from Roman to Greek, or Viking to Teutonic deities, so that Mars maps to Ares, Minerva to Athena and Jupiter to Zeus amongst others. In an unadorned WordNet these mappings cannot be done with any accuracy, as the category structures do not discriminate by theme (e.g. wisdom vs. fertility gods) or by letter position (e.g. 1st versus 3rd letters). But dynamic categories can fill these gaps.
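Mapping accuracy can be scored in the usual way, as precision and recall against a gold standard. The mappings below are illustrative, not the experimental data:

```python
# Gold-standard Roman-to-Greek deity mappings vs. a system's proposals.
gold = {"Mars": "Ares", "Minerva": "Athena", "Jupiter": "Zeus", "Venus": "Aphrodite"}
proposed = {"Mars": "Ares", "Minerva": "Athena", "Jupiter": "Apollo"}  # 1 wrong, 1 missing

correct = sum(1 for src, tgt in proposed.items() if gold.get(src) == tgt)
precision = correct / len(proposed)   # correct / all proposed mappings
recall = correct / len(gold)          # correct / all gold mappings
print(precision, recall)
```

With two of three proposals correct against four gold mappings, precision is 2/3 and recall is 1/2; the tables that follow report exactly these two measures for the static and dynamic WordNet representations.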
Deity-to-Deity Mapping Task (e.g., Greek to Roman gods, Hindu to Semitic gods, etc.) 

                                              Precision    Recall 
  Static WN representations                     0.115       0.34 
  Dynamic WN representation 
  (+ gloss-feature reification)                 0.935       0.61 

Letter-to-Letter Mapping Task (i.e., Greek to Hebrew letters, and Hebrew to Greek letters) 

                                              Precision    Recall 
  Static WN representations                     0.04        0.98 
  Dynamic WN representation 
  (+ gloss-feature reification)                 0.96        0.98 
WordNet glosses are a good source of relevant dynamic categories for figurative processing, but analysis of large corpora or the texts of the Web can provide even greater depth, coverage and nuance.
The Web is a vast echo chamber of many diverse and competing voices. 
Yet these voices will converge when expressing the same tacit beliefs and expectations of the world.
Google search 
Are you feeling lucky, punk? 
Why do businesses _ 
One source of convergence on the Web is the set of frequently-posed Web queries. 
Google exploits this convergence to provide a set of natural completions for the most common Web queries. Simply enter a partial query and watch Google anticipate your information need. 
Many people still type fully-formed questions into the Google search box. WHY questions such as “why do dogs chase cats?” or “why do pirates wear eye patches?” tell us what the speaker believes, but also tell us that the speaker believes that everyone else shares this belief too. 
We can milk Google for its popular completions to common WHY questions, by providing the partial query “why do Xs” for each topic of interest X. 
Each question provides a common presupposition about the world that is shared by many of us. We coax as many completions from Google as possible for the same stub “why do Xs”
“Why” questions are a rich source of tacit norms that are widely assumed to be self-evident 
Search engine query completions are a rich source of no-brainer “Why” questions / norms 
Veale & Hao (2007) 
Q logs: Pasca & Van Durme (2007) 
Q Completions: Veale & Li (2011)
We can harvest Why-completions using a lexical trie: e.g. “why do r”, “why do re”, “why do rel”, … 
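The trie-driven harvest can be sketched as a prefix-extension loop. Here `get_completions` is a hypothetical stand-in for a live suggestion service, and the canned table below simulates its responses:

```python
def harvest(stub, get_completions, max_suggestions=10):
    """Depth-first prefix extension: whenever a prefix returns a full page of
    suggestions, more may be hidden below it, so extend the prefix by one
    letter and ask again."""
    seen, frontier = set(), [stub]
    while frontier:
        prefix = frontier.pop()
        suggestions = get_completions(prefix)
        seen.update(suggestions)
        if len(suggestions) >= max_suggestions:  # results truncated: dig deeper
            frontier.extend(prefix + c for c in "abcdefghijklmnopqrstuvwxyz")
    return seen

# A canned stand-in for a live suggestion service:
FAKE = {"why do r": ["why do rabbits thump", "why do rivers flood"],
        "why do c": ["why do cats purr"]}
results = harvest("why do r", lambda p: FAKE.get(p, []))
print(results)
```

Each harvested completion is a presupposition-bearing question, and thus a candidate norm for the knowledge-base.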
Veale & Li (2011) 
Özbal & Strapparava (2012) also use Google completions as a source of tacit world knowledge
By using different search locales and search languages, we can also obtain culture-specific completions 
Thus, e.g. Google France may provide a different normative perspective on cats than Google USA
How might we exploit these common beliefs / norms to understand novel metaphors? 
Suppose we wish to interpret the metaphor “religion is business.” 
A computer can acquire a reasonable stock of norms for what constitutes a business, for instance that businesses have leaders, develop and follow strategies and aim for objectives, pay taxes etc. 
Which of these norms are most appropriately projected onto religion in this metaphor? We can use Google 2-grams to seek out contexts in which religion is associated with leadership, strategy, objectives, taxes, etc. Each such 2-gram is motivating evidence for the projection of the corresponding business norm, to achieve what can be called a mash-up of business and religion. 
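The vetting step can be sketched as a frequency filter over 2-gram counts. The counts below are illustrative placeholders, not real Google figures:

```python
# Hypothetical 2-gram counts standing in for real corpus evidence.
NGRAM_COUNTS = {("religion", "leaders"): 1200, ("religion", "strategy"): 340,
                ("religion", "taxes"): 15, ("religion", "invoices"): 0}
BUSINESS_NORMS = ["leaders", "strategy", "taxes", "invoices"]

def projectable_norms(target, norms, min_evidence=100):
    """Keep only the source-domain norms that the corpus attests for the
    target domain often enough to license a projection."""
    return [n for n in norms if NGRAM_COUNTS.get((target, n), 0) >= min_evidence]

print(projectable_norms("religion", BUSINESS_NORMS))   # ['leaders', 'strategy']
```

Only the well-attested norms survive into the religion/business mash-up; poorly attested ones (taxes, invoices) are left behind.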
http://Afflatus.UCD.ie 
Metaphor Eyes app
Metaphor Eyes app 
Veale & Li (2011)
“When you cut into the present, the future leaks out.” 
William Burroughs and Brion Gysin invented the Cut-Up Technique as a collage-like means 
of generating new texts that are unaffected by a sub-conscious obedience to cliché or convention. 
Running with scissors, Burroughs was soon cutting up and randomly re-splicing any representational form he could find, moving from newsprint to audio and video tape. 
Computationally, it is also possible to cut-up and re-combine the knowledge representations that actually underpin these forms. The result will be a blend-like conceptual cut-up, or mash-up, that splices together the representations of two concepts X and Y to interpret the metaphor X is Y. 
William Burroughs
Note the asymmetry: “religion is a business” is not the same metaphor as “business is a religion”
Similes also contain presuppositions, and Web similes open a window onto our shared stereotypes 
Veale & Hao (2007) 
A nuanced model of stereotypical beliefs can be extracted – if we can avoid the ironic similes!
Have you ever explicitly ordered a black espresso? Or a strong espresso? Or a small espresso? 
Most likely you never have, even if these are precisely the qualities you are looking for. 
The reason we don’t have to explicitly ask for these qualities is that they are already presupposed to be part of everyone’s stereotype of an espresso. We don’t ask because we shouldn’t need to. 
How might a computer learn that such qualities are salient if we never explicitly ask for them (or only ruefully note their absence)? A computer can learn to expect such qualities by noting how they are used in similes: when one says “as black as espresso” or “as strong as espresso”, the success of these similes is predicated on the shared assumption that espresso is black and strong.
To acquire stereotypical knowledge, mine simile instances from the Web: use the query pattern “as * as a|an *” to harvest tens of thousands of similes. 
brick 
peacock 
butcher 
surgeon 
lion 
sponge 
shark 
fox 
snowflake 
tiger 
puppy 
rock 
eagle 
robot 
soap opera 
oak 
espresso 
statue
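The extraction step can be sketched as a regex over text snippets. A real harvester would send the wildcard query “as * as a *” to a search engine and parse the snippets it returns; the snippets below are illustrative:

```python
import re

# Match "as ADJ as a/an NOUN" up to a punctuation boundary.
SIMILE = re.compile(r"\bas (\w+) as an? ([\w ]+?)\b[.,!?]")

def harvest_similes(text):
    """Return (adjective, vehicle-noun) pairs mined from raw text."""
    return [(adj, noun.strip()) for adj, noun in SIMILE.findall(text)]

snippets = "He was as cunning as a fox. The pitch was as flat as a pancake, sadly."
print(harvest_similes(snippets))   # [('cunning', 'fox'), ('flat', 'pancake')]
```

Each (adjective, noun) pair is a candidate stereotype association, e.g. cunning:fox, pending the irony filtering discussed below.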
Four Strategies for Exploiting Stereotypes in Similes 

                       Bona-Fide “Straight”            Ironic “Incongruous” 
  POSitive Ground      E.g., as sharp as a razor       E.g., as subtle as a sledgehammer 
  (high-affect ADJ) 
  NEGative Ground      E.g., as unpleasant as a        E.g., as blind as a hawk 
  (low-affect ADJ)     root_canal 

Analysis of syntactically-simple Web similes (with simple lexicalized vehicles) 
Observation: ~18% of simple Web similes (simple vehicles) are ironic! 
see Veale & Hao (2007)
“About” is a signal of sardonic intent in humorous similes 
“So, there I was, still single at 40, feeling about as marketable as flesh-eating bacteria” 
(Washington Post writer, Jeannie McDonald) 
“Even on Central Avenue, not the quietest dressed street in the world, he looked about as inconspicuous as a tarantula on a slice of angel food” 
(Raymond Chandler in Farewell, My Lovely) 
“They'd put you in the psycho ward, and believe me, the people who run that place are about as sympathetic as Georgia chain-gang guards” 
(Raymond Chandler in The Long Goodbye) 
ABOUT signals pragmatic insincerity (Kumon-Nakamura, Glucksberg & Brown, 1995)
Quantifying the role of “About” in figurative comparisons 

                       Bona-Fide “About”                Ironic “About” 
                       (Congruous uses: 23%)            (Incongruous uses: 77%) 
  POSitive Ground      E.g., about as happy as a        E.g., about as subtle as a 
  (high-affect ADJ)    rabbit in a carrot patch         fat man in speedos 
  NEGative Ground      E.g., about as repulsive as      E.g., about as cold as the 
  (low-affect ADJ)     a monkey in a negligee           centre of the Sun 

Harvest double-hedged Web similes with the query: “about as ADJ as a NOUN” 
Analysis of 20,299 complex “about” similes from the Web (a complex simile is one whose vehicle may be a syntactically complex phrase, as above) 
see Veale & Hao (2010)
“About” Signals both Creativity and a Sardonic World-view 

                            Bona-Fide “About”      Ironic “About” 
                            (Generic claims)       (Incongruous claims) 
  HIGH affect: (+) grounds       12.3 %                 72.7 % 
  LOW affect: (-) grounds         8.6 %                  6.3 % 

* Based on a set of 8,789 similes with discernibly HIGH/LOW grounds in the DoA 
Use Whissell’s Dictionary of Affect (DoA) to characterize the +/- affect of similes 
We conclude that “About” is a strong marker of a sardonic perspective (81%) 
Veale & Hao (2010); see Veale (2013)
The cleverest and most effective similes are re-used over and over on the Web, for the Web is a vast echo-chamber for new ideas and for new turns of phrase. 
A given simile that strikes a reader as novel may in fact already occur many times across the Web. 
Some of these existing occurrences may use the “about” form – that is, they may be double-hedged, while other uses of the same core simile may only be single-hedged. 
Any given simile may be found on the Web in the “about” form, but a Web-simile is dominant in the “about” form if most of its occurrences on the Web use the “about” form. 
The Web provides enough evidence then to consider whether the “about” marker in similes is the linguistic equivalent of a sly nod or a wink to an audience, to subtly signal the presence of irony or a creative, sardonic intent.
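The found/dominant distinction reduces to simple arithmetic over occurrence counts. The counts below are hypothetical, not Web data:

```python
def about_profile(with_about, without_about):
    """Classify a simile's Web profile: 'found' if it ever occurs in the
    "about" form, 'dominant' if that form accounts for most occurrences."""
    total = with_about + without_about
    found = with_about > 0
    dominant = (with_about > total / 2) if total else False
    return found, dominant

# Hypothetical Web counts for one simile:
print(about_profile(with_about=64, without_about=21))   # (True, True)
```

Dominance in the “about” form, rather than a lone hedged occurrence, is what the next table treats as the strong signal of sardonic intent.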
Exploiting “About” as a cue for irony in Unmarked Similes 

                                Bona-Fide               Ironic 
                                (Generic claims)        (Incongruous claims) 
  Found in “about” form*        10 % (1,246 similes)    43 % (1,188 similes) 
  Dominant in “about” form*      2 % (208 similes)      40 % (1,031 similes) 

* Found = found on the Web; Dominant = the “about” form accounts for more than half of all occurrences 

Similes are often reused on the Web, with and without the “about” marker 
So “About” is the lexical equivalent of a wink, raised eyebrow or sardonic tone 
see Hao & Veale (2010)
Every proverb has an equally wise anti-proverb. “A little knowledge is a dangerous thing” is thus tempered with the optimism of “From little acorns mighty oak trees grow.” 
The tacit common-sense knowledge that can be acquired from the Web certainly qualifies as a little knowledge, given the scale of the knowledge possessed by an average human adult. Fortunately, this little knowledge is an ideal starting point from which to grow a successively larger knowledge-base of common-sense norms and beliefs. 
The process we use is an iterative one called Web bootstrapping: we use the knowledge we do possess to frame hypotheses about the knowledge we do not yet possess, and validate or refute these hypotheses on the Web. Any validated knowledge is then a basis for further bootstrapping. 
Stand back … 
I’ve no idea how big I’m gonna grow.
Simile associations provide an excellent seed from which to grow a rich knowledge-base. 
For instance, Web similes tell us (and our computers) that foxes are cunning, that espresso is black and strong, that whiskey is likewise strong, that mummies are dry, silk is soft, and so on. 
These associations are landmarks in a conceptual landscape relative to which many other points on the landscape can also be identified. What other animals are commonly considered cunning? Which other beverages are black, or strong? What other materials are soft? 
We construct a triple from each of these simile-derived associations, but leave the third part of the triple blank, as similes do not explicitly identify a category for the topic being described. This third part can be identified later, during the first stage of bootstrapping on the Web.
Form Initial Triples from Simile Associations
It takes knowledge to acquire knowledge, for it takes insight to pose a meaningful question. 
For instance, if we know that Caviar is expensive, we can ask just what kind of expensive item is it? 
The simile pattern is frequently used for ironic ends. To sidestep irony we need a bootstrapping pattern that is very rarely used ironically. The “M-Xs such as Ys and Zs” construct is such a pattern. 
We can re-express Y=Caviar is M=expensive as the Web query “expensive * such as Caviar and *” to find a value for X (the category of Caviar) and for Z (another expensive item like Caviar). 
Suppose we learn that Caviar is an expensive food, and that Salmon is too. We can now use the association Salmon is an expensive food in further bootstrapping, and so on and on.
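The query construction and triple completion just described can be sketched in a few lines of Python. The helper names and the mock snippet below are illustrative, not part of any published system:

```python
import re

def bootstrap_query(adj, topic):
    """Build the double-anchored Web query for a seed association
    like (caviar, expensive): "expensive * such as caviar and *"."""
    return f'"{adj} * such as {topic} and *"'

def parse_snippet(adj, topic, snippet):
    """Extract (category X, co-member Z) pairs from text matching
    the pattern "ADJ Xs such as TOPIC and Z"."""
    pattern = rf'{adj}\s+(\w+)s\s+such\s+as\s+{topic}\s+and\s+(\w+)'
    return re.findall(pattern, snippet, re.IGNORECASE)

# A triple with an unknown category, completed by a (mock) Web snippet:
print(bootstrap_query("expensive", "caviar"))
hits = parse_snippet("expensive", "caviar",
                     "gourmets adore expensive foods such as caviar and salmon")
print(hits)  # [('food', 'salmon')] -> new triple: (salmon, expensive, food)
```

Each hit both completes the original triple (caviar is an expensive food) and yields a fresh association (salmon is an expensive food) to drive the next cycle.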
Acquiring Fine-Grained Perspectives with Double-Anchored Queries 
[Diagram: the double-anchored query pattern "Adj Noun(s) such as X(s) and Y(s)", with * wildcards anchoring the X and Y slots; e.g., "expensive foods such as salmon [and champagne]" and "expensive foods such as caviar [and salmon]"] 
Veale, Li & Hao (2009); Kozareva, Riloff & Hovy (2008); Hearst (1992)
Each bootstrapping cycle builds on and extends the knowledge gains of the previous cycle. 
The first cycle uses the simile associations (with incomplete triples) to generate bootstrapping queries that will both complete each triple and also find alternate fillers for the same triples. 
The subsequent cycle generates new bootstrapping queries from these newly-acquired alternative fillers/triples, to acquire yet more new triples from the Web. Acquisition is thus a targeted process. 
The knowledge-base grows geometrically with each cycle, over a thousand-fold during five cycles.
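The cycle-over-cycle control flow can be sketched as follows. `web_matches` and its canned results are stand-ins for real Web queries, and `is_sensible` is a hook for the vetting step discussed on the next slide:

```python
# Canned stand-in for Web search results: (adj, topic) -> {(category, co-member)}
CANNED = {
    ("expensive", "caviar"): {("food", "salmon")},
    ("expensive", "salmon"): {("food", "champagne"), ("fish", "tuna")},
    ("expensive", "champagne"): {("drink", "cognac")},
}

def web_matches(adj, topic):
    return CANNED.get((adj, topic), set())

def bootstrap(seeds, cycles, is_sensible=lambda t: True):
    """Grow (topic, adj, category) triples from seed (topic, adj) pairs."""
    known, frontier = set(), set(seeds)
    for _ in range(cycles):
        new_frontier = set()
        for topic, adj in frontier:
            for category, other in web_matches(adj, topic):
                triple = (topic, adj, category)
                if is_sensible(triple) and triple not in known:
                    known.add(triple)
                    new_frontier.add((other, adj))  # fuels the next cycle
        frontier = new_frontier  # acquisition stays targeted
    return known

print(sorted(bootstrap({("caviar", "expensive")}, cycles=3)))
```

Because every newly acquired co-member becomes a query in the next cycle, the frontier (and the knowledge-base) can grow geometrically.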
Bootstrapping queries on the Web: Rapid Growth of Knowledge 
Starting from a trusted solid foundation of detailed viewpoints, 
use bootstrapping over web-content to acquire more and more … 
[Diagram: knowledge radiates outward from the Seed through the 1st, 2nd and 3rd bootstrapping cycles] 
Kozareva, Riloff and Hovy (2008); Veale, Li & Hao (2009)
Bootstrapping grows a knowledge-base at a rapid rate, since each existing association spurs the acquisition of many more in the next cycle. Bootstrapping is a knowledge-magnification process. 
However, the process is not immune to noise, which can cause it to acquire dubious or nonsensical triples. This noise will be magnified many times over in subsequent cycles. Garbage in, garbage out. 
It is thus essential that newly acquired triples are carefully vetted, and that noise is filtered after each cycle, lest it metastasize wildly (and prompt many unnecessary queries to the Web).
Noise Removal Strategies between Bootstrapping Cycles 
Kozareva, Riloff & Hovy outline a variety of graph-based metrics for noise detection and filtering. 
Veale, Li & Hao use a coarse WordNet-based filter to remove dubious triples between cycles. 
Noise / Nonsense accumulates rapidly in a bootstrapping system on the Web!
Every bootstrapped triple represents an attested fine-grained categorization of a given topic. 
These fine-grained categories are radial. If the same triple is found again and again for a topic, then this topic is deemed to be a highly representative member of the corresponding radial category. 
Bootstrapping is a productive means of growing a large number of fine-grained radial categories, and of growing the membership of these categories by identifying attested members on the Web. 
We have constructed a Web service called Thesaurus Rex that delivers these categorizations on demand for a given topic. The size of a category name conveys the representativeness of the topic.
creativity 
Veale & Li (2013) 
see Afflatus.UCD.ie
Good metaphors draw out latent similarities between their topics and their vehicles. 
A creative individual spies a curious resemblance between two objects or ideas, and constructs an appropriate metaphor to help others see this otherwise overlooked similarity too. 
Thesaurus Rex allows its users to explore the hidden or conventionally unnoticed similarities between concepts by intersecting the set of radial categories that they both reside in. 
For instance, by identifying the fine-grained categorizations that can be applied to both creativity and to leadership (attested on the Web), we can see the many tacit connections between the two.
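The intersection step itself is simple. A sketch with invented category sets (the real ones come from Web bootstrapping, as above):

```python
# Toy fine-grained categories per topic; invented for illustration only.
CATEGORIES = {
    "creativity": {"valuable skill", "human trait", "scarce resource",
                   "learnable skill", "competitive advantage"},
    "leadership": {"valuable skill", "human trait", "management quality",
                   "learnable skill", "competitive advantage"},
}

def shared_perspectives(topic_a, topic_b):
    """Tacit connections = the radial categories both topics reside in."""
    return sorted(CATEGORIES[topic_a] & CATEGORIES[topic_b])

print(shared_perspectives("creativity", "leadership"))
# ['competitive advantage', 'human trait', 'learnable skill', 'valuable skill']
```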
creativity & leadership 
Veale & Li (2013)
Even ideas which seem like complete opposites may share some fascinating categorizations. 
For opposites complement each other and thus form a larger categorical whole. 
Consider the concepts of birth and death. A wealth of shared categorizations for these naturally antagonistic processes are identified via Web bootstrapping for Thesaurus Rex (overleaf). 
For instance, both are natural processes and both are major, irreversible events. Each can be a stressful yet meaningful event, though each is also a universal experience that is often marked as a legal event, a historical event and a special occasion.
divorce & war 
birth & death
Words are tools that we too often assume possess just a single prescribed functionality. 
An important function of metaphor is to reveal the secondary functions of our words, to show that the ideas conveyed by two very different words can share some surprising similarities. 
Since metaphor facilitates our recognition of the similar in the dissimilar, it may contribute to our sense of similarity overall. Can Thesaurus Rex’s categories enhance a general sense of similarity? 
Measures of the semantic similarity of two words (and their meanings) are usually evaluated against the gold standard of Miller & Charles (M&C): 30 word pairs ranked by human similarity judgments. 
C’mon and see!
1. car - automobile  2. gem - jewel  3. journey - voyage  4. boy - lad  5. coast - shore 
6. asylum - madhouse  7. magician - wizard  8. midday - noon  9. furnace - stove  10. food - fruit 
11. bird - cock  12. bird - crane  13. tool - implement  14. brother - monk  15. crane - implement 
16. lad - brother  17. journey - car  18. monk - oracle  19. cemetery - woodland  20. food - rooster 
21. coast - hill  22. forest - graveyard  23. shore - woodland  24. monk - slave  25. coast - forest 
26. lad - wizard  27. chord - smile  28. glass - magician  29. rooster - voyage  30. noon - string 
Miller & Charles (1991) Lexical similarity Gold-Standard of 30 word pairs 
WordNet + Thesaurus Rex: 0.93 correlation with M&C human ratings 
see Veale & Li (2013) for implementation of similarity measure using T. Rex
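Evaluation against M&C typically means ranking the word pairs by each measure and computing a rank correlation such as Spearman's rho. A stdlib-only sketch with toy scores (not the actual M&C ratings or Thesaurus Rex outputs):

```python
def ranks(xs):
    """Ascending ranks, 1-based; no tie handling in this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(human, system):
    """Spearman's rho via the classic sum-of-squared-rank-differences formula."""
    n = len(human)
    rh, rs = ranks(human), ranks(system)
    d2 = sum((a - b) ** 2 for a, b in zip(rh, rs))
    return 1 - 6 * d2 / (n * (n * n - 1))

human  = [3.92, 3.84, 3.76, 0.42, 0.08]  # toy human ratings for five pairs
system = [0.95, 0.90, 0.80, 0.20, 0.10]  # toy similarity-measure scores
print(spearman(human, system))  # 1.0: this toy system ranks the pairs perfectly
```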
A representation of a stereotypical concept is more than just a bag of salient features. 
The features that make up our stereotypical view of a concept are not random and disjoint, but connected and overlapping. Features reinforce each other, imply each other, and evoke each other. 
To appreciate the degree to which the features of a stereotype relate to each other, we look at how they reinforce each other in a simile with multiple grounds, like “as hot and humid as a jungle”. 
In general, similes with the double-ground form X is “as P1 and P2 as a Y” attest to the relationship between P1 and P2. The more often that P1 and P2 support each other in an attested simile, the more likely that one will evoke the other in a descriptive context. 
By mining double-ground similes on the Web we build a matrix of mutually-reinforcing properties. 
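The simile-mining step can be sketched with a single regular expression. The corpus sentences below are invented:

```python
import re
from collections import Counter

# Matches double-ground similes of the form "as P1 and P2 as a/an Y".
SIMILE = re.compile(r'\bas\s+(\w+)\s+and\s+(\w+)\s+as\s+an?\s+\w+', re.IGNORECASE)

def mine(texts):
    """Count co-occurring property pairs; direction matters (P1 evokes P2)."""
    pairs = Counter()
    for text in texts:
        for p1, p2 in SIMILE.findall(text):
            pairs[(p1.lower(), p2.lower())] += 1
    return pairs

corpus = ["It was as hot and humid as a jungle in there.",
          "The curry was as hot and spicy as a vindaloo.",
          "Nothing is as hot and humid as a greenhouse in July."]
print(mine(corpus))  # Counter({('hot', 'humid'): 2, ('hot', 'spicy'): 1})
```

The resulting counts are exactly the cells of the adjacency matrix shown on the next slide.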
What? 
Hey!
Learn how Stereotypical Properties Suggest and Imply Together 
Double-anchored query “as * and * as” to acquire associations 
Adjacency matrix of mutually-reinforcing properties acquired from WWW: 
          hot   spicy  humid  fiery  dry   sultry  … 
hot       ---    35     39      6     34     11    … 
spicy      75   ---      0     15      1      1    … 
humid      18     0    ---      0      1      0    … 
fiery       6     0      0    ---      0      0    … 
dry         6     0      0      0    ---      0    … 
sultry     11     1      0      2      0    ---    … 
…           …     …      …      …      …      …    … 
Veale & Li (2012) 
Any given property (e.g. cunning) will be highly connected to related properties.
Very few properties are either wholly positive or wholly negative. There are shades of grey. 
Consider the property cunning, whose local network of reinforcing properties (with which it occurs in double-ground similes) is illustrated on the previous page. 
There are positive aspects to being cunning, as possession of this property implies a quickness of thought and a subtlety of action. We might be pleased to have our plans described as cunning. 
Suppose we take a standard off-the-shelf affective lexicon that tells us which properties are quite positive (high pleasantness rating) and which are quite negative (high unpleasantness rating). 
We color our property-to-property graph accordingly, with quite positive properties in blue, quite negative properties in red, and everything else (neither very positive nor very negative) in white. 
Why so blue? 
You make me see red
Obviously positive words in blue 
Obviously negative words in red
Birds of a feather flock together. Misery loves company. Peas in a pod. Lie down with dogs … 
If we can know a word by the company it keeps, it is intuitive to assume that the lexical affect of a word will tend to reflect that of its neighbors. Happy words flock together. Sad words love company. 
In our property-to-property graph, the positive neighbors of a word X are the blue nodes to which it is directly linked. Let’s call this set N+(X), and let’s call the set of X’s red negative neighbors N-(X). 
Every edge in our graph can be considered a context in which X is used: a positive context will link X to a positive word (a blue node); a negative context will link X to a negative word (a red node). 
The positivity of a word/node X can be estimated as the proportion of positive contexts in which it appears: |N+(X)| / ( |N+(X)| + |N-(X)| )
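Computed directly, with a toy neighbor graph and invented polarity seeds:

```python
# Toy property graph and seed polarity labels, invented for illustration.
GRAPH = {"cunning": {"sly", "clever", "devious", "smart", "sneaky"}}
POSITIVE = {"clever", "smart"}           # "blue" nodes
NEGATIVE = {"sly", "devious", "sneaky"}  # "red" nodes

def pos(x):
    """Positivity = proportion of X's polar neighbors that are positive."""
    n_pos = GRAPH[x] & POSITIVE  # N+(X)
    n_neg = GRAPH[x] & NEGATIVE  # N-(X)
    return len(n_pos) / (len(n_pos) + len(n_neg))

print(pos("cunning"))  # 0.4: cunning leans negative, but has positive shades
```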
Veale & Li (2012) 
For 99.6% of positive exemplars (1309 of 1314), pos(x) > neg(x) 
pos(X) = |N+(X)| / |N+(X) ∪ N-(X)|
We need to get out more and meet new people 
Positivity & Negativity allow for shades of affect rather than a binary blue/red distinction. 
The negativity of a word/node X can be estimated as the proportion of negative contexts in which it appears: |N-(X)| / ( |N+(X)| + |N-(X)| ) 
Thus, pos(x) + neg(x) = 1 so pos(x) = 1 – neg(x) and neg(x) = 1 – pos(x) 
A property like cunning will thus possess shades of positivity and negativity (if more of the latter) 
We expect obviously pleasant words to have a positivity greater than negativity (pos(x) > neg(x)) And we expect quite unpleasant words to have negativity greater than positivity (neg(x) > pos(x)) 
We can thus test the basic intuition underpinning pos(x) and neg(x) by checking that obviously pleasant/unpleasant words in an affect lexicon are appropriately shaded as more or less blue/red.
neg(X) = |N-(X)| / |N+(X) ∪ N-(X)| 
For 98.1% of negative exemplars (1359 of 1385), neg(x) > pos(x) 
Veale & Li (2012)
For there is nothing either good or bad, but thinking makes it so. 
Speak for yourself, mate! 
Different contexts (such as metaphors) can draw out the subtle affective shades of a word. 
The word “baby” evokes our nuanced stereotype of a human BABY, but in the right context, such as “You are my baby”, we focus on just the positive nuances of this stereotype. 
Another figurative context, such as “You are such a baby!”, emphasizes the negatives of BABY. 
We thus need to be able to “spin” a stereotype representation on demand, to focus on just the positive nuances or just the negative nuances to suit the metaphor being interpreted. 
For convenience, let’s refer to the negative aspects as –Baby and the positive aspects as +Baby.
Stereotypical Baby properties (163 in all): bawling, screaming, weak, angelic, soft, delicate, whining, whimpering, sniveling, adorable, drooling, innocent, warm, sobbing, peaceful, wailing, heartwarming, cute, lovable, cranky, indulged, mewling, … 
[Word cloud: the same Baby properties, partitioned by affect] 
+Baby e.g. “She’s my baby” 
-Baby e.g. “He’s such a baby”
Figurative “spin” requires an ability to affectively slice stereotypes in context, on demand. 
If we represent a stereotype as a set of properties with shades of positivity and negativity, we shall need to partition this set into a subset of properties {p} for which pos(p) > neg(p), and a disjoint subset of properties for which neg(p) > pos(p). 
We can use an off-the-shelf affect lexicon to judge the results of these partitions for each stereotype. 
Positive recall is dented whenever a positive property is mistakenly assumed to be negative. Positive precision is dented whenever a negative property is placed into the positive partition. 
Encouraging macro-averaged precision & recall scores across 6,230 stereotypes are presented overleaf.
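A sketch of the partition-and-score step, with invented scores. Note that since pos(p) + neg(p) = 1, the condition pos(p) > neg(p) is simply pos(p) > 0.5:

```python
# Invented positivity scores for a handful of Baby properties, and a toy
# gold-standard affect lexicon to score the partition against.
POS_SCORE = {"cute": 0.9, "lovable": 0.95, "cranky": 0.2,
             "whining": 0.1, "warm": 0.8}
GOLD_POSITIVE = {"cute", "lovable", "warm"}

def partition(properties):
    """Split properties into positive (pos > 0.5) and negative subsets."""
    pos_part = {p for p in properties if POS_SCORE[p] > 0.5}
    return pos_part, set(properties) - pos_part

def precision_recall(predicted, gold):
    tp = len(predicted & gold)
    return tp / len(predicted), tp / len(gold)

pos_part, neg_part = partition(POS_SCORE)
p, r = precision_recall(pos_part, GOLD_POSITIVE)
print(pos_part, neg_part)
print(p, r)  # 1.0 1.0 on this toy example
```

Precision is dented when a negative property lands in the positive partition; recall is dented when a positive property is mistakenly assumed to be negative.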
Average P/R/F1 scores for the affective retrieval of positive and negative properties from 6,230 stereotypes 
Macro Average (6230 stereotypes)   Positive properties   Negative properties 
Precision                                 .962                  .98 
Recall                                    .975                  .958 
F-Score                                   .968                  .968 
Avg. 6.51 properties per stereotype 
Veale & Li (2012)
How do we model the relationship between metaphors and feelings?
Metaphors do more than convey propositions: they convey feelings about those propositions. 
A good metaphor resonates with emotion, so much so that listeners resonate at the same frequency. 
How might a computer, with no emotions of its own, appreciate the feelings evoked by a metaphor? 
What, for instance, are the emotional resonances of a property like bloody? And what are the resonances of a concept that is stereotypically bloody, such as a butcher or a murderer? 
We can once again use similes as a guide, or rather, our property-to-property graph derived from double-ground Web similes. So what feeling-heavy properties does bloody evoke? 
The obvious properties are those properties p that can be expressed thusly: “I feel p-ed by …” 
Note to self: 
Next time just go with corpus analysis!
“I feel disgusted by” “I feel appalled by …” 
First Model the Relationship Between Properties and Feelings 
Patterns of co-description in similes reveal how properties make us feel 
            disgusting  terrifying  revolting  disturbing  frightening  appalling  exciting 
bloody           8           4          3           3            2           2         2 
vile            34           0          1           5            0           0         0 
filthy          14           0          4           0            0           0         0 
bizarre         11           1          2           0           11           1         1 
horrible        11           3          0           1            2           0         0 
horrid          10           0          2           0            1           0         0 
…                …           …          …           …            …           …         … 
Veale (2013) 
Using simple morphology rules we can identify properties that correspond directly with the expression of feelings
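The morphology rules can be as simple as two suffix rewrites, mapping an “-ing” property onto the feeling expressed as “I feel X-ed by …”. A sketch (real coverage would need more rules, e.g. for doubled consonants):

```python
def feeling_of(prop):
    """Map an '-ing' property to its 'I feel X-ed by' feeling, if any."""
    if not prop.endswith("ing"):
        return None                  # not transparently a feeling word
    stem = prop[:-3]                 # "disgusting" -> "disgust"
    if stem.endswith("y"):
        return stem[:-1] + "ied_by"  # "terrifying" -> "terrified_by"
    return stem + "ed_by"            # "disgusting" -> "disgusted_by"

for p in ("disgusting", "terrifying", "appalling", "exciting", "bloody"):
    print(p, "->", feeling_of(p))
```

Properties like bloody, which are not morphologically transparent, inherit their feelings indirectly, via the graph neighbors that are (as the next slide explains).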
Our property-to-property graph effectively defines a radial category for each property, since for every property it provides a textured set of other properties that it evokes and reinforces. 
So, given even a small set of morphologically-transparent mappings from properties to feelings, we can assume that other properties in the same radial category will evoke those feelings too, albeit to a lesser extent, in line with the centrality of those other properties in its radial categories. 
Thus, because bloody is frequently found in double-ground similes with disgusting (see table on previous page) we can assume that bloody will likewise evoke the feeling disgusted_by. 
For a given stereotype representation, we can map each of its properties onto one or more feelings, ordered by weights from the property-to-property graph, to get a textured space of possible feelings. The space of feelings for the metaphor art is a challenge is shown overleaf.
Output of Metaphor Magnet service: Veale & Li (2012)
Successful metaphors are the common currency of a language. We use them continuously, as linguistic tender with well-understood meanings, feelings and communicative functions. 
The most common metaphors are so conventionalized that we hardly recognize them as metaphors. 
A landmark analysis of conventional metaphors is provided in George Lakoff and Mark Johnson’s 1980 book Metaphors We Live By. These metaphors are pervasive and cognitively entrenched. 
A rich source of the most common X is Y metaphors is the Google n-grams database. An analysis of Web 4-grams reveals a wealth of copula metaphors for a broad swathe of everyday concepts. 
We use these time-worn figures as a basis for understanding novel metaphors. If, then, Apple is said to be a religion, shouldn’t we understand how religions are commonly described using metaphor?
Veale & Li (2012) 
Apple is a religion 
We can mine commonplace metaphors from Google n-grams 
... and extrapolate to new target words & concepts
What if a wide assortment of metaphors could be created on demand for a given target concept? 
What if metaphors could be created by a dedicated service – a Web service, such as Metaphor Magnet? 
Veale & Li (2012) 
Veale (2013)
Metaphor Magnet is a Web app / service that generates and analyzes metaphors on demand. 
Users may access it via a browser, and apps may access it via a URL interface that returns XML. 
Since a metaphor may be used to place a positive or negative spin on a topic, and since this spin will certainly affect any interpretation, users may prefix a topic with + or – to indicate an explicit spin. 
Suppose we ask Metaphor Magnet to provide metaphors that elaborate on the basic conceit that Apple is a -religion. Metaphor Magnet will retrieve common metaphors for religion from n-grams. 
It will filter these metaphors by affect (negative in this case), aggregate the various properties transferred by each (e.g. depraved, authoritarian, threatening, pernicious, etc.) and seek out corpus evidence (in the Google n-grams) to support each of these being applied to the topic Apple.
On the Web: Metaphor Magnet
Metaphor Magnet uses its ability to generate a textured feeling space to emotionally explain the selected 
metaphor Apple is a dogmatic cult
Metaphor Magnet uses the Google n-grams to retrieve the negative metaphors (below right) for –prison, and transfers the associated properties and stereotypes into the domain of love (below left)
The Horror genre is filled with metaphors and conceptual blends, which allow stories to 
create chimerical characters that straddle the boundaries of antagonistic categories. 
Thus, vampires & zombies straddle the categories of living & dead. Ghosts straddle the categories of tangible & intangible. Golems & gargoyles straddle the categories of animate & inanimate. Werewolves (and to an extent, vampires) straddle the categories of human & animal. 
These blends are unnerving not just because they force a union of opposites. They unnerve us because they give rise to surprising emergent properties present in neither of the input concepts. 
Well, I am half man and half canine … 
So I must be my own best friend!
Do you remember the horror movie The Fly? Are you a fan of David Cronenberg’s 1980’s body horror, or do you perhaps prefer the creepy Vincent Price original? 
Seth Brundle, a mad scientist of sorts, develops a Star Trek-like matter transporter, but when he tests the device on himself, his DNA is accidentally scrambled with that of a common housefly. 
Brundle’s cells become a genetic cut-up of human and fly DNA so bizarre it might have come from a William Burroughs novel (indeed, Cronenberg also adapted Burroughs’s The Naked Lunch). 
The chimerical result, Brundlefly, combines properties of Brundle and of the housefly, but exhibits emergent qualities of its own, such as malevolence, paranoia and even super-strength! 
I am big … 
It’s the movies that got small.
Fauconnier & Turner (1998) 
Fauconnier & Turner (2002) 
Veale & O’Donoghue (2000) 
Pereira (2007)
Conceptual blending is a complex cognitive process that can be applied at any level of conceptual organization. 
The cut-up process of Burroughs & Gysin is a manipulation of media whose result, when meaningful, is a conceptual blend. The mash-up of knowledge representations, seen earlier in Metaphor Eyes, is a metaphor-oriented blending process. 
We now define another simple model of blending, directed at the property level of stereotype representation. Consider the metaphor love is the grave. Properties of love may combine with stereotypical properties of grave in the resulting blend. 
A phrasal blend is an attested phrase “M H” in which M denotes a property of one concept and H denotes a property of another. For instance, “cold embrace” is a phrasal blend of the property cold of grave and embracing of love. 
We can retrieve attested phrases of this kind (and thus, phrasal blends) from the Google n-grams for a given source & target.
[Diagram: the Love + Grave blend, in which properties of the inputs Love and Grave combine via “bridging terms” in attested Web n-grams]
Metaphor Magnet can thus generate a set of corpus-attested phrasal blends for any given pairing of Source and Target concepts, such as Love and Grave (see previous page). 
Some phrasal blends arise entirely out of a single input, such as “dreary chill” of the grave. If the Google n-grams attest that love can be both dreary and chilly, then dreary chill is projected onto it. 
Some phrasal blends arise only from a combination of both inputs and are not found in any one input alone, such as romantic darkness and gentle silence. These are emergent qualities of the blend. 
Emergent qualities may nonetheless be present in a 3rd unnamed concept: romantic darkness can arise from thunderstorm alone, and gentle silence from sigh alone, so these may also be evoked.
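A sketch of the phrasal-blending check over toy n-grams and property sets, flagging which blends are emergent (i.e., attested in neither input alone):

```python
# Toy attested n-grams and per-concept property sets, invented for illustration.
NGRAMS = {("cold", "embrace"), ("romantic", "darkness"),
          ("gentle", "silence"), ("dark", "silence")}
PROPS = {
    "love":  {"romantic", "gentle", "embrace", "warm"},
    "grave": {"cold", "dark", "silence", "darkness"},
}

def phrasal_blends(a, b):
    """Attested phrases 'M H' whose words come from the inputs' properties;
    a blend is emergent if no single input supplies both words."""
    blends = []
    for m, h in NGRAMS:
        if {m, h} <= PROPS[a] | PROPS[b]:
            emergent = not ({m, h} <= PROPS[a] or {m, h} <= PROPS[b])
            blends.append((f"{m} {h}", emergent))
    return sorted(blends)

print(phrasal_blends("love", "grave"))
# [('cold embrace', True), ('dark silence', False),
#  ('gentle silence', True), ('romantic darkness', True)]
```

Here “dark silence” projects from the grave input alone, while “cold embrace” marries a property of each input and so counts as emergent.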
Sigh 
Thunderstorm
And poetry is the cherry on top! 
The phrasal blends that are generated by Metaphor Magnet (via attested n-gram retrieval) for a given metaphor comprise a set of very evocative and surprisingly poetic phrasal descriptions. Metaphor Magnet provides a poetry-generation service to further exploit these descriptions. 
Metaphors/phrasal blends are packaged into simple poems using a model called Stereotrope: selecting from the available set of phrasal blends, and by recruiting additional concepts that are evoked by these blends (such as sigh and thunderstorm for love is the grave), Stereotrope packages each phrasal blend in one of a variety of poetic tropes, such as simile or superlative.
Stereotrope: Veale (2013) 
RobotComix.com/metaphor-magnet-acl
A (Web-) Service-Oriented Architecture is “an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented” 
Erl (2008) 
New Metaphor Services should be discoverable, autonomous and widely reusable, and should be flexible enough to compose in groups, while remaining loosely coupled to others. Services should also maintain minimal state information and use abstraction to hide the complexity of their inner workings and data.
Imagine if creativity could be delivered on a platter to any application, as and when it needed it 
The key to providing computational creativity on demand on the Web is the provision of a thriving marketplace of competing or cooperating services, each solving one small piece of the larger puzzle. 
Metaphor Magnet, Metaphor Eyes and Thesaurus Rex are just three reusable tiles in this planned mosaic of creative Web services. We need other metaphor services, and many other services besides to exploit, compose and mash-up the outputs of these three metaphor/blend/category providers. 
These new services should adhere to Erl’s principles for a well-designed service-oriented architecture. Specifically, they should be easily discoverable and should play well (inter-operably) with others.
Computational modelers of metaphor can be at the vanguard of this vision of A Creative Web 
As René Magritte outlined in his manifesto Les Mots et Les Images, words, objects and images are interoperable and interchangeable commodities: software services capable of handling metaphor and other forms of linguistic creativity can provide a solid basis for creative systems more generally. 
To do so, we must also conceive of our models as interoperable and interchangeable commodities themselves. So let’s get developing, sharing, and composing!
This tutorial has, by necessity, been highly selective. Many interesting works by many interesting researchers have unavoidably been overlooked. 
Metaphor has been the subject of intense study since antiquity. Computer scientists are late to the party but no less fascinated or enthusiastic. 
For further reading, see the bibliography, or check out RobotComix.com, or look for this book for computationally-minded readers.
Taxonomies and Metaphor 
Aristotle. (335 B.C. / 1997). Poetics. Translated by Malcolm Heath. Penguin Classics. 
Yorick Wilks. (1978). Making Preferences More Active. Artificial Intelligence 11(3):197-223. 
Dan Fass. (1991). Met*: a method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90. 
Eileen Cornell Way. (1991). Knowledge Representation and Metaphor. Studies in Cognitive systems. Kluwer Academic. 
Christiane Fellbaum. (Ed.). (1998). WordNet: An electronic lexical database. MIT Press. 
Tony Veale. (2006). An analogy-oriented type hierarchy for linguistic creativity. Journal of Knowledge- Based Systems, 19(7):471-479. 
Categorization, Prototype Theory and Metaphor 
Eleanor Rosch. (1975). Cognitive Representations of Semantic Categories. Journal of Experimental Psychology: General, 104(3):192–233. 
Selected Bibliography and Additional Readings
George Lakoff. (1987). Women, Fire and Dangerous Things. University of Chicago Press. 
Patrick Hanks. (1994). Linguistic Norms and Pragmatic Exploitations, Or Why Lexicographers need Prototype Theory, and Vice Versa. In F. Kiefer, G. Kiss, and J. Pajzs (Eds.) Papers in Computational Lexicography: Complex-1994. Hungarian Academy of Sciences, Budapest. 
Sam Glucksberg. (1998). Understanding metaphors. Current Directions in Psychological Science, 7:39-43. 
Sam Glucksberg (with Matthew McGlone). (2001) Understanding Figurative Language: From Metaphors to Idioms. Oxford University Press. 
Dirk Geeraerts. (2006). Prototype Theory: Prospects and Problems. In Dirk Geeraerts (Ed.), Cognitive linguistics: basic readings. Walter de Gruyter. 
Tony Veale. (2007). Dynamic Creation of Analogically-Motivated Terms and Categories in Lexical Ontologies. In Judith Munat (Ed.), Lexical Creativity, Texts and Contexts (Studies in Functional and Structural Linguistics), 189-212. John Benjamins. 
Tony Veale and Yanfen Hao. (2007). Making Lexical Ontologies Functional and Context-Sensitive. Proceedings of ACL 2007, the 45th Annual Meeting of the Association for Computational Linguistics, 57–64. 
Sam Glucksberg. (2008). How metaphor creates categories – quickly! In Raymond W. Gibbs, Jr. (Ed.), The Cambridge Handbook of Metaphor and Thought (chapter 4). Cambridge University Press.
Conventional Metaphors 
George Lakoff and Mark Johnson. (1980). Metaphors We Live By. University of Chicago Press. 
James H. Martin. (1990). A Computational Model of Metaphor Interpretation. Academic Press. 
Tony Veale and Mark T. Keane. (1992). Conceptual Scaffolding: A spatially founded meaning representation for metaphor comprehension, Computational Intelligence, 8(3):494-519. 
Dan Fass. (1997). Processing Metonymy and Metaphor. Contemporary Studies in Cognitive Science & Technology. New York: Ablex. 
Brian Bowdle & Dedre Gentner. (2005). The Career of Metaphor. Psychological Review, 112(1):193-216. 
John Barnden. (2006). Artificial Intelligence, figurative language and cognitive linguistics. In G. Kristiansen, M. Achard, R. Dirven, & F. J. Ruiz de Mendoza Ibanez (Eds.), Cognitive Linguistics: Current Application and Future Perspectives, 431-459. Mouton de Gruyter. 
Similes 
Archer Taylor. (1954). Proverbial Comparisons and Similes from California. Folklore Studies 3. University of California Press. 
Neal R. Norrick. (1986). Stock Similes. Journal of Literary Semantics, XV(1):39-52.
David Fishelov. (1992). Poetic and Non-Poetic Simile: Structure, Semantics, Rhetoric. Poetics Today, 14(1):1-23. 
Rosamund Moon. (2008). Conventionalized as-similes in English: A problem case. International Journal of Corpus Linguistics, 13(1):3-37. 
Tony Veale. (2013). Humorous Similes. HUMOR: International Journal of Humor Research, 21(1):3-22. 
Conceptual Blending Theory 
Gilles Fauconnier. (1994). Mental spaces: aspects of meaning construction in natural language. Cambridge University Press. 
Gilles Fauconnier and Mark Turner. (1994). Conceptual Projection and Middle Spaces. University of California at San Diego, Department of Computer Science Technical Report 9401. 
Gilles Fauconnier. (1997). Mappings in Thought and Language. Cambridge University Press. 
Gilles Fauconnier and Mark Turner. (1998). Conceptual Integration Networks. Cognitive Science, 22(2):133–187. 
Tony Veale and Diarmuid O’Donoghue. (2000). Computation and Blending. Cognitive Linguistics, 11(3- 4):253-281.
Gilles Fauconnier and Mark Turner. (2002). The Way We Think. Conceptual Blending and the Mind's Hidden Complexities. Basic Books. 
Francisco Câmara Pereira. (2007). Creativity and artificial intelligence: a conceptual blending approach. Walter de Gruyter. 
Analogy and Structure-Mapping Theory 
Dedre Gentner. (1983). Structure-mapping: A Theoretical Framework. Cognitive Science 7(2):155–170. 
Dedre Gentner and Cecile Toupin. (1986). Systematicity and Surface Similarity in the Development of Analogy. Cognitive Science, 10(3):277–300. 
Brian Falkenhainer, Kenneth D. Forbus and Dedre Gentner. (1989). Structure-Mapping Engine: Algorithm and Examples. Artificial Intelligence, 41:1-63. 
Keith J. Holyoak and Paul Thagard. (1989) Analogical Mapping by Constraint Satisfaction, Cognitive Science, 13:295-355. 
Douglas R. Hofstadter and the Fluid Analogies Research Group. (1995). Fluid Concepts and Creative Analogies. Computer Models of the Fundamental Mechanisms of Thought. Basic Books. 
Tony Veale and Mark T. Keane. (1997). The Competence of Sub-Optimal Structure Mapping on ‘Hard’ Analogies. Proceedings of IJCAI’97, the 15th International Joint Conference on Artificial Intelligence.
Lexical Analogy 
Peter D. Turney, M. L. Littman, J. Bigham & V. Shnayder. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing. 
Tony Veale. (2003). The Analogical Thesaurus. Proceedings of the 2003 Conference on Innovative applications of Artificial Intelligence, Acapulco, Mexico. Morgan Kaufmann, San Mateo, CA. 
Tony Veale. (2004). WordNet sits the S.A.T.: A Knowledge-based Approach to Lexical Analogy. Proceedings of ECAI-2004, the 16th European Conference on Artificial Intelligence. 
Peter D. Turney. (2006). Similarity of semantic relations. Computational Linguistics, 32(3):379-416. 
Metaphor and Similarity 
Mary K. Camac, and Sam Glucksberg. (1984). Metaphors do not use associations between concepts, they are used to create them. Journal of Psycholinguistic Research, 13:443-455. 
Sam Glucksberg and Boaz Keysar. (1990). Understanding Metaphorical Comparisons: Beyond Similarity. Psychological Review, 97(1):3-18.
George A. Miller and Walter. G. Charles. (1991). Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1):1-28. 
Tony Veale & Guofu Li. (2013). Creating Similarity: Lateral Thinking for Vertical Similarity Judgments. In Proceedings of ACL 2013, the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria. 
Irony 
Herbert H. Clark and Richard J. Gerrig. (1984). On the pretense theory of irony. Journal of Experimental Psychology: General, 113:121-126. 
Sachi Kumon-Nakamura, Sam Glucksberg and Mary Brown. (1995). How about another piece of pie: The Allusional Pretense Theory of Discourse Irony. Journal of Experimental Psychology: General 124:3-21 
Rachel Giora and Ofer Fein. (1999). Irony: Context and Salience, Metaphor and Symbol, 14(4):241-257. 
Yanfen Hao and Tony Veale. (2010). An Ironic Fist in a Velvet Glove: Creative Mis-Representation in the Construction of Ironic Similes. Minds and Machines, 20(4):483-488. 
Tony Veale and Yanfen Hao. (2010). Detecting Ironic Intent in Creative Comparisons. Proceedings of ECAI-2010, the 19th European conference on Artificial Intelligence.
Tony Veale. (2013). Strategies and tactics for ironic subversion. In: Marta Dynel (Ed.), Developments in Linguistic Humour Theory. John Benjamins publishing company. 
Antonio Reyes, Paolo Rosso & Tony Veale. (2013). A multidimensional approach for detecting irony in twitter. Language Resources and Evaluation 47:239--268. 
Incongruity and Humour 
Jerry M. Suls. (1972). A Two-Stage Model for the Appreciation of Jokes and Cartoons: An information- processing analysis. In J.H. Goldstein & P.E. McGhee (Eds.), The Psychology of Humor. Academic Press. 
Victor Raskin. (1985). Semantic Mechanisms of Humor. D. Reidel. 
Graeme Ritchie. (1999). Developing the Incongruity-Resolution Theory. Proceedings of the AISB Symposium on Creative Language: Stories and Humour, (Edinburgh, Scotland). 
Elliott Oring. (2003). Engaging Humor. University of Illinois Press. 
Graeme Ritchie. (2003). The Linguistic Analysis of Jokes. Routledge Studies in Linguistics, 2. Routledge. 
Tony Veale, Kurt Feyaerts and Geert Brône. (2006). The cognitive mechanisms of adversarial humor. HUMOR: The International Journal of Humor Research, 19(3):305-339.
N-Gram / Web / Corpus-derived models of linguistic norms 
Marti Hearst. (1992). Automatic acquisition of hyponyms from large text corpora. In Proc. of the 14th International Conference on Computational Linguistics, pp 539–545. 
Thorsten Brants and Alex Franz. (2006). Web 1T 5-gram Version 1. Linguistic Data Consortium. 
Adam Kilgarriff. (2007). Googleology is Bad Science. Computational Linguistics, 33(1):147-151. 
Marius Pasca & Benjamin Van Durme. (2007). What You Seek is What You Get: Extraction of Class Attributes from Query Logs. In Proc. of IJCAI-07, the 20th Int. Joint Conference on Artificial Intelligence. 
Zornitsa Kozareva, Eileen Riloff and Eduard Hovy. (2008). Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. In Proc. of the 46th Annual Meeting of the ACL, pp 1048-1056. 
Tony Veale, Guofu Li and Yanfen Hao. (2009). Growing Finely-Discriminating Taxonomies from Seeds of Varying Quality and Size. In Proc. of EACL’09, the 12th Conference of the European Chapter of the Association for Computational Linguistics pp. 835-842. 
Tony Veale and Guofu Li. (2011). Creative Introspection and Knowledge Acquisition. Proceedings of AAAI-11, The 25th AAAI Conference on Artificial Intelligence. 
Tony Veale. (2011). Creative Language Retrieval. Proceedings of ACL 2011, the 49th Annual Meeting of the Association for Computational Linguistics.
Gozde Özbal & Carlo Strapparava. (2012). A computational approach to automatize creative naming. In Proc. of the 50th annual meeting of the Association of Computational Linguistics, Jeju, South Korea. 
Web-Services and Metaphor 
Thomas Erl. (2008). SOA: Principles of Service Design. Prentice Hall. 
Tony Veale & Guofu Li. (2012). Specifying Viewpoint and Information Need with Affective Metaphors: A System Demonstration of Metaphor Magnet. In Proceedings of ACL’2012, the 50th Annual Conference of the Association for Computational Linguistics, Jeju, South Korea. 
Tony Veale. (2013). A Service-Oriented Architecture for Computational Creativity. Journal of Computing Science and Engineering, 7(3):159-167.
Tony Veale. (2013). Less Rhyme, More Reason: Knowledge-based Poetry Generation with Feeling, Insight and Wit. In Proceedings of ICCC 2013, the 4th International Conference on Computational Creativity. Sydney, Australia, June 2013. 
Computational Creativity and Metaphor 
Tony Veale. (2012). Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity. London: Bloomsbury Academic.
For more content see RobotComix.com

Tutorial on Creative Metaphor Processing

  • 5. Figurative Devices, like Metaphor, offer a fast and flexible (if often unconventional) means of conveying meaning from speaker to speaker, or from context to context … when it goes right! When it goes wrong, our meanings are like lost luggage: mangled, misplaced, or irretrievable, their contents dangerously misunderstood.
  • 6. We use words figuratively to evoke other words and to bring other meanings into play: Magritte captured this insight in his surrealist manifesto, Les Mots Et Les Images, thus: “A word can replace an object in reality” and “An object makes one suppose there are other objects behind it.” Consider this evocative example of figurative description from the Guardian newspaper, when describing the movie director Sam Mendes: “Appearance: like the painting in George Clooney’s attic.” Clearly, we shall need to look behind the words, and past their associated objects, to see what is being said about the target individual Sam Mendes in this figurative comparison. Sam Mendes
  • 7. George Clooney Dorian Gray? Picture Attic What pictures might George Clooney keep in his attic? Does he even have an attic? The question is pointless because the comparison does not refer to a real picture or a real attic. Rather, the mention of “picture” and “attic” in the same sentence will put one in mind of Oscar Wilde’s famous morality tale, The Picture of Dorian Gray. Gray, a gilded youth, stays eternally young and unblemished while his portrait, hidden in the attic, accrues the ravages of time and debauchery in his sinful stead.
  • 8. If George Clooney is Dorian Gray, then the painting in his attic is a painting of a time-ravished Mr. Clooney, not one of Mr. Gray. We must invent a new version of the tale to understand this figure. In the theory of researchers Mark Turner & Gilles Fauconnier, this comparison prompts a conceptual blend of past knowledge (the Dorian tale) and topical gossip about Clooney.
  • 9. Ultimately, Mendes is not being compared to Clooney as we know him, but to one part of the conceptual blend of Clooney and Dorian Gray, and the unflattering part at that! In other words, Mendes looks the way Clooney deserves to look (but does not). The added implication is that Mendes once looked like Clooney, before his looks fell prey to the depredations of time. What is truly remarkable about this blend is how effortlessly we process it. A throw-away piece of snark in a gossip column mobilizes the full machinery of our intellect and we hardly even notice.
  • 10. Quick … Alert the media! It is convenient to think of metaphor as performing a unidirectional information transfer, in which knowledge from a source concept is transferred to, and projected onto, a target concept. But metaphors can also make information flow in both directions simultaneously. Consider this old remark about media guru Arianna Huffington (founder of news blog The Huffington Post): “She is the greatest social climber since Icarus.” Notice how Huffington and Icarus are each described as social climbers in this metaphor, forcing us to update our perceptions of both. We must now view Icarus as a social climber of sorts too. Arianna Huffington & Icarus are effectively blended into one ambitious person in this metaphor.
  • 11. Figurative language exploits the way our conceptual structures are wired together, and even permits us to rewire our conceptual systems, by allowing us to connect seemingly unrelated ideas in persuasive new ways. As with computer cables, which come in different sizes and bandwidths, a figurative connection can carry a single piece of significant information or a large amount of related information in parallel. A scientific analogy, for instance, will establish a whole system of coherent mappings between source and target domains, while a humorous simile may build a wonderfully detailed source picture to convey just one small piece of knowledge.
  • 13. Consider Shakespeare’s oft-quoted metaphor from Romeo and Juliet: “Juliet is the sun.” This metaphor conveys more than the mere radiance of Juliet’s beauty (as perceived by Romeo). It underpins an altogether grander system of metaphoric mappings that runs throughout the play. In this metaphorical solar system, Juliet is perceived as the gravitational center around which all the other characters are destined to orbit. We can choose to view this metaphor as a thin-pipe that conveys a single piece of information, or a fat-pipe that conveys the systematic richness of a solar-system metaphor.
  • 14. Structure-Mapping Accounts of Analogy are computationally well-understood (Falkenhainer, Forbus & Gentner, 1989; Holyoak & Thagard; Hofstadter & FARG) The Bohr/Rutherford Atom as Solar-System Analogy
  • 15. Many creative similes and humorous pseudo-analogies use a thin-pipe to convey a message. These figures typically require us to build a complex image of the source-domain, to infer some significant quality from this mental image, and to project this single quality into the target domain. Consider this pseudo-analogical simile from George Carlin: “Having a smoking section in a restaurant is like having a peeing section in a swimming pool”. Now, we can construct a fat-pipe between the domains of restaurants and pools, to see the similarities between how smoke spreads through air and pee spreads through water. But the only significant quality we need to project to appreciate Carlin’s simile is this: the latter is clearly stupid, and so the former is too!
  • 16. Elaborate similes often use additional content to create mood, where mapping is optional: so we can choose to use a fat pipe to interpret such a simile, or savor its effect through a thin pipe. Consider this artful simile from a master of the form, Raymond Chandler: “Even on Central Avenue … he looked about as inconspicuous as a tarantula on a slice of angel food.” We can choose to map “he” (meaning Moose Molloy, a hulking white man) to “tarantula” and “Central Avenue” (a main thoroughfare in a black neighborhood) to “angel food”, but notice how the color scheme is playfully inverted (white on black → black on white). Chandler aims to shock with this simile, to convey just how out-of-place Molloy must seem to the wary residents of Central Avenue.
  • 17. Knowledge is knowing that a tomato is a fruit BUT Wisdom is knowing not to put it in a fruit salad Where does all the relevant knowledge come from? And of what kind is it? Metaphor, Simile, Analogy and other figurative forms are knowledge-hungry devices. In fact, creative language exploits heterogeneous types of knowledge: Propositional knowledge (actions, events, behaviors, tendencies, norms) Property-level knowledge (category membership criteria, expectations) Semantic knowledge (e.g. dictionaries) vs Pragmatic knowledge (e.g. corpora)
  • 18. In 1888, in their house at Arles, Vincent Van Gogh and Paul Gauguin painted the same topic: a chair. The resulting paintings are very different, both in composition and in chair-ness. Van Gogh’s chair is humble, unpretentious and brightly lit, while Gauguin’s is somber and ornate. Do these differing visions of a simple everyday object reveal deep differences in psychology? Each chair is very clearly a chair, yet each differs significantly from the other. What is the minimal set of qualities any chair should possess? What is the most prototypical chair? And shouldn’t we know what to expect from everyday categories before we can employ them creatively? Chair Ear
  • 20. Though it is often convenient to think of a category as simply the set of all of its members, the mathematical notion of a set is not nuanced enough to capture the textures of human categories. For some members of a category will stand out as being more obvious, or typical, or representative members of the category than others. Just think of the category BIRD. Some members of the category are more representative of bird-ness than others. What are the first birds that come to your mind? Birds that are small, pretty, gaily feathered, singing their songs in the branches of a nearby tree? Well, not all birds are like these birds. Cognitive Psychologists (such as Eleanor Rosch) and Cognitive Linguists (such as George Lakoff) argue that human categories are radial in nature, with the most representative members sitting at the centre, and with less typical members arranged at varying distances along the radius.
  • 21. E. Rosch, G. Lakoff “Radial” Categories See e.g. George Lakoff’s (1987) book, “Women, Fire and Dangerous Things”. A prototypical bird (for many this will be a robin, a sparrow, a thrush, a cuckoo etc.)
  • 22. Have you ever seen the game show Family Fortunes (called Family Feud in the US)? Teams of family members are asked to provide examples of a given question category, such as A KIND OF BIRD, and are scored on the representativeness of their answers. How does the game assess representativeness? By posing the same question to 100 members of the public, and by counting the number of times the same answers are given in response. The most representative answers will be the ones that are provided most often in these public surveys. So Family Fortunes was really testing our knowledge of radial categories, and our ability to explore the center region of each category. The center of a radial category really is the bull’s-eye!
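This survey-based scoring scheme is easy to sketch in Python. The survey data below is invented for illustration; the most frequent answer approximates the centre of the radial category (the Family Fortunes bull’s-eye), while the least frequent approximates its periphery (the winning Pointless answer).

```python
from collections import Counter

def representativeness(survey_answers):
    """Score each answer by the fraction of respondents who gave it:
    frequent answers sit at the centre of the radial category."""
    counts = Counter(a.lower() for a in survey_answers)
    total = len(survey_answers)
    return {term: n / total for term, n in counts.items()}

# Toy survey of 100 people for the category BIRD (hypothetical data).
answers = ["robin"] * 40 + ["sparrow"] * 30 + ["eagle"] * 20 + ["penguin"] * 10
scores = representativeness(answers)

centre = max(scores, key=scores.get)   # Family Fortunes' best answer
fringe = min(scores, key=scores.get)   # Pointless' best answer
```

The same frequency table thus serves both games: only the direction of the search along the radius changes.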
  • 23. Less Representative … … More Peripheral
  • 24. A more popular, and challenging, variant of Family Fortunes is a game called Pointless. Teams are still asked to provide exemplars of a given category, such as A KIND OF BIRD, but players are now scored on the obscurity, or anti-representativeness, of their answers. How can we assess anti-representativeness? We can again pose the same question to 100 members of the public, and again count the number of times an answer is given in response. The least representative answers will be the ones that are given least often, but at least once. So Pointless is really testing our knowledge of the outer reaches of our radial categories, where the most obscure, the least typical and the most creative and surprising members reside.
  • 26. If Pointless generates more laughs per category than Family Fortunes/Feud, then a good deal of the humour arises from the category members themselves. For the most peripheral members of a category are usually the last ones to come to mind, and reside in the grey areas of our common-sense knowledge where our safest generalizations easily break down. We all know that bats aren’t birds, but ask someone to describe the color and size of a bat’s egg and – for a moment at least – the question seems a perfectly natural one. Words that name categories evoke strong expectations that the members we have in mind will resemble the most prototypical members, but jokes thrive on violating such expectations.
  • 27. Appropriate Incongruity Essential to Humor Theories: see Victor Raskin (1985), Elliott Oring (2003) State bird of Transylvania? State bird of Minnesota?
  • 28. Accessing the Web in real-time to acquire texts dynamically, or to test linguistic hypotheses on the fly, can be a time-consuming business that is frowned upon by most Web search engines. The Google n-grams is a vast collection of text snippets from the Web – an n-gram is a contiguous sequence of n words or tokens – that can be quickly searched in a local database. Each Google n-gram is between 1 and 5 words long (1 ≤ n ≤ 5) and has a minimum frequency of 40 occurrences (case-sensitive counts apply) on the World-Wide-Web. Large bodies of text – called text corpora – are a rich (if mostly implicit) source of practical real-world knowledge. The texts of the Web are a constantly growing source of both trending topics and tacit common-sense knowledge. Web corpora have many advantages as a source of knowledge for a computer.
  • 29. A Web n-gram corpus can be viewed as a Lexicalized Idea Space Style! The Google N-Grams is a vast database of recurring Web-text fragments n-gram Web n-grams attest to the viability of many combinations of words & ideas
  • 30. I’m Salt “salt and pepper” (724,197 hits) The linguist J.R. Firth once remarked that “You shall know a word by the company it keeps.” Words are found in the company of many other words in large text corpora, but the strongest associations produce observable patterns of co-occurrence. Thus, “salt” and “pepper” denote very similar kinds of things, and are very frequently found in each other’s company, not least because they share a common category (condiment) and are often found together in the real world. Coordination patterns – like “salt and pepper” or “angels and demons” or “knives and forks” – offer valuable insights into the category structures that motivate these co-occurrence patterns. So we can use the Google n-grams to look at coordinations of proper names, substances, and bare plural nouns, to determine how words and ideas come together to form radial categories. and I’m Pepper
  • 32. I’ll take “disasters” for 12, please Alex. How can we turn co-ordination patterns into Radial Categories? Coordinations like “doctors and lawyers” or “tables and chairs” tell us that two associated ideas will, in some contexts, be co-resident in the same category. The set of words/ideas that co-occur with “tables” in coordination 3-grams will collectively form an implicit furniture & furnishings category, just as the words/ideas that co-ordinate with “disasters” will form an implicit category of catastrophic events. For a given term T we can gather the coordinated terms with which it co-occurs in Google’s “and” 3-grams. We can sort these other terms by their similarity to T, using a WordNet-based similarity metric. The resulting set will have a radial structure, with the most T-like ideas clustering closest to T, and the least T-like keeping their distance.
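The construction just described can be sketched as follows. Both the coordination frequencies and the similarity table below are toy stand-ins: the deck uses the Google "and" 3-grams and a WordNet-based similarity metric, neither of which is bundled here.

```python
# Hypothetical "T and X" coordination 3-gram frequencies (invented values).
coordinations = {
    ("tables", "chairs"): 95000,
    ("tables", "desks"): 31000,
    ("tables", "figures"): 28000,
    ("tables", "lamps"): 4000,
}

def similarity(a, b):
    # Stand-in for a WordNet-based similarity metric; a fixed toy table here.
    toy = {("tables", "chairs"): 0.9, ("tables", "desks"): 0.8,
           ("tables", "lamps"): 0.5, ("tables", "figures"): 0.2}
    return toy.get((a, b), 0.0)

def radial_category(pivot):
    """Gather the terms coordinated with the pivot, then sort them so that
    the most pivot-like members sit at the centre of the radial category."""
    members = [x for (t, x) in coordinations if t == pivot]
    return sorted(members, key=lambda x: similarity(pivot, x), reverse=True)
```

Note that corpus frequency tells us a pairing is attested, while the similarity metric supplies the radial ordering: “figures” co-occurs with “tables” very often, yet lands on the periphery of the furniture category.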
  • 34. My sunglasses are a burqa for my eyes. A burqa for a man. Dictionaries tell us how words should be used, while corpora tell us how we actually use them. Likewise, a hierarchical system of word-senses such as WordNet enumerates the senses that words conventionally possess, as well as the categories they conventionally reside in, while corpora show us how speakers actually use them. A corollary of Magritte’s manifesto is that no word is so wedded to its sense or its reference that it cannot be given new ones in the right context. Consider Karl Lagerfeld’s description of his sunglasses as a burqa for his eyes – a privacy guard to shield against other people. Lagerfeld is here using the word “burqa” not in its specific Islamic sense, but as a prototype of the broader category of dark, privacy- protecting accessories. Sunglasses are in this category too, he says. Karl Lagerfeld Fashion Designer, Zoolander-esque fashion supervillain
  • 35. Aristotle (in his Poetics) saw metaphor as a question of categories and taxonomies: a metaphor applied the name of one category of things to another Computer scientists have found this a very attractive view, to apply to IS-A hierarchies and ontologies (e.g. Wilks 1978; Way 1991; Fass 1991; Veale 2003)
  • 36. Eileen Cornell Way (1991) argues metaphor needs a Dynamic Type Hierarchy (DTH) Psychologists also champion this view: see Glucksberg (2008) Category Inclusion
  • 37. { } Psychologist Sam Glucksberg sees metaphor as a way of creating and expanding categories. A metaphoric turn of the form “X is Y” is not an identity statement but a category inclusion statement. Y stands in place of a category Y’ in which Y is a prominent exemplar, and X is asserted to be a member of this category Y’ as well. The challenge for a computer scientist is to determine, given X and Y, the implied category Y’ that units X and Y under a meaningful category umbrella. We are unlikely to find this category Y’ to be an existing category in a conventional category system like that provided by WordNet or other standard lexical ontologies. As Eileen Cornell Way has argued, Y’ is likely a dynamic category that is created on the fly for the metaphor X is Y. Nonetheless, a system like WordNet can be used as a comprehensive foundation of static categories, on which new dynamic categories can be imposed as needed/created.
  • 38. Taxonomic Ideal {LETTER} {BETA} {ALPHA} {GAMMA} isa isa isa isa {BETH} {GIMEL} isa isa {DALETH} {ALEPH} {DELTA} isa isa E.g. if Alpha is “The 1st letter of the Greek Alphabet”, what is the Hebrew Alpha? We certainly could use a DTH here, as WordNet often lacks category structure where it counts What new dynamic categories can we add to facilitate the mapping of Greek  Hebrew letters?
  • 39. Taxonomic Ideal {LETTER} {BETA} {ALPHA} {GAMMA} isa isa isa isa {BETH} {GIMEL} isa isa {DALETH} {ALEPH} {DELTA} isa isa {1ST_LETTER} isa isa “The 1st letter of the Greek alphabet” “The 1st letter of the Hebrew alphabet” WordNet glosses: WordNet provides a dictionary-like text definition (or gloss) for each of its word-sense entries We can lift salient terms out of these glosses to create new dynamic categories for 2 or more senses Here the term “1st” is salient because of its position in each gloss, and because it is shared by another WordNet sense at the same depth under {letter}
  • 40. Taxonomic Ideal {LETTER} {ALPHA} {BETA} {GAMMA} {GREEK_LETTER} {HEBREW_LETTER} isa isa isa … isa isa {BETH} {GIMEL} isa isa isa … {ALEPH} {1ST_LETTER} {3RD_LETTER} isa isa isa isa isa isa isa isa isa {2ND_LETTER} The categories {Greek_letter} and {Hebrew_letter} are created in the same way Veale (2003) The Analogical Thesaurus
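The gloss-lifting step on these slides can be illustrated with a minimal sketch. The glosses below follow WordNet’s wording for the letter senses; the helper function and its ordinal-spotting heuristic are our own simplification of the salience tests described above.

```python
# Sketch of gloss-feature reification (after Veale 2003): mint a dynamic
# category whenever two glosses share the same salient ordinal term.
glosses = {
    "alpha": "the 1st letter of the Greek alphabet",
    "aleph": "the 1st letter of the Hebrew alphabet",
    "beta":  "the 2nd letter of the Greek alphabet",
    "beth":  "the 2nd letter of the Hebrew alphabet",
}

def dynamic_categories(glosses):
    """Group senses whose glosses share an ordinal like '1st' or '2nd',
    creating categories such as {1ST_LETTER} on the fly."""
    cats = {}
    for sense, gloss in glosses.items():
        for token in gloss.split():
            if token[0].isdigit():            # ordinal terms: "1st", "2nd", ...
                cats.setdefault(f"{token}_letter", set()).add(sense)
    return cats

cats = dynamic_categories(glosses)
```

Each resulting category ({1st_letter}, {2nd_letter}, …) is exactly the pivot needed to map Greek letters to their Hebrew counterparts by position.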
  • 41. {DEITY, GOD} {GAEA} {ZEUS} isa {ATHENA} {GREEK_DEITY} {VARUNA} {GANESH} {SHIVA} {HINDU_DEITY} isa … … {WISDOM_DEITY} gloss: goddess of wisdom and … gloss: god of wisdom or prophesy Same depth, common parent, hence pivotal Dynamic categories can be inferred in many areas of the WordNet category system Analogical Thesaurus Veale (2003)
  • 42. Can we measure the effectiveness of dynamic categories? Consider the challenge of mapping from the Greek alphabet to the Hebrew alphabet (or vice versa). Clearly we would hope that the 1st letter of the Greek alphabet is mapped to the 1st letter of the Hebrew alphabet, and not the 5th or 19th. Likewise, consider a mapping from Roman to Greek, or Viking to Teutonic deities, so that Mars maps to Ares, Minerva to Athena and Jupiter to Zeus amongst others. In an unadorned WordNet these mappings cannot be done with any accuracy, as the category structures do not discriminate by theme (e.g. wisdom vs. fertility gods) or by letter position (e.g. 1st versus 3rd letters). But dynamic categories can fill these gaps.
  • 43. Deity to Deity Mapping Task Precision Recall Static WN representations 0.115 0.34 Dynamic WN representation (+ gloss-feature reification) 0.935 0.61 Letter to Letter Mapping Task Precision Recall Static WN representations 0.04 0.98 Dynamic WN representation (+ gloss-feature reification) 0.96 0.98 E.G., Greek to Roman gods, Hindu to Semitic gods, etc. I.E., Greek to Hebrew letters, and Hebrew to Greek letters. WordNet glosses are a good source of relevant dynamic categories for figurative processing, but analysis of large corpora or the texts of the Web can provide even greater depth, coverage and nuance.
  • 44. The Web is a vast echo chamber of many diverse and competing voices. Yet these voices will converge when expressing the same tacit beliefs and expectations of the world.
  • 45. Google search Are you feeling lucky, punk? Why do businesses _ One source of convergence on the Web is the set of frequently-posed Web queries. Google exploits this convergence to provide a set of natural completions for the most common Web queries. Simply enter a partial query and watch Google anticipate your information need. Many people still type fully-formed questions into the Google search box. WHY Questions such as “why do dogs chase cats?” or “why do pirates wear eye patches?” tell us what the speaker believes, but also tell us that the speaker believes that everyone else shares this belief too. We can milk Google for its popular completions to common WHY questions, by providing the partial query “why do Xs” for each topic of interest X. Each question provides a common presupposition about the world that is shared by many of us. We coax as many completions from Google as possible for the same stub “why do Xs”
  • 46. “Why” questions are a rich source of tacit norms that are widely assumed to be self-evident Search engine query completions are a rich source of no-brainer “Why” questions / norms Veale & Hao (2007) Q logs: Pasca & Van Durme (2007) Q Completions: Veale & Li (2011)
  • 47. We can harvest Why completions using a lexical trie : so e.g. why do r why do re why do rel … Veale & Li (2011) Özbal & Strapparava (2012) also use Google completions as a source of tacit world knowledge
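The trie-walk described on this slide can be sketched as a depth-first search that extends the query stub one letter at a time, pruning any prefix that yields no completions. The completion engine here is a mock with two canned questions; in the real system each call would hit the search engine’s completion API.

```python
def harvest(stub, complete, results, depth=0, max_depth=3):
    """Lexical-trie harvest: extend the stub a letter at a time, collecting
    every completion the (mock) engine returns for each viable prefix."""
    for letter in "abcdefghijklmnopqrstuvwxyz":
        prefix = stub + letter
        hits = complete(prefix)        # stand-in for a completion-API call
        if not hits:
            continue                   # prune: no popular query starts this way
        results.update(hits)
        if depth + 1 < max_depth:
            harvest(prefix, complete, results, depth + 1, max_depth)

# Mock completion engine over two canned WHY questions (hypothetical data).
CANNED = ["why do rabbits thump", "why do rainbows form"]
def complete(prefix):
    return [q for q in CANNED if q.startswith(prefix)]

found = set()
harvest("why do r", complete, found)
```

The pruning step is what makes the walk tractable: the trie only grows along prefixes that the completion engine actually recognizes.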
  • 48. By using different search locales and search languages, we can also obtain culture- specific completions Thus, e.g. Google France may provide a different normative perspective on cats than Google USA
  • 49. How might we exploit these common beliefs / norms to understand novel metaphors? Suppose we wish to interpret the metaphor “religion is business.” A computer can acquire a reasonable stock of norms for what constitutes a business, for instance that businesses have leaders, develop and follow strategies and aim for objectives, pay taxes etc. Which of these norms are most appropriately projected onto religion in this metaphor? We can use Google 2-grams to seek out contexts in which religion is associated with leadership, strategy, objectives, taxes, etc. Each such 2-gram is motivating evidence for the projection of the corresponding business norm, to achieve what can be called a mash-up of business and religion. http://Afflatus.UCD.ie Metaphor Eyes app
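The projection step just described can be sketched as a simple filter. The business norms and 2-gram counts below are invented for illustration; the real Metaphor Eyes system draws its norms from query completions and its evidence from the Google 2-grams, using the corpus’s 40-occurrence floor as a natural threshold.

```python
# Sketch of the projection step: a business norm transfers onto religion
# only if a "religion <norm>" pairing is attested in the (toy) 2-grams.
business_norms = ["leaders", "strategies", "objectives", "taxes", "profits"]

web_2grams = {                 # hypothetical Google 2-gram frequencies
    "religion leaders": 1200,
    "religion strategies": 300,
    "religion objectives": 150,
}

def project(target, norms, ngrams, threshold=40):
    """Keep only the norms whose pairing with the target is Web-attested."""
    return [n for n in norms if ngrams.get(f"{target} {n}", 0) >= threshold]

mashup = project("religion", business_norms, web_2grams)
```

Unattested norms (here, taxes and profits) are simply left behind, which is also why the mash-up is asymmetric: running the filter in the other direction would license a different subset of projections.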
  • 50. Metaphor Eyes app Veale & Li (2011)
  • 51. When you cut into the present the future leaks out William Burroughs and Brion Gysin invented the Cut-Up Technique as a collage-like means of generating new texts that are unaffected by a sub-conscious obedience to cliché or convention. Running with scissors, Burroughs was soon cutting and randomly re-splicing any representational form he could find, moving from newsprint to audio and video tape. Computationally, it is also possible to cut-up and re-combine the knowledge representations that actually underpin these forms. The result will be a blend-like conceptual cut-up, or mash-up, that splices together the representations of two concepts X and Y to interpret the metaphor X is Y. William Burroughs
  • 52. Note the asymmetry in business is a religion
  • 53. Similes also contain presuppositions, and Web similes open a window onto our shared stereotypes Veale & Hao (2007) A nuanced model of stereotypical beliefs can be extracted – if we can avoid the ironic similes!
  • 54. Have you ever explicitly ordered a black espresso? Or a strong espresso? Or a small espresso? Most likely you never have, even if these are precisely the qualities you are looking for. The reason we don’t have to explicitly ask for these qualities is that they are already presupposed to be part of everyone’s stereotype of an espresso. We don’t ask because we shouldn’t need to. How might a computer learn that such qualities are salient if we never explicitly ask for them (or only ruefully note their absence)? A computer can learn to expect such qualities by noting how they are used in similes: when one says “as black as espresso” or “as strong as espresso”, the success of these similes is predicated on the shared assumption that espresso is black and strong.
  • 55. Use Web query pattern “ as * as a | an * ” to harvest 10,000s of similes To acquire Stereotypical Knowledge: Mine Simile instances from Web brick peacock butcher surgeon lion sponge shark fox snowflake tiger puppy rock eagle robot soap opera oak espresso statue
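A minimal extraction step for this harvesting pattern can be sketched with a regular expression over raw text. The snippet below is invented for illustration; the real harvest runs the wildcard query against a Web search engine and mines the returned snippets, and (as the following slides show) must still filter out ironic matches.

```python
import re

# Match "as ADJ as (a|an) NOUN", with the article optional so that
# mass-noun vehicles like "as black as espresso" are caught too.
SIMILE = re.compile(r"\bas (\w+) as (?:an? )?(\w+)", re.IGNORECASE)

def harvest_similes(text):
    """Return stereotype associations: vehicle noun -> set of adjectives."""
    stereo = {}
    for adj, noun in SIMILE.findall(text):
        stereo.setdefault(noun.lower(), set()).add(adj.lower())
    return stereo

snippet = ("The coffee was as black as espresso and as strong as espresso, "
           "while he was as cunning as a fox.")
stereotypes = harvest_similes(snippet)
```

Each (noun, adjective) pair extracted this way is a candidate stereotypical property, pending the irony filtering discussed next.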
  • 56. Four Strategies for Exploiting Stereotypes in Similes Bona-Fide “Straight” Ironic “Incongruous” POSitive Ground (high affect ADJ) E.g., as sharp as a razor E.g., as subtle as a sledgehammer NEGative Ground (low affect ADJ) E.g., as unpleasant as a root_canal E.g., as blind as a hawk Analysis of syntactically-simple Web similes (with simple lexicalized vehicles) Observation: ~ 18% of simple web-similes (simple vehicles) are ironic ! see Veale & Hao (2007)
  • 57. “About” is a signal of sardonic intent in humorous similes “So, there I was, still single at 40, feeling about as marketable as flesh-eating bacteria” (Washington Post writer, Jeannie McDonald) “Even on Central Avenue, not the quietest dressed street in the world, he looked about as inconspicuous as a tarantula on a slice of angel food” (Raymond Chandler in Farewell, My Lovely) “They'd put you in the psycho ward, and believe me, the people who run that place are about as sympathetic as Georgia chain-gang guards” (Raymond Chandler in The Long Goodbye) ABOUT signals pragmatic insincerity (Kumon-Nakamura, Glucksberg & Brown, 1995)
  • 58. Quantifying the role of “About” in figurative comparisons Bona-Fide “About” (Congruous use of ideas: 23%) Ironic “About” (Incongruous uses: 77%) POSitive Ground (high-affect ADJ) E.g., About as happy as a rabbit in a carrot patch E.g., About as subtle as a fat man in speedos NEGative Ground (low-affect ADJ) E.g., About as repulsive as a monkey in a negligee E.g., About as cold as the centre of the Sun Harvest double-hedged Web similes with query: “about as ADJ as a NOUN” Analysis of 20,299 complex “about” similes from Web (a complex simile is one in which the vehicle may be a syntactically complex phrase, as above) see Veale & Hao (2010)
  • 59. “About” Signals both Creativity and a Sardonic World-view Bona-Fide “About” (Generic claims) Ironic “About” (Incongruous claims) HIGH affect: (+) grounds 12.3 % 72.7 % LOW affect: (-) grounds 8.6 % 6.3% * Based on set of 8789 similes with discernibly HIGH/LOW grounds in DoA Use Whissell’s Dictionary of Affect (DoA) to characterize the +/- affect of Similes We conclude that “About” is a strong marker of a Sardonic perspective (81%) Veale & Hao (2010) see Veale (2013)
  • 60. The cleverest and most effective similes are re-used over and over on the Web, for the Web is a vast echo- chamber for new ideas and for new turns of phrase. A given simile that strikes a reader as novel may in fact already occur many times across the Web. Some of these existing occurrences may use the “about” form – that is, they may be double-hedged, while other uses of the same core simile may only be single-hedged. Any given simile may be found on the Web in the “about” form, but a Web-simile is dominant in the “about” form if most of its occurrences on the Web use the “about” form. The Web provides enough evidence then to consider whether the “about” marker in similes is the linguistic equivalent of a sly nod or a wink to an audience, to subtly signal the presence of irony or a creative, sardonic intent.
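The dominance test described here reduces to a simple majority check over Web occurrence counts. The counts below are invented for illustration; in practice they would come from hit counts for the simile with and without the “about” marker.

```python
# Hypothetical Web occurrence counts: (with "about", without "about").
occurrences = {
    "subtle as a sledgehammer": (310, 120),   # an ironic simile
    "sharp as a razor":         (15, 980),    # a bona-fide simile
}

def about_dominant(simile):
    """A simile is dominant in the 'about' form if more than half of its
    Web occurrences carry the double hedge."""
    with_about, without = occurrences[simile]
    return with_about > without

flags = {s: about_dominant(s) for s in occurrences}
```

An unmarked occurrence of an about-dominant simile can then be flagged as probably ironic, even though that particular token carries no marker at all.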
  • 61. Exploiting “About” as a cue for irony in Unmarked Similes Bona-Fide (Generic claims) Ironic (Incongruous claims) Found in “about” form* 10 % (1246 similes) 43 % (1188 similes) Dominant in “about” form* 2 % (208 similes) 40 % (1031 similes) * Found = found on Web *Dominant = in more than half of all occurrences Similes are often reused on the Web, with and without the “about” marker So “About” is the lexical equivalent of a wink, raised eyebrow or sardonic tone see Hao & Veale (2010)
  • 62. Every proverb has an equally wise anti-proverb. “A little knowledge is a dangerous thing” is thus tempered with the optimism of “From little acorns mighty oak trees grow.” The tacit common-sense knowledge that can be acquired from the Web certainly qualifies as a little knowledge, given the scale of the knowledge possessed by an average human adult. Fortunately, this little knowledge is an ideal starting point from which to grow a successively larger knowledge-base of common sense norms and beliefs. The process we use is an iterative one called Web bootstrapping: we use the knowledge we do possess to frame hypotheses about the knowledge we do not yet possess, and validate or refute these hypotheses on the Web. Any validated knowledge is then a basis for further bootstrapping. Stand back … I’ve no idea how big I’m gonna grow.
  • 64. Simile associations provide an excellent seed from which to grow a rich knowledge-base. For instance, Web similes tell us (and our computers) that foxes are cunning, that espresso is black and strong, that whiskey is likewise strong, that mummies are dry, silk is soft, and so on. These associations are landmarks in a conceptual landscape relative to which many other points on the landscape can also be identified. What other animals are commonly considered cunning? Which other beverages are black, or strong? What other materials are soft? We construct a triple from each of these simile-derived associations, but leave the third part of the triple blank, as similes do not explicitly identify a category for the topic being described. This third part can be identified later, during the first stage of bootstrapping on the Web.
  • 66. It takes knowledge to acquire knowledge, for it takes insight to pose a meaningful question. For instance, if we know that Caviar is expensive, we can ask just what kind of expensive item is it? The simile pattern is frequently used for ironic ends. To sidestep irony we need a bootstrapping pattern that is very rarely used ironically. The “M-Xs such as Ys and Zs” construct is such a pattern. We can re-express Y=Caviar is M=expensive as the Web query “expensive * such as Caviar and *” to find a value for X (the category of Caviar) and for Z (another expensive item like Caviar). Suppose we learn that Caviar is an expensive food, and that Salmon is too. We can now use the association Salmon is an expensive food in further bootstrapping, and so on and on.
  • 67. Acquiring Fine-Grained Perspectives with Double-Anchored Queries Adj Noun(s) as such E.g., “expensive foods such as salmon [and champagne]” X (s) * Anchors Y (s) and * Adj Noun(s) as such E.g., “expensive foods such as caviar [and salmon]” X (s) * Anchors Y (s) and * Veale, Li & Hao (2009) Kozareva, Riloff & Hovy (2008) Hearst (1992)
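One bootstrapping step of this kind can be sketched as query generation plus snippet parsing. The snippet below is invented for illustration; in the real system the query would be posed to a Web search engine and every returned snippet parsed this way.

```python
import re

def make_query(adj, seed):
    """Turn a seed association (e.g. caviar is expensive) into a
    double-anchored wildcard query."""
    return f'"{adj} * such as {seed} and *"'

def parse_snippet(adj, seed, snippet):
    """Recover the category X and a new co-member Z from a matching snippet."""
    pat = re.compile(rf"{adj} (\w+) such as {seed} and (\w+)")
    m = pat.search(snippet)
    return m.groups() if m else None      # (category X, new member Z)

query = make_query("expensive", "caviar")
hit = parse_snippet("expensive", "caviar",
                    "expensive foods such as caviar and salmon")
```

A successful parse completes the seed triple (caviar, expensive, foods) and yields a fresh seed (salmon, expensive, foods) for the next cycle.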
  • 68. Each bootstrapping cycle builds on and extends the knowledge gains of the previous cycle. The first cycle uses the simile associations (with incomplete triples) to generate bootstrapping queries that will both complete each triple and also find alternate fillers for the same triples. The subsequent cycle generates new bootstrapping queries from these newly-acquired alternative fillers/triples, to acquire yet more new triples from the Web. Acquisition is thus a targeted process. The knowledge-base grows geometrically with each cycle, over a thousand-fold during five cycles.
  • 69. Bootstapping queries on the Web: Rapid Growth of Knowledge Starting from a trusted solid foundation of detailed viewpoints, use bootstrapping over web-content to acquire more and more … Seed 1st Cycle 2nd Cycle Kozareva, Riloff and Hovy (2008) Veale, Li & Hao (2009) 3rd Cycle
  • 70. Bootstrapping grows a knowledge-base at a rapid rate, since each existing association spurs the acquisition of many more in the next cycle. Bootstrapping is a knowledge-magnification process. However, the process is not immune to noise, which can cause it to acquire dubious or nonsensical triples. This noise will be magnified many times over in subsequent cycles. Garbage in, Garbage out. It is thus essential that newly acquired triples are carefully vetted, and that noise is filtered after each cycle, lest it metastasize wildly (and prompt many unnecessary queries to the Web).
  • 71. Noise Removal Strategies between Bootstrapping Cycles Kozareva, Riloff & Hovy outline a variety of graph- based metrics for noise detection and filtering. Veale, Li & Hao use a coarse WordNet-based filter to remove dubious triples between cycles Noise / Nonsense accumulates rapidly in a bootstrapping system on the Web!
  • 72. Every bootstrapped triple represents an attested fine-grained categorization of a given topic. These fine-grained categories are radial. If the same triple is found again and again for a topic, then this topic is deemed to be a highly representative member of the corresponding radial category. Bootstrapping is a productive means of growing a large number of fine-grained radial categories, and of growing the membership of these categories by identifying attested members on the Web. We have constructed a Web service called Thesaurus Rex that delivers these categorizations on demand for a given topic. The size of a category name conveys the representativeness of the topic.
  • 73. creativity Veale & Li (2013) see Afflatus.UCD.ie
  • 74. Good metaphors draw out latent similarities between their topics and their vehicles. A creative individual spies a curious resemblance between two objects or ideas, and constructs an appropriate metaphor to help others see this otherwise overlooked similarity too. Thesaurus Rex allows its users to explore the hidden or conventionally unnoticed similarities between concepts by intersecting the set of radial categories that they both reside in. For instance, by identifying the fine-grained categorizations that can be applied to both creativity and to leadership (attested on the Web), we can see the many tacit connections between the two.
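The intersection step can be sketched directly. The category sets below are invented stand-ins for illustration, not actual Thesaurus Rex output.

```python
# Sketch: reveal tacit connections between two topics by intersecting the
# fine-grained (radial) categories each belongs to. The category sets are
# invented examples, not real Thesaurus Rex data.

categories = {
    "creativity": {"valuable skill", "innate ability", "scarce resource", "mental process"},
    "leadership": {"valuable skill", "learned behavior", "scarce resource", "social process"},
}

def shared_perspectives(topic_a, topic_b, kb):
    """Fine-grained categories attested for both topics."""
    return sorted(kb[topic_a] & kb[topic_b])

print(shared_perspectives("creativity", "leadership", categories))
# → ['scarce resource', 'valuable skill']
```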
  • 75. creativity & leadership Veale & Li (2013)
  • 76. Even ideas which seem like complete opposites may share some fascinating categorizations. For opposites complement each other and thus form a larger categorical whole. Consider the concepts of birth and death. A wealth of shared categorizations for these naturally antagonistic processes is identified via Web bootstrapping for Thesaurus Rex (overleaf). For instance, both are natural processes and both are major, irreversible events. Each can be a stressful yet meaningful event, though each is also a universal experience that is often marked as a legal event, a historical event and a special occasion.
  • 77. divorce & war birth & death
  • 78. Words are tools that we too often assume possess just a single prescribed functionality. An important function of metaphor is to reveal the secondary functions of our words, to show that the ideas conveyed by two very different words can share some surprising similarities. Since metaphor facilitates our recognition of the similar in the dissimilar, it may contribute to our sense of similarity overall. Can Thesaurus Rex's categories enhance a general sense of similarity? Measures of the semantic similarity of two words (and their meanings) are usually evaluated on the gold standard of Miller & Charles (M&C): 30 word pairs ranked by human similarity judgments. C'mon and see!
  • 79. Miller & Charles (1991) Lexical Similarity Gold Standard of 30 word pairs:
  1. car - automobile       11. bird - cock           21. coast - hill
  2. gem - jewel            12. bird - crane          22. forest - graveyard
  3. journey - voyage       13. tool - implement      23. shore - woodland
  4. boy - lad              14. brother - monk        24. monk - slave
  5. coast - shore          15. crane - implement     25. coast - forest
  6. asylum - madhouse      16. lad - brother         26. lad - wizard
  7. magician - wizard      17. journey - car         27. chord - smile
  8. midday - noon          18. monk - oracle         28. glass - magician
  9. furnace - stove        19. cemetery - woodland   29. rooster - voyage
  10. food - fruit          20. food - rooster        30. noon - string
  WordNet + Thesaurus Rex: 0.93 correlation with M&C human ratings (see Veale & Li (2013) for the implementation of a similarity measure using Thesaurus Rex)
  • 80. A representation of a stereotypical concept is more than just a bag of salient features. The features that make up our stereotypical view of a concept are not random and disjoint, but connected and overlapping. Features reinforce each other, imply each other, and evoke each other. To appreciate the degree to which the features of a stereotype relate to each other, we look to how they reinforce each other in a simile with multiple grounds, like “as hot and humid as a jungle”. In general, similes with the double-ground form “X is as P1 and P2 as a Y” attest to the relationship between P1 and P2. The more often that P1 and P2 support each other in an attested simile, the more likely that one will evoke the other in a descriptive context. By mining double-ground similes on the Web we build a matrix of mutually-reinforcing properties. What? Hey!
  • 81. Learn how Stereotypical Properties Suggest and Imply Together. Double-anchored query “as * and * as” to acquire associations. Adjacency matrix of mutually-reinforcing properties acquired from WWW:
           hot  spicy  humid  fiery  dry  sultry  …
  hot      ---  35     39     6      34   11
  spicy    75   ---    0      15     1    1
  humid    18   0      ---    0      1    0
  fiery    6    0      0      ---    0    0
  dry      6    0      0      0      ---  0
  sultry   11   1      0      2      0    ---
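The matrix above can be accumulated with a simple pattern match over double-ground similes. A sketch, with invented simile strings standing in for Web text; counts are directed, since "hot and spicy" and "spicy and hot" are distinct orderings (which is why the real matrix is asymmetric).

```python
# Sketch: accumulate a property-to-property matrix from double-ground similes
# of the form "as P1 and P2 as a Y". The simile strings are invented; the real
# matrix is mined at Web scale. Counts are directed, hence the asymmetry.
import re
from collections import defaultdict

similes = ["as hot and spicy as a chili",
           "as hot and humid as a jungle",
           "as hot and spicy as a curry"]

pattern = re.compile(r"as (\w+) and (\w+) as a")
matrix = defaultdict(lambda: defaultdict(int))
for simile in similes:
    match = pattern.search(simile)
    if match:
        p1, p2 = match.groups()
        matrix[p1][p2] += 1  # P1 reinforces P2 in this ordering

print(matrix["hot"]["spicy"], matrix["hot"]["humid"])  # → 2 1
```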
  • 82. Veale & Li (2012) Any given property (e.g cunning) will be highly connected to related properties
  • 83. Very few properties are either wholly positive or wholly negative. There are shades of grey. Consider the property cunning, whose local network of reinforcing properties (with which it occurs in double-ground similes) is illustrated on the previous page. There are positive aspects to being cunning, as possession of this property implies a quickness of thought and a subtlety of action. We might be pleased to have our plans described as cunning. Suppose we take a standard off-the-shelf affective lexicon that tells us which properties are quite positive (high pleasantness rating) and which are quite negative (high unpleasantness rating). We color our property-to-property graph accordingly, with quite positive properties in blue, quite negative properties in red, and everything else (neither very positive nor very negative) in white. Why so blue? You make me see red
  • 84. Obviously positive words in blue; obviously negative words in red
  • 85. Birds of a feather flock together. Misery loves company. Peas in a pod. Lie down with dogs … If we can know a word by the company it keeps, it is intuitive to assume that the lexical affect of a word will tend to reflect that of its neighbors. Happy words flock together. Sad words love company. In our property-to-property graph, the positive neighbors of a word X are the blue nodes to which it is directly linked. Let’s call this set N+(X), and let’s call the set of X’s red negative neighbors N-(X). Every edge in our graph can be considered a context in which X is used: a positive context will link X to a positive word (a blue node); a negative context will link X to a negative word (a red node). The positivity of a word/node X can be estimated as the proportion of positive contexts in which it appears: |N+(X)| / ( |N+(X)| + |N-(X)| )
  • 86. Veale & Li (2012). For 99.6% of positive exemplars (1309 of 1314), pos(x) > neg(x).   pos(X) = |N+(X)| / |N+(X) ∪ N-(X)|
  • 87. We need to get out more and meet new people Positivity & Negativity allow for shades of affect rather than a binary blue/red distinction. The negativity of a word/node X can be estimated as the proportion of negative contexts in which it appears: |N-(X)| / ( |N+(X)| + |N-(X)| ) Thus, pos(x) + neg(x) = 1 so pos(x) = 1 – neg(x) and neg(x) = 1 – pos(x) A property like cunning will thus possess shades of positivity and negativity (if more of the latter) We expect obviously pleasant words to have a positivity greater than negativity (pos(x) > neg(x)) And we expect quite unpleasant words to have negativity greater than positivity (neg(x) > pos(x)) We can thus test the basic intuition underpinning pos(x) and neg(x) by checking that obviously pleasant/unpleasant words in an affect lexicon are appropriately shaded as more or less blue/red
  • 88. neg(X) = |N-(X)| / |N+(X) ∪ N-(X)|. For 98.1% of negative exemplars (1359 of 1385), neg(x) > pos(x). Veale & Li (2012)
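Both estimates can be computed directly from the neighbor sets. A sketch, where the toy graph and seed polarity sets are illustrative stand-ins for the Web-derived property graph and the off-the-shelf affect lexicon.

```python
# Sketch: pos(X) = |N+(X)| / |N+(X) ∪ N-(X)| and neg(X) = 1 - pos(X).
# The neighbor sets and seed polarities below are invented examples.

graph = {"cunning": {"clever", "quick", "sly", "devious", "deceitful"}}
positive_seeds = {"clever", "quick"}              # "blue" nodes
negative_seeds = {"sly", "devious", "deceitful"}  # "red" nodes

def pos(x):
    n_pos = graph[x] & positive_seeds   # N+(X): positive neighbors
    n_neg = graph[x] & negative_seeds   # N-(X): negative neighbors
    return len(n_pos) / len(n_pos | n_neg)

def neg(x):
    return 1.0 - pos(x)   # since pos(x) + neg(x) = 1

print(pos("cunning"))  # cunning is shaded: pos = 2/5, so more red than blue
```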
  • 89. For there is nothing either good or bad, but thinking makes it so. Speak for yourself, mate! Different contexts (such as metaphors) can draw out the subtle affective shades of a word. The word “baby” evokes our nuanced stereotype of a human BABY, but in the right context, such as “You are my baby”, we focus on just the positive nuances of this stereotype. Another figurative context, such as “You are such a baby!”, emphasizes the negatives of BABY. We thus need to be able to “spin” a stereotype representation on demand, to focus on just the positive nuances or just the negative nuances to suit the metaphor being interpreted. For convenience, let’s refer to the negative aspects as –Baby and the positive aspects as +Baby.
  • 90. bawling screaming weak angelic soft delicate whining whimpering Stereotypical Baby properties (163 in all) sniveling adorable drooling innocent warm sobbing peaceful wailing heartwarming cute lovable cranky indulged mewling
  • 91. sniveling lovable adorable cute drooling bawling mewling screaming cranky innocent indulged warm sobbing weak angelic soft delicate peaceful whining wailing whimpering heartwarming +Baby e.g. “She’s my baby” -Baby e.g. “He’s such a baby”
  • 92. Figurative “spin” requires an ability to affectively slice stereotypes in context, on demand. If we represent a stereotype as a set of properties with shades of positivity and negativity, we shall need to partition this set into a subset of properties {p} for which pos(p) > neg(p), and a disjoint subset of properties for which neg(p) > pos(p). We can use an off-the-shelf affect lexicon to judge the results of these partitions for each stereotype. Positive recall is dented whenever a positive property is mistakenly assumed to be negative. Positive precision is dented whenever a negative property is placed into the positive partition. Encouraging macro-averages for precision & recall across 6,230 stereotypes are presented overleaf.
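The partition-and-score step can be sketched as follows; the pos(p) scores and gold labels below are invented for illustration.

```python
# Sketch: split a stereotype's properties into positive and negative partitions
# by comparing pos(p) with neg(p) = 1 - pos(p), then score the positive
# partition against gold labels from an affect lexicon. All data is invented.

baby = {"cute": 0.9, "lovable": 0.85, "innocent": 0.8,
        "whining": 0.2, "bawling": 0.1, "cranky": 0.3}   # invented pos(p) scores

def partition(props):
    plus = {p for p, score in props.items() if score > 0.5}    # pos(p) > neg(p)
    minus = {p for p, score in props.items() if score < 0.5}   # neg(p) > pos(p)
    return plus, minus

def precision_recall(predicted, gold):
    true_pos = len(predicted & gold)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(gold) if gold else 0.0
    return precision, recall

plus, minus = partition(baby)
gold_positive = {"cute", "lovable", "innocent"}   # invented gold labels
p, r = precision_recall(plus, gold_positive)
```

A property misfiled into the positive partition dents precision; a positive property lost to the negative partition dents recall, exactly as described above.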
  • 93. Average P/R/F1 scores for the affective retrieval of positive and negative properties from 6,230 stereotypes (macro average; avg. 6.51 properties per stereotype). Veale & Li (2012)
              Positive properties   Negative properties
  Precision   .962                  .98
  Recall      .975                  .958
  F-Score     .968                  .968
  • 94. How do we model the relationship between metaphors and feelings?
  • 95. Metaphors do more than convey propositions: they convey feelings about those propositions. A good metaphor resonates with emotion, so much so that listeners resonate to the same frequency. How might a computer, with no emotions of its own, appreciate the feelings evoked by a metaphor? What, for instance, are the emotional resonances of a property like bloody? And what are the resonances of a concept that is stereotypically bloody, such as a butcher or a murderer? We can once again use similes as a guide, or rather, our property-to-property graph derived from double-ground Web similes. So what feeling-heavy properties does bloody evoke? The obvious properties are those properties p that can be expressed thusly: “I feel p-ed by …” Note to self: Next time just go with corpus analysis!
  • 96. First Model the Relationship Between Properties and Feelings. Patterns of co-description in similes reveal how properties make us feel (“I feel disgusted by …”, “I feel appalled by …”). Using simple morphology rules we can identify properties that correspond directly with the expression of feelings. Veale (2013)
            disgusting  terrifying  revolting  disturbing  frightening  appalling  exciting
  bloody    8           4           3          3           2            2          2
  vile      34          0           1          5           0            0          0
  filthy    14          0           4          0           0            0          0
  bizarre   11          1           2          0           11           1          1
  horrible  11          3           0          1           2            0          0
  horrid    10          0           2          0           1            0          0
  • 97. Our property-to-property graph effectively defines a radial category for each property: since for every property, it provides a textured set of other properties that it evokes & reinforces. So, given even a small set of morphologically-transparent mappings from properties to feelings, we can assume that other properties in the same radial category will evoke those feelings too, albeit to a lesser extent, in line with the centrality of those other properties in its radial categories. Thus, because bloody is frequently found in double-ground similes with disgusting (see table on previous page) we can assume that bloody will likewise evoke the feeling disgusted_by. For a given stereotype representation, we can map each of its properties onto one or more feelings, ordered by weights from the property-to-property graph, to get a textured space of possible feelings. The space of feelings for the metaphor art is a challenge is shown overleaf.
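This morphology-plus-graph step can be sketched as follows. The suffix rule and the neighbor weights (taken loosely from the co-description table above) are simplified assumptions; in particular the rule below only handles regular "-ing" forms.

```python
# Sketch: map properties to feelings via simple morphology ("disgusting" ->
# "disgusted_by"), then let a property with no transparent feeling-form of its
# own (e.g. "bloody") inherit feelings from its radial-category neighbors,
# weighted by co-occurrence. Weights are simplified; the suffix rule only
# covers regular "-ing" adjectives.

def feeling_of(prop):
    """Morphologically transparent mapping: '-ing' -> '-ed_by'."""
    return prop[:-3] + "ed_by" if prop.endswith("ing") else None

# double-ground simile co-occurrence weights for "bloody" (simplified)
neighbors = {"bloody": {"disgusting": 8, "revolting": 3, "frightening": 2}}

def evoked_feelings(prop):
    direct = feeling_of(prop)
    if direct:
        return {direct: 1.0}
    total = sum(neighbors[prop].values())
    return {feeling_of(p): w / total
            for p, w in neighbors[prop].items() if feeling_of(p)}

print(evoked_feelings("bloody"))  # disgusted_by dominates the feeling space
```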
  • 98. Output of Metaphor Magnet service: Veale & Li (2012)
  • 99. Successful metaphors are the common currency of a language. We use them continuously, as linguistic tender with well-understood meanings, feelings and communicative functions. The most common metaphors are so conventionalized that we hardly recognize them as metaphors. A landmark analysis of conventional metaphors is provided in George Lakoff and Mark Johnson’s 1980 book Metaphors We Live By. These metaphors are pervasive and cognitively entrenched. A rich source of the most common X is Y metaphors is the Google n-grams database. An analysis of Web 4-grams reveals a wealth of copula metaphors for a broad swathe of everyday concepts. We use these time-worn figures as a basis for understanding novel metaphors. If, then, Apple is said to be a religion, shouldn’t we understand how religions are commonly described using metaphor?
  • 100. Veale & Li (2012) Apple is a religion We can mine commonplace metaphors from Google n-grams ... and extrapolate to new target words & concepts
  • 101. What if a wide assortment of metaphors could be created on demand for a given target? concept What if metaphors could be created by a dedicated service – a Web Service, such as Metaphor Magnet Veale & Li (2012) Veale (2013)
  • 102. Metaphor Magnet is a Web app / service that generates and analyzes metaphors on demand. Users may access it via a browser, and apps may access it via a URL interface that returns XML. Since a metaphor may be used to place a positive or negative spin on a topic, and since this spin will certainly affect any interpretation, users may prefix a topic with + or – to indicate an explicit spin. Suppose we ask Metaphor Magnet to provide metaphors that elaborate on the basic conceit that Apple is a -religion. Metaphor Magnet will retrieve common metaphors for religion from n-grams. It will filter these metaphors by affect (negative in this case), aggregate the various properties transferred by each (e.g. depraved, authoritarian, threatening, pernicious, etc.) and seek out corpus evidence (in the Google n-grams) to support each of these being applied to the topic Apple.
  • 103. On the Web: Metaphor Magnet
  • 104. Metaphor Magnet uses its ability to generate a textured feeling space to emotionally explain the selected metaphor Apple is a dogmatic cult
  • 105. Metaphor Magnet uses the Google n-grams to retrieve the negative metaphors (below right) for –prison, and transfers the associated properties and stereotypes into the domain of love (below left)
  • 106. The Horror genre is filled with metaphors and conceptual blends, which allow stories to create chimerical characters that straddle the boundaries of antagonistic categories. Thus, vampires & zombies straddle the categories of living & dead. Ghosts straddle the categories of tangible & intangible. Golems & gargoyles straddle the categories of animate & inanimate. Werewolves (and to an extent, vampires) straddle the categories of human & animal. These blends are unnerving not just because they force a union of opposites. They unnerve us because they give rise to surprising emergent properties present in neither of the input concepts. Well, I am half man and half canine … So I must be my own best friend!
  • 107. Do you remember the horror movie The Fly? Are you a fan of David Cronenberg’s 1980s body horror, or do you perhaps prefer the creepy Vincent Price original? Seth Brundle, a mad scientist of sorts, develops a Star Trek-like matter transporter, but when he tests the device on himself, his DNA is accidentally scrambled with that of a common housefly. Brundle’s cells become a genetic cut-up of human and fly DNA so bizarre it might have come from a William Burroughs novel (indeed, Cronenberg also adapted Burroughs’s Naked Lunch). The chimerical result, Brundlefly, combines properties of Brundle and of the housefly, but exhibits emergent qualities of its own, such as malevolence, paranoia and even super-strength! I am big … It’s the movies that got small.
  • 108. Fauconnier & Turner (1998) Fauconnier & Turner (2002) Veale & O’Donoghue (2000) Pereira (2007)
  • 109. Conceptual blending is a complex cognitive process that can be applied at any level of conceptual organization. The cut-up process of Burroughs & Gysin is a manipulation of media whose result, when meaningful, is a conceptual blend. The mash-up of knowledge representations, seen earlier in Metaphor Eyes, is a metaphor-oriented blending process. We now define another simple model of blending, directed at the property-level of stereotype representation. Consider the metaphor love is the grave. Properties of love may combine with stereotypical properties of grave in the resulting blend. A phrasal blend is an attested phrase “M H” in which M denotes a property of one concept and H denotes a property of another. For instance, “cold embrace” is a phrasal blend of the property cold of grave and embracing of love. We can retrieve attested phrases of this kind (and thus, phrasal blends) from the Google n-grams for a given source & target.
  • 110. Love + Grave Love Grave Attested Web ngrams “Bridging Terms” in Attested Web ngrams
  • 111. Input1 Input2 Metaphor Magnet can thus generate a set of corpus-attested phrasal blends for any given pairing of Source and Target concepts, such as Love and Grave (see previous page). Some phrasal blends arise entirely out of a single input, such as “dreary chill” of the grave. If the Google n-grams attest that love can be both dreary and chilly, then dreary chill is projected onto it. Some phrasal blends arise only from a combination of both inputs and are not found in any one input alone, such as romantic darkness and gentle silence. These are emergent qualities of the blend. Emergent qualities may nonetheless be present in a 3rd unnamed concept: romantic darkness can arise from thunderstorm alone, and gentle silence from sigh alone, so these may also be evoked.
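The attestation check behind these phrasal blends can be sketched as a simple lookup. The property sets and the tiny stand-in "n-gram" set below are invented examples, not real Google n-grams data.

```python
# Sketch: a phrasal blend "M H" pairs a modifier drawn from one input concept
# with a (nominalized) property of the other, and survives only if the phrase
# is attested in an n-gram resource. All data below is invented for illustration.

love_modifiers = {"romantic", "gentle", "warm"}        # properties of love
grave_nominals = {"darkness", "silence", "chill"}      # nominal properties of grave
attested_ngrams = {"romantic darkness", "gentle silence",
                   "warm welcome", "bitter chill"}     # stand-in for Google n-grams

def phrasal_blends(modifiers, heads, ngrams):
    """Keep only modifier-head pairings attested as real phrases."""
    return sorted(f"{m} {h}" for m in modifiers for h in heads
                  if f"{m} {h}" in ngrams)

print(phrasal_blends(love_modifiers, grave_nominals, attested_ngrams))
# → ['gentle silence', 'romantic darkness']
```

Blends attested for neither input alone (here, both survivors) are candidates for the emergent qualities of the blend.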
  • 113. And poetry is the cherry on top! The phrasal blends that are generated by Metaphor Magnet (via attested n-gram retrieval) for a given metaphor comprise a set of very evocative and surprisingly poetic phrasal descriptions. Metaphor Magnet provides a poetry-generation service to further exploit these descriptions. Metaphors and phrasal blends are packaged into simple poems using a model called Stereotrope: it selects from the available set of phrasal blends, recruits additional concepts that are evoked by these blends (such as sigh and thunderstorm for love is the grave), and packages each phrasal blend in one of a variety of poetic tropes, such as simile or superlative.
  • 114. Stereotrope: Veale (2013) RobotComix.com/metaphor-magnet-acl
  • 115. A (Web-) Service-Oriented Architecture is “an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented” Erl (2008) New Metaphor Services should be discoverable, autonomous and widely reusable, and should be flexible enough to compose in groups, while remaining loosely coupled to others. Services should also maintain minimal state information and use abstraction to hide the complexity of their inner workings and data.
  • 116. Imagine if creativity could be delivered on a platter to any application, as and when it needed it. The key to providing computational creativity on demand on the Web is the provision of a thriving marketplace of competing or cooperating services, each solving one small piece of the larger puzzle. Metaphor Magnet, Metaphor Eyes and Thesaurus Rex are just three reusable tiles in this planned mosaic of creative Web services. We need other metaphor services, and many other services besides to exploit, compose and mash-up the outputs of these three metaphor/blend/category providers. These new services should adhere to Erl’s principles for a well-designed service-oriented architecture. Specifically, they should be easily discoverable and should play well (inter-operably) with others.
  • 117.
  • 118. Computational modelers of metaphor can be at the vanguard of this vision of A Creative Web As René Magritte outlined in his manifesto Les Mots et Les Images, words, objects and images are interoperable and interchangeable commodities: software services capable of handling metaphor and other forms of linguistic creativity can provide a solid basis for creative systems more generally. To do so, we must also conceive of our models as interoperable and interchangeable commodities themselves. So let’s get developing, sharing, and composing!
  • 119. This tutorial has, by necessity, been highly selective. Many interesting works by many interesting researchers have unavoidably been overlooked. Metaphor has been the subject of intense study since antiquity. Computer scientists are late to the party but no less fascinated or enthusiastic. For further reading, see the bibliography, or check out RobotComix.com, or look for this book for computationally-minded readers.
  • 120. Taxonomies and Metaphor Aristotle. (335 B.C. / 1997). Poetics. Translated by Malcolm Heath. Penguin Classics. Yorick Wilks. (1978). Making Preferences More Active. Artificial Intelligence 11(3):197-223. Dan Fass. (1991). Met*: a method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90. Eileen Cornell Way. (1991). Knowledge Representation and Metaphor. Studies in Cognitive systems. Kluwer Academic. Christiane Fellbaum. (Ed.). (1998). WordNet: An electronic lexical database. MIT Press. Tony Veale. (2006). An analogy-oriented type hierarchy for linguistic creativity. Journal of Knowledge-Based Systems, 19(7):471-479. Categorization, Prototype Theory and Metaphor Eleanor Rosch. (1975). Cognitive Representations of Semantic Categories. Journal of Experimental Psychology: General, 104(3):192–233. Selected Bibliography and Additional Readings
  • 121. George Lakoff. (1987). Women, Fire and Dangerous Things. University of Chicago Press. Patrick Hanks. (1994). Linguistic Norms and Pragmatic Exploitations, Or Why Lexicographers need Prototype Theory, and Vice Versa. In F. Kiefer, G. Kiss, and J. Pajzs (Eds.) Papers in Computational Lexicography: Complex-1994. Hungarian Academy of Sciences, Budapest. Sam Glucksberg. (1998). Understanding metaphors. Current Directions in Psychological Science, 7:39-43. Sam Glucksberg (with Matthew McGlone). (2001) Understanding Figurative Language: From Metaphors to Idioms. Oxford University Press. Dirk Geeraerts. (2006). Prototype Theory: Prospects and Problems. In Dirk Geeraerts (Ed.), Cognitive linguistics: basic readings. Walter de Gruyter. Tony Veale. (2007). Dynamic Creation of Analogically-Motivated Terms and Categories in Lexical Ontologies. In Judith Munat (Ed.), Lexical Creativity, Texts and Contexts (Studies in Functional and Structural Linguistics), 189-212. John Benjamins. Tony Veale and Yanfen Hao. (2007). Making Lexical Ontologies Functional and Context-Sensitive. Proceedings of ACL 2007, the 45th Annual Meeting of the Association of Computational Linguistics, 57–64. Sam Glucksberg. (2008). How metaphor creates categories – quickly! In Raymond W. Gibbs, Jr. (Ed.), The Cambridge Handbook of Metaphor and Thought (chapter 4). Cambridge University Press.
  • 122. Conventional Metaphors George Lakoff and Mark Johnson. (1980). Metaphors We Live By. University of Chicago Press. James H. Martin. (1990). A Computational Model of Metaphor Interpretation. Academic Press. Tony Veale and Mark T. Keane. (1992). Conceptual Scaffolding: A spatially founded meaning representation for metaphor comprehension, Computational Intelligence, 8(3):494-519. Dan Fass. (1997). Processing Metonymy and Metaphor. Contemporary Studies in Cognitive Science & Technology. New York: Ablex. Brian Bowdle & Dedre Gentner. (2005). The Career of Metaphor. Psychological Review, 112(1):193-216. John Barnden. (2006). Artificial Intelligence, figurative language and cognitive linguistics. In G. Kristiansen, M. Achard, R. Dirven, & F. J. Ruiz de Mendoza Ibanez (Eds.), Cognitive Linguistics: Current Application and Future Perspectives, 431-459. Mouton de Gruyter. Similes Archer Taylor. (1954). Proverbial Comparisons and Similes from California. Folklore Studies 3. University of California Press. Neal R. Norrick. (1986). Stock Similes. Journal of Literary Semantics, XV(1):39-52.
  • 123. David Fishelov. (1992). Poetic and Non-Poetic Simile: Structure, Semantics, Rhetoric. Poetics Today, 14(1):1-23. Rosamund Moon. (2008). Conventionalized as-similes in English: A problem case. International Journal of Corpus Linguistics, 13(1):3-37. Tony Veale. (2013). Humorous Similes. HUMOR: International Journal of Humor Research, 21(1):3-22. Conceptual Blending Theory Gilles Fauconnier. (1994). Mental spaces: aspects of meaning construction in natural language. Cambridge University Press. Gilles Fauconnier and Mark Turner. (1994). Conceptual Projection and Middle Spaces. University of California at San Diego, Department of Computer Science Technical Report 9401. Gilles Fauconnier. (1997). Mappings in Thought and Language. Cambridge University Press. Gilles Fauconnier and Mark Turner. (1998). Conceptual Integration Networks. Cognitive Science, 22(2):133–187. Tony Veale and Diarmuid O’Donoghue. (2000). Computation and Blending. Cognitive Linguistics, 11(3-4):253-281.
  • 124. Gilles Fauconnier and Mark Turner. (2002). The Way We Think. Conceptual Blending and the Mind's Hidden Complexities. Basic Books. Francisco Câmara Pereira. (2007). Creativity and artificial intelligence: a conceptual blending approach. Walter de Gruyter. Analogy and Structure-Mapping Theory Dedre Gentner. (1983). Structure-mapping: A Theoretical Framework. Cognitive Science 7(2):155–170. Dedre Gentner and Cecile Toupin. (1986). Systematicity and Surface Similarity in the Development of Analogy. Cognitive Science, 10(3):277–300. Brian Falkenhainer, Kenneth D. Forbus and Dedre Gentner. (1989). Structure-Mapping Engine: Algorithm and Examples. Artificial Intelligence, 41:1-63. Keith J. Holyoak and Paul Thagard. (1989) Analogical Mapping by Constraint Satisfaction, Cognitive Science, 13:295-355. Douglas R. Hofstadter and the Fluid Analogies Research Group. (1995). Fluid Concepts and Creative Analogies. Computer Models of the Fundamental Mechanisms of Thought. Basic Books. Tony Veale and Mark T. Keane. (1997). The Competence of Sub-Optimal Structure Mapping on ‘Hard’ Analogies. Proceedings of IJCAI’97, the 15th International Joint Conference on Artificial Intelligence.
  • 125. Lexical Analogy Peter D. Turney, M. L. Littman, J. Bigham & V. Shnayder. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing. Tony Veale. (2003). The Analogical Thesaurus. Proceedings of the 2003 Conference on Innovative applications of Artificial Intelligence, Acapulco, Mexico. Morgan Kaufmann, San Mateo, CA. Tony Veale. (2004). WordNet sits the S.A.T.: A Knowledge-based Approach to Lexical Analogy. Proceedings of ECAI-2004, the 16th European Conference on Artificial Intelligence. Peter D. Turney. (2006). Similarity of semantic relations. Computational Linguistics, 32(3):379-416. Metaphor and Similarity Mary K. Camac, and Sam Glucksberg. (1984). Metaphors do not use associations between concepts, they are used to create them. Journal of Psycholinguistic Research, 13:443-455. Sam Glucksberg and Boaz Keysar. (1990). Understanding Metaphorical Comparisons: Beyond Similarity. Psychological Review, 97(1):3-18.
  • 126. George A. Miller and Walter. G. Charles. (1991). Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1):1-28. Tony Veale & Guofu Li. (2013). Creating Similarity: Lateral Thinking for Vertical Similarity Judgments. In Proceedings of ACL 2013, the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria. Irony Herbert H. Clark and Richard J. Gerrig. (1984). On the pretense theory of irony. Journal of Experimental Psychology: General, 113:121-126. Sachi Kumon-Nakamura, Sam Glucksberg and Mary Brown. (1995). How about another piece of pie: The Allusional Pretense Theory of Discourse Irony. Journal of Experimental Psychology: General 124:3-21 Rachel Giora and Ofer Fein. (1999). Irony: Context and Salience, Metaphor and Symbol, 14(4):241-257. Yanfen Hao and Tony Veale. (2010). An Ironic Fist in a Velvet Glove: Creative Mis-Representation in the Construction of Ironic Similes. Minds and Machines, 20(4):483-488. Tony Veale and Yanfen Hao. (2010). Detecting Ironic Intent in Creative Comparisons. Proceedings of ECAI-2010, the 19th European conference on Artificial Intelligence.
  • 127. Tony Veale. (2013). Strategies and tactics for ironic subversion. In: Marta Dynel (Ed.), Developments in Linguistic Humour Theory. John Benjamins publishing company. Antonio Reyes, Paolo Rosso & Tony Veale. (2013). A multidimensional approach for detecting irony in twitter. Language Resources and Evaluation 47:239--268. Incongruity and Humour Jerry M. Suls. (1972). A Two-Stage Model for the Appreciation of Jokes and Cartoons: An information- processing analysis. In J.H. Goldstein & P.E. McGhee (Eds.), The Psychology of Humor. Academic Press. Victor Raskin. (1985). Semantic Mechanisms of Humor. D. Reidel. Graeme Ritchie. (1999). Developing the Incongruity-Resolution Theory. Proceedings of the AISB Symposium on Creative Language: Stories and Humour, (Edinburgh, Scotland). Elliott Oring. (2003). Engaging Humor. University of Illinois Press. Graeme Ritchie. (2003). The Linguistic Analysis of Jokes. Routledge Studies in Linguistics, 2. Routledge. Tony Veale, Kurt Feyaerts and Geert Brône. (2006). The cognitive mechanisms of adversarial humor. HUMOR: The International Journal of Humor Research, 19-3:305-339.
  • 128. N-Gram / Web / Corpus-derived models of linguistic norms Marti Hearst. (1992). Automatic acquisition of hyponyms from large text corpora. In Proc. of the 14th International Conference on Computational Linguistics, pp 539–545. Thorsten Brants and Alex Franz. (2006). Web 1T 5-gram Version 1. Linguistic Data Consortium. Adam Kilgarriff. (2007). Googleology is Bad Science. Computational Linguistics, 33(1):147-151. Marius Pasca & Benjamin Van Durme. (2007). What You Seek is What You Get: Extraction of Class Attributes from Query Logs. In Proc. of IJCAI-07, the 20th Int. Joint Conference on Artificial Intelligence. Zornitsa Kozareva, Eileen Riloff and Eduard Hovy. (2008). Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. In Proc. of the 46th Annual Meeting of the ACL, pp 1048-1056. Tony Veale, Guofu Li and Yanfen Hao. (2009). Growing Finely-Discriminating Taxonomies from Seeds of Varying Quality and Size. In Proc. of EACL’09, the 12th Conference of the European Chapter of the Association for Computational Linguistics pp. 835-842. Tony Veale and Guofu Li. (2011). Creative Introspection and Knowledge Acquisition. Proceedings of AAAI-11, The 25th AAAI Conference on Artificial Intelligence. Tony Veale. (2011). Creative Language Retrieval. Proceedings of ACL 2011, the 49th Annual Meeting of the Association for Computational Linguistics.
  • 129. Gozde Özbal & Carlo Strapparava. (2012). A computational approach to automatize creative naming. In Proc. of the 50th annual meeting of the Association of Computational Linguistics, Jeju, South Korea. Web-Services and Metaphor Thomas Erl. (2008). SOA: Principles of Service Design. Prentice Hall. Tony Veale & Guofu Li. (2012). Specifying Viewpoint and Information Need with Affective Metaphors: A System Demonstration of Metaphor Magnet. In Proceedings of ACL’2012, the 50th Annual Conference of the Association for Computational Linguistics, Jeju, South Korea. Tony Veale. (2013). A Service-Oriented Architecture for Computational Creativity. Journal of Computing Science and Engineering, 7(3):159-167. Tony Veale. (2013). Less Rhyme, More Reason: Knowledge-based Poetry Generation with Feeling, Insight and Wit. In Proceedings of ICCC 2013, the 4th International Conference on Computational Creativity. Sydney, Australia, June 2013. Computational Creativity and Metaphor Tony Veale. (2012). Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity. London: Bloomsbury Academic.
  • 130. For more content see RobotComix.com