AI: Artificial Intelligence
Reading response
Peter Dormer, “Craft and the Turing Test for Practical Thinking,” in The Challenge of Technology.
What is personal know-how? What is distributed knowledge?
How do they relate to the Turing test?
Give one example of your own of how these concepts matter today to artists and makers, or better yet, from your own experience.
Journal homework
Keep a record (text and drawings) of events in daily life where human and machine intersect and interact. Fill at least two pages with your observations.
Mary Shelley, Frankenstein, or The Modern Prometheus, 1818
Boris Karloff in Frankenstein (1931), directed by James Whale
Mary Shelley first published Frankenstein, or the Modern Prometheus in 1818. The novel allegorizes the Romantic obsession with discovering the power or principle of life. Ideas about a life power were consistent with the scientific understanding of the day. Erasmus Darwin spoke of an organizing “spirit of animation” in his Zoonomia; or, The Laws of Organic Life, in which he stated that “the world itself might have been generated, rather than created.”
Dr. Frankenstein picked all the parts for his monster based on their beauty, but when the monster comes to life, it is unbearably ugly. “I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body…the beauty of the dream vanished, and breathless horror and disgust filled my heart. Unable to endure the aspect of the being I had created, I rushed out of the room.”
Two definitions of AI:
“The use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular.”
--Margaret Boden
“The science of making machines do things that would require intelligence if done by humans.”
-Marvin Minsky
BOTH OF THESE STATEMENTS ORIGINATE IN ALAN TURING’S 1950 ARTICLE “COMPUTING MACHINERY AND INTELLIGENCE”
Working assumption: all cognition is computable
Question:
Is what’s not yet known to be computable actually computable?
If so, then what?
If not, why not, and what does that tell us about cognition?
Who was Alan Turing?
Born in London in 1912; attended King’s College, Cambridge, and Princeton University. He studied mathematics and logic (he hadn’t invented computer science yet).
At 23, he invented the “Turing machine” and published “On Computable Numbers” in 1936, the first and most important paper in computer science.
During WWII, he broke the German Enigma code using electromechanical devices, a precursor to the computer.
He laid the foundation for major subfields of computer science: the theory of computation, the design of hardware and software, and the study of artificial intelligence.
“The Imitation Game,”
aka
“The Turing Test”
In 1950, Turing posited a way to test machine intelligence: a person in a room before a screen corresponds with two unseen agents and, based on their responses, decides which is a machine and which is human. If the machine can pass for human, the machine is intelligent.
This is still a question. Is passing the Turing Test necessary for AI? Or desirable? Stuart Watt (1996) has proposed an “inverted Turing Test”: have the computer act as the interrogator, distinguishing between a machine and a human. Passing it would demonstrate that the computer has a theory of mind.
Currently, “reverse Turing Tests” (CAPTCHAs) are used when contacting companies or signing up for email services to filter out bots (spell out a word from deformed letters, or click on the images that contain signs).
Turing hypothesized that in fifty years (the year 2000), it would be “pointless” to ask whether machines can think; we would speak of machines thinking in the same way we say planes “fly” and submarines “swim.”
The idea of putting a computer through a test already implies
some agency on the part of the machine. It’s the same process
that Descartes recommended for determining if other beings
have a mind.
Blade Runner
What's more, the Turing Test has been referenced many times in
popular-culture depictions of robots and artificial life – perhaps
most notably inspiring the polygraph-like Voight-Kampff Test
that opened the movie Blade Runner.
But more often than not, these fictional representations misrepresent the Turing Test, turning it into a measure of whether a robot can pass for human. The original Turing Test wasn’t intended for that, but rather for deciding whether a machine can be considered to think in a manner indistinguishable from a human. And that, as even Turing himself discerned, depends on which questions you ask.
What’s more, there are many other aspects of humanity that the
test neglects – and that’s why several researchers have devised
new variants of the Turing Test that aren’t about the capacity to
hold a plausible conversation.
Take game-playing, for example. To rival or surpass human
cognitive powers in something more sophisticated than mere
number-crunching, Turing thought that chess might be a good
place to start – a game that seems to be characterised by
strategic thinking, perhaps even invention.
Deep Blue won its first game against a world champion on 10 February 1996, when it defeated Garry Kasparov in game one of a six-game match. However, Kasparov won three and drew two of the following five games, defeating Deep Blue 4–2. Deep Blue was then heavily upgraded and played Kasparov again in May 1997. Deep Blue won game six, taking the rematch 3½–2½ and becoming the first computer system to defeat a reigning world champion in a match under standard chess tournament time controls. Kasparov accused IBM of cheating and demanded a rematch; IBM refused and retired Deep Blue.
The “44th move” marks the moment when a human being (Kasparov) realised he was facing a superior intellect (Deep Blue).
The IBM vs. Kasparov match taught us not to be naïve about advancements in brute-force (calculative) computing or artificial intelligence. Kasparov’s frustration and anger following the loss to Deep Blue almost feels cute today (I say this as a huge fan of Garry; it almost pains me to write that sentence). It’s likely that we underestimate current advancements in a similar manner, out of sheer disbelief or ignorance, an incapacity to imagine a future where we work in an altogether different way. It’s a pity.
And we now have algorithms that are all but invincible (in the
long term) for bluffing games like poker – although this turns
out to be less psychological than you might think, and more a
matter of hard maths.
What about something more creative and ineffable, like music? Machines can fool us there too. There is now a music-composing computer called Iamus, which produces work sophisticated enough to be deemed worthy of attention by professional musicians. Iamus’s developer Francisco Vico of the University of Malaga and his colleagues carried out a kind of Turing Test by asking 250 subjects – half of them professional musicians – to listen to one of Iamus’s compositions alongside music in a comparable style by human composers, and decide which was which. “The computer piece raises the same feelings and emotions as the human one, and participants can’t distinguish them,” says Vico. “We would have obtained similar results by flipping coins.”
Then there’s the “Turing touch test”. Turing himself claimed
that even if a material were ever to be found that mimicked
human skin perfectly, there was little reason to try to make a
machine more human by giving it artificial flesh.
Our current motivation is a little different: We know that
prosthetic limbs that can pass for the real thing may lessen the
psychological and emotional impact that wearers report. To this
end, mechanical engineer John-John Cabibihan at Qatar
University and his colleagues are creating materials that look
and feel indistinguishable from human skin. Earlier this year, he
and his coworkers reported that they had created a soft silicone
polymer that, when heated close to body temperature with sub-
surface electronic heaters, closely resembled real skin. The
researchers created an artificial hand by coating a 3D-printed
resin skeleton with the electrically warmed polymer and used it
to touch the forearms of people while the hand itself was
concealed. The participants proved unable to make any reliable
distinction between the touch of the artificial hand and a real
one.
In 2014, a “supercomputer” program called “Eugene Goostman”—an impersonation of a wisecracking, thirteen-year-old Ukrainian boy—was declared the first machine to pass the Turing Test.
Kevin Warwick, a professor of cybernetics at the University of
Reading, who administered the test, wrote, “In the field of
Artificial Intelligence there is no more iconic and controversial
milestone than the Turing Test, when a computer convinces a
sufficient number of interrogators into believing that it is not a
machine but rather is a human.” Warwick went on to call
Goostman’s victory “a milestone” that “would go down in
history as one of the most exciting” moments in the field of
artificial intelligence.
Developed by PrincetonAI (a small team of programmers and
technologists not affiliated with Princeton University) and
backed by a computer and some gee-whiz algorithms, "Eugene
Goostman" was able to fool the Turing Test 2014 judges 33% of
the time — good enough to surpass the threshold set by
computer scientist Alan Turing in 1950. Turing believed that by 2000, computers would be able to fool humans into believing they were flesh and blood at least 30% of the time in five-minute text-based conversations. Depending on whom
you talk to, Goostman's achievement is either a huge turning
point for technology, or just another blip.
Certainly, it doesn’t obviously justify claims that the Turing Test
has been passed. As computer scientist Scott Aaronson of the
Massachusetts Institute of Technology has said, “Turing’s
famous example dialogue, involving Mr. Pickwick and
Christmas, clearly shows that the kind of conversation Turing
had in mind was at a vastly higher level than what any chatbot,
including Goostman, has ever been able to achieve.”
More to the point, Aaronson’s splendid conversation with
Eugene, after he decided to probe further into all the publicity
surrounding “him”, demonstrates the limitations rather
graphically:
Scott: … Do you understand why I’m asking such basic
questions? Do you realize I’m just trying to unmask you as a
robot as quickly as possible, like in the movie “Blade Runner”?
Eugene: … wait
Scott: Do you think your ability to fool unsophisticated judges
indicates a flaw with the Turing Test itself, or merely with the
way people have interpreted the test?
Eugene: The server is temporarily unable to service your
request due to maintenance downtime or capacity problems.
Please try again later.
Two theories of AI:
Knowledge base
Neural networks
The knowledge-base idea—filling a machine with encyclopedic knowledge—is the “top-down” method. It establishes a base of facts and rules from which the machine can operate.
The neural-net idea constructs a system that analyzes huge amounts of data. This is the “bottom-up” method. For example, by analyzing millions of images of cats, a neural net will “learn” to recognize a cat.
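The contrast can be sketched in a few lines of Python. This is a toy of my own devising, not any real system: a hand-written rule stands in for the knowledge base, while a single perceptron (the simplest neural unit) learns the same “is it a cat?” judgment from labeled examples alone.

```python
# Toy contrast between the two approaches. The features and data are
# invented for illustration; real systems are vastly larger.

# Top-down (knowledge base): an expert writes the rule in explicitly.
def rule_based_is_cat(has_whiskers, says_meow):
    return 1 if has_whiskers and says_meow else 0

# Bottom-up (neural net): a one-neuron perceptron learns the same rule
# from labeled examples alone.
def train_perceptron(examples, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when the guess was right
            w1 += lr * err * x1         # nudge weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Features: (has_whiskers, says_meow); label 1 = cat.
data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w1, w2, b = train_perceptron(data)

def learned_is_cat(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After training, learned_is_cat agrees with rule_based_is_cat on all four inputs, but nobody ever wrote the rule down for it; it was extracted from the data.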
Google’s Deep Dream
Google’s Deep Dream is an example of a neural net. Given an input image, it analyzes and classifies the image according to the millions of images it has seen before. The results so far have been kaleidoscopic, psychedelic outputs that cram as much information into one space as possible. Its job is essentially to find signal in noise.
https://www.youtube.com/watch?v=egk683bKJYU (see especially 18:00–25:00 for the Deep Dream architecture; 32:30–36:00, the “shore of portraits”)
Asked to find bananas, Google’s Deep Dream will find bananas within a field of noise.
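Deep Dream’s core trick can be reduced to a toy sketch (my own illustration, not Google’s code): hold the network fixed and run gradient ascent on the image itself, nudging each pixel so that a chosen neuron’s response keeps growing.

```python
# Toy Deep Dream: instead of training the network to fit an image, we
# hold the network fixed and adjust the *image* to excite a neuron.
# The one-neuron "network" and its weights are invented for this sketch.

def activation(pixels, weights):
    # How strongly the neuron "sees" its pattern in the image.
    return sum(p * w for p, w in zip(pixels, weights))

def dream_step(pixels, weights, step=0.1):
    # For a linear neuron, d(activation)/d(pixel_i) = weights[i],
    # so gradient ascent moves each pixel along its weight.
    return [p + step * w for p, w in zip(pixels, weights)]

weights = [1.0, -2.0, 0.5]     # the pattern the neuron is tuned to
image = [0.0, 0.0, 0.0]        # start from a blank "image"

before = activation(image, weights)
for _ in range(10):
    image = dream_step(image, weights)
after = activation(image, weights)
# The image has drifted toward the neuron's preferred pattern, which is
# why real Deep Dream images fill up with the classifier's favorite
# shapes (eyes, dogs, bananas).
```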
SketchRNN (recurrent neural network)
Google’s Project Magenta includes SketchRNN, in which the network has “learned” to draw. It is interactive: the user starts a drawing and the network finishes it.
What is the most important difference between humans and artificial intelligence: what makes us human? Is it thinking? Learning? Creativity? Emotion? Would it be possible for machines to achieve this function? How would it be tested? Is there a good reason to create machines that can perform in this way? Conversely, is there a reason to prevent this technology?
Terence Broad, Blade Runner-Autoencoded
and
Koyaanisqatsi Autoencoded Through Blade Runner
Is this a machine “memory”?
Blade Runner Autoencoded was a research project for Broad’s dissertation in the Creative Computing program at Goldsmiths. He trained a type of artificial neural network called an autoencoder to reconstruct individual frames from Blade Runner, which he then re-sequenced into a video. The technique was first proposed in 2015 by Larsen et al. at the International Conference on Machine Learning (ICML).
Running Koyaanisqatsi through the neural net trained on Blade Runner results in a strange merging of the two.
https://arxiv.org/pdf/1512.09300.pdf
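The bottleneck at the heart of an autoencoder can be shown in miniature. This sketch is illustrative only, and unlike Broad’s model its encoder and decoder are hand-fixed rather than learned: whatever detail doesn’t fit through the bottleneck is simply lost, which is why reconstructed frames look smoothed and dream-like.

```python
# Miniature autoencoder: encode squeezes a "frame" through a bottleneck
# half its size; decode must reconstruct from the bottleneck alone.
# Hand-fixed for illustration; a real autoencoder learns both halves.

def encode(frame):
    # Bottleneck: keep only the average of each pair of pixels.
    return [(frame[i] + frame[i + 1]) / 2 for i in range(0, len(frame), 2)]

def decode(code):
    # Spread each stored average back over two pixels.
    out = []
    for v in code:
        out += [v, v]
    return out

frame = [0.0, 1.0, 0.25, 0.75, 0.5, 0.5]   # a 6-"pixel" toy frame
code = encode(frame)                        # only 3 numbers survive
recon = decode(code)                        # sharp detail is gone
```

Here every sharp transition averages out to 0.5: the reconstruction preserves overall brightness but none of the detail, a tiny version of the smeared, half-remembered look of Broad’s reconstructed frames.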
From Larsen et al., “Autoencoding beyond pixels using a learned similarity metric,” 2015
Image illustrating an autoencoding process: dataset samples are reconstructed with visual attribute vectors added to their latent representations
25
Terence Broad, Topological Visualization of Convolutional
Neural Network
Open in Safari: http://terencebroad.com/convnetvis/vis.html
This is a simplification of the connections between nodes in a neural network. The algorithm is a “recursive depth first tree search”: starting with an input value, the network classifies the input layer by layer. Convolutional networks are what made the machine generation of images possible.
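The “recursive depth first tree search” can be sketched directly (the five-node graph below is invented; Broad’s actual network is far larger): starting from the input node, the traversal visits a node, then recurses into each of its children, backtracking when a branch is exhausted.

```python
# Sketch of a recursive depth-first walk over a (tiny, invented)
# network graph, the kind of traversal used to lay out the nodes.

network = {
    "input":  ["conv1a", "conv1b"],
    "conv1a": ["fc"],
    "conv1b": ["fc"],
    "fc":     ["output"],
    "output": [],
}

def depth_first(node, graph, visit, seen=None):
    seen = set() if seen is None else seen
    if node in seen:              # a shared node is drawn only once
        return
    seen.add(node)
    visit(node)                   # e.g. draw this node on screen
    for child in graph[node]:     # then descend into each branch
        depth_first(child, graph, visit, seen)

order = []
depth_first("input", network, order.append)
# order == ['input', 'conv1a', 'fc', 'output', 'conv1b']
```

Note how "fc" is reached twice but visited once; the `seen` set is what keeps a recursive search over a graph with shared nodes from looping.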
Trevor Paglen and Kronos Quartet, Sight Machine, 2017
Paglen is interested in machine vision and surveillance: these are images not meant for us, but for computers. He explores their mechanics and their implications for aesthetics, as well as their sociological impact.
In Sight Machine, Paglen worked with Obscura Digital to track the Kronos Quartet in real time using open-source neural-net software. Paglen wanted to reveal how the networks “see” and process images.
https://www.youtube.com/watch?v=HEI8cuGKiNk
https://www.wired.com/2017/04/unsettling-performance-
showed-world-ais-eyes/
Paglen, Machine-Readable Hito, 2017, part of “A Study of
Invisible Images”
https://qz.com/1103545/macarthur-genius-trevor-paglen-
reveals-what-ai-sees-in-the-human-world/
Paglen turns a face-analyzing algorithm on fellow artist Hito
Steyerl. In hundreds of snapshots, she grimaces, laughs, yawns,
shouts, rages, and smiles. Each picture is annotated with the
AI’s earnest guesstimate of Steyerl’s age, gender, and emotional
state. In one instance, she is evaluated as 74% female.
It’s an absurd but simple way to raise a complicated question:
Should computers even attempt to measure existentially
indivisible characteristics like sex, gender, and personality—
and without asking their subject? (Secondarily, what does 100%
female even look like?)
Computers already and increasingly make decisions about you—
which advertisement to serve, whether or not you’ve committed
a prior crime—based on vast banks of training data and image
libraries basically inaccessible to anyone not already literate in
machine-vision research. That could soon complicate traditional
ideas of accountability: In the future, humans working with
computer-vision technologies in corporations and law
enforcement agencies may not themselves be capable of tracing
back how an AI made its decision, much less be able to make
that process transparent to consumers and citizens.
William Latham, Mutator, 2014
William Latham calls this series of computer animations Organic Art. He builds generative algorithms based on geometric patterns found in life to create “living” forms.
Latham trained as an artist and became a Research Fellow at the IBM UK Scientific Centre.