Article #1
May 12, 2009
Plugging Holes in the Science of Forensics
By HENRY FOUNTAIN
It was time, the panel of experts said, to put more science in forensic science.
A report in February by a committee of the National Academy of Sciences found “serious problems” with much
of the work performed by crime laboratories in the United States. Recent incidents of faulty evidence analysis
— including the case of an Oregon lawyer who was arrested by the F.B.I. after the 2004 Madrid terrorist
bombings based on fingerprint identification that turned out to be wrong — were just high-profile examples of
wider deficiencies, the committee said. Crime labs were overworked, there were few certification programs for
investigators and technicians, and the entire field suffered from a lack of oversight.
But perhaps the most damning conclusion was that many forensic disciplines — including analysis of
fingerprints, bite marks and the striations and indentations left by a pry bar or a gun’s firing mechanism —
were not grounded in the kind of rigorous, peer-reviewed research that is the hallmark of classic science. DNA
analysis was an exception, the report noted, in that it had been studied extensively. But many other
investigative tests, the report said, “have never been exposed to stringent scientific scrutiny.”
While some forensic experts took issue with that conclusion, many welcomed it. And some scientists are
working on just the kind of research necessary to improve the field. They are refining software and studying
human decision-making to improve an important aspect of much forensic science — the ability to recognize
and compare patterns.
The report was “basically saying what many of us have been saying for a long time,” said Lawrence Kobilinsky,
chairman of the department of sciences at John Jay College of Criminal Justice in New York. “There are a lot of
areas in forensics that need improvement.”
Barry Fisher, a past president of the American Academy of Forensic Sciences and a former director of the crime
laboratory at the Los Angeles County Sheriff’s Department, said he and others had been pushing for this kind
of independent assessment for years. “There needs to be a demonstration that this stuff is reliable,” he said.
It’s not that there hasn’t been any research in forensic science. But over the years much of it has been done in
crime labs themselves. “It hasn’t gotten to the level where they can state findings in a rigorous scientific way,”
said Constantine Gatsonis, director of the Center for Statistical Sciences at Brown University and co-chairman
of the National Academy of Sciences committee. And rather than being teased out in academic papers and
debated at scientific conferences, “a lot of this forensic stuff is being argued in the courtroom,” Mr. Fisher said.
“That’s not the place to validate any kind of scientific information.”
Much forensic research has been geared to improving technologies and techniques. These studies can result in
the kinds of gee-whiz advances that may show up in the next episode of the “C.S.I.” series — a technique to
obtain fingerprints from a grocery bag or other unlikely source, for example, or equipment that enables
analyses of the tiniest bits of evidence.
This kind of work is useful, Dr. Kobilinsky said, “but it doesn’t solve the basic problem.”
DNA analysis came out of the biological sciences, and much money and time has been spent developing the
field, resulting in a large body of peer-reviewed research. So when a DNA expert testifies in court that there is a
certain probability that a sample comes from a suspect, that claim is grounded in science.
As evidence to be analyzed, DNA has certain advantages. “DNA has a particular structure, and can be
digitized,” Dr. Gatsonis said. So scientists can agree, for example, on how many loci on a DNA strand to use in
their analyses, and computers can do the necessary computations of probability.
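The probability arithmetic behind such DNA testimony can be sketched with the textbook product rule; the loci and allele frequencies below are invented for illustration, not real population data:

```python
# Sketch: how a DNA random-match probability can be computed with the
# product rule across independent loci. Allele frequencies are invented
# for illustration, not real population data.

def genotype_frequency(p, q):
    """Hardy-Weinberg frequency of a genotype with allele freqs p, q."""
    return p * p if p == q else 2 * p * q

# Hypothetical (allele1_freq, allele2_freq) at each tested locus.
loci = [(0.11, 0.07), (0.21, 0.21), (0.05, 0.13)]

match_probability = 1.0
for p, q in loci:
    match_probability *= genotype_frequency(p, q)

print(f"random match probability: 1 in {1 / match_probability:,.0f}")
```

Multiplying per-locus frequencies like this assumes the loci are statistically independent, which is one reason agreeing on how many and which loci to use matters.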
“Fingerprints are a lot more complicated,” Dr. Gatsonis said. “There are a lot of different ways you can select
features and make comparisons.” A smudged print may have only a few ridge endings or other points for
comparison, while a clear print may have many more. And other factors can affect prints, including the
material they were found on and the pressure of the fingers in making them.
Sargur N. Srihari, an expert in pattern recognition at the University at Buffalo, part of the New York state
university system, is trying to quantify the uncertainty. His group did much of the research that led to postal
systems that can recognize handwritten addresses on envelopes, and he works with databases of fingerprints to
derive probabilities of random correspondence between two prints.
Most features on a print are usually represented by X and Y coordinates and by an angle that represents the
orientation of the particular ridge where the feature is located. A single print can have 40 or more comparable
features.
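That (x, y, angle) representation is easy to make concrete. The toy matcher below greedily pairs features that agree within a distance and angle tolerance; it is a sketch of the idea, not an actual fingerprint-matching algorithm, and the coordinates are invented:

```python
# Sketch of the minutia representation described above: each feature is
# an (x, y) position plus a ridge-orientation angle. The matching rule
# (greedy pairing within a distance/angle tolerance) is a toy
# illustration, not a real AFIS algorithm.
import math
from typing import NamedTuple

class Minutia(NamedTuple):
    x: float
    y: float
    theta: float  # ridge orientation in degrees

def count_matches(a, b, dist_tol=10.0, angle_tol=15.0):
    """Count minutiae in `a` that pair with an unused minutia in `b`."""
    unused = list(b)
    matches = 0
    for m in a:
        for n in unused:
            d = math.hypot(m.x - n.x, m.y - n.y)
            dtheta = abs(m.theta - n.theta) % 360
            dtheta = min(dtheta, 360 - dtheta)
            if d <= dist_tol and dtheta <= angle_tol:
                unused.remove(n)  # each feature may be claimed only once
                matches += 1
                break
    return matches

print_a = [Minutia(12, 40, 90), Minutia(55, 20, 130), Minutia(80, 75, 45)]
print_b = [Minutia(14, 38, 95), Minutia(81, 70, 50)]
print(count_matches(print_a, print_b))  # pairs the two nearby features
```

A smudged print simply yields fewer minutiae to feed into such a comparison, which is why clarity matters so much.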
Dr. Srihari uses relatively small databases, including an extreme one that contains fingerprints from dozens of
identical twins (so the probability of matches is high), and employs the results to further refine mathematical
tools for comparison that would work with larger populations.
“These numbers are not easy to come by at this point,” he said. The goal is not individualization — matching
two prints with absolute certainty — but coming up with firm probabilities that would be very useful in legal
proceedings.
Other researchers are compiling databases of their own. Nicholas D. K. Petraco, an assistant professor at John
Jay College, is studying microscopic tool marks of the kind made by a screwdriver when a burglar jimmies a
window. It has been hypothesized that no two screwdrivers leave exactly the same pattern of marks, although
that has never been proved. So Dr. Petraco is systematically making marks in jeweler’s wax and other
materials, creating images of them under a stereo microscope and quantifying the details, assembling a
database that can eventually be mined to determine probabilities that a mark matches a certain tool.
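Mining such a database might, in its simplest form, look like nearest-neighbor attribution over feature vectors. Everything below — the features, the numbers, the tool names — is invented to illustrate the shape of the computation; real work would report a probability, not just a closest match:

```python
# Sketch of mining a tool-mark database: each mark is reduced to a small
# feature vector (numbers here are invented, e.g. striation-spacing
# statistics), and an unknown mark is attributed to the closest
# reference tool by Euclidean distance.
import math

reference_marks = {
    "screwdriver_A": [2.1, 0.8, 5.5],
    "screwdriver_B": [3.9, 1.2, 4.1],
    "pry_bar_C": [7.0, 2.5, 1.0],
}

def nearest_tool(unknown):
    """Return the reference tool whose mark features are closest."""
    return min(reference_marks,
               key=lambda k: math.dist(unknown, reference_marks[k]))

print(nearest_tool([2.3, 0.9, 5.2]))  # closest to screwdriver_A
```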
Dr. Petraco, a chemist with a strong background in computer science, looks to industry for ideas about pattern
recognition — the tools that a company like Netflix uses, for example, to classify people by the kinds of movies
they like. “A lot of computational machinery goes into making those kinds of decisions,” he said.
He figures that if something works for industry, it will work for forensic science. “You don’t want to invent
anything new,” he said, because that raises legal issues of admissibility of evidence.
The work takes time, but the good news is that the data stays around forever. So as software improves, the
probabilities should get more accurate. “Algorithms and data comparison evolve over time,” Dr. Petraco said.
But it may not be possible to develop useful databases in some disciplines — bite mark analysis, for example.
“Using a screwdriver, that’s very straightforward and simple,” said Ira Titunik, a forensic odontologist and
adjunct professor at John Jay College. But bites involve numerous teeth, and there are other factors, including
condition of the skin, that may make it difficult to quantify them for purposes of determining probabilities.
A few researchers are looking at how errors creep into forensic analysis. The National Institute of Standards
and Technology recently established a working group on fingerprints, with statisticians, psychologists and
others, “to try to understand the circumstances that lead to human error,” said Mark Stolorow, director of the
Office of Law Enforcement Standards at the institute.
In Britain, Itiel Dror, a psychologist who studies decision-making processes, is already looking at human
factors. “I like to say the mind is not a camera, objectively and passively recording information,” said Dr. Dror,
who has a consulting firm and is affiliated with University College London. “The brain is an active and dynamic
device.”
He has conducted studies that show that when working on an identification, fingerprint examiners can be
influenced by what else they know about a case. In one experiment, he found that the same examiner can come
to different conclusions about the same fingerprint, if the context is changed over time.
The same kinds of contextual biases arise with other decision-makers, said Dr. Dror, who works with the
military and with financial and medical professionals. He thinks one reason forensic examiners often do not
acknowledge that they make errors is that in these other fields, the mistakes are obvious. “In forensics, they
don’t really see it,” he said. “People go to jail.”
Forensics experts say the need for research like Dr. Dror’s and Dr. Srihari’s does not mean that disciplines like
fingerprint analysis will turn out to be invalid. “I have no doubt that fingerprint evidence and firearms
evidence, once looked into by the appropriate research entities, are going to be shown to be very reliable and
good,” said Mr. Fisher, the former American Academy of Forensic Sciences president.
Dr. Kobilinsky said people should not jump to the conclusion that forensic science is bad science. “There’s a lot
of experience and knowledge that goes into somebody’s expertise,” he said.
“It’s not junk science. But that doesn’t mean it shouldn’t be improved.”
Article #3
Judging Honesty by Words, Not Fidgets
By BENEDICT CAREY
Before any interrogation, before the two-way mirrors or bargaining or good-cop, bad-cop routines, police officers
investigating a crime have to make a very tricky determination: Is the person I’m interviewing being honest, or spinning
fairy tales?
The answer is crucial, not only for identifying potential suspects and credible witnesses but also for the fate of the person
being questioned. Those who come across poorly may become potential suspects and spend hours on the business end of a
confrontational, life-changing interrogation — whether or not they are guilty.
Until recently, police departments have had little solid research to guide their instincts. But now forensic scientists have
begun testing techniques they hope will give officers, interrogators and others a kind of honesty screen, an improved
method of sorting doctored stories from truthful ones.
The new work focuses on what people say, not how they act. It has already changed police work in other countries, and
some new techniques are making their way into interrogations in the United States.
In part, the work grows out of a frustration with other methods. Liars do not avert their eyes in an interview on average
any more than people telling the truth do, researchers report; they do not fidget, sweat or slump in a chair any more often.
They may produce distinct, fleeting changes in expression, experts say, but it is not clear yet how useful it is to analyze
those.
Nor have technological advances proved very helpful. No brain-imaging machine can reliably distinguish a doctored story
from the truthful one, for instance; ditto for polygraphs, which track changes in physiology as an indirect measure of lying.
“Focusing on content is a very good idea,” given the limitations of what is currently being done, said Saul Kassin, a
professor of psychology at John Jay College of Criminal Justice.
One broad, straightforward principle has changed police work in Britain: seek information, not a confession. In the mid-
1980s, following cases of false confessions, British courts prohibited officers from using some aggressive techniques, like
lying about evidence to provoke suspects, and required that interrogations be taped. Officers now work to gather as much
evidence as possible before interviewing a suspect, and they make no real distinction between this so-called investigative
interview and an interrogation, said Ray Bull, a professor of forensic psychology at the University of Leicester.
“These interviews sound much more like a chat in a bar,” said Dr. Bull, who, with colleagues like Aldert Vrij at the
University of Portsmouth, has pioneered much of the research in this area. “It’s a lot like the old ‘Columbo’ show, you
know, where he pretends to be an idiot but he’s gathered a lot of evidence.”
Dr. Bull, who has analyzed scores of interrogation tapes, said the police had reported no drop-off in the number of
confessions, nor major miscarriages of justice arising from false confessions. In one 2002 survey, researchers in Sweden
found that less-confrontational interrogations were associated with a higher likelihood of confession.
Still, forensic researchers have not abandoned the search for verbal clues in interrogations. In analyses of what people say
when they are lying and when they are telling the truth, they have found tantalizing differences.
Kevin Colwell, a psychologist at Southern Connecticut State University, has advised police departments, Pentagon officials
and child protection workers, who need to check the veracity of conflicting accounts from parents and children. He says
that people concocting a story prepare a script that is tight and lacking in detail.
“It’s like when your mom busted you as a kid, and you made really obvious mistakes,” Dr. Colwell said. “Well, now you’re
working to avoid those.”
By contrast, people telling the truth have no script, and tend to recall more extraneous details and may even make
mistakes. They are sloppier.
Psychologists have long studied methods for amplifying this contrast. Drawing on work by Dr. Vrij and Dr. Marcia K.
Johnson of Yale, among others, Dr. Colwell and Dr. Cheryl Hiscock-Anisman of National University in La Jolla, Calif.,
have developed an interview technique that appears to help distinguish a tall tale from a true one.
The interview is low-key but demanding. First, the person recalls a vivid memory, like the first day at college, so
researchers have a baseline reading for how the person communicates. The person then freely recounts the event being
investigated, recalling all that happened. After several pointed questions (“Would a police officer say a crime was
committed?” for example), the interviewee describes the event in question again, adding sounds, smells and other details.
Several more stages follow, including one in which the person is asked to recall what happened in reverse.
In several studies, Dr. Colwell and Dr. Hiscock-Anisman have reported one consistent difference: People telling the truth
tend to add 20 to 30 percent more external detail than do those who are lying. “This is how memory works, by
association,” Dr. Hiscock-Anisman said. “If you’re telling the truth, this mental reinstatement of contexts triggers more
and more external details.”
Not so if you’ve got a concocted story and you’re sticking to it. “It’s the difference between a tree in full flower in the
summer and a barren stick in winter,” said Dr. Charles Morgan, a psychiatrist at the National Center for Post-Traumatic
Stress Disorder, who has tested it for trauma claims and among special-operations soldiers.
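The statistic being described — how much extra external detail a second retelling adds over the first — is simple to compute once details have been counted; the counts here are invented:

```python
# Toy illustration of the detail-count comparison described above.
# The interesting quantity is the percent increase in external detail
# between a first and second retelling; counts are invented.

def detail_increase(first_count, second_count):
    """Percent increase in external details between retellings."""
    return 100.0 * (second_count - first_count) / first_count

# Hypothetical interviews: truth-tellers tend to land in the 20-30% range.
truthful = detail_increase(40, 50)   # 25% more detail
deceptive = detail_increase(40, 42)  # 5% more detail
print(truthful, deceptive)
```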
In one recent study, the psychologists had 38 undergraduates enter a professor’s office and either steal an exam or replace
one that had been stolen. A week later, half told the truth in this structured interview, and the other half tried not to
incriminate themselves by lying in the interview. A prize of $20 was offered to the most believable liars.
The researchers had four trained raters who did not know which students were lying analyze the transcripts for response
length and richness of added detail, among other things. They correctly categorized 33 of the 38 stories as truthful or
deceitful.
The study, whose co-authors were Amina Memon, Laura Taylor and Jessica Prewett, is one of several showing positive
results of about 75 percent correct or higher.
This summer, Dr. Colwell and Dr. Hiscock-Anisman are scheduled to teach the technique at the San Diego Police
Department, which has a force of some 2,000 officers. “You really develop your own antenna when interviewing people
over the years,” said Chris Ellis, a lieutenant on the force who invited the researchers to give training. “But we’re very open
to anything that will make our jobs easier and make us more accurate.”
This approach, as promising as it is, has limitations. It applies only to a person talking about what happened during a
specific time — not to individual facts, like, “Did you see a red suitcase on the floor?” It may be poorly suited, too, for
someone who has been traumatized and is not interested in talking, Dr. Morgan said. And it is not likely to flag the person
who changes one small but crucial detail in a story — “Sure, I was there, I threw some punches, but I know nothing about
no knife” — or, for that matter, the expert or pathological liar.
But the science is evolving fast. Dr. Bull, Dr. Vrij and Par-Anders Granhag at Goteborg University in Sweden are finding
that challenging people with pieces of previously gathered evidence, gradually introduced throughout an investigative
interview, increases the strain on liars.
And it all can be done without threats or abuse, which is easier on officers and suspects. Detective Columbo, it turns out,
was not just made for TV.
Article #4
Tracking Cyberspies Through the Web Wilderness
By JOHN MARKOFF
For old-fashioned detectives, the problem was always acquiring information. For the cybersleuth, hunting evidence in the
data tangle of the Internet, the problem is different.
“The holy grail is how can you distinguish between information which is garbage and information which is valuable?” said
Rafal Rohozinski, a University of Cambridge-trained social scientist involved in computer security issues.
Beginning eight years ago he co-founded two groups, Information Warfare Monitor and Citizen Lab, which both have
headquarters at the University of Toronto, with Ronald Deibert, a University of Toronto political scientist. The groups
pursue that grail and strive to put investigative tools normally reserved for law enforcement agencies and computer
security investigators at the service of groups that do not have such resources.
“We thought that civil society groups lacked an intelligence capacity,” Dr. Deibert said.
They have had some important successes. Last year Nart Villeneuve, 34, an international relations researcher who works
for the two groups, found that a Chinese version of Skype software was being used for eavesdropping by one of China’s
major wireless carriers, probably on behalf of Chinese government law enforcement agencies.
This year, he helped uncover a spy system, which he and his fellow researchers dubbed Ghostnet, that looked like a
Chinese-government-run spying operation targeting mostly South Asian government-owned computers around the world.
Both discoveries were the result of a new genre of detective work, and they illustrate the strengths and the limits of
detective work in cyberspace.
The Ghostnet case began when Greg Walton, the editor of Infowar Monitor and a member of the research team, was
invited to audit the Dalai Lama’s office network in Dharamsala, India. Under constant attack — possibly from Chinese-
government-sponsored computer hackers — the exiles had turned to the Canadian researchers to help combat the digital
spies that had been planted in their communications system over several years.
Both at the Dalai Lama’s private office and at the headquarters of the exiled Tibetan government, Mr. Walton used a
powerful software program known as Wireshark to capture the Internet traffic to and from the exile groups’ computers.
Wireshark is an open-source software program that is freely available to computer security investigators. It is
distinguished by its ease of use and by its ability to sort out and decode hundreds of common Internet protocols that are
used for different types of data communications. It is known as a sniffer, and such software programs are essential for the
sleuths who track cybercriminals and spies on the Internet.
Wireshark makes it possible to watch an unencrypted Internet chat session while it is taking place, or in the case of Mr.
Walton’s research in India, to watch as Internet attackers copied files from the Dalai Lama’s network.
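A sniffer like Wireshark works by decoding raw packet bytes field by field against known protocol layouts. A minimal sketch of that decoding step, for just the fixed IPv4 header, using a fabricated sample packet:

```python
# Minimal sketch of what a sniffer does under the hood: decode raw
# packet bytes field by field. Only the fixed 20-byte IPv4 header is
# parsed; the sample packet is fabricated for illustration.
import struct
import socket

def parse_ipv4_header(raw: bytes):
    """Decode the fixed 20-byte IPv4 header into a dict of fields."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Fabricated header: version 4, IHL 5, TTL 64, TCP, 192.168.1.2 -> 10.0.0.5
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.2"),
                     socket.inet_aton("10.0.0.5"))
fields = parse_ipv4_header(sample)
print(fields["src"], "->", fields["dst"], "proto", fields["protocol"])
```

Real sniffers apply hundreds of such decoders in sequence, one per protocol layer, which is exactly the capability the article attributes to Wireshark.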
In almost every case, when the Ghostnet system administrators took over a remote computer they would install a
clandestine Chinese-designed software program called GhOst RAT — for Remote Administration Terminal. GhOst RAT
permits the control of a distant computer via the Internet, to the extent of being able to turn on audio and video recording
features and capture the resulting files. The operators of the system — whoever they were — in addition to stealing digital
files and e-mail messages, could transform office PCs into remote listening posts.
The spying was of immediate concern to the Tibetans, because the stolen documents related to positions the Dalai
Lama’s political representatives were planning to take in negotiations the group was engaged in.
After returning to Canada, Mr. Walton shared his captured data with Mr. Villeneuve and the two used a second tool to
analyze the information. They uploaded the data into a visualization program that had been provided to the group by
Palantir Technologies, a software company that has developed a program that allows investigators to “fuse” large data sets
to look for correlations and connections that may otherwise go unnoticed.
The company was founded several years ago by a group of technologists who had pioneered fraud detection techniques at
PayPal, the Silicon Valley online payment company. Palantir has developed a pattern recognition tool that is used both by
intelligence agencies and financial services companies, and the Citizen Lab researchers have modified it by adding
capabilities that are specific to Internet data.
Mr. Villeneuve was using this software to view these data files in a basement at the University of Toronto when he noticed
a seemingly innocuous but puzzling string of 22 characters reappearing in different files. On a hunch, he entered the string
into Google’s search engine and was instantly directed to similar files stored on a vast computerized surveillance system
located on Hainan Island off the coast of China. The Tibetan files were being copied to these computers.
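The step that broke the case — spotting a fixed-length string recurring across captured files — can be sketched as a brute-force scan; the 22-byte marker and sample files below are fabricated:

```python
# Sketch of the pattern-spotting step described above: find a fixed-
# length byte string that recurs across several captured files. The
# length (22) matches the anecdote; the marker and files are fabricated.
from collections import Counter

def recurring_strings(files, length=22, min_files=2):
    """Return substrings of `length` bytes seen in >= `min_files` files."""
    seen = Counter()
    for data in files:
        chunks = {data[i:i + length] for i in range(len(data) - length + 1)}
        seen.update(chunks)  # count each distinct chunk once per file
    return [s for s, n in seen.items() if n >= min_files]

marker = b"gst-update-2009-v1.22x"   # a fabricated 22-byte token
files = [
    b"...header..." + marker + b"...payload one...",
    b"unrelated noise " + marker + b" trailing bytes",
    b"a file without the token at all",
]
hits = recurring_strings(files)
print(hits)  # the fabricated marker appears in two files
```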
But the researchers were not able to determine with certainty who controlled the system. The system could have been
created by so-called patriotic hackers, independent computer activists in China whose actions are closely aligned with, but
independent from, the Chinese government. Or it could have been created and run by Internet spies in a third country.
Indeed, the discovery raised as many questions as it answered. Why was the powerful eavesdropping system not
password-protected, a weakness that made it easy for Mr. Villeneuve to determine how the system worked? And why,
among the more than 1,200 compromised government computers representing 103 countries, were there no United States
government systems? These questions remain.
Cyberforensics presents immense technical challenges that are complicated by the fact that the Internet effortlessly spans
both local and national government boundaries. It is possible for a criminal, for example, to conceal his or her activities by
connecting to a target computer through a string of innocent computers, each connected to the Internet on different
continents, making law enforcement investigations time consuming or even impossible.
The most vexing issue facing both law enforcement and other cyberspace investigators is this question of “attribution.”
The famous New Yorker magazine cartoon in which a dog sits at a computer keyboard and points out to a companion, “On
the Internet, nobody knows you’re a dog,” is no joke for cyberdetectives.
To deal with the challenge, the Toronto researchers are pursuing what they describe as a fusion methodology, in which
they look at Internet data in the context of real world events.
“We had a really good hunch that in order to understand what was going on in cyberspace we needed to collect two
completely different sets of data,” Mr. Rohozinski said. “On one hand we needed technical data generated from Internet
log files. The other component is trying to understand what is going on in cyberspace by interviewing people, and by
understanding how institutions work.”
Veteran cybersecurity investigators agree that the best data detectives need to go beyond the Internet. They may even need
to wear out some shoe leather.
“We can’t become myopic about our tools,” said Kent Anderson, a security investigator who is a member of the security
management committee of the Information Systems Audit and Control Association. “I continually bump up against good
technologists who know how to use tools, but who don’t understand how their tools fit into the bigger picture of the
investigation.”
This article has been revised to reflect the following correction:
Correction: May 15, 2009
An article on Tuesday about the investigative tools being used in computer security operations misspelled the surname of
a political scientist at the University of Toronto. He is Ronald Deibert, not Diebert. It also misstated the surname of the
editor of Infowar Monitor. He is Greg Walton, not Watson.