Abstract
This forecasting methodology identifies 68 indicators of terrorism and employs proven analytic techniques in a systematic process that safeguards against 36 of the 42 common warning pitfalls that experts have identified throughout history. The complete version of this research provides: 1) a step-by-step explanation of how to forecast terrorism, 2) an evaluation of the forecasting system against the 42 common warning pitfalls that have caused warning failures in the past, and 3) recommendations for implementation. The associated CD contains the website interface to this methodology for forecasting terrorist attacks. This methodology could be applied to any intelligence topic (not just terrorism) by simply changing the list of indicators. The complete version of this research is available in Forecasting Terrorism: Indicators and Proven Analytic Techniques, Scarecrow Press, Inc., ISBN 0-8108-5017-6, for which 100% of the author royalties are being donated to the non-profit National Memorial Institute for the Prevention of Terrorism (www.mipt.org) and the Joint Military Intelligence College Foundation, which supports the Defense Intelligence Agency's Joint Military Intelligence College (www.dia.mil/Jmic).
1. Introduction: Correcting Misconceptions
Important lessons have arisen from the study of intelligence warning failures, but some common misconceptions have prevented the Intelligence Community from recognizing and incorporating these lessons.
Analysis, Rather Than Collection, Is the Most Effective Way to Improve Warning. Efforts to improve warning normally focus on intelligence collection rather than analysis (Kam 1988). That trend has continued since September 11th (Anonymous intelligence source 2002). However, warning failures are rarely due to inadequate intelligence collection, are more frequently due to weak analysis, and are most often due to decision makers ignoring intelligence (Garst 2000). Decision makers, however, ignore intelligence largely because the analytical product is weak (Heymann 2000).
Hiring Smart People Does Not Necessarily Lead to Good Analysis. Studies show that, “frequently groups of smart, well-motivated people . . . agree . . . on the wrong solution. . . . They didn’t fail because they were stupid. They failed because they followed a poor process in arriving at their decisions” (Russo and Schoemaker 1989).
A Systematic Process Is the Most Effective Way to Facilitate Good Analysis. The nonstructured approach has become the norm in the Intelligence Community. A key misunderstanding in the debate over intuition versus structured technique is the belief that an analyst must choose either intuition or structured technique (Folker 2000). In fact, both intuition and structured technique can be used together in a systematic process. “Anything that is qualitative can be assigned meaningful numerical values. These values can then be manipulated to help us achieve greater insight into the meaning of the data and to help us examine specific hypotheses” (Trochim 2002). Not only is it possible to combine intuition and structure in a system; research shows the combination is more effective than intuition alone. “Doing something systematic is better in almost all cases than seat-of-the-pants prediction” (Russo and Schoemaker 1989). Moreover, decision makers have called on the Intelligence Community to use methodology. “The Rumsfeld Commission noted that, ‘. . . an expansion of the methodology used by the IC [Intelligence Community] is needed.’ . . . Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do. If decision makers only needed talking heads, those are readily available elsewhere” (McCarthy 1998).
Forecasting Terrorism: Indicators and Proven Analytic Techniques
Captain Sundri K. Khalsa, USAF
PO BOX 5124
Alameda, California, 94501, United States
SundriKK@hotmail.com
Keywords: Terrorism, Predictive Analysis and Hypothesis Management, Multiple Competing Hypotheses, Question Answering, Structured Argumentation, Novel Intelligence from Massive Data, Knowledge Discovery and Dissemination, Information Sharing and Collaboration, Multi-INT/fusion, All Source Intelligence, Visualization, Fusion.
2. How to Forecast Terrorism: Abbreviated Step-By-Step Explanation of the Methodology
This forecasting system is based on indicators. The explanation of this methodology begins at the lowest level of indicators and then builds up to the big picture of countries within a region. The forecasting assessments of this methodology are maintained on a website display, which is available on the associated CD. Figure 1 shows a breakdown of the 3 primary types of warning picture views from the web homepage: 1) country list view, 2) target list view, and 3) indicator list view. Each potential-terrorist target is evaluated in a webpage hypothesis matrix based on the status of 68 indicators of terrorism. The indicators are updated in near-real-time with incoming raw intelligence reports/evidence. The methodology consists of 23 tasks and 6 phases of warning analysis, which are described very briefly in this paper. The 23 tasks include 14 daily tasks, 3 monthly tasks, 4 annual tasks, and 2 as-required tasks. The 14 daily tasks can be completed in 1 day because tasks have been automated wherever possible. Three types of analysts are required to operate this methodology: Raw Reporting Profilers, Indicator Specialists, and Senior Warning Officers.
2.1 Phase I: Define/Validate Key Elements of the Intelligence Problem (Using Indicators)
Task 1: Identify/Validate Indicators (Annually). Indicators are the building blocks of this forecasting system. Indicators are “those [collectable] things that would have to happen and those that would likely happen as [a] scenario unfolded” (McDevitt 2002). For a terrorist attack, those would be things like terrorist travel, weapons movement, terrorist training, target surveillance, and tests of security. This research has identified 68 indicators of terrorism encompassing terrorist intentions, terrorist capability, and target vulnerability, which are the three components of risk. The indicators are also identified as either quantitative (information that can be counted) or qualitative (information that cannot be counted, such as terrorist training). Only 7 of the 68 Terrorism Indicators are quantitative. For security reasons, many indicators are not shown here.
In task 1 of this methodology, the leading counterterrorism experts meet on at least an annual basis to determine if indicators should be added to or removed from the list. A list of indicators should never be considered final and comprehensive. To determine if the list of indicators should be altered, analysts:
1. Review the raw intelligence reports filed under the Miscellaneous Indicators to determine if any kinds of significant terrorist activity have been overlooked.
2. Review U.S. collection capabilities to determine if the U.S. has gained or lost the capability to collect on any terrorist activities.
3. Review case studies of terrorist operations to identify changes in terrorist modus operandi and determine if terrorists are conducting any new activities against which U.S. intelligence can collect.
Task 2: Prioritize Indicators (Annually). Some indicators are more significant than others. For instance, among the indicators of terrorist intentions, weapons movement to the target area must take place before an attack, whereas an increase in terrorist propaganda is not a prerequisite. Therefore, weapons movement carries a higher significance/priority than increased propaganda. On at least an annual basis, the leading counterterrorism experts determine if the priority of any of the indicators needs to be adjusted on a scale of 1 through 3, according to definitions provided in the complete explanation of this methodology.
Task 3: Develop/Validate Key Question Sets on Each Indicator (Annually). Since indicators are the foundation and building blocks of this methodology, the type of information required to assess the status of an indicator should be clearly defined and recorded. Thus, on at least an annual basis, the leading counterterrorism experts validate a list of key questions for each indicator. The question sets identify the key factors that experts have determined are necessary to assess the status of a given indicator. They also function as the list of prioritized Collection Requirements for intelligence collectors. The entire list of indicators and corresponding key question sets form an intelligence collection plan against terrorism.
Task 4: Prioritize Questions in Key Question Sets (Annually). The leading counterterrorism experts also prioritize the key questions on a scale of 1 through 3, with 1 being the most significant. These priorities affect both intelligence collection priorities and analysts’ assessments.
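The Phase I products (indicators with priorities, plus prioritized key question sets) can be pictured as a simple data model. The sketch below is illustrative only; the dataclass, field names, and the example indicator are assumptions, not entries from the paper's largely unpublished indicator list.

```python
# Hypothetical sketch of the Phase I data model (Tasks 1-4): each indicator
# carries a 1-3 priority and a prioritized key-question set. Names and the
# example record are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class KeyQuestion:
    text: str
    priority: int  # 1 (most significant) through 3

@dataclass
class Indicator:
    name: str
    component: str      # "intention", "capability", or "vulnerability"
    quantitative: bool  # only 7 of the 68 indicators are quantitative
    priority: int       # 1 (most significant) through 3, revisited annually
    key_questions: list[KeyQuestion] = field(default_factory=list)

# Example record (surveillance is one of the intention indicators the paper names)
surveillance = Indicator(
    name="Target surveillance", component="intention",
    quantitative=False, priority=1,
    key_questions=[KeyQuestion("Who has been observed photographing the site?", 1)],
)
```

The key-question list doubles as the indicator's prioritized collection requirements, which is why each question carries its own priority.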
2.2 Phase II: Consolidate Information (Using Master Database)
Task 5: Intelligence Community Master Database Receives All Raw Intelligence Reports from Intelligence Collectors (Daily). The daily process begins with the requirement that all fifteen Member Organizations of the Intelligence Community, and other U.S. government organizations that may have terrorism-related information, forward all their raw intelligence reports (on all intelligence topics, not just terrorism) to an Intelligence Community Master Database. This requirement addresses a recurring lesson: the major benefit of hindsight investigation after intelligence warning failures is that the investigation is the first time all the information has been consolidated.
2.3 Phase III: Sort Information (Using Hypothesis Matrices)
Task 6: Enter All Terrorism-Related Raw Intelligence Reports into Terrorism Forecasting Database Under Appropriate Indicators, Key Questions, Targets, Countries, Terrorist Groups, and Other Data Profile Elements (Daily). A large group of junior analysts, called Raw Reporting Profilers, reads through the Intelligence Community’s incoming raw intelligence reports (an estimated 2,500 per day that are already marked as terrorism related) and enters all of them into a Terrorism Forecasting Database according to the indicators, key questions, targets, countries, terrorist groups, and other terrorism forecasting-specific data profile elements identified in this methodology.
Task 7: Terrorism Forecasting Database Creates Potential-Target Hypothesis Matrices with Raw Intelligence Reports Filed by Indicators (Daily). After analysts enter raw intelligence reports into the Terrorism Forecasting Database, the computer program automatically creates corresponding Potential-Target Hypothesis Matrix webpages and displays hyperlinks to the reports under the appropriate indicator(s) within the hypothesis matrices.
Task 8: Terrorism Forecasting Database Feeds Raw Intelligence Reports into Appropriate Indicator Key Questions, Answers, & Evidence Logs within the Hypothesis Matrices (Daily). The master database also feeds the raw reports into the appropriate Indicator Key Questions, Answers, & Evidence Logs within the Hypothesis Matrices, as shown in figure 2.
Task 9: Assess Raw Intelligence Reports’ Information Validity (Daily). The computer program combines Source Credibility and Information Feasibility/Viability according to rules in utility matrix logic to determine a report’s Information Validity [on a 5-level scale of: 1) “Almost Certainly Valid (~90%),” color-coded red on the website, 2) “Probably Valid (~70%),” orange, 3) “Probably Not Valid (~30%),” yellow, 4) “Almost Certainly Not Valid (~10%),” gray, and 5) “Unknown Validity (or ~50%),” black]. Source Credibility and Information Feasibility/Viability are determined by analysts when they check a list of boxes for each in the database [on a 5-level scale of: 1) “Almost Certainly (~90%),” 2) “Probably (~70%),” 3) “Probably Not (~30%),” 4) “Almost Certainly Not (~10%),” and 5) “Unknown (or ~50%)”].
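The utility matrix itself is not published in this abbreviated explanation, but the mechanism of Task 9 can be sketched. In the example below, the combination rule (take the more conservative of the two inputs, with Unknown dominating) is an illustrative placeholder assumption, not the paper's actual logic.

```python
# Illustrative sketch of Task 9's rule-based combination. The probability
# mapping follows the paper's 5-level scale; the combination rule itself is
# a placeholder assumption, not the actual (unpublished) utility matrix.
LEVELS = {
    "Almost Certainly": 0.9,
    "Probably": 0.7,
    "Probably Not": 0.3,
    "Almost Certainly Not": 0.1,
    "Unknown": 0.5,
}

def information_validity(source_credibility: str, feasibility: str) -> str:
    """Combine Source Credibility and Information Feasibility/Viability."""
    if "Unknown" in (source_credibility, feasibility):
        return "Unknown"  # either input Unknown -> validity Unknown
    # Placeholder rule: take the more conservative (lower-probability) input.
    return min(source_credibility, feasibility, key=LEVELS.get)
```

A real implementation would replace the `min` rule with an explicit 5-by-5 lookup table so that every input pair can be tuned independently by the counterterrorism experts.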
2.4 Phase IV: Draw Conclusions (Using Intuitive and Structured Techniques)
Task 10: Assess Indicator Warning Levels (Daily). The computer program combines the Indicator Priority and an Indicator Activity Level to determine an Indicator Warning Level [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)]. The Indicator Activity Level is determined by a second group of analysts using either rules in utility matrix logic (for the quantitative indicators) or the Indicator Key Questions, Answers, & Evidence Logs (for the qualitative indicators). These analysts are Indicator Specialists, the Counterterrorism Community’s designated experts in determining an Indicator Activity Level.
From this point forward, all the warning level calculations are automated. The third group of analysts, Senior Warning Officers, is responsible for monitoring and approving all the warning levels that the computer application automatically produces and updates on the webpages.
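Task 10's priority-by-activity combination can be sketched as a two-dimensional lookup, which is essentially what "rules in utility matrix logic" describes. The matrix entries below are made-up placeholders; the real matrix is not given in this paper.

```python
# Illustrative 2-D "utility matrix" for Task 10, combining Indicator Priority
# (1-3) with the analyst-assessed Activity Level to yield the Indicator
# Warning Level. The entries are placeholder assumptions showing the lookup
# mechanism only, not the methodology's actual rules.
ACTIVITY = ["Critical", "Significant", "Minor", "Slight", "Unknown"]

WARNING_MATRIX = {
    # priority: warning level for each activity level, in ACTIVITY order
    1: ["Critical", "Critical", "Significant", "Minor", "Unknown"],
    2: ["Critical", "Significant", "Minor", "Slight", "Unknown"],
    3: ["Significant", "Minor", "Slight", "Slight", "Unknown"],
}

def indicator_warning_level(priority: int, activity: str) -> str:
    """Look up the warning level for a given priority and activity level."""
    return WARNING_MATRIX[priority][ACTIVITY.index(activity)]
```

The table shape captures the intent of Task 2: a priority-1 indicator (e.g., weapons movement) escalates the warning level for the same activity that a priority-3 indicator would report more mildly.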
Task 11: Assess Terrorist Intention Warning Level (Daily). Now that analysts have assessed an Indicator Warning Level for each of the 68 indicators of terrorist intentions, terrorist capability, and target vulnerability (the 3 components of risk), the computer can calculate a warning level for each of those 3 components. The computer calculates the Terrorist Intention Warning Level for a target [on a 5-level scale of: 1) Critical (~90%), 2) Significant (~70%), 3) Minor (~30%), 4) Slight (~10%), and 5) Unknown (or ~50%)] from the Indicator Warning Levels of the active terrorist intention indicators, using averages and rules in utility matrix logic.
Task 12: Assess Terrorist Capability Warning Level (Daily). The computer determines the Terrorist Capability Warning Level for a given country [on the same 5-level scale] by taking the highest of all the Indicator Warning Levels of the active terrorist capability, lethal agent/technique indicators.
Task 13: Assess Target Vulnerability Warning Level (Daily). The computer program calculates a target’s Vulnerability Warning Level [on the same 5-level scale] from the Indicator Warning Levels of the active target vulnerability indicators, using averages and rules in utility matrix logic.
Task 14: Assess Target Risk Warning Level (Daily). Now that analysts have a warning level for each of the 3 components of risk (terrorist intentions, terrorist capability, and target vulnerability), the computer can calculate a risk warning level for a given target. The computer program calculates the Target Risk Warning Level [on the same 5-level scale] by averaging the Terrorist Intention, Terrorist Capability, and Target Vulnerability Warning Levels.
Task 15: Assess Country Risk Warning Level (Daily). The computer program determines the Country Risk Warning Level by taking the highest Target Risk Warning Level in the country.
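Taken together, Tasks 11 through 15 form a roll-up from indicator warning levels to a country warning level. The sketch below uses the scale's approximate probabilities but replaces the unpublished utility-matrix rules with plain averaging, so it is illustrative only; the function names are assumptions.

```python
# Sketch of the Task 11-15 roll-up using the scale's approximate
# probabilities (Critical ~0.9, Significant ~0.7, Minor ~0.3, Slight ~0.1).
# Plain averages stand in for the unpublished utility-matrix rules.

def nearest_level(p: float) -> str:
    """Snap a probability back onto the 4 determinate warning levels."""
    scale = {0.9: "Critical", 0.7: "Significant", 0.3: "Minor", 0.1: "Slight"}
    return scale[min(scale, key=lambda x: abs(x - p))]

def avg(xs):
    return sum(xs) / len(xs)

def target_risk(intentions, capabilities, vulnerabilities):
    """Inputs are lists of active Indicator Warning Level probabilities."""
    intention = avg(intentions)           # Task 11: averaged
    capability = max(capabilities)        # Task 12: highest capability indicator
    vulnerability = avg(vulnerabilities)  # Task 13: averaged
    return avg([intention, capability, vulnerability])  # Task 14: average of 3

def country_risk(target_risks):
    return max(target_risks)              # Task 15: highest target in country
```

Note the asymmetry the paper specifies: capability (Task 12) and country risk (Task 15) take the worst case, while intention, vulnerability, and target risk are averaged.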
Task 16: Update/Study Trend Analysis of Indicator Warning Levels (Monthly). This explanation has now shown how the methodology determines various warning levels and that they are displayed in 3 primary views: indicator list, target list, and country list. There is also trend analysis for each view. The computer automatically captures the Indicator Warning Level as it stood for the majority of each month and plots it on a graph. Senior Warning Officers write analytical comments discussing warning failures and successes and how the methodology will be adjusted, if necessary.
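The monthly trend capture in Task 16 can be sketched as follows, reading "the warning level as it stood for the majority of each month" as the most frequent daily level; that reading and the function name are assumptions.

```python
# Sketch of Task 16's monthly trend capture: record the warning level that
# held for the most days of the month. The interpretation (most frequent
# daily level) and the names are assumptions for illustration.
from collections import Counter

def monthly_majority_level(daily_levels: list[str]) -> str:
    """Return the warning level that held for the most days in the month."""
    return Counter(daily_levels).most_common(1)[0][0]
```

Plotting one such value per month gives the indicator's trend line; Tasks 17 and 18 apply the same capture to target and country risk levels.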
Task 17: Update/Study Trend Analysis of Target Risk Warning Levels (Monthly). A target-oriented trend analysis is also maintained in the same manner.
Task 18: Update/Study Trend Analysis of Country Risk Warning Levels (Monthly). Finally, a country-oriented trend analysis is maintained in the same manner.
2.5 Phase V: Focus Collectors on Intelligence Gaps to Refine/Update Conclusions (Using Narratives that Describe What We Know, Think, and Need to Know)
Task 19: Write/Update Indicator Warning Narrative: What We Know, Think, & Need to Know (Daily). Thus far, the methodology has provided color-coded graphic representations of warning assessments, but narratives are necessary to explain the details behind each color-coded warning level. Narratives are provided for each of the 3 graphic views—indicators, targets, and countries. The key question set provides an outline of all the major points that the indicator narrative should address. The narrative begins with a description of what the analyst knows and thinks about the indicator, followed by a list of questions from the Indicator Key Questions, Answers, & Evidence Logs on what he does not know—the Intelligence Gaps for intelligence collectors.
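Figure 2 notes that key questions answered "Unknown (or ~50%)" automatically become Collection Requests on the narrative. A minimal sketch of that filter, with hypothetical field names:

```python
# Sketch of Task 19's intelligence-gap extraction: key questions whose
# current answer is "Unknown (or ~50%)" become the Collection Requests
# appended to the indicator narrative. Field names are hypothetical.

def collection_requests(key_questions: list[dict]) -> list[str]:
    """Return unanswered key questions, highest-priority (1) first."""
    gaps = [q for q in key_questions if q["answer"] == "Unknown (or ~50%)"]
    return [q["question"] for q in sorted(gaps, key=lambda q: q["priority"])]
```

Sorting by the Task 4 question priorities means collectors see the most significant gaps first, which is how the narrative doubles as a prioritized collection tasking.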
Task 20: Write/Update Executive Summary for Target Warning Narrative: What We Know, Think, & Need to Know (Daily). The computer program combines the indicator narratives into a potential-target narrative. Senior Warning Officers write and maintain executive summaries for each potential-target narrative.
Task 21: Write/Update Executive Summary for Country Warning Narrative: What We Know, Think, & Need to Know (Daily). Finally, the computer program combines the potential-target executive summaries into a country narrative. Again, Senior Warning Officers are responsible for maintaining executive summaries for the country narratives.
2.6 Phase VI: Communicate Conclusions/Give Warning (Using Website Templates)
Task 22: Brief Decision Maker with Website Templates (As Required). The final task of a warning system is to convey the warning. Senior Warning Officers brief the warning levels to a decision maker using the website templates. The website templates are designed to convey the structured reasoning process behind each warning level.
Task 23: Rebrief Decision Maker with New Evidence in Website Templates (As Required). If a decision maker does not heed the warning and does not alter a security posture/FPCON (Force Protection Condition) after hearing the analyst’s warning assessment, the analyst returns to the decision maker with new evidence to press the assessment until the decision maker heeds the warning. Former Secretary of State Colin Powell describes this requirement eloquently: “[Analysts must] recognize that more often than not, I will throw them out, saying ‘na, doesn’t sound right, get outta here.’ What I need from my I&W system at that point is, ‘That old bastard, I’m going to prove him wrong.’ And go back and accumulate more information, come back the next day and give me some more and get thrown out again. Constantly come back . . . and persuade me that I better start paying attention” (Powell 1991). This is in line with the DCI’s statement that the purpose of intelligence is “not to observe and comment, but to warn and protect” (DCI Warning Committee 2002).
3. Conclusion
Rather than face the War on Terrorism with the traditional intuition-dominated approach, this methodology offers a systematic forecasting tool that:
- Guards against nearly 81 percent of common warning pitfalls and, ultimately, improves the terrorism warning process.
- Coordinates analysts in a comprehensive, systematic effort.
- Automates many proven analytic techniques into a comprehensive system, which is near-real-time, saves time, saves manpower, and ensures accuracy in calculations and consistency in necessary, recurring judgments.
- Enables collection to feed analysis, and analysis to also feed collection, which is the way the intelligence cycle is supposed to work.
- Fuses interagency intelligence into a meaningful warning picture while still allowing for the compartmenting necessary to protect sensitive sources and methods.
- Provides a continuously updated analysis of competing hypotheses for each potential-terrorist target based on the status of the 68 indicators of terrorism.
- Is the first target-specific terrorism warning system; thus far, systems have only been country-specific.
- Is the first terrorism warning system with built-in trend analysis.
- Combines threat (adversary intentions and adversary capability) with friendly vulnerability to determine risk and provide a truer risk assessment than typical intelligence analysis.
- Includes a CD that is the tool to implement this terrorism forecasting system.
Officials in the FBI and the Defense Intelligence Agency (DIA) characterized this terrorism forecasting system as “light-years ahead,” “the bedrock for the evolving approach to terrorism analysis,” and an “unprecedented forecasting model.”
Declaration of Originality
This paper has not already been accepted by and is not currently under review for a journal or another conference, nor will it be submitted for such during IA’s review period.
Figure 1. The 3 Primary Warning Picture Views. [Figure: from the website homepage, select a region of the world to reach the Country List View; select a country to reach the Target List View; select a potential target to reach the Indicator List Views, where an indicator of Terrorist Intentions, Terrorist Capability, or Target Vulnerability can be selected.]
References
Anonymous intelligence source (a mid-level intelligence professional at a national intelligence organization who wishes to remain anonymous). Interview by author, 10 July 2002.
Powell, Colin. Colin Powell on I&W: Address to the Department of Defense Warning Working Group. Videocassette. Distributed by the Joint Military Intelligence College, Washington, D.C., 1991.
McCarthy, Mary O. “The Mission to Warn: Disaster Looms.” Defense Intelligence Journal 7, no. 2 (Fall 1998): 21. (Quoting Donald Rumsfeld, press conference.)
Folker, Robert D., Jr., MSgt, USAF. Intelligence Analysis in Theater Joint Intelligence Centers: An Experiment in Applying Structured Methods. Joint Military Intelligence College Occasional Paper, no. 7. Washington, D.C.: Joint Military Intelligence College, January 2000.
Garst, Ronald D. “Fundamentals of Intelligence Analysis.” 5-7 in Intelligence Analysis ANA 630, no. 1, edited by Joint Military Intelligence College. Washington, D.C.: Joint Military Intelligence College, 2000.
Heymann, Hans, Jr. “The Intelligence—Policy Relationship.” 53-62 in Intelligence Analysis ANA 630, no. 1, edited by Joint Military Intelligence College. Washington, D.C.: Joint Military Intelligence College, 2000.
Figure 2. Indicator Key Questions, Answers, & Evidence Log (in Hypothesis Matrix). [Figure note: in tasks 19, 20, and 21, a question answered “Unknown (or ~50%)” automatically appears as a Collection Request on the appropriate Warning Narrative: What We Know, Think, & Need to Know.]
Kam, Ephraim. Surprise Attack: The Victim’s Perspective. Cambridge, MA: Harvard University Press, 1988.
McDevitt, James J. Summary of Indicator-Based-Methodology. Unpublished handout, n.p., n.d. Provided in January 2002 at the Joint Military Intelligence College.
National Warning Staff, DCI Warning Committee. “National Warning System.” Handout provided in January 2002 at the Joint Military Intelligence College.
Russo, J. Edward, and Paul J. H. Schoemaker. Decision Traps: The Ten Barriers to Brilliant Decision-Making and How to Overcome Them. New York: Rockefeller Center, 1989.
Trochim, William M. K. “Qualitative Data.” Cornell University: Research Methods Knowledge Base. 2002. trochim.human.cornell.edu/kb/qualdata.htm (31 May 2002).