2. SJ Terp | The business of cognitive security | NYU Dec 2021
Cognitive Security is Infosec applied to disinformation
“Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’. Cognitive security can be seen as a holistic view of disinformation from a security practitioner’s perspective.”
“Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” - XTN

“Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts‒intentional and unintentional manipulations of and disruptions to cognition and sensemaking.” - cogsec.org
3. Cognitive Security is risk management
Confidentiality, integrity, availability
■ Confidentiality: data should only be visible to people who are authorized to see it
■ Integrity: data should not be altered in unauthorized ways
■ Availability: data should be available to be used
Possession, authenticity, utility
■ Possession: controlling the data media
■ Authenticity: accuracy and truth of the origin of the information
■ Utility: usefulness (e.g. losing the encryption key)
Image: Parkerian Hexad, from https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
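A toy sketch of how the hexad can be used as a checklist when triaging an incident. The tagging structure here is an illustrative assumption, not something from the talk:

```python
# The six Parkerian Hexad properties as a reference set.
HEXAD = {"confidentiality", "integrity", "availability",
         "possession", "authenticity", "utility"}

def affected_properties(incident_tags):
    """Return only the valid hexad properties claimed for an incident."""
    return set(incident_tags) & HEXAD

# A typical disinformation incident primarily hits integrity and
# authenticity; unknown tags ("virality") are filtered out.
print(sorted(affected_properties({"integrity", "authenticity", "virality"})))
# ['authenticity', 'integrity']
```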
5. Behaviour models: AMITT Red, AMITT Blue
AMITT Red: operational phases and tactic stages (the disinformation kill chain):
● Planning: Strategic Planning; Objective Planning
● Preparation: Develop People; Develop Networks; Microtargeting; Develop Content; Channel Selection
● Execution: Pump Priming; Exposure; Go Physical; Persistence
● Evaluation: Measure Effectiveness

AMITT Blue: example counters across these stages (selection):
● Prebunking; humorous counter-narratives; mark content with ridicule / decelerants; expire social media likes/retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate-limit botnets; social media amber alert
● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red-team disinformation and design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact-checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure
● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter-narratives; run competing campaigns
● Find and train influencers; counter-social-engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups
● Remove old and unused accounts; unravel Potemkin villages; verify projects before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms
● Fill information voids; stem flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship (elders, youth, credit); hijack content and link to information; honeypot social community; corporate research funding full disclosure
● Real-time updates to fact-check database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing
● Rate-limit engagement; redirect searches away from disinfo; honeypot fake engagement system; bot to engage and distract trolls; strengthen verification methods; verified IDs to comment or contribute to polls; revoke whitelist / verified status
● Microtarget likely targets with counter-messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter-messages; engage payload and debunk; debunk/defuse fake expert credentials; don’t engage with payloads; hashtag jacking
● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs
https://github.com/cogsec-collaborative/AMITT
9. Business questions
● Is there a market here?
● Does the market pay enough to
sustain businesses?
● Where’s the money coming from?
● What’s it paying for?
● Who is already in this space?
● Who is likely to move into this
space?
● Who is the customer base?
● What features and restrictions do we
have?
FOLLOW THE MONEY
10. Where is the money in a threat landscape?
• Motivations
• Geopolitics mostly absent
• Party politics (internal, inter-party)
• Actors
• Political parties
• Nationstates
• Entrepreneurs
• Activities
• Manipulate faith communities
• Discredit election process
• Discredit/discourage journalists
• Attention (more drama)
• Potential harms / severities
• Assassination
• Voting reduction
• Sources
• WhatsApp
• Blogs
• Facebook pages
• Online newspapers
• Media
• Routes
• Hijacked narratives
• WhatsApp to blogs, and vice versa
• WhatsApp forwarding
• Facebook to WhatsApp
• Social media to traditional media
• Social media to word of mouth
15. Disinformation Actors
Persistent Manipulators
Advanced teams
• Internet Research Agency
• China, Iran teams etc
For-profit website networks
• Antivax websites
• Pink slime sites
• “Stolen” US election sites
Nationstate media
• Sputnik
• Russia Today
Service Providers
Disinformation as a Service
• Factories
• Ex-marketing, spam etc
Ad-hoc paid teams
• EBLA Ghana
• PeaceData USA
Opportunists
Wares sellers
• Clicks
• T-shirts
• Books etc.
Groups
• Conspiracy groups
• Extremists
Individuals
• Attention-seekers
• Jokers etc
16. Disinformation as a service (DaaS)
“Doctor Zhivago’s services were priced very specifically, as seen below:
● $15 for an article up to 1,000 characters
● $8 for social media posts and commentary up to 1,000 characters
● $10 for Russian to English translation up to 1,800 characters
● $25 for other language translation up to 2,000 characters
● $1,500 for SEO services to further promote social media posts and traditional media articles, with a time frame of 10 to 15 days
Raskolnikov, on the other hand, had less specific pricing:
● $150 for Facebook and other social media accounts and content
● $200 for LinkedIn accounts and content
● $350–$550 per month for social media marketing
● $45 for an article up to 1,000 characters
● $65 to contact a media source directly to spread material
● $100 per 10 comments for a given article or news story”
Image: https://www.recordedfuture.com/disinformation-service-campaigns/
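The quoted per-item prices make it easy to size a campaign budget. A quick sketch using the “Doctor Zhivago” price list above; the campaign shape (10 articles, 50 posts, one SEO push) is an illustrative assumption:

```python
# Prices from the quoted "Doctor Zhivago" list (USD).
PRICES = {
    "article": 15,         # per article up to 1,000 characters
    "social_post": 8,      # per post or comment up to 1,000 characters
    "seo_promotion": 1500, # flat fee, 10-15 day promotion window
}

def campaign_cost(articles, posts, seo=True):
    """Total cost of a small hypothetical campaign."""
    cost = articles * PRICES["article"] + posts * PRICES["social_post"]
    return cost + (PRICES["seo_promotion"] if seo else 0)

# 10 articles + 50 posts + SEO push: 150 + 400 + 1500
print(campaign_cost(10, 50))  # 2050
```

The striking point is how low the content costs are relative to the promotion fee: volume content is nearly free, and most of the spend goes to amplification.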
17. DaaS examples
Internet Research Agency
● Russian “troll farm”
● Well organised
● Ex marketing
● Not quite official
Satellite organisation: EBLA
● Cut-out organisation based in Ghana
● Kids round a kitchen table model
Troll farms in the Philippines
● PR experts plus younger social media influencers
● Philippines because of its English-speaking workforce, already accustomed to call-center and content-moderation work
PR firms, various locations
● US-based: operating in other countries (Venezuela, Bolivia etc)
● MAS Agency (Ukraine-based PR firm)
● Saudi digital marketing firm
Image: https://en.wikipedia.org/wiki/Internet_Research_Agency
31. Response Actors
Disinformation SOCs
Large actors
• ISAOs
• Platforms
• Other large actors
Event-specific
• War rooms
• Agencies
Disinformation
Teams
Disinformation “desk”
• In existing SOC
• Standalone unit
Investigators
• Journalists
• Academics
• Independent journalists
Other Responders
Policymakers
Law enforcement
Corporations
Influencers
Nonprofits
Educators
Individual researchers
Concerned citizens
32. Example Response Landscape (Needs / Work / Gaps)
Risk Reduction
● Media and influence
literacy
● Information landscaping
● Other risk reduction
Monitoring
● Radio, TV, newspapers
● Social media platforms
● Tips
Analysis
● Tier 1 (creates tickets)
● Tier 2 (creates
mitigations)
● Tier 3 (creates reports)
● Tier 4 (coordination)
Response
● Messaging
○ prebunk
○ debunk
○ counternarratives
○ amplification
● Actions
○ removal
○ other actions
● Reach
38. Fiveby: Adapting supply chain risk management
● Seattle-based risk
consultancy
● Techniques rooted in fraud
risk assessment
● Aimed at platforms and
other online businesses
Image: https://www.fiveby.com/wp-content/uploads/2021/05/Fiveby_disinformation_whitepaper_032921_final-1.pdf
Hi. I’m SJ. Delighted to be at the NYU Computational Disinformation Symposium. I’m here to put a framework around some of the rest of today.
Links:
NY Times, "inside the disinformation wars"
Harpers, Bad News
First, a recap. I work on cognitive security. It’s wider than just disinformation - it’s about protecting the security of human beliefs and social signals from large-scale technology-enabled efforts to disrupt and manipulate them - but for the purposes of today, it’s about taking a security risk practitioner’s view of disinformation and other human cyber harms.
“Disinformation is not a tech problem; it is a social problem enabled by technology” - Pablo Breuer
Cognitive security is about risk management, which traditionally combines the frequency of losses with the severity of those losses. I’ve done separate work on applying information security risk management frameworks (e.g. NIST and FAIR) to cognitive security, and the components we need for that: threat models, conversion probabilities from threat to loss, loss categories and sizings, etc.
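The frequency-times-severity idea reduces to a very small calculation. A minimal FAIR-style sketch; all the numbers are illustrative assumptions, not measurements from any real incident:

```python
def annualized_loss_expectancy(frequency_per_year, loss_per_event):
    """Risk = frequency of loss events x magnitude (severity) per event."""
    return frequency_per_year * loss_per_event

# Hypothetical sizing: a brand-targeting narrative that lands about
# 4 times a year, costing roughly $50k per incident in response
# effort and reputation damage.
print(annualized_loss_expectancy(4, 50_000))  # 200000
```

The hard part for cognitive security isn’t the arithmetic: it’s estimating the two inputs, which is exactly where threat models and threat-to-loss conversion probabilities come in.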
I have a whole talk on that, but I’ll just put up this slide. Before you do the sizings, you need to think about what you’re protecting. Most people in infosec know the CIA triad of confidentiality, integrity, availability, and that disinformation is primarily an integrity problem. Fewer people know the Parkerian Hexad, which adds possession, authenticity, and utility: all properties important to cognitive security.
SJ Parkerian Hexad: https://www.staffhosteurope.com/blog/2019/03/cybersecurity-and-the-parkerian-hexad and https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
We talked last time about frameworks. Here are two ways to think about disinformation incidents: the what and when of Kipling’s six honest serving men (what, why, when, how, where, who).
Rudyard Kipling quote: “I keep six honest serving-men (They taught me all I knew); Their names are: What and Why and When and How and Where and Who”.
This is the AMITT Red framework: the disinformation version of MITRE ATT&CK that we built to model the stages and techniques in disinformation creation, and the AMITT Blue framework of counters to them.
Looking at this from top to bottom, the first line is operational phases, then the blue boxes are “tactic stages”: links in the disinformation kill chain. This kill chain is different from the infosec one: we tried to fit it to the existing one, but it didn’t cover every part of disinformation.
The grey boxes are the TTPs (tactic, technique, or procedure) that allow you to complete each stage. TTPs are behaviors that we can view examples and counters for, and the Cognitive Security Intelligence Centre is adding indicators to this.
AMITT Red is deliberately similar to the ATT&CK TTP framework, so you can use all ATT&CK-compatible tools with it.
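To see what “ATT&CK-compatible” means in practice, here is a minimal sketch of an AMITT technique expressed as an ATT&CK-style record. The field names and counter links are my illustrative assumptions, not the official AMITT schema:

```python
import json

# Hypothetical ATT&CK-style record for one AMITT technique, so that
# ATT&CK-compatible tooling could ingest it. Field names are assumed.
technique = {
    "framework": "AMITT",
    "phase": "Preparation",
    "tactic_stage": "Develop Content",
    "technique_id": "T0019",   # illustrative ID, check the repo for real ones
    "name": "Generate information pollution",
    "counters": ["Content moderation", "Debunk"],  # example AMITT Blue links
}

record = json.dumps(technique, indent=2)
print(record)
```

Because the shape mirrors ATT&CK’s tactic/technique layout, existing tooling that walks tactic-technique matrices needs little or no modification to display AMITT data.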
We’ve been working on how to rapidly share information about actors, behaviours, narratives and artifacts for a while. All that work on standards is for this: getting organisations to work closely together.
So one question is “where do you put a disinformation SOC”? Here are some options, ranging from a large dedicated SOC (e.g. the Cognitive Security ISAO), to a small standalone team, each connected to other organisations and functions.
Different sizings: an ISAO will be the full works; platforms and others with this as their main problem will have COGSOCs interacting with infosec; most large businesses will have a team, probably a desk in the infosec SOC; small orgs and current teams usually have just the desk, not working with an infosec SOC.
The two options in the middle are interesting - this is us working out how to connect a disinformation SOC to an existing Infosec SOC - either as an equal partner, or as a “desk” inside the existing SOC.
We also divided cognitive security into three interacting ecosystems: information landscape, the threat landscape within it, and the response landscape both improving the information landscape and countering threats.
An innovations colleague used to say “Is there a hole in the market, is there a market in the hole?”, i.e. is there a business need here, and is there money in meeting that need. Both are needed for businesses to survive.
Here’s a typical threat landscape. There’s money all over this landscape. People get paid directly for disinformation campaigns, but they also get paid for advertisements, influence, websites, youtube views, and selling everything from clicks to fake cures.
We can get a clue from adjacent markets, and specifically the ransomware market. These are businesses - often outsourcing things like finding target markets, recruitment etc. They value things like repeatable processes (think script kiddies), but will also innovate when they encounter countermeasures.
Go read this post and think about how it applies to cognitive security.
Links
https://sec.okta.com/articles/2020/08/crimeops-operational-art-cyber-crime
The disinformation business isn’t as black and white as we’d like it to be. We’re already seeing techniques created for disinformation being reused in other parts of the internet, and tools and feeder markets from adjacent markets like marketing being used in disinformation.
Links:
https://chiefmartec.com/2020/04/marketing-technology-landscape-2020-martech-5000/
Response is also a market - not one we’re covering today, but there are adjacent markets for that too. I’ve started on lumascapes for both creation and response - the main difference between them is that because creation organisations can be ephemeral, it’s more useful to capture the *types* of these organisations, whilst there are now thousands of response organisations, which is more amenable to a logos-in-groups display like this.
Links:
https://momentumcyber.com/docs/CYBERscape.pdf
There are many actors in this space. We hear a lot about threat actors - these are at different scales, with different capabilities and motivations. They’re learning from each other, and they often interact - for instance, we see t-shirt sellers picking up geopolitical narratives.
Today we’re talking about disinformation as a service, but that’s not the only way money changes hands. DaaS has its roots in marketing, but also in the early internet, where people realized they could make money by boosting account follower numbers.
From https://www.recordedfuture.com/disinformation-service-campaigns/
Since then, more ex-marketing, spam etc companies have joined in, and actors from countries (using fake groups as cutouts) to individuals (to sway a court case) have used DaaS services.
Here are some actor examples, from the large to the small.
Links
https://web.archive.org/web/20200311133702/https://eblango.org/about-us/
https://public-assets.graphika.com/reports/graphika_report_ira_in_ghana_double_deceit.pdf
https://www.washingtonpost.com/world/asia_pacific/why-crafty-internet-trolls-in-the-philippines-may-be-coming-to-a-website-near-you/2019/07/25/c5d42ee2-5c53-11e9-98d4-844088d135f2_story.html
https://cyber.fsi.stanford.edu/io/news/us-pr-firm-steps-contested-elections
https://medium.com/dfrlab/facebook-removes-hyperlocal-news-pages-that-promoted-political-party-members-in-ukraine-e1073f188d60
https://cyber.fsi.stanford.edu/io/news/smaat-twitter-takedown
https://www.brookings.edu/techstream/how-disinformation-evolved-in-2020/
Entities behind disinformation include nationstates, individuals, and companies. Within those companies, there are disinformation-as-a-service companies, providing disinformation activities for money, and there are organisations creating disinformation campaigns for their own direct profits. Most of these have some form of URL attached to their money-making - a website or set of websites, a funding campaign page, a youtube channel, or pages on online sales sites.
Some examples: for instance, Naturalnews.com has about 40 other sites in its network.
Natural News, National Vaccine Information Center, Informed Consent Action Network, Children’s Health Defense (yes, that is *that* Robert Kennedy).
Different sizes. Experts make money from attention, talks, books etc.
Smaller scale again. Here’s disinformation as a tool for attention, which can then be monetised in other ways. Selling cards, ad clicks, driving traffic to their own websites, and making money from things like books and talks that aren’t necessarily related to the disinformation campaign.
Let’s look at the support landscape for these. Think back to the ransomware example. What goods and services do disinformation creators need? This is both an issue, and an opportunity: not all of them are on the dark side, and these are all places that we can apply pressure to stop.
Tools can make money: here are some examples. On the right, If This Then That has been used to create a set of repeated tweets across accounts.
As with ransomware, there are supply chains. Here’s one.
Image: https://knowyourmeme.com/photos/923510-wikipedia
Non-disinformation industries also support disinformation. This is an opportunity to reduce the money flows.
Image: https://www.adexchanger.com/venture-capital/ecosystem-map-luma-partners-kawaja/
Other related organizations do all the things for marketing, including audience segmentation. Another opportunity.
Image: https://commons.wikimedia.org/wiki/File:Customer_Segmentation.png
Don’t laugh, but I started some lumascapes. I think it’s time we did this. For disinformation creation, we probably have a primary and secondary market, and will talk about types of organization, not necessarily individuals.
Not talking about this today, but I also did the same thing (following the money) with disinformation response- and here, we can put names against categories. There’s a whole talk on that too - suffice to say that these two markets are both interdependent, and very very unbalanced: organisations making money from disinformation actions versus responders working on grants, and limited in their money making capacities.
But there are also response actors in this space. These are also at different scales, with different motivations, capabilities, and connections. Our work is on how best to join these together into a coordinated rapid response.
And to do this, we’re borrowing the idea of a Security Operations Center or SOC - a coordinating unit, focussed on cognitive security, connected to other disinformation SOCs. These organizations are starting to exist. This won’t be the same for everyone, so we’re designing operations for different sizes of organization, from ISAO sized down to small teams embedded in an information security unit, to independent teams, with connections to a response network, maybe as information providers or responders.
Extra important that cross-group interfaces are efficient