Sj terp emerging tech radar

  1. Cognitive Security - it's not what you think. SJ Terp | DISARM Foundation. Emerging Tech Radar, Feb 9th 2022
  2. Who is SJ Terp? My work over the past year…
     Communities: ● CogSecCollab ● CTI League disinformation team
     Collaborations: ● Disinformation response coordination: European Union (51 countries), UNDP (170 countries), individual countries (3 English-speaking ones), WHO Europe (51+ countries) ● DISARM Foundation (inc MITRE, FIU, EU etc) ● Defcon Misinfo Village (inc CredCo / MisinfoCon) ● Atlantic Council senior fellow
     Mentoring: ● Individuals and organisations ● Book sub-editing ● Machine learning in infosec PhD advisor ● Nonprofit boards (RealityTeam, SocietyLibrary etc)
     Research: ● Risk-based cognitive security: AMITT model set (DISARM, EU, NATO, etc); AMITT-SPICE model merge (with MITRE, FIU); extensions to FAIR etc (hopefully Harvard); community disinfo behaviour tagging (UW) ● Machine learning for cognitive security: disinfo OSINT (U); community-based disinfo response (UN); extremism tracking (U) ● One-off research: disinformation market models (DARPA); assessing disinformation training systems (State Dept); disinformation social ecological models (ARLIS); etc
     Teaching (Uni Maryland): ● Cognitive Security: defence against disinformation ● Ethical hacking: sociotechnical cybersecurity ● Fundamentals of technology innovation
  3. Cognitive Security course.
     What we're dealing with: introduction (disinformation reports, ethics; researcher risks); fundamentals (objects); cogsec risks.
     Human aspects: human system vulnerabilities and patches; psychology of influence.
     Building better models: frameworks; relational frameworks; building landscapes.
     Investigating incidents: setting up an investigation; misinformation data analysis; disinformation data analysis.
     Improving our responses: disinformation responses; monitoring and evaluation; games, red teaming and simulations.
     Where this is heading: cogsec as a business; future possibilities.
  4. Ethical Hacking course: principles and verticals.
     First, do no harm: ethics = risk management; don't harm others (harms frameworks); don't harm yourself (permissions etc); fix what you break (purple teaming).
     It's systems all the way down: infosec = systems (sociotechnical infosec); all systems can be broken (with resources); all systems have back doors (people, hardware, process, tech etc).
     Psychology is important: reverse engineering = understanding someone else's thoughts; social engineering = adapting someone else's thoughts; algorithms think too (adversarial AI).
     Be curious about everything: curiosity is a hacker's best friend; computers are everywhere (IoT etc); help is everywhere (how to search, how to ask).
     Verticals: Cognitive security: yourself (systems thinking); social media (social engineering); elections (mixed security modes). Physical security: locksports (vulnerabilities); buildings and physical (don't harm self). Cyber security: web, networks, PCs; machine learning (adversarial AI); maps and algorithms (back doors); assembler (microcontrollers); hardware (IoT); radio (AISB etc). Systems that move: cars (CAN buses and bypasses); aerospace (reverse engineering); satellites (remote commands); robotics / automation (don't harm others).
  5. What is cognitive security? (apart from the thing that we're trying to create)
  6. Cognitive Security is Information Security applied to disinformation+. “Cognitive security is the application of information security principles, practices, and tools to misinformation, disinformation, and influence operations. It takes a socio-technical lens to high-volume, high-velocity, and high-variety forms of ‘something is wrong on the internet’.” Cognitive security can be seen as a holistic view of disinformation from a security practitioner's perspective.
  7. Earlier definitions: Cognitive Security (both of them).
     “Cognitive Security is the application of artificial intelligence technologies, modeled on human thought processes, to detect security threats.” - XTN. MLSec - machine learning in information security: ● ML used in attacks on information systems ● ML used to defend information systems ● Attacking ML systems and algorithms ● “Adversarial AI”
     “Cognitive Security (COGSEC) refers to practices, methodologies, and efforts made to defend against social engineering attempts - intentional and unintentional manipulations of and disruptions to cognition and sensemaking” - cogsec.org. CogSec - social engineering at scale: ● Manipulation of individual beliefs, belonging, etc ● Manipulation of human communities ● Adversarial cognition
  8. Earlier definitions: Social Engineering (both of them). “the use of centralized planning in an attempt to manage social change and regulate the future development and behavior of a society.” (mass manipulation etc) “the use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.” (phishing etc)
  9. What we're dealing with: the Three Vs of cognitive security (high volume, high velocity, high variety)
  10. Actors. Entities behind disinformation: ● Nation-states ● Individuals ● Companies. Entities that are part of disinformation: ● DAAS companies. Image: https://gijn.org/2020/07/08/6-tools-and-6-techniques-reporters-can-use-to-unmask-the-actors-behind-covid-19-disinformation/
  11. Channels. Lots of channels: where people seek, share, and post information; where people are encouraged to go. Image: https://d1gi.medium.com/the-election2016-micro-propaganda-machine-383449cc1fba
  12. Influencers. Users or accounts with influence over a network: ● Not the most followers ● The most influence ● Might have large influence over smaller groups.
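The point that influence is not follower count can be sketched with a toy calculation. Everything here (the accounts, the community, and who follows whom) is made-up illustrative data, not from the deck:

```python
# Toy illustration (all data hypothetical): raw follower counts alone
# don't identify the influencers that matter for a given group;
# reach within a target community can matter more.
followers = {"celeb": 1_000_000, "niche_mod": 9_000}
community = {"anna", "ben", "cara", "dev", "eli"}  # a small interest group
follows = {  # which community members follow each account
    "celeb": {"anna"},
    "niche_mod": {"anna", "ben", "cara", "dev"},
}

def community_reach(account: str) -> float:
    """Fraction of the target community that follows this account."""
    return len(follows.get(account, set()) & community) / len(community)

# Rank by reach into the community, not by global follower count.
ranked = sorted(followers, key=community_reach, reverse=True)
print(ranked[0])  # prints "niche_mod": fewer followers, more reach here
```

Real analyses would use graph centrality measures over the full follow network; this only shows why the "most followers" heuristic misleads.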
  13. Groups. Social media groups set up to create or spread disinformation: ● Often real members, fake creators ● Lots of themes ● Often closed groups
  14. Messaging. Narratives designed to spread fast and be “sticky”: ● Often on a theme ● Often repeated. Image: https://www.njhomelandsecurity.gov/analysis/false-text-messages-part-of-larger-covid-19-disinformation-campaign
  15. Tools: ● Bots ● IFTTT variants ● Personas ● Network analysis ● Marketing tools. Image: https://twitter.com/conspirator0/status/1249020176382779392
  16. 1000s of responders
  17. Other attack types from infosec
  18. Other attack types from psychology. Cognitive bias codex: a chart of about 200 biases. Each of these is a vulnerability.
  19. Media view: Mis/Dis/Mal information. “Deliberate promotion… of false, misleading or mis-attributed information. Focus on online creation, propagation, consumption of disinformation. We are especially interested in disinformation designed to change beliefs or emotions in a large number of people.”
  20. Military view: Information Operations
  21. Communications view: shift to trust management
  23. An information security view of disinformation management (TL;DR: adapt all the things)
  24. Information Security vs Cognitive Security: Objects. Infosec: computers, networks, internet, data, actions. Cogsec: people, communities, internet, beliefs, actions.
  25. Disinformation as a risk management problem. Manage the risks, not the artifacts: ● Risk assessment, reduction, remediation ● Risks: how bad? how big? how likely? who to? ● Attack surfaces, vulnerabilities, potential losses / outcomes. Manage resources: ● Mis/disinformation is everywhere ● Detection, mitigation, response ● People, technologies, time, attention ● Connections
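The triage idea above (how bad? how big? how likely?) can be sketched as a minimal likelihood-times-impact scoring pass. The risk entries and the 1-5 ordinal scales are invented for illustration; real assessments (e.g. FAIR-style extensions mentioned earlier in the deck) are far richer:

```python
# Hedged sketch of "how bad? how big? how likely?" as a scoring pass.
# Risk entries and 1-5 ordinal scales are invented for illustration.
risks = [
    {"name": "health misinfo narrative", "likelihood": 4, "impact": 5},
    {"name": "fake persona network",     "likelihood": 3, "impact": 3},
    {"name": "hijacked hashtag",         "likelihood": 5, "impact": 2},
]

def score(risk: dict) -> int:
    """Simple risk score: likelihood x impact, both on 1-5 scales."""
    return risk["likelihood"] * risk["impact"]

# Triage: spend detection/mitigation/response resources top-down,
# managing the risks rather than chasing every artifact.
for risk in sorted(risks, key=score, reverse=True):
    print(f'{risk["name"]}: {score(risk)}')
```

The design point is the sort, not the arithmetic: limited responder attention goes to the highest-scored risks first.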
  26. Using the Parkerian Hexad. Confidentiality, integrity, availability: ■ Confidentiality: data should only be visible to people who are authorized to see it ■ Integrity: data should not be altered in unauthorized ways ■ Availability: data should be available to be used. Possession, authenticity, utility: ■ Possession: controlling the data media ■ Authenticity: accuracy and truth of the origin of the information ■ Utility: usefulness (e.g. data is useless if the encryption key is lost). Image: Parkerian Hexad, from https://www.sciencedirect.com/topics/computer-science/parkerian-hexad
  27. Digital harms frameworks (list from https://dai-global-digital.com/cyber-harm.html). Physical harm: e.g. bodily injury, damage to physical assets (hardware, infrastructure, etc). Psychological harm: e.g. depression, anxiety from cyberbullying, cyberstalking etc. Economic harm: financial loss, e.g. from data breach, cybercrime etc. Reputational harm: e.g. organization: loss of consumers; individual: disruption of personal life; country: damaged trade negotiations. Cultural harm: increase in social disruption, e.g. misinformation creating real-world violence. Political harm: e.g. disruption in political process or government services from e.g. internet shutdown, botnets influencing votes.
  28. Responder harms management. Psychological damage: ● Disinformation can be distressing material. It's not just the hate speech and _really_ bad images that you know are difficult to look at - it's also difficult to spend day after day reading material designed to change beliefs and wear people down. Be aware of your mental health, and take steps to stay healthy. ● (This, by the way, is why we think automating as many processes as make sense is good - it stops people from having to interact so much with the raw material.) Security risks: ● Disinformation actors aren't always nice people. Operational security (opsec: protecting things like your identity) is important. ● You might also want to keep your disinformation work separated from your day job. Opsec can help here too.
  29. Disinformation risk assessment. Information landscape: • Information seeking • Information sharing • Information sources • Information voids. Threat landscape: • Motivations • Sources / starting points • Effects • Misinformation narratives • Hateful speech narratives • Crossovers • Tactics and techniques • Artifacts. Response landscape: • Monitoring organisations • Countering organisations • Coordination • Existing policies • Technologies • etc
  30. Cognitive Security Operations Centers
  31. CogSoc info sharing. Diagram: a Cognitive ISAO (modelled on ISACs/ISAOs) connecting COG SOCs and COG desks across organisations, alongside each org's infosec SOC, comms, legal, trust & safety, and platform teams.
  32. Better coordination mechanisms: borrowing from ISACs
  33. Layers of detection, layers of response: campaigns; incidents; narratives and behaviours; artifacts
  35. DISARM Red: the CogSec version of the kill chain and ATT&CK
  36. Intelligence community: countermeasure categories - DECEIVE, DENY, DESTROY, DETER, DEGRADE, DISRUPT, DETECT
  37. DISARM Blue: Countermeasures Framework. Countermeasures are mapped against the red-framework phases and tactic stages (Planning: Strategic Planning, Objective Planning; Preparation: Develop People, Develop Networks, Microtargeting, Develop Content, Channel Selection; Execution: Pump Priming, Exposure, Go Physical, Persistence; Evaluation: Measure Effectiveness). Example countermeasure groups:
     ● Prebunking; humorous counter narratives; mark content with ridicule / decelerants; expire social media likes/retweets; influencer disavows misinfo; cut off banking access; dampen emotional reaction; remove / rate limit botnets; social media amber alert; etc.
     ● Have a disinformation response plan; improve stakeholder coordination; make civil society more vibrant; red team disinformation, design mitigations; enhanced privacy regulation for social media; platform regulation; shared fact checking database; repair broken social connections; pre-emptive action against disinformation team infrastructure; etc.
     ● Media literacy through games; tabletop simulations; make information provenance available; block access to disinformation resources; educate influencers; buy out troll farm employees / offer jobs; legal action against for-profit engagement farms; develop compelling counter narratives; run competing campaigns; etc.
     ● Find and train influencers; counter-social engineering training; ban incident actors from funding sites; address truth in narratives; marginalise and discredit extremist groups; ensure platforms are taking down accounts; name and shame disinformation influencers; denigrate funding recipient / project; infiltrate in-groups; etc.
     ● Remove old and unused accounts; unravel Potemkin villages; verify project before posting fund requests; encourage people to leave social media; deplatform message groups and boards; stop offering press credentials to disinformation outlets; free open library sources; social media source removal; infiltrate disinformation platforms; etc.
     ● Fill information voids; stem flow of advertising money; buy more advertising than disinformation creators; reduce political targeting; co-opt disinformation hashtags; mentorship: elders, youth, credit; hijack content and link to information; honeypot social community; corporate research funding full disclosure; real-time updates to factcheck database; remove non-relevant content from special interest groups; content moderation; prohibit images in political channels; add metadata to original content; add warning labels on sharing; etc.
     ● Rate-limit engagement; redirect searches away from disinfo; honeypot: fake engagement system; bot to engage and distract trolls; strengthen verification methods; verified IDs to comment or contribute to polls; revoke whitelist / verified status; microtarget likely targets with counter messages; train journalists to counter influence moves; tool transparency and literacy in followed channels; ask media not to report false info; repurpose images with counter messages; engage payload and debunk; debunk / defuse fake expert credentials; don't engage with payloads; hashtag jacking; etc.
     ● DMCA takedown requests; spam domestic actors with lawsuits; seize and analyse botnet servers; poison monitoring and evaluation data; bomb link shorteners with calls; add random links to network graphs.
  38. Red/blue teaming: using blue-to-red links
  39. Lifecycle models
  40. From crisis management: lifecycle management
  41. Infosec lifecycle models
  42. Resource allocation and automation: • Tagging needs and groups with AMITT labels • Building collaboration mechanisms to reduce lost tips and repeated collection • Designing for future potential surges • Automating repetitive jobs to reduce load on humans
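The first of those ideas, tagging tips with framework labels, can be sketched as simple keyword rules that map free-text reports to technique labels so similar reports get routed together. The label strings below are in the style of AMITT/DISARM red-framework technique IDs but should be treated as illustrative; check the current DISARM framework for exact IDs and names:

```python
# Minimal sketch of tagging free-text tips with framework technique labels.
# Label strings are illustrative, AMITT/DISARM-style; the keyword rules
# are made up for this example.
RULES = {
    "T0018 Paid targeted ads": ["paid ad", "sponsored"],
    "T0046 Search engine optimization": ["seo", "search ranking"],
    "T0049 Flooding": ["flood", "spam wave"],
}

def tag_report(text: str) -> list[str]:
    """Return every label whose keyword list matches the report text."""
    text = text.lower()
    return [label for label, keywords in RULES.items()
            if any(kw in text for kw in keywords)]

print(tag_report("Sponsored posts and a spam wave hit the hashtag"))
```

In practice this keyword pass would be a first-line filter feeding human review or an ML classifier; the payoff is that tagged reports can be grouped, deduplicated, and surged to the right responders.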
  43. US work that might be interesting to you. ● DoD's Principal IO Advisor (NDAA Sec 1631) ● Principals: interim leader is MG Matthew Easley (ex Army Futures Command); working with USDP, USDI, and USD R&E; USD R&E brokering technology operational implications and capabilities ● Focus: shift from a counterterrorism focus to the Information Environment, Information Advantage, and more globally focused national security interests; how to engage in the IE amid modern challenges of social media, bots, deepfakes, and complex IO campaigns; future interactions online ● Events: Phoenix Challenge event in April, https://phoenixchallengedod.org/
  44. THANK YOU. SJ Terp @bodaceacat. Dr. Pablo Breuer @Ngree_H0bit. https://www.disarm.foundation/