
Behavioral Models of Information Security: Industry irrationality & what to do about it



I examine the information security industry through the lens of behavioral models. Traditional ways of thinking about defensive and offensive motivations focus on models such as game theory, which tend to assume the people on each side are “rational” actors. However, humans have lots of quirks in their thinking that are the result of cognitive biases, and that lead to “irrational” behaviors.

The question therefore is: what biases do defenders and attackers have when they make decisions, and how can we leverage these insights to improve the efficacy of defense? In particular, I’ll discuss what implications theories such as Prospect Theory, time inconsistency, less-is-better effect, sunk cost fallacy, dual system theory and social biases such as fairness and trust have for why the industry dynamics are the way they are.

Given at NCC Group's Security Open Forum on August 17, 2016


Behavioral Models of Information Security: Industry irrationality & what to do about it

  1. Kelly Shortridge, August 17, 2016
  2. “Markets can stay irrational longer than you can stay solvent” / “You can stay irrational longer than you can stay uncompromised”
  3. What is behavioral economics? • Old school model = homo economicus (perfectly rational humans) • Behavioral econ = measure how we actually behave, not how we should • Evolutionarily viable thinking ≠ rational thinking • Neckbeards wouldn’t survive long in the wild
  4. Cognitive biases • People are “bad” at evaluating decision inputs • They’re also “bad” at evaluating potential outcomes • In general, lots of quirks & short-cuts (heuristics) in decision-making • You’re probably familiar with things like confirmation bias, short-termism, Dunning-Kruger, illusion of control
  5. Common complaints about infosec • “Snake oil served over word salads” • Hype over APT vs. actual attacks • Not learning from mistakes • Not using data to inform strategy • Playing cat-and-mouse
  6. “If you can’t handle me at my worst, you don’t deserve me at my best” – Sun Tzu
  7. My goal • Start a different type of discussion on how to fix the industry, based on empirical behavior vs. how people “should” behave • Focus on the framework; my conclusions are just a starting point • Stop shaming defenders for common human biases; you probably suck at dieting, bro • (also I’ll show off some bad amazing cyber art)
  8. What will I cover? • Prospect Theory & Loss Aversion • Time Inconsistency / Hyperbolic Discounting • Less-is-better Effect • Sunk Cost Fallacy • Dual-system Theory • …and what to do about all this
  9. (image slide)
  10. Prospect theory • People choose by evaluating potential gains and losses via probability, NOT the objective outcome • Consistently inconsistent based on being in the domain of losses or domain of gains • Care about relative outcomes instead of objective ones • Prefer a smaller, more certain gain, and a gamble on a larger loss over a certain smaller one
  11. Core tenets of Prospect Theory • A reference point is set against which to measure outcomes • Losses hurt 2.25x more than gains feel good • Overweight small probabilities and underweight big ones • Diminishing sensitivity to losses or gains the farther away from the reference point
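The tenets above can be sketched numerically. This is a minimal illustration using the standard Kahneman–Tversky functional forms; the parameter values (λ ≈ 2.25 loss aversion, α ≈ 0.88 diminishing sensitivity, γ ≈ 0.61 probability weighting) are the commonly cited estimates from the behavioral-economics literature, not values from these slides:

```python
# Minimal sketch of Prospect Theory's value and probability-weighting
# functions (Kahneman & Tversky forms; parameters are the commonly
# cited literature estimates, not values from this talk).

def value(x, alpha=0.88, lam=2.25):
    """Value of outcome x relative to the reference point:
    diminishing sensitivity, with losses weighted ~2.25x gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Decision weight for probability p: overweights small
    probabilities and underweights large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# A $100 loss hurts far more than a $100 gain feels good:
print(value(100), value(-100))
# A 1% attack probability is treated as if it were ~5-6%:
print(weight(0.01))
```

This is exactly the dynamic the deck describes: rare APT-style attacks (small p) get overweighted, while common phishing (large p) gets underweighted.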
  12. Offense vs. Defense • Offense: risk averse; quickly updates reference point; focuses on probabilistic vs. absolute outcomes • Defense: risk-seeking; slow to update reference point; focuses on absolute vs. probabilistic outcomes
  13. InfoSec reference points • Defenders: we can withstand Z set of attacks and not experience material breaches, spending $X — Domain of losses • Attackers: we can compromise a target for $X without being caught, achieving goal of value $Y — Domain of gains
  14. Implications of reference points • Defenders: loss when breached with Z set of attacks; gain from stopping harder-than-Z attacks • Attackers: gain when spend less than $X or have outcome > $Y; loss when caught or when $X > $Y
  15. Prospect theory in InfoSec • Defenders overweight small probability attacks (APT) and underweight common ones (phishing) • Defenders also prefer a slim chance of a smaller loss or getting a “gain” (stopping a hard attack) • Attackers avoid hard targets and prefer repeatable / repackagable attacks (e.g. malicious macros vs. bypassing EMET)
  16. What are the outcomes? • Criminally under-adopted tools: EMET, 2FA, canaries, white-listing • Criminally over-adopted tools: anti-APT, threat intelligence, IPS/IDS, dark-web anything
  17. Incentive problems • Defenders can’t easily evaluate their current security posture, risk level, probabilities and impacts of attack • Defenders only feel pain in the massive breach instance, otherwise “meh” • Attackers mostly can calculate their position; their weakness is they feel losses 3x as much as defenders
  18. (image slide)
  19. Time inconsistency • People should choose the best outcomes, regardless of time period • In reality: rewards in the future are less valuable (follows a hyperbolic discount) • Classic example: kids with marshmallows; have one now or wait and get two later (they choose the marshmallow now) • Sometimes it can be good, like with financial risk
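The marshmallow trade-off above can be written down directly. A minimal sketch of hyperbolic discounting, V = A / (1 + kD), where the impatience parameter k is illustrative rather than taken from the talk:

```python
# Minimal sketch of hyperbolic discounting: the present value of a
# reward `amount` received after `delay` is amount / (1 + k*delay),
# where k is an (illustrative) impatience parameter.

def discounted_value(amount, delay, k=1.5):
    return amount / (1 + k * delay)

one_now = discounted_value(1, delay=0)    # 1.0
two_later = discounted_value(2, delay=1)  # 2 / 2.5 = 0.8
# With enough impatience, one marshmallow now beats two later:
print(one_now > two_later)  # True
```

The same arithmetic explains technical debt on the next slide: the future reward of "secure later" is discounted so steeply that the cheaper option now always wins.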
  20. Time inconsistency in InfoSec • Technical debt: “We’ll make this thing secure…later” • Preferring out-of-the-box solutions vs. ones that take upfront investment (e.g. whitelisting) • Looking only at current attacks vs. building in resilience for the future (even worse with stale reference points from Prospect Theory)
  21. (image slide)
  22. Less-is-better effect • Evaluating things separately = lesser option • Evaluating things together = greater option • e.g. choose 7 oz of ice cream in an overflowing cup vs. 8 oz in a larger cup when considered apart • Why? People focus on things that are easier to evaluate when judging separately (attribute substitution)
  23. Attribute substitution • Substitute an attribute requiring thinky-thinky for a heuristic attribute • People do this all the time, and generally don’t realize they’re doing it (unconscious bias) • Ice cream example: cup is overflowing = better • Social example: it’s hard to evaluate intelligence, so judge people based on stereotypes of relative intelligence of their race
  24. Attribute substitution in InfoSec • Evaluating the efficacy of a security product is really, really hard (same with security expertise) • Easier to look for: — Social proof (logos on a page) — Representativeness (does it look like products we already use / attacks we’ve seen) — Availability (ability to recall an example, e.g. recently hyped attacks)
  25. Less-is-better in InfoSec • Anti-APT looks like a good deal because it probably appears low cost relative to the “high cost,” unclear-riskiness attacks it’s stopping • 2FA, canaries, et al look less impressive since they’re stopping most lower-cost attacks, and risk you can more easily measure • This gets even worse when you take Prospect Theory into account – defenders are really bad at estimating probabilities & impact of attacks
  26. (image slide)
  27. Mental accounting • People think about value as relative vs. absolute • Not just about the value of an outcome or good, but also its “quality” • People also think about money in different ways, depending on the amount, its origin and its purpose
  28. Sunk cost fallacy • You’ve bought a $20 movie ticket. It starts storming and now you don’t want to go… • …but you do, because you “already paid for it” and “need to get your money’s worth” • This is irrational! Costs now outweigh benefits, but you’re treating the costs of your time & inconvenience in a different mental account
  29. Sunk cost fallacy in InfoSec • Just because you spent $250k on a fancy blinky box, shouldn’t keep using it if it doesn’t work • Throwing good money after bad strategies rather than pivoting to something else • Or, “we spent all this money and still got breached, it isn’t worth it to spend more now”
  30. (image slide)
  31. Dual-system theory • Mind System 1: automatic, fast, non-conscious • Mind System 2: controlled, slow, conscious • System 1 is often dominant in decision-making, esp. with time pressure, busyness, positivity • System 2 is more dominant when it’s personal and / or the person is held accountable
  32. Dual-system theory in InfoSec • System 1 buys products based on flashy demos at conferences and sexy word salads • System 1 prefers established vendors vs. taking the time to evaluate all options based on efficacy • System 1 prefers sticking with known strategies and product categories • System 1 also cares about ego
  33. (image slide)
  34. (image slide)
  35. Improving heuristics: industry-level • Only hype “legit” bugs / attacks (availability): very unlikely • Proportionally reflect frequency of different types of attacks (familiarity): unlikely, but easier • Publish accurate threat data and share security metrics (anchoring): more likely, but difficult • Talk more about 1) the “boring” part of defense / unsexy tech that really works 2) cool internally-developed tools (social proof): easy enough
  36. Changing incentives: defender-level • Raise the stakes of attack + decrease value of outcome • Find commonalities between types of attacks & defend against lowest common denominator 1st • Erode attacker’s information advantage • Data-driven approach to stay “honest”
  37. Leveraging attacker weaknesses • Attackers are risk averse and won’t attack if: — Too much uncertainty — Costs too much — Payoff is too low • Block low-cost attacks first, minimize ability for recon, stop lateral movement and ability to “one-stop-shop” for data
  38. How to promote System 2 • Hold defenders extra accountable for strategic and product decisions they make • Make it personal: don’t just check boxes, don’t settle for the status quo, don’t be a sheeple • Leverage the “IKEA effect” – people value things more when they’ve put labor into them (e.g. build internal tooling)
  39. Inequity aversion • People really don’t like being treated unfairly • e.g. A is given $10 and can share some portion $X with B, who will get $X * 2. B then has the same option back — Nash Equilibrium says A gives $0 (self-interest) — Actual people send ~50% to player B, and B generally sends more back to A than received
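The game on this slide can be made concrete. A minimal sketch of the payoffs, comparing the self-interested Nash prediction with roughly the observed behavior described above (the specific amount B returns is illustrative):

```python
# Minimal sketch of the trust game on this slide: A starts with $10
# and sends $X to B; the transfer is doubled in transit, then B may
# send some amount back.

def trust_game(endowment, sent_by_a, returned_by_b):
    b_receives = 2 * sent_by_a
    a_payoff = endowment - sent_by_a + returned_by_b
    b_payoff = b_receives - returned_by_b
    return a_payoff, b_payoff

# Nash equilibrium under pure self-interest: A sends nothing.
print(trust_game(10, sent_by_a=0, returned_by_b=0))  # (10, 0)

# Roughly observed behavior: A sends ~50%, B returns more than A sent
# (the return amount here is illustrative). Both end up better off:
print(trust_game(10, sent_by_a=5, returned_by_b=6))  # (11, 4)
```

The point for the next slide: the cooperative outcome beats the Nash one for both players, which is why repeated interactions can sustain data sharing among defenders.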
  40. Inequity aversion in infosec • May mean defenders will be willing to share data, metrics, strategies • Not necessarily the “as long as I’m faster than you” mentality that is commonly assumed • Key is to set expectations of an ongoing “game”; repeated interactions promote fairness • So, foster a closer-knit defensive community like there exists for vuln researchers
  41. (image slide)
  42. Final thoughts • Stop with the game theory 101 analyses – there are ultimately flawed, irrational people on both sides • Understand your biases to be vigilant in recognizing & countering them • Let’s not call defenders stupid, let’s walk them through how their decision-making can be improved
  43. Questions? • Email: kelly@greywire.net • Twitter: @swagitda_ • Prospect Theory post: https://medium.com/@kshortridge/behavioral-models-of-infosec-prospect-theory-c6bb49902768
