BUGS AREN’T RANDOM
Unifying Building and
Breaking In The Modern
Thank you DEFCON, for supporting me for
almost two decades
Thank you Baidu. Only DEFCON could go to
China, and you helped make that happen.
Thank you for coming :) This is my first time to
your lovely country!
This is a keynote, so I’m supposed to inspire you
This is a technical talk, so there’s going to be actual lines
of code on this actual screen (I promise)
The goal: Connect a series of concepts you may never
have thought were linked
Consider this a “Skydive”
− Start with a bird’s-eye view
− Dive headfirst into the weeds
− Get ourselves a bug’s-eye view
60 Frames Per Second
I have a “hobby” around human perception
− Ask me about Color Blindness sometime
There’s a myth that people see at 60 frames per second
− Works OK for video games, makes people quite sick in VR
− Obviously mythological, we don’t see in frames at all, our eyes jiggle
around a lot and our brain dreams something up
− It’s why we can dream
− Lots of experiments show the average person seeing well past 60
But why 60?
My Traditional Answer,
“1940’s television technology, that’s just how fast
TVs used to run.”
Correct, but incomplete.
I’m used to technology having its own clocks
Turns out you can just use the power lines as a clock
(Can != Should)
We didn’t make TVs 60fps for human vision
We made TVs 60fps because there was a 60Hz power grid
Why was it 60fps?
Power was 60Hz
Because 1890’s and Physics Were
The induction motor was
found to work well on
frequencies around 50 to 60
Hz, but with the materials
available in the 1890s would
not work well at a frequency
of, say, 133 Hz.
So is 60fps nothing to do with human vision, and
everything to do with 1890’s technology?
“There is a fixed relationship between the number
of magnetic poles in the induction motor field, the
frequency of the alternating current, and the
speed of rotation.”
60Hz wasn’t just 1890’s tech.
It’s also physics.
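The “fixed relationship” quoted above is the standard synchronous-speed formula for AC machines, N = 120·f/P. A quick sketch (the function name is mine, the formula is textbook):

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed of an induction motor's rotating field:
    N_s = 120 * f / P  (f = line frequency in Hz, P = pole count)."""
    return 120.0 * frequency_hz / poles

# A common 4-pole motor: 60 Hz power spins the field at 1800 RPM.
# Push the frequency to the quote's 133 Hz and the field must spin at
# 3990 RPM -- faster than 1890s materials could usefully follow.
print(synchronous_speed_rpm(60, 4))   # 1800.0
print(synchronous_speed_rpm(133, 4))  # 3990.0
```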
We’re Made Of Physics Too
Human vision comes from the brain
The brain circulates electromagnetic signals
Gamma waves work at 25-100Hz
PURE SPECULATION, but it’s sort of cool to go from:
− 60 fps = rate human brain implements human vision
− 60 fps = tricking human vision w/ television
− 60 fps = timing television with spinning magnets
− 60 fps = spinning magnets at the same rate as the human brain
You might be thinking
What could this possibly have to do with bugs?
− We don’t necessarily know why things are the way they are
− Usually we do things because we’ve been doing them
− Sometimes what we’ve been doing is good enough, sometimes
what we’ve been doing is bad but nobody realizes where the
problem came from
I like figuring out why
− Stay intellectually honest, and you’ll find cool stuff
− Know you’re speculating!
No Really, Speculative Execution
Spectre and Meltdown
“Did you go to the coffee shop?”
“Did you go to the bar?”
“Did you go to the club?”
“… … … No!”
Saying the same thing, at a different time, is not always
saying the same thing.
What Are These Bugs?
There are many variants (which is kind of the idea)
Try to read data you’re not allowed to
You’re told no, but at the wrong time
Try to run code you’re not allowed to
You’re not allowed to run it, but other things go faster or
slower based on what you weren’t allowed to run
What went wrong?
We Assumed You Could Only Detect
Cached/Uncached, not Content
Wrong because when you read memory, you can say
“Give me this information at (address plus a value
between 0 and 255)”
You’ll be told no, but now you can check:
“Do you have the value at address+0?”
“Do you have the value at address+1?”
“Do you have the value at address+2?”
Fuzzy, but you can check many, many times on a
gigahertz processor, and you can flush the cache and
try again
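The probe loop described above can be sketched as a toy model. This is a simulation of the idea only, not a real microarchitectural attack: the `cache` set stands in for cache state, and all names are illustrative.

```python
# Toy model of the cache side channel described above. A real attack
# measures load latency; here a set of "warm" addresses plays the cache.
SECRET = 42      # the byte we are not allowed to read
BASE = 0x1000    # attacker-controlled probe array base (made up)

cache = set()

def speculative_access(base, secret):
    # Architecturally this access is rolled back ("you're told no"),
    # but the cache line for (base + secret) is left warm -- the side
    # effect that survives at the wrong time.
    cache.add(base + secret)

def probe(base):
    # "Do you have the value at address+k?" for every k in 0..255
    for k in range(256):
        if (base + k) in cache:   # in reality: a fast load == cache hit
            return k
    return None

speculative_access(BASE, SECRET)
print(probe(BASE))  # recovers 42 without ever reading SECRET directly
```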
We Assumed We Could Make
“Why do we have these bugs? Isn’t this just bad engineering?”
Lots of nerd shaming
No community in all of technology proves the
mathematical correctness of their work more than
the hardware community
They are the industrial market for theorem provers and
formal verification
But you have to prove the right things, at the right scale
The Same Thing Might Be Predictable or
Random Based On Context
A corporation can be relatively predictable
An executive at that corporation can be erratic
Might quit tomorrow
His heartbeat, however, is relatively predictable
An individual heart cell in that heart might not be
All four scales occupy the same point in time and space
The theorem provers did not fail when they
showed no leakage of information between
security domains
The right bits went to the right places
The theorem provers weren’t being asked to
show there were no timing variations
dependent on secrets
Most, if not all timing variation is defined to
not exist at the scale being proven
The Great Repurposing
We turned a stability boundary, into a security
boundary, and hoped it would work
Historically, most code would crash all the time
The game was making sure it only corrupted its own
memory
The theory was that hackers were just a new source of
misbehavior, let’s just isolate them like we isolate
crashes
Even independent of time, that hasn’t worked amazingly
well, but in the context of time...
Hackers are better behaved
They change smaller things (from a computer’s
perspective) that are bigger things (from a security
perspective)
Spectre and Meltdown change time, which is
defined as nonexistent to the microprocessor
designer, and made to be information carrying to
the attacker
Some of the exploits around
Spectre/Meltdown exploit the system timer
One tick every 15.6ms on many platforms
1000ms/15.6ms == 64fps :)
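A quick check of that arithmetic (the 15.6ms tick is, more precisely, 1/64 of a second):

```python
# The familiar system timer: 64 ticks per second.
tick_ms = 1000 / 64
print(tick_ms)                 # 15.625 -- the "15.6ms" tick above
print(round(1000 / tick_ms))   # 64 -- back to our magic frame rate
```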
Spectre and Meltdown Leak Bits.
You can’t leak bits you do not have.
There is a hidden architectural choice behind these bugs
“We have one computer that must pretend to be many
computers, with many different security levels.”
There is another decision that can be made
If you want two security domains, get two computers
You know, computers are small now.
Yes, we had to write patches for Spectre and Meltdown
Yes, we’re putting these patches everywhere, whether
there’s a security boundary crossing or not
But yes, not every individual node has two security
domains on it
Sometimes, the only user really is the administrator
Sometimes, the administrator is only not the administrator
when running a web browser
We are sort of getting this information down to where it needs
to be in the chip
There’s a fair amount of “impedance mismatch”, and a lot of
microcode patching right now is just trying to get even process
ID into the branch predictor++
Explicit Security Domains Will Come
Security domains are not users
Security domains are not processes
Security domains are not even constrained to a single
kernel or a single machine
They’re their own space. All the Spectre/Meltdown
goop going on is trying to give the microcode an idea
of whose context they’re working on. We’ll fake “what
security domain” with that...for a while.
Surprising Amount Of Activity Around
OS Design Out There
User/kernel is not only not always a real security boundary
User/kernel is actually pretty slow
Everything fast gets rid of it
DPDK networking running entirely in userspace
Kernel Mode Linux from back in the day
“Rump Kernels” aren’t kernel-less – they just run full BSD (or
even Linux, w/ LKL) kernels in the same memory space as the
application
HPC is actively working here – mOS, Hermitcore, Kitten, etc.
Why Am I Telling You This?
Security that doesn’t care about the rest of IT is Security that
grows increasingly irrelevant
Computing in 2023 is not going to look like Computing in 2018
Computing in 2018 doesn’t look like what most people think
computing in 2018 is
“There’s no such thing as the cloud, there’s just other people’s
computers.”
...with other people’s pagers.
The scale of computing has completely changed, how we fix
our security problems is going to require different viewpoints
If you’re just looking for bugs, look for the things people
think don’t matter
Bugs aren’t random because their source isn’t random
Developers write certain bugs based on what they aren’t thinking about
Bug finders find certain bugs based on what they know
developers aren’t thinking about
This is not always conscious
It’s usually true, at least of anyone I’ve found who’s good at this
Right about now is a good time to introduce
The Catchy Only Vaguely Correct Catchphrase
Designed To Spark Interest
What Do I Mean?
“We only make the car turn left. Those other guys
handle the car turning right.”
“It’s just my job to get the plane in the air. If, when,
and how it lands – not my problem.”
It’s not that there aren’t different teams – it’s that if
you don’t care whether your work affects the other
guys...things are gonna crash
There’s no reverse engineering, there’s no
forward engineering.
There’s just engineering.
There are cultural elements in engineering that
block the integration of forward and reverse.
The primary one seems to be...
“Hackers” like to talk about the former
We are a specific branch of the latter
The latter shouldn’t be split off, but it often has
− Everybody always sees their own code for what it
should be doing, not for what it actually does
Large amounts of tooling are isolated to the testers
Creates an enormous bias in developer knowledge, they
end up not using tools and patterns that are too “test-y”
Ends up biasing the code they think they can write
− More technically: Ends up biasing their transformations in a
single direction
Compile time influences runtime
Runtime doesn’t update the source to be compiled
− Less technically: Like a car that pulls right
Fortran is fast (still).
Python is slow (still).
Except if you use Numba.
− Finally, a practical environment for transforming standard-ish Python
into high perf CPU/GPU code
− Requires, like all optimizers, knowledge of what types of data it’s
supposed to optimize for
− Games of constraints: If I can constrain what’s coming, I can throw
state away and optimize only for those expectations
What happens if I constrain incorrectly? CVE numbers.
That’s why security is involved. Perf and sec are not separate universes.
Python is dynamically typed, integers or floating
point numbers or strings or whatever are
distinguished at runtime
Developer pain required to declare up front
what types might pass through
Another Competitor Enters The Ring
PyAnnotate by Dropbox
Monitors types used during test or production,
updates the code in-place with annotations
Thus far, this hasn’t been extended to numeric
optimization for Numba
Solves the problem that developers don’t actually
know the right answers for expected types either
Considered appropriate only for “legacy” code
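A minimal sketch of that monitoring idea, with a toy decorator standing in for the kind of runtime instrumentation PyAnnotate uses (PyAnnotate’s real tooling hooks call tracing and rewrites the source; this just shows the “watch what actually flows through” part):

```python
from collections import defaultdict
import functools

# Record the argument types observed at real call sites.
observed = defaultdict(set)

def record_types(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        observed[fn.__name__].add(tuple(type(a).__name__ for a in args))
        return fn(*args)
    return wrapper

@record_types
def scale(x, factor):
    return x * factor

# Run it under test or production traffic...
scale(10, 3)
scale(2.5, 4.0)

# ...and the tool, not the developer, now knows the expected types.
print(sorted(observed["scale"]))  # [('float', 'float'), ('int', 'int')]
```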
The Approach Seems Weird
“Isn’t this Profile Guided Optimization?”
− No, this actually changes the source code
− This is also not constrained to performance, e.g. it could apply security constraints
(probably been some work here)
Pair programming with a machine?
− I mean, we lean on libraries a lot, to the point a lot of dev is figuring out what
legos to stick together
Developers are supposed to know what the system is supposed to do.
They’re not supposed to learn what the system is supposed to do by
watching it fail!
− A) That’s totally what they do
− B) That’s totally what we do
Developer tools usually assume the
developer is right
Optimization throws out information unneeded
by the present system
The present system is wrong, a new system is
needed, and the information about how we deviated
from correct to suboptimal may have been discarded
So that’s how test tools – hacker tools – differ
up front. We’re looking for that developer error.
But everyone’s tools are kind of bad
“The difference between reverse and ‘normal’ engineering is
whether you have the source code”
− Assumes the source code is more comprehensible or predictable than
the compiled form
− I knew a guy (who will laugh when I send him this slide) who audited C++
from the compiled binary, because “who can dig through that mess, just
give me the binary and I’ll walk through the table myself”.
Ultimately the more we can monitor the operation of running code,
in the context of the source code generating it – the faster the loop
between misconception and correction can be – the better
software (or the more bugs) we’ll find.
Where I’m Going With This
1) Full system source debugging
If things are so open source, where’s the source? Why am I
specifically recompiling things I happen to be interested in, one
library at a time, including the kernel?
− Questions actually do have answers: Debugging tools don’t want to
take a hard dependency on source being available
− Went to an SSD developer some years ago, I don’t think there is a
single company on the planet with all source to one of those
− But I can compile Gentoo from source…
− Yes, it’s very nice.
− apt-build on Ubuntu also “kind of works” – good for individual targets
ADB is old and busted
ABD is the new hotness
Always Be Debugging :)
“But what about security boundaries? Am
I going to have to type sudo all the time?”
Ever get the feeling it’s easier to be root on…
someone else’s machine? :(
Attackers get root for years
You get root, one line at a time
It’s still me!
No really, still me.
And you get such a variety of software
This is just as silly as
−Can’t just switch effective user all the time, that’s part of why sudo is there
−Present plan: Make permission checks pass, but otherwise keep
users what they are
−Use the switches to control precise parameters (they glow!)
This is actually common behavior, we just don’t notice it
−VM’s are fake roots
−Containers are fake roots
HUGE reason Docker succeeded
−Kali Linux is a real root
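The “make permission checks pass, but otherwise keep users what they are” plan can be sketched as a toy. This only fools Python-level checks inside one process; fakeroot, containers, and user namespaces do the real version. The function names are illustrative.

```python
import os

# Toy fakeroot: satisfy the "am I root?" check without changing the user.
real_geteuid = os.geteuid

def install_package():
    # Stand-in for any tool that gates its work behind a root check.
    if os.geteuid() != 0:
        raise PermissionError("must be root")
    return "installed (pretend)"

os.geteuid = lambda: 0      # every "am I root?" check now says yes
result = install_package()  # the check passes...
os.geteuid = real_geteuid   # ...but the real uid never changed

print(result)
```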
Jupyter is almost great
Web based reboot of interactive programming
No internal interface for adding packages
That would depend on root access!
(“But virtualenv!” SMACK) It’s a polyglot environment,
supports lots of languages
What’s going on?
As usual, there’s actually a reason
There is an answer to why we’re moderately careful doling out root
access, and it’s actually not really “we’re afraid of hackers”
Users can actually break their machines pretty easily, and then come to
you to fix them
One class of fixes involves applying a talent bar, to be able to break the
machine, or making as much software as possible not put the machine
at risk by rewarding it with being immune to The Prompt
Another class involves...just making it easy to fix the thing
AKA “Fork The Universe”
Boot the running system, into a VM, with
the full existing configuration, knowing we
can’t break anything
Why It’s Interesting
Nothing Bad Can Happen
“Get two computers” is also
“manage your persistence”
A Hard Question
Why are we vulnerable to ransomware?
“Because the attackers can delete our data”
Why can attackers delete our data? Why can we? Isn’t storage cheap now?
Equivalent for ephemeral installs: Why do I have these difficult to protect,
expensive to replace persistent installations?
As debugging didn’t want to take a dependency on source, security may not
have wanted to take a dependency on zero persistent storage.
− But that might be the right design to work with, to allow arbitrary “damage” to be done
and always be able to return to a known safe state.
We should not separate development and testing
Our hardest problems in security require alignment
between how we build and how we verify
Our best solutions in technology will understand the past
to see the future
All that matters is how well we protect users, and provide the
services that they need.
Our personal development cultures are not as important as
actually getting the job done :)