6. Examples
• The set of examples for web applications is enormous, hence…
• My own story about XSS and RCE should suffice
7. Solutions
• Usage of publicly acclaimed frameworks, per Linus's Law: "Given enough eyeballs, all bugs are shallow."
• Improving the process of software development
• Secure by Design: Secure SDLC / DevSecOps implementation
• Testing according to well-known (and accepted) methodologies
such as OWASP ASVS, OWASP TOP 10, SANS 25, et cetera
9. Examples
• Neex and a bug in the way an external tool (GraphicsMagick) is invoked (OS command injection) – Imgur
• Chris Evans and his bugs in ImageMagick (memory disclosures) –
vulnerable versions found on servers from companies such as
Dropbox and Yahoo!
10. Solutions
• Conscious choice of external components
• Smaller attack surface == reduced risk
• Least-privilege principle applied to external components (e.g. sandboxing)
11. References
• https://scarybeastsecurity.blogspot.co.uk/2017/05/proving-missing-aslr-on-dropboxcom-and.html
• https://scarybeastsecurity.blogspot.co.uk/2017/05/0day-proving-boxcom-fixed-aslr-via.html
• https://scarybeastsecurity.blogspot.co.uk/2017/05/bleed-more-powerful-dumping-yahoo.html
• https://scarybeastsecurity.blogspot.co.uk/2017/05/bleed-continues-18-byte-file-14k-bounty.html
• https://hackerone.com/reports/212696
• https://4lemon.ru/2017-01-17_facebook_imagetragick_remote_code_execution.html
• https://blog.sigsegv.pl/external-third-party-resources-and-your-web-application/
• https://onedrive.live.com/view.aspx?resid=2664E65DD698885E!120&ithint=file%2cpptx&app=PowerPoint&authkey=!AK39RoVxiJ5re8Y
15. Examples
• Deserialization of a cookie and memory corruption within PHP's implementation of the unserialize() function – PornHub
• "The worst bug bounty ever" – a very expensive romance between Shopify and mruby
• "Exposing Hidden Exploitable Behaviors in Programming Languages Using Differential Fuzzing" – interesting (and dangerous) behaviours of interpreters
• My own vulnerability research into popular interpreters (for fun and no profit)
17. Solutions
• Least-privilege principle applied to the interpreter / VM (e.g. sandboxing)
• Banning dangerous functions (e.g. disallow certain functions in your code base)
18. References
• https://www.evonide.com/how-we-broke-php-hacked-pornhub-and-earned-20000-dollar/
• https://www.evonide.com/fuzzing-unserialize/
• https://sean.heelan.io/2017/08/12/fuzzing-phps-unserialize-function/
• https://externals.io/message/100147
• https://bugs.php.net/bug.php?id=75006
• http://mruby.sh/201703261726.html
• https://www.blackhat.com/docs/eu-17/materials/eu-17-Arnaboldi-Exposing-Hidden-Exploitable-Behaviors-In-Programming-Languages-Using-Differential-Fuzzing-wp.pdf
• https://github.com/dyjakan/interpreter-bugs
• https://github.com/rust-fuzz
• https://hackernoon.com/python-sandbox-escape-via-a-memory-corruption-bug-19dde4d5fea5
20. Examples
• "Reflections on Trusting Trust" – Ken Thompson
• CVE-2018-1037 – .PDB heap memory disclosure in Visual Studio (j00ru (Project Zero) 🤘)
21. Solutions
• There is no active and scalable protection
• Passive monitoring of your systems’ integrity
22. References
• https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
• https://twitter.com/j00ru/status/985894472478265344
• https://bugs.chromium.org/p/project-zero/issues/detail?id=1500
26. Examples
• CVE-2016-5195 – DirtyCOW
• CVE-2010-0232 – KiTrap0D by Tavis Ormandy (Google)
• CVE-2018-8897 – POPSS/MOVSS
27. Solutions
• Patch management policy along with actual implementation
• Hardening
  • Best practices
  • Additional defensive mechanisms
28. References
• https://dirtycow.ninja/
• http://seclists.org/fulldisclosure/2010/Jan/341
• https://www.cisecurity.org/cis-benchmarks/
• https://grsecurity.net/
• http://www.openwall.com/lkrg/
• https://support.microsoft.com/en-us/help/2458544/the-enhanced-mitigation-experience-toolkit
• https://docs.microsoft.com/en-us/powershell/module/processmitigations/?view=win10-ps
• https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-exploit-guard/windows-defender-exploit-guard
• https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897
• http://everdox.net/popss.pdf
34. References
• https://en.wikipedia.org/wiki/Virtual_machine_escape
• https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-PAPER.pdf
• https://vimeo.com/6595148
• https://blogs.vmware.com/security/2017/03/security-landscape-pwn2own-2017.html
• https://www.blackhat.com/docs/eu-17/materials/eu-17-Mandal-The-Great-Escapes-Of-Vmware-A-Retrospective-Case-Study-Of-Vmware-G2H-Escape-Vulnerabilities.pdf
• https://keenlab.tencent.com/en/2018/04/23/A-bunch-of-Red-Pills-VMware-Escapes/
36. Examples – CPUs 1/2
• Bugs
  • Pentium FDIV bug – Intel – $$$ = 😢
  • CVE-2012-0217 (and its older brother CVE-2006-0744) – Intel SYSRET, found in 2012 by Rafał Wojtczuk (Invisible Things Lab)
  • AMD microcode security update – Robert Święcki, while fuzzing the kernel on his home workstation
  • Meltdown & Spectre – Jann Horn (Project Zero) et al.
37. Examples – CPUs 2/2
• Features?
  • sandsifter – fuzzing the CPU (Black Hat 2017) by Christopher Domas
  • Intel-SA-00086 – bugs in the Intel Management Engine (ME)
39. Examples – RAM
• RowHammer – original research and exploitation by Thomas Dullien et al. (Project Zero), with further research done by other academics
  • Initial research (2015) – affecting only desktops (local)
  • Later (2016) – affecting mobile devices (local) and VM-to-VM attacks ("local")
  • Now (2018) – affecting mobile devices (remotely!) and cloud servers (remotely!)
41. References
• http://scholar.harvard.edu/files/mickens/files/theslowwinter.pdf
• https://wiki.osdev.org/CPU_Bugs
• https://danluu.com/cpu-bugs/
• https://blog.xenproject.org/2012/06/13/the-intel-sysret-privilege-escalation/
• https://lists.debian.org/debian-security/2016/03/msg00084.html
• https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/
• https://meltdownattack.com/
• https://www.blackhat.com/docs/us-17/thursday/us-17-Domas-Breaking-The-x86-Instruction-Set-wp.pdf
• https://github.com/xoreaxeaxeax/sandsifter
• https://www.intel.com/content/www/us/en/support/articles/000025619/software.html
• https://blog.rapid7.com/2017/11/21/intel-sa-00086-security-bulletin-for-intel-management-engine-me-and-advanced-management-technology-amt-vulnerabilities-what-you-need-to-know/
• https://www.blackhat.com/docs/eu-17/materials/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine.pdf
• https://en.wikipedia.org/wiki/Row_hammer
• https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
• https://www.vusec.net/projects/flip-feng-shui/
• https://www.vusec.net/projects/glitch/
42. References
• https://www.cs.vu.nl/~herbertb/download/papers/throwhammer_atc18.pdf
• https://arxiv.org/abs/1805.04956
43. Summary
• Software is broken
• Hardware is broken, and we're at an early stage
• Best practices at each and every level reduce the risk but cannot completely remove it
• Security is a process, not a product
I've been breaking things since childhood, professionally since 2010
I've worked in various roles, all around application security or software engineering in general
Mostly offensive security, but at this point more and more on the defensive side (improving the state of things)
Application security across the entire technology stack: web applications are broken and they use broken external components. All of this is executed by a broken interpreter (or VM), which is compiled with an untrusted compiler on a broken operating system that runs within an untrusted hypervisor. And underneath it all we have broken hardware.
Practical approach with real-world examples
OK, imagine this: we've got an excellent start-up idea, an app for sharing cat pictures; we've built our MVP and our journey has just started
Basic functionality: we have users (with avatars) and cat pictures (uploaded by the users). Additionally, we have a simple admin panel for managing users, with request-logging functionality (UA, Referer, request payload)
Most of the vulnerabilities seen in the picture are in web apps
We can see a growing trend
This data is not ideal:
The data source is a non-profit
Iceberg case – we see only what's above the surface (e.g. we don't include vendor-proprietary web apps such as Gmail or other SaaS)
During one of many engagements I found a stored XSS in the logging mechanism for users' requests (the application was sanitizing the payload but not the Referer or UA)
Later in the code there was an RCE bug in the upload functionality (no file-type checking, PHP execution enabled for all directories)
Attack scenario: target users via the XSS, use the RCE to infiltrate the server – it's 2018 and these things are still out there
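The fix for that logging XSS is to treat every request-derived field as hostile, headers included. A minimal sketch in Python (the `render_log_row` helper and its field names are hypothetical, not the audited application's code):

```python
import html

def render_log_row(request_fields):
    """Escape EVERY request-derived field (path, User-Agent, Referer)
    before it lands in the admin panel's HTML, not just the payload."""
    cells = "".join(f"<td>{html.escape(v)}</td>" for v in request_fields.values())
    return f"<tr>{cells}</tr>"

row = render_log_row({
    "path": "/upload",
    "user_agent": "<script>alert(1)</script>",  # attacker-controlled header
    "referer": "https://example.com/",
})
assert "<script>" not in row  # the payload is rendered inert
```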
A good example is Ruby on Rails, which out of the box gives us a fairly good security posture
Things that can be automated should be automated (e.g. SAST, DAST, vulnerable-dependency scanners embedded into the CI/CD pipeline)
Bummer: logical bugs will still be there
Let's assume we've eliminated the low-hanging fruit with a Secure SDLC
However, for the avatars and cat pictures themselves we use external modules: a library provided by our package manager, which additionally uses ImageMagick as the actual tool that does the job
Now all problems within the library itself and within ImageMagick are also OUR problems!
One of the params of the /edit/proces end-point was vulnerable to OS command injection because GraphicsMagick allows pipes (|) in file names; Imgur paid $5k
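The bug class is worth sketching generically: interpolating a filename into a shell string hands control to whoever controls the name, and GraphicsMagick/ImageMagick add their own filename syntax (a leading `|` means "run this command") on top. The helper below is hypothetical, showing argv-list invocation plus validation of the name itself:

```python
# A hostile "filename" as it might arrive from an upload form:
hostile = "|touch /tmp/pwned"

# Unsafe pattern (don't): subprocess.run(f"gm convert {name} out.png", shell=True)
# An argv list avoids the shell entirely, but GM's own pipe syntax still
# requires validating the name before passing it through.
def safe_convert_argv(name):
    """Build an argv list for 'gm convert'; reject shell/GM metacharacters.
    Hypothetical helper with an illustrative deny-list."""
    if any(c in name for c in "|;&$`\"'"):
        raise ValueError("refusing suspicious filename")
    return ["gm", "convert", name, "out.png"]

assert safe_convert_argv("cat.png") == ["gm", "convert", "cat.png", "out.png"]
```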
The bugs Chris found allowed reading memory from affected production servers; in practical terms you could (at least!) read images belonging to other users of Dropbox or Yahoo! Mail; Yahoo! paid $14k for this bug
Additionally, bugs such as these can help beat ASLR remotely
Also, think about all those desktop applications that use libraries left and right
Do you really need this external component? And if you do:
Is it well known and widely used?
Do you know its security track record? (How many CVEs? How many issues? etc.)
Sandboxing binaries is a good practice
On a side note: Yahoo! solved its issues with ImageMagick by removing it from its toolchain
OK, we've removed ImageMagick from our toolchain by using Gravatar
However, we're unserializing data from the user; we do validate the data, though, so it's safe…
Unless we have a bug in the implementation of unserialize() itself
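Python's pickle makes the same trap easy to demonstrate: like PHP's unserialize(), the serialized bytes can steer what code runs during deserialization, so validating the decoded object afterwards is already too late. A sketch (the Evil class is of course illustrative):

```python
import json
import pickle

class Evil:
    def __reduce__(self):
        # pickle would call print(...) while unpickling; a real attacker
        # would return os.system with a shell command instead.
        return (print, ("code ran during unpickling",))

payload = pickle.dumps(Evil())
# pickle.loads(payload)  # never on attacker-supplied bytes

# A data-only format can smuggle values, but not code:
cookie = json.dumps({"user_id": 42, "theme": "dark"})
assert json.loads(cookie) == {"user_id": 42, "theme": "dark"}
```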
In my experience, developers who use high-level languages (Ruby, Python, etc.) are usually surprised that a vulnerability can live in the native interpreter of their language (of course, we're only counting the "interesting" parts of the language, e.g. the JSON parser)
Which language interpreter are we looking at?
PHP (the original Zend implementation, not HHVM)
The easier one… ;)
The first problem was a bug in deserialization of user data, but… exploitation took an unusual route – the attackers used fuzzing to discover a vulnerability in PHP's unserialize() function and attacked the interpreter itself, gaining RCE
$20k from PornHub and $2k ($1k per bug) from IBB-PHP
BTW, in 2017 PHP announced that it no longer treats unserialize() bugs as security issues – they claim the input should be trusted. Smart move, because there were many bugs in their implementation…
Shopify wanted to give their users the ability to execute a subset of Ruby on their servers; they used mruby (embedded Ruby) for that purpose and started a bug bounty to get rid of the low-hanging fruit
Within 1 month they had so many reports they needed to scale down payments to 10% of the original ($20k for RCE = $2k after scaling). In half a year they had spent HALF a MILLION USD on bounties (including the scaled-down reports)
BTW, mruby is made and maintained by Matz, the original creator of Ruby, who knows his C quite well
The targets were: Python / Ruby / PHP / JS
Apparently you can find edge cases which (even though they are valid) lead to interesting behaviours that can be used for attacks
My own research into the C implementations of Ruby, Python, PHP, and HHVM
1M iterations each
Key takeaway: lots of crashes for only 1M iterations. Even if most are not exploitable, it shows the low quality of the software itself (bear in mind that some of the players are Facebook…)
BTW, Pythonauts – Python has a smaller number of crashes, but I didn't fuzz it for 1M iterations, only 100k
At this point (as far as I know) only Rust is actively fuzzing its own project
Shopify implemented a sandbox, hence each new bug must also affect the sandbox (this lowers the number of payouts significantly but doesn't completely eliminate them…)
For example, at the SCM level before a commit is added, or a more radical solution: nullify the whole implementation at the interpreter level (e.g. return void for unserialize() in PHP)
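An SCM-level check of this kind can be as small as a regex scan. A sketch (the deny-list is illustrative; a real pre-commit hook would read the staged files):

```python
import re

# Illustrative deny-list; extend to match your code base's policy.
BANNED = re.compile(r"\b(unserialize|eval|system|passthru)\s*\(")

def flag_banned_calls(source):
    """Return the 1-based line numbers that call a banned function."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if BANNED.search(line)]

sample = 'data = unserialize($_COOKIE["session"]);\n$len = strlen($s);'
assert flag_banned_calls(sample) == [1]  # only the unserialize() line
```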
OK, we've solved our problems with interpreters/VMs, our programmers are well trained, but…
Our interpreter or VM still needs to be compiled (by us or by our package provider)
Can we guarantee the integrity of our internal (or external) compiler? Hard to guarantee 100%, and this opens the door for backdoors…
And of course we can have bugs in the compilers themselves
PDBs contain debugging info; j00ru found that on Microsoft's servers there are PDBs compiled with a vulnerable version of Visual Studio that include memory leaked from the compilation phase (around 0.5%) – nothing big, BUT a very interesting case in point
At a certain level of sophistication there are no perfect solutions. We need to trust blindly that some elements of our chain are good
OK, we've accepted the risk with compilers, we've eliminated bugs in our apps and their dependencies, and our interpreters can be trusted
Now we want to deploy and see our MVP in action
We buy shared hosting with a LAMP stack and roll out
A change in the threat model:
The CPU has different levels of trust when it comes to code execution
Modern OSes mainly use 2 of them: kernel mode (ring 0) and user mode (ring 3)
We assume the attacker is already inside (by any means) and the game is about privilege escalation
BTW, can we trust our neighbours when it comes to shared hosting? Vulnerabilities in their applications affect all tenants on the particular server
But the number is bigger for sure (Linus and others in the Linux kernel community are well known for mistreating security issues by claiming they are ordinary bugs until clearly proven otherwise)
They also don't like applying for CVEs
We're focusing on privilege escalation vulnerabilities; because the kernel is shared (ntoskrnl), these are usually cross-version
Linux is the king of infrastructure, Windows is the king of endpoints
DirtyCOW
A race condition in the memory-handling mechanism when the kernel does copy-on-write
Key takeaway: present from 2007 (kernel 2.6.22) until 2016 (kernel 4.x) – 9 years
KiTrap0D
Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack
Key takeaway: present from the original release of NT (27.07.1993) until 2010
All 32-bit versions of Windows: Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7
POPSS, a cross-platform OS vuln
A bad interpretation of Intel's manuals led to wrong implementations of this mechanism across different OSes, effectively giving a way to elevate privileges
Including: Windows, macOS, FreeBSD, and Linux (at least a crash)
Hardening can be split into 2 areas:
Implementation of best practices (e.g. CIS Benchmarks)
Implementation of additional defenses
grsecurity for Linux
BONUS: LKRG by Adam Zabłocki
EMET (Enhanced Mitigation Experience Toolkit) for older Windows, EOL July 2018
PMMT (Process Mitigation Management Tool) and WDEG (Windows Defender Exploit Guard)
Shared hosting is for old people, cool kids rock the cloud
Now, our applications live in a hostel with dynamically changing neighbours
Bar has been raised, attacker needs to enter, then elevate their privileges, and escape from the hypervisor – doubleplusgood so far…
The first public exploit for VMware with a full technical advisory; it opened the eyes of many people (Kostya Kortchinsky from Immunity)
Piotr Bania wrote an exploit for that
What's Pwn2Own?
Virtualization has been included in the competition since 2016, with success
Defense in depth: robust chain of defenses including: firewalls / network IDS to control access to virtual machines, running anti-virus or other host IDS to block malicious software from running on a virtual machine, and the hypervisor itself to isolate virtual machines from each other.
At some point in time, any of these layers may miss something important or have a bug; the goal is to have enough layers that not all can be breached simultaneously.
OK, OK, OK – we've rewritten everything in Rust and our problems are solved (trolololo).
Can we trust our hardware?
Side channels are the new black
A good example of a side channel is a conversation: the main information is exchanged through the spoken word, BUT our micro-expressions are a side channel
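Timing is the classic software-visible side channel: the duration of an operation leaks information its result does not. A byte-by-byte comparison that returns at the first mismatch leaks how long the matching prefix of a secret token is; Python's hmac.compare_digest exists precisely to avoid this:

```python
import hmac

def naive_equal(a, b):
    """Returns at the first mismatching byte, so the running time
    reveals how much of the secret prefix an attacker has guessed."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# Constant-time comparison: duration is independent of the mismatch position.
assert hmac.compare_digest(b"secret-token", b"secret-token")
assert not hmac.compare_digest(b"secret-token", b"secret-tokeX")
assert naive_equal(b"ab", b"ab") and not naive_equal(b"ab", b"ac")
```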
FDIV is important because Intel lost money “In December 1994, Intel recalled the defective processors. (…) $475 million (…) total cost associated with replacement of the flawed processors.”
SYSRET is interesting because it was fixed in Linux in 2006; however, the devs didn't fully understand the implications and didn't notify other OS vendors until Rafał's revelation
The AMD microcode issue is interesting because it was found "by accident" while fuzzing software on an AMD-based machine
Meltdown & Spectre are interesting from both a technical POV and a social one – they were found by 3 independent teams at around the same time (without any coordination of information); on top of that, there was already "a hunch" in the air, introduced most notably by Anders Fogh and his "negative result" blog post
Fuzzing the CPU is interesting because new opcodes were found; question time: are those glitches or features?
Intel ME runs MINIX inside (yes, the academic microkernel) with the highest level of trust within your computer – because what could go wrong?
Today's RAM is built very densely; physically the cells are _very_ close to each other – technically it's possible to read memory in a certain way that affects neighbouring memory cells (flipping them), hence we're able to affect the integrity of a live system (we have 0s and 1s everywhere; if we can flip a certain 1 to 0 we can change our privileges).
Hammering attacks require some kind of code execution, hence they were local in the beginning. However, nowadays it's possible to perform them remotely, e.g. with WebGL in the browser (via the GPU, on Android/ARM) or over remote direct memory access (RDMA) networking (cloud servers)
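The integrity point can be made concrete with one line of arithmetic: a single flipped bit in a permission word is the difference between user and admin. The bit layout below is purely illustrative:

```python
# Illustrative permission word; real RowHammer targets are e.g.
# page-table entries, where one flipped bit remaps memory.
USER_BIT = 0b0001
ADMIN_BIT = 0b1000

perms = USER_BIT                 # unprivileged process: 0b0001
rowhammered = perms ^ ADMIN_BIT  # one hardware bit flip: 0b1001
assert rowhammered & ADMIN_BIT   # the process now passes an admin check
```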