1. The Rot Within
Why Application Security Starts With Tested, Reliable and Transparent Code
2. The Rot Within
My point today is that, if we wish to count lines of code,
we should not regard them as ‘lines produced’ but as
‘lines spent’: the current conventional wisdom is so
foolish as to book that count on the wrong side of the
ledger.
Edsger W. Dijkstra
Companies spend millions of dollars on firewalls,
encryption and secure access devices, and it’s money
wasted, because none of these measures address the
weakest link in the security chain.
Kevin Mitnick
3. Topics
Introduction
Definitions
General Concepts – Areas of Concern
Presentation Core Theme
Security Development Lifecycle
Standards
Some Considerations in Detail
Conclusions
4. Definitions
Security – confidentiality, integrity, availability, authenticity, non-repudiation (the first three form the CIA triad)
SecDLC – Security Development Lifecycle
SDLC – Software Development Lifecycle
Attack Surface –
Subset of software system resources that an attacker can use to attack
the system
Code that can be run by unauthenticated users
Vulnerability – a weakness that can be used to cause harm to an asset
Threat – anything that can cause harm
Risk – likelihood that a threat will use a vulnerability to cause harm
Control – how a risk is mitigated (my emphasis here is on logical/technical controls)
6. Some Areas of Concern
Category/Class:
Authentication
Authorization
Data Validation
Configuration Management
Session Management
Sensitive Information
Logging & Auditing
Interpreter Injection
File System
Database Access
Cryptography
Administrative Interfaces
E-Commerce Payments
Web Services
Phishing
Denial of Service Attacks
Error Handling
Data Integrity
7. Core Theme
Software development is not simple; secure software
development is more difficult still.
Application security can’t be bolted on after the fact by
“security” developers.
All programmers must understand security.
An organization must be mature enough to field a working SDLC before it can consider a SecDLC.
Secure applications are “self-defending”.
Security in a software application must be pervasive and
in depth.
Many of the highest priority risks are due to bad code,
not malicious attackers or acts of God.
8. Other Observations
Secure code starts with good code.
If code is riddled with defects, poorly documented and poorly tested, and its implementation only loosely corresponds to the requirements and design, it is not possible to secure it.
If the organization is not mature enough to support a
credible software development lifecycle, it cannot support
a security development lifecycle either.
No such thing as “sort of secure” or “partially secure”.
9. Requirements
Requirements: not only what an application must do, but also what it must not do.
Define security objectives and requirements
An objective is fairly high-level
Requirements describe the objective in detail
Categories: identity, financial, reputation, privacy & regulatory,
availability (SLAs)
Keep security requirements separate from functional
requirements
Complement use cases with misuse cases.
Use knowledge of risks and mitigation strategies to start work
on security test plan
10. Design 1
Understand security policies and regulations
Establish components/layers/packages & boundaries
Includes shared and external components
Includes other applications on same server or accessing same
databases
Understand data flows and interconnections
Understand the security of single components
Identify attack surface
Perform threat analysis (risk modeling)
Principle of least privilege
11. Design 2
Choose a development methodology
Any will do provided that you’ve got one
Understand the security features and published guidelines for the OS, managed platform, language, libraries/frameworks, etc.
Establish/select coding standards & principles
Clearly identify design work that addresses security
requirements
Review source code control & configuration management
Complete the security test plan
12. Implementation
Secure implementation demands a higher quality of design than is commonly seen today.
Establish a philosophy of security:
Enforce least privilege as default.
All coding guidelines suggest this.
Assume that if design does not explicitly require use of another
component, then that use is not permitted.
This includes libraries and frameworks.
Don’t guess at design intent: if required design information is absent, make a formal request to have it corrected.
Frequent code reviews, tests, and static analysis.
Don’t change the understood system/component
interconnections inadvertently.
13. SDLC Testing
Normal software testing – despite the popular misconception
that it’s all about finding defects – is a QC measure used to
verify that a product fulfils the requirements.
Functional security testing is the security analog of this conventional
process.
Most security testing is the opposite – here we look for
functionality that’s not supposed to be present.
Negative requirements: what shouldn’t happen
Risk-based testing focuses on testing against negative requirements
Rank the risks before planning testing
Understand the assumptions of the developers
Testing of all types starts when there is code to test.
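As a concrete illustration of testing against a negative requirement from the list above, the sketch below checks that an unauthenticated request to an administrative URL is refused. It is a minimal example only; the JUnit 5 test style, the URL, and the expected status code are assumptions for illustration, not taken from the slides.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AdminAccessNegativeTest {

    // Negative requirement: an unauthenticated user must NOT be able to
    // reach the administrative interface.
    @Test
    void unauthenticatedAdminRequestIsRejected() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/app/admin/users")) // hypothetical admin URL
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Expect the container to refuse access; 403 assumed here, though some
        // configurations return 401 or redirect to a login page instead.
        assertEquals(403, response.statusCode());
    }
}

A test like this fails the moment the page or action becomes reachable without authentication, which is exactly the "functionality that's not supposed to be present" the slide describes.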
14. Developer Standards
All regulations, laws, organizational policies, e.g.
COBIT, ISO 27002, ISO 17799, PCI (DSS), HIPAA, SOX
Possibly TCSEC, ITSEC, CTCPEC (since merged into the Common Criteria)
Coding Guidelines
By language, API, framework etc.
Secure Design Guidelines, e.g.
OWASP Security Design Guidelines
Threat Risk Modelling
System documentation
Secure Coding Guidelines, e.g.
Secure Coding Guidelines for the Java Programming Language
OWASP Secure Coding Practices
Secure Testing Guidelines, e.g.
OWASP Testing Guide
15. Security Code Review
Single most effective technique for identifying security problems.
Use together with automated tools and manual penetration testing.
Security code review is a way of ensuring that the application is “self-defending”:
Verify that security controls are present;
Verify that the controls work as intended;
Verify that the controls are used where needed.
Reviewer(s) need to be familiar with:
Code – language(s) and technologies used
Context – need threat model
Audience – intended users of application, other actors
Importance – required availability of application
Define a checklist
Varying levels of review formality – pick the one that suits the moment
Build review phases into the Software Development Lifecycle
Understand the attack surface
16. Enforcing Authorizations 1
Assumption: web pages are secured (e.g. web.xml, Web.Config). Now we
want to secure actions/methods, using either declarative or programmatic
methods.
Example 1: ASP.NET MVC authorization filter –
[Authorize(Roles = "Admin")]
public ActionResult DoAdminAction() { …various code… }
Example 2: Java EE – JSF Web Tier Programmatic
FacesContext.getCurrentInstance().getExternalContext().isUserInRole("role")
Example 3: Java EE – JSF Web Tier Rendering
Seam s:hasRole EL, ICEFaces renderedOnUserRole, or custom
user code
Example 4: J2EE/Java EE – EJBs
J2EE 1.4 and prior have declarative authorization via deployment descriptors
Java EE 5/6 have @DenyAll, @PermitAll, @RolesAllowed, @DeclareRoles,
@RunAs annotations for classes/methods.
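A minimal sketch of Example 4, assuming a hypothetical AdminService session bean (the class, method and role names are illustrative, not from the slides):

import javax.annotation.security.DenyAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

// Declarative authorization on an EJB: the container enforces the roles.
@Stateless
@RolesAllowed("Admin")                    // class-level default: only Admin may call
public class AdminService {

    public void deleteUser(String userName) {
        // Inherits the class-level restriction: Admin only.
    }

    @RolesAllowed({"Admin", "Auditor"})   // method-level annotation overrides the class default
    public String readAuditTrail() {
        // ...read-only reporting code...
        return "...";
    }

    @DenyAll                              // no caller may invoke this method at all
    public void legacyOperation() {
        // ...retired functionality kept out of the attack surface...
    }
}

The class-level annotation gives a deny-by-default posture for new methods, while method-level annotations widen or close access explicitly, which keeps least privilege visible in the code itself.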
17. Enforcing Authorizations 2
The Authorization Disconnect: only the correct roles can execute specific code… but there are limited or no controls over what that code actually does.
Consider platform/language security managers if available
Follow the detailed design; don’t stray.
Code reviews during detailed design and implementation
are essential.
Static analysis can be used to help identify both calling and called code.
Defense in depth
18. Database Access
Many J2EE/Java EE and .NET applications use a common database login
This can work if the application and schema are rigorously architected to implement proper security (roles with respect to data access) and auditing;
Enforcing access permissions can be simplified in code if a database access layer (DAL) is designed (see the sketch after this slide).
Other alternatives include:
Each application user has own database login;
Proxy authentication to provide user context;
Row-level access (e.g. pgacl, Oracle Virtual Private Database).
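To illustrate the DAL point above: the sketch below routes a read through a single data-access method that checks the caller's role before touching the database and uses a parameterized query. The class, table, column and role names are assumptions made for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical caller context passed down from the web tier.
interface UserContext {
    boolean hasRole(String role);
}

// A data-access-layer class: every read/write goes through methods like
// this one, so authorization checks and auditing live in one place.
public class AccountDal {

    private final DataSource dataSource;

    public AccountDal(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public double getBalance(UserContext caller, long accountId) throws SQLException {
        // Enforce access in the DAL rather than scattering checks through the UI layer.
        if (!caller.hasRole("AccountViewer")) {
            throw new SecurityException("caller may not read account data");
        }
        String sql = "SELECT balance FROM accounts WHERE account_id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, accountId);              // parameterized: no string concatenation, no SQL injection
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new SQLException("no such account");
                }
                return rs.getDouble("balance");
            }
        }
    }
}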
19. Logging
Who did what when
What:
Authentication attempts;
Authorization requests;
CRUD operations on data – logging the SQL statement or similar is often sufficient; consider combining with database auditing;
Other events of security import.
Should be possible to form an audit trail of user actions.
Protect logs as you would other data.
Do not log confidential data.
Logs must be useful: analysis and reporting tools.
Test logs through incident response team exercises.
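A minimal sketch of the "who did what when" idea, using java.util.logging. The logger name, event labels and fields are assumptions for illustration; note that the credential itself is never written to the log.

import java.util.logging.Logger;

// Central helper so security events are recorded in one consistent format.
public final class SecurityLog {

    private static final Logger LOG = Logger.getLogger("security.audit");

    private SecurityLog() {}

    // Who (user), where from (source) and outcome -- but never the password.
    public static void authenticationAttempt(String userName, String remoteAddr, boolean success) {
        LOG.info(String.format("AUTHN user=%s source=%s outcome=%s",
                userName, remoteAddr, success ? "SUCCESS" : "FAILURE"));
    }

    public static void authorizationDenied(String userName, String resource) {
        LOG.warning(String.format("AUTHZ-DENIED user=%s resource=%s", userName, resource));
    }
}

The log record's timestamp supplies the "when"; the consistent user and event fields are what make the entries correlatable into an audit trail by analysis and reporting tools.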
20. Errors & Exceptions
Fail securely –
Application should not fail into an insecure state
Assess when user sessions should be invalidated
Error handling should not provide an attacker with information. This includes “human” information that could be used in a social-engineering exploit
Use generic error pages
Leverage the framework’s error handling
Keep debugging information in secure logs
Centralize error handling to help prevent information
leakage
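One way to centralize error handling in a Java web tier is a servlet filter that catches anything the application lets escape, records the details in a secure log, and returns only a generic error response to the user. This is a sketch under those assumptions, not the only way to do it; the filter and logger names are illustrative.

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Centralized error handling: detailed diagnostics go to the secure log,
// the client only ever sees the generic error page.
public class ErrorHandlingFilter implements Filter {

    private static final Logger LOG = Logger.getLogger("security.errors");

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } catch (Exception e) {
            // Keep stack traces and internal details out of the response.
            LOG.log(Level.SEVERE, "Unhandled exception", e);
            HttpServletResponse httpResponse = (HttpServletResponse) response;
            if (!httpResponse.isCommitted()) {
                // The container maps this to the generic error page configured in web.xml.
                httpResponse.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            }
        }
    }

    @Override
    public void destroy() { }
}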
21. Conclusions
Build security in from the start
Appraise the risks realistically
The greatest security risk you have could be your
software developers
Corrupted or missing data doesn’t care who did it or
how it happened
Secure code is reliable code
Every software developer must be a security developer