Securing the SDLC: Dynamic Testing Java Web Apps

 
Editor's Note: Today's post is from Gregory Leonard. Gregory is an application security consultant at Optiv Security, Inc. and a SANS instructor for DEV541: Secure Coding in Java/JEE.

Introduction
The creation and integration of a secure development lifecycle (SDLC) can be an intimidating, even overwhelming, task. There are so many aspects that need to be covered, including performing risk analysis and threat modeling during design, following secure coding practices during development, and conducting proper security testing during later stages.

Established Development Practices
Software development has begun evolving to account for the security risks inherent in web application development, while also adapting to the increased pressure of performing code drops in the accelerated world of DevOps and other continuous delivery methodologies. Major strides have been made to integrate static analysis tools into the build process, giving development teams instant feedback about any vulnerabilities detected in the latest check-in, and giving teams the option to block builds from being deployed if new issues are detected.

Being able to perform static analysis on source code before performing a deployment build gives teams a strong edge in creating secure applications. However, static analysis does not cover the entire realm of possible vulnerabilities. Static analysis scanners are excellent at detecting obvious instances of some attacks, such as basic Cross-Site Scripting (XSS) or SQL injection. Unfortunately, they are not capable of detecting more complex vulnerabilities or design flaws. If a design decision results in a security vulnerability, such as a decision to use a sequence generator to create session IDs for authenticated users, it is highly likely that a static analysis scan won't catch it.
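A sequential session ID generator is a good illustration of why: there is no tainted input and no dangerous sink, just a design that happens to be predictable. The sketch below is purely illustrative (not from any real application) and contrasts the flawed approach with a random one:

[sourcecode language="java"]
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicLong;

public class SessionIdGenerator {
    private static final AtomicLong SEQUENCE = new AtomicLong();
    private static final SecureRandom RANDOM = new SecureRandom();

    // Flawed by design: IDs are sequential, so an attacker who sees their own
    // session ID can guess a neighboring user's. Nothing here looks dangerous
    // to a static analysis scanner.
    public static String nextPredictableId() {
        return Long.toHexString(SEQUENCE.incrementAndGet());
    }

    // A safer approach: 128 bits from a cryptographically strong PRNG
    public static String nextRandomId() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
[/sourcecode]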

Dynamic Testing Helps Close the Gaps
Dynamic testing, the process of exploring a running application to find vulnerabilities, helps cover the gaps left by static analysis scanners. Teams with a mature SDLC will integrate dynamic testing near the end of the development process to shore up any remaining vulnerabilities. Testing is generally limited to a single period near the end of the process for two reasons:

  • It is generally ineffective to perform dynamic testing on an application that is only partially completed as many of the vulnerabilities reported may be due to unimplemented code. These vulnerabilities are essentially false positives as the fixes may already be planned for an upcoming sprint.
  • Testing resources tend to be limited due to budget or other constraints. Most organizations, especially larger ones with dozens or even hundreds of development projects, can’t afford to have enough dedicated security testers to provide testing through the entire lifecycle of every project.

Putting the Power into Developers' Hands
One way to help relieve the second issue highlighted above is to put some of the power to perform dynamic testing directly into the hands of the developers. Just as static analysis tools have become an essential part of the automated build process that developers work with on a daily basis, wouldn't it be great if developers could perform some preliminary dynamic scanning on their applications before making a delivery?

The first issue that needs to be addressed is which tool to start with. There is a plethora of testing tools out there, with multiple options for every imaginable type of test, from network scanners to database testers to web application scanners. Trying to select the right tool, learn how to configure and use it, and interpret the results it generates can be a daunting task.

Fortunately, many testing tools include helpful APIs that allow you to harness the power of the tool and integrate it into a different program. Often these APIs allow security testers to create Python scripts that perform multiple scans with a single execution.

Taking the First Step
With so many tools to choose from, where should a developer who wants to perform dynamic testing begin? To help answer that question, Eric Johnson and I set out to create a plugin that would let a developer use some of the basic scanning functionality of OWASP's Zed Attack Proxy (ZAP) within the more familiar confines of their IDE. Being most familiar with the Eclipse IDE for Java, I decided to tackle that plugin first.

The plugin performs two tasks: a spider of a URL provided by the developer, followed by an active scan of all of the in-scope items detected during the spider. After the spider and scan are complete, the results are saved into a SecurityTesting project within Eclipse. More information about the SecurityTesting plugin, including installation and execution instructions, can be found at the project's GitHub repository:

https://github.com/polyhedraltech/SecurityTesting

Using this plugin, developers can begin performing some basic security scans against their application while they are still in development. This gives the developer feedback about any potential vulnerabilities that may exist, while reducing the workload for the security testers later on.
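If you're curious what the plugin is doing under the hood, ZAP exposes the same spider and active scan actions over a local REST interface, so even a few lines of code can kick off a scan. The sketch below is illustrative only: the JSON endpoint path follows ZAP's documented API, but the ports, the target URL, and the assumption that no API key is configured may not match your setup:

[sourcecode language="java"]
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class ZapSpiderExample {
    public static void main(String[] args) throws Exception {
        // Assumes ZAP is running locally on port 8080 with no API key required
        String target = URLEncoder.encode("http://localhost:8090/myapp", "UTF-8");
        URL spider = new URL("http://localhost:8080/JSON/spider/action/scan/?url=" + target);

        // The response is a small JSON document containing the scan id, which
        // can then be polled for progress via /JSON/spider/view/status/
        BufferedReader in = new BufferedReader(new InputStreamReader(spider.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
[/sourcecode]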

Moving Forward
Once you have downloaded the ZAP plugin and started getting comfortable with what the scans produce and how it interacts with your web application, I encourage you to begin working more directly with ZAP as well as other security tools. When used correctly, dynamic testing tools can provide a wealth of information. Just be aware, dynamic testing tools are prone to the same false positives and false negatives that affect static analysis tools. Use the vulnerability reports generated by ZAP and other dynamic testing tools as a guide to determine what may need to be fixed in your application, but be sure to validate that a finding is legitimate before proceeding.

Hopefully you or your development team can find value in this plugin. We are in the process of enhancing it to support more dynamic testing tools as well as introducing support for more IDEs. Keep an eye on the GitHub repository for future progress!

To learn more about secure coding in Java using a Secure Development Lifecycle, sign up for DEV541: Secure Coding in Java/JEE!

About the Author
Gregory Leonard (@appsecgreg) has more than 17 years of experience in software development, with an emphasis on writing large-scale enterprise applications. Greg’s responsibilities over the course of his career have included application architecture and security, performing infrastructure design and implementation, providing security analysis, conducting security code reviews, performing web application penetration testing, and developing security tools. Greg is also co-author for the SANS DEV541 Secure Coding in Java/JEE course as well as a contributing author for the security awareness modules of the SANS Securing the Human Developer training program. He is currently employed as an application security consultant at Optiv Security, Inc.

Breaking CSRF: Spring Security and Thymeleaf

As someone who spends half of their year teaching web application security, I tend to give a lot of presentations that include live demonstrations, mitigation techniques, and exploits. When preparing for a quality assurance presentation earlier this year, I decided to show the group a demonstration of Cross-Site Request Forgery (CSRF) and how to fix the vulnerability.

A CSRF Refresher
If you’re not familiar with Cross-Site Request Forgery (CSRF), check out the article Steve Kosten wrote earlier this year about the attack, how it works, and how to defend your applications using synchronization tokens:

The Demo
My favorite way to demonstrate the power of CSRF is by exploiting a vulnerable change password screen to take over a user’s account. The attack goes like this:

  • Our unsuspecting user signs into a web site, opens a new tab, and visits my evil web page
  • My evil web page submits a forged request
  • The server changes the user’s password to a value that I know, and
  • I own their account!

I know, I know. Change password screens are supposed to ask for the user’s previous password, and the attacker wouldn’t know what that is! Well, you’re absolutely right. But, you’d be shocked at the number of web applications that I test each year that are missing this simple security mechanism.

Now, let’s create our vulnerable page.

The Change Password Page
My demo applications in the Java world use Spring MVC, Spring Security, Thymeleaf, and Spring JPA. If you’re not familiar with Thymeleaf, see the References section, and check out the documentation. This framework provides an easy-to-use HTML template engine that allows us to quickly create views for a web application. I created the following page called changepassword.html containing inputs for new password and confirm password, and a submit button:

[sourcecode]
<form method="post" th:action="@{/content/changepassword}"
      th:object="${changePasswordForm}">
  <table cellpadding="0">
    <tr>
      <td align="right">New Password:</td>
      <td>
        <input id="newPassword" type="password" th:field="*{newPassword}" />
      </td>
    </tr>
    <tr>
      <td align="right">Confirm Password:</td>
      <td>
        <input id="confirmPassword" type="password" th:field="*{confirmPassword}" />
      </td>
    </tr>
    <tr>
      <td>
        <input type="submit" value="Change Password" id="btnChangePassword" />
      </td>
    </tr>
  </table>
</form>
[/sourcecode]

The Change Password HTML
After creating the server-side controller to change the user’s password, I fired up the application, signed into my demo site, and browsed to the change password screen. In the HTML source I found something very interesting. A mysterious hidden field, “_csrf”, had been added to my form:

[sourcecode light=”true”]
<input type="hidden" name="_csrf" value="aa1688de-3984-448a-a56e-43bfa027790c" />
[/sourcecode]

When submitting the request to change my password, the raw HTTP request sent the _csrf request parameter to the server and I received a successful 200 response code.

[sourcecode light=”true”]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=bd729b66-8d90-46c8-94ff-136cd1188caa
[/sourcecode]

“Interesting,” I thought to myself. Is this token really being validated on the server-side? I immediately fired up Burp Suite, captured a change password request, and sent it to the repeater plug-in.

For the first test, I removed the “_csrf” parameter, submitted the request, and watched it fail.

[sourcecode light=”true”]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler
[/sourcecode]

For the second test, I tried again with an invalid “_csrf” token.

[sourcecode light=”true”]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
[/sourcecode]

Both test cases returned the same 403 Forbidden HTTP response code:

[sourcecode light=”true”]
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 2464
Date: Thu, 27 Aug 2015 11:48:08 GMT
[/sourcecode]

Spring Security
As an attacker trying to show how powerful CSRF attacks can be, this is troubling. As a defender, this tells me that my application’s POST requests are protected from CSRF attacks by default! According to the Spring documentation, CSRF protection has been baked into Spring Security since v3.2.0 was released in August 2013. For applications using the Java configuration, this is enabled by default and explains why my change password demo page was already being protected.
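For reference, here is roughly what a minimal Java configuration looks like. Note that CSRF is never mentioned: the protection is simply on. This is a sketch in the Spring Security 3.2/4.x configuration style, not the exact configuration from my demo application:

[sourcecode language="java"]
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // No .csrf() call needed: the Java configuration enables CSRF
        // protection by default, which is why the _csrf hidden field
        // appeared in my Thymeleaf form automatically
        http
            .authorizeRequests()
                .anyRequest().authenticated()
                .and()
            .formLogin();
    }
}
[/sourcecode]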

However, applications using XML-based configuration on versions before Spring v4.0 still need to enable the feature in the http configuration:

[sourcecode light=”true”]
<http>

<csrf />
</http>
[/sourcecode]

Applications on Spring v4.0+ have this protection enabled by default, regardless of how they are configured, making it easy for code reviewers and static analyzers to find instances where it has been disabled. Of course, disabling the protection is exactly what I had to do for my presentation demo to work, but it's a security anti-pattern, so we won't discuss how to do that here. See the References section for more details on Spring's CSRF protections.
 
Caveats
Before we run around the office saying our applications are safe because we’re using Thymeleaf and Spring Security, I’m going to identify a few weaknesses in this defense.

  • Thymeleaf and Spring Security only place CSRF tokens into “form” elements. This means that only form actions submitting POST parameters are protected. If your application makes data modifications via GET requests or AJAX calls, you still have work to do.
  • Cross-Site Scripting (XSS) attacks can easily extract CSRF tokens to an attacker’s remote web server. If you have XSS vulnerabilities in your application, this protection does you no good.
  • My demo application is using Thymeleaf 2.1.3 and Spring 4.1.3. Older versions of these frameworks may not provide the same protection. As always, make sure you perform code reviews and security testing on your applications to ensure they are secure before deploying them to production.

References:

To learn more about securing your Java applications, sign up for DEV541: Secure Coding in Java/JEE!
 
About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. Eric serves on the advisory board for the SANS Securing the Human Developer awareness training program and is a contributing author for the developer security awareness modules. Eric’s previous experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

How to Prevent XSS Without Changing Code

To address security defects developers typically resort to fixing design flaws and security bugs directly in their code. Finding and fixing security defects can be a slow, painstaking, and expensive process. While development teams work to incorporate security into their development processes, issues like Cross-Site Scripting (XSS) continue to plague many commonly used applications.

In this post we’ll discuss two HTTP header related protections that can be used to mitigate the risk of XSS without having to make large code changes.

HttpOnly
One of the most common XSS attacks is the theft of cookies (especially session ids). The HttpOnly flag was created to mitigate this threat by ensuring that Cookie values cannot be accessed by client side scripts like JavaScript. This is accomplished by simply appending “; HttpOnly” to a cookie value. All modern browsers support this flag and will enforce that the cookie cannot be accessed by JavaScript, thereby preventing session hijacking attacks.

Older versions of the Java Servlet specification did not provide a standard way to define the JSESSIONID as “HttpOnly”. As of Servlet 3.0, the HttpOnly flag can be enabled in web.xml as follows:

<session-config>
  <cookie-config>
    <http-only>true</http-only>
  </cookie-config>
</session-config>

Aside from this approach in Servlet 3.0, older versions of Tomcat allowed the HttpOnly flag to be set with the vendor-specific "useHttpOnly" attribute in server.xml. This attribute was disabled by default in Tomcat 5.5 and 6. But, starting with Tomcat 7, "useHttpOnly" is enabled by default. So, even if you set <http-only> to false in web.xml on Tomcat 7, your JSESSIONID will still be HttpOnly unless you change the default behavior in server.xml as well.

Alternatively, you can programmatically add an HttpOnly cookie directly to the response using the following code:
String cookie = "mycookie=test; Secure; HttpOnly";
response.addHeader("Set-Cookie", cookie);

Fortunately, the Servlet 3.0 API was also updated with a convenience method called setHttpOnly for adding the HttpOnly flag:
Cookie cookie = new Cookie("mycookie", "test");
cookie.setSecure(true);
cookie.setHttpOnly(true);
response.addCookie(cookie);

Content Security Policy
Content Security Policy (CSP) is a mechanism to help mitigate injection vulnerabilities like XSS by defining a list of approved sources of content. For example, approved sources of JavaScript can be defined to be from certain origins and any other JavaScript would not be allowed to execute. The official HTTP header used to define CSP policies is Content-Security-Policy and can be used like this:

Content-Security-Policy:
default-src 'none';
script-src https://*.sanscdn.org
           https://ssl.google-analytics.com;
style-src 'self';
img-src https://*.sanscdn.org;
font-src https://*.sanscdn.org

In this example you can see that CSP can be used to define a whitelist of allowable content sources using default-src 'none' and the following directives:

  • script-src – scripts can only be loaded from a specific content delivery network (CDN) or from Google Analytics
  • style-src – styles can only be loaded from the current origin
  • img-src and font-src – images and fonts can only be loaded from the CDN

By using the Content-Security-Policy header with directives like this you can easily harden your application against XSS. It sounds easy enough but there are some important limitations. CSP requires that there are no inline scripts or styles in your application. This means that all JavaScript in your application has to be externalized into .js files. Basically, you can’t have inline <script> tags or call any functions which allow JavaScript execution from strings (e.g. eval, setTimeout, setInterval). This can be a problem for legacy applications that need to be refactored.

As a result, the CSP specification also provides a Content-Security-Policy-Report-Only header that does not block any scripts in your application from running but simply sends JSON formatted reports so that you can be alerted when something might have broken as a result of CSP. By enabling this header, you can test and update your app slowly over time while gaining visibility into areas that need to be refactored.
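If you want to experiment with this in a Java web application, a servlet filter is a convenient place to set the header on every response. The following is a minimal sketch; the policy string and report URI are placeholders you would replace with your own sources:

[sourcecode language="java"]
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CspFilter implements Filter {

    // Report-Only lets you observe violations without breaking the app;
    // switch to "Content-Security-Policy" once the reports come back clean
    private static final String HEADER = "Content-Security-Policy-Report-Only";
    private static final String POLICY =
            "default-src 'none'; script-src 'self'; style-src 'self'; "
            + "img-src 'self'; report-uri /csp-reports";

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        ((HttpServletResponse) response).setHeader(HEADER, POLICY);
        chain.doFilter(request, response);
    }

    public void init(FilterConfig config) throws ServletException { }

    public void destroy() { }
}
[/sourcecode]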

If you’d like to learn more about protecting your Java and web-based applications sign up for DEV541: Secure Coding in Java being held in Reston, VA starting March 9.

WhatWorks in AppSec: Log Forging

Help!!! Developers are going blind from Log Files!

This is a post by Sri Mallur, an instructor with the SANS Institute for SANS DEV541: Secure Coding in Java EE: Developing Defensible Applications. Sri is a security consultant at a major healthcare provider with over 15 years of experience in software development and information security. He has designed and developed applications for large companies in the insurance, chemical, and healthcare industries. He has extensive consulting experience from working with one of the Big 5. Sri currently focuses on security in the SDLC by working with developers, performing security code reviews, and consulting on projects. Sri holds a master's in industrial engineering from Texas Tech University, Lubbock, TX, and an MBA from Cal State University-East Bay, Hayward, CA.

Recently one of my consultants called me frantically saying that a log forging fix had rendered log files useless. Apparently the developers were going blind trying to look at the log files.

First, let's get a baseline on what log forging is and why it's important. Most developers roll their eyes when a security consultant talks to them about log forging. Trust me, log forging is important: maybe not to you (read: developer), but it is to the business. By the end of this post, hopefully you will see why.

What is Log Forging – and Why is it Important?

Simply put, log forging is the ability of an adversary to exploit an application vulnerability and inject bogus lines into application logs. "Why would he want to do this?", you ask. Let's put the security issue aside for a minute. Imagine you, a developer, just got assigned a bug on code running in production. Where would you go first to start your triage process? The logs. But what if the logs you are relying on aren't real and have bogus entries? You see random lines about the database going down and then coming back up, and other events that don't make sense. You get the picture. It's possible that you'll bark up the wrong tree trying to solve the production mystery, or pursue such a convoluted path that you end up hitting the bottle to bury your ineptitude. You know this to be true 😉

Now let's get serious. A hacker will create bogus log entries for the same reasons I just spoke of: to confuse the security police (read: forensics, incident handlers, and so on). Security folks refer to logs as a "detective control": we use logs to discover bad things, to figure out what happened and the mechanics of how it happened, so we can plug the hole. Without the ability to discover the mode of attack, our applications will remain vulnerable.

There's another thing that we security folks like to do once in a while, and we call that "attribution". We would like to catch the person(s) who did something bad and seek justice. And yes, log files are legally admissible documents that lawyers use to argue a case. But log files that have been tampered with do not stand up to the intense scrutiny of defense lawyers; the evidence gets thrown out, weakening the case. Adversaries often go unpunished because we failed to protect our log files.

So, I hope I have convinced the developer in you that log forging is not just some obscure security issue that security consultants approach you about so they can stay busy. They in fact approach you because it matters to you and the business (remember the triage story and those bogus entries). We want to make sure that when you use log files to triage an issue, you can depend on them.

Preventing Log Forging… and Going a Little Too Far

Now that we have covered what log forging is and why it matters, let's continue with my story. An engineer had decided to write his own adapter to prevent log forging (it's a long story why he had to write his own; I won't go into it here, but needless to say, he had to). Keep in mind that to prevent log forging, all he has to do is prevent new line characters. Stripping new line characters ensures that the adversary cannot create new lines of forged entries. Generally, this is what I would expect to see in code to prevent a new line character:

[sourcecode language="java"]
String clean = message.replace('\n', '_').replace('\r', '_');
[/sourcecode]

The above line of code sanitizes the input by replacing CR and LF characters with underscores. This is enough to prevent log forging. (BTW: OWASP ESAPI's logger already has this built in.)

But the developer got a little fancy and added the following line to his class:

 

[sourcecode language="java"]
clean = ESAPI.encoder().encodeForHTML(clean);
[/sourcecode]

The above line of code, in addition to sanitizing the input, also HTML-encoded it before writing to the log file. Security-conscious developers know that we encode our output to prevent Cross-Site Scripting. This extra line of code would have worked well if the developers were viewing the log files in a browser, where the HTML-encoded data would be interpreted appropriately. But in our case, those poor developers were viewing the log files directly as text, and we were asking them to mentally decode all those HTML entities! No wonder they were going blind.

Imagine a line like this:
[Error] 04-09-2013 Login failed for Admin

became this after encoding:
&#x5b;Error&#x5d; 04-09-2013 Login failed for Admin

You can already see that this simple line becomes almost unreadable; now imagine a file full of such HTML entity encodings.

The fix, of course, was to comment out the line that was encoding the input before writing to the file. I want to be clear about the fix: we would want to encode the input if the file were being viewed in a browser (as some log aggregators do) to prevent cross-site scripting (running scripts from the log file while viewing it). But in our case, developers were viewing the log files as text, so the encoding was not required.
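Putting the whole lesson in one place, a log sanitizer along these lines would cover both viewing contexts. This is a sketch, not the engineer's actual adapter; the boolean flag is illustrative, and it assumes OWASP ESAPI is on the classpath:

[sourcecode language="java"]
import org.owasp.esapi.ESAPI;

public final class LogSanitizer {

    // Replacing CR and LF always prevents forged log lines; HTML encoding
    // is only appropriate when the logs will be rendered in a browser
    public static String sanitize(String message, boolean viewedInBrowser) {
        if (message == null) {
            return null;
        }
        String clean = message.replace('\n', '_').replace('\r', '_');
        if (viewedInBrowser) {
            clean = ESAPI.encoder().encodeForHTML(clean);
        }
        return clean;
    }
}
[/sourcecode]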

This is a case of an incomplete understanding of the issue leading to an incorrect fix. Our engineer is of course a smart guy and meant well, but this was a great learning moment for all of us.

How much do developers care about security?

3%

That’s about how much developers care about security.

Starting last year I made a concerted effort to speak at developer conferences. The idea was to go directly to people who write actual code and help spread the word about application security. By speaking at technical conferences that appeal to top developers the goal was to reach out to people who really care about development and want to learn and apply everything they can. By getting these developers interested in security my hope was that they would, in some small way, lead by example since many of them are the ones that build the tools and frameworks that other developers rely upon.

It started last year at ÜberConf, which is held in the Denver area. I knew it would be a great show when I heard that, on some nights, sessions are attended until 11pm by scores of exhausted developers. Not only that, the conference organizer, Jay Zimmerman, makes sure there's plenty of great food on hand for breakfast, lunch, and dinner. Highly recommended.

Then there’s Devoxx which is held in Antwerp, Belgium and is known as the Java Community Conference. It’s a large show with over 3000 attendees. Plus the cost is low enough that it sells out every year. The organizer, Stephan Janssen, does an incredible job.

And of course there’s JavaOne which is held every year in San Francisco. As far as I can tell it’s still the largest Java development conference in terms of attendance. Unfortunately, the folks from Google only speak at Devoxx given the recent legal issues between Oracle and Google.

As you can see I really avoided any security specific events in an attempt to get outside of the security echo chamber. And I think (or hope anyway) that the results were very good. I’ve had hundreds of developers in my talks who were extremely excited to be learning new things and about security in general. The feedback has been overwhelmingly positive.

But, if the feedback from speaking at developer conferences has been so great then why do developers, in general, not seem to care that much about security? Why do I say that developers only care about security 3% of the time?

Well, this percentage comes from comparing the number of security sessions to the total number of speaking slots at all the developer conferences I've spoken at (or am scheduled to speak at). As you can see below, on average only about 3% of the talks are security related.

Conference    | # of speaking slots | # of security sessions | % of sessions about security | Comments
------------- | ------------------- | ---------------------- | ---------------------------- | ------------
ÜberConf 2011 | 150                 | 3                      | 2.0%                         |
Jazoon 2011   | 107                 | 3                      | 2.8%                         |
JavaOne 2011  | 400                 | 15                     | 3.75%                        | See note [1]
Devoxx 2011   | 88                  | 2                      | 2.3%                         |
ÜberConf 2012 | 135                 | 4                      | 2.96%                        |
JAX Conf 2012 | 40                  | 2                      | 5%                           | See note [2]
JavaOne 2012  | 442                 | 15                     | 3.4%                         |
Øredev 2012   | 120                 | 2                      | 1.67%                        | See note [3]
Total         | 1482                | 46                     | 3.1%                         |

While developers do find security interesting they just don’t care about it as much as other stuff. This isn’t a big surprise. In many organizations they’re not incented to care about security. And while security is an interesting domain there are lots of other things that interest most developers even more. Developers like learning new things, using new tools and APIs, and building something that matters. Usually, security doesn’t “matter”. Heck, when I was focused solely on building web apps in the first half of my career I didn’t care that much about security either!

Looking back over the conference agendas, the vast majority of the sessions covered cool new technologies like mobile, NoSQL, cloud, REST, functional programming, Scala, new APIs, and the list goes on. There just isn’t as much room for security in the list of things that “matters”. There’s nothing wrong with this. In fact, this is as it should be.

We have to recognize that security isn’t going to come first. We need to understand and care about the things that development teams care about and advocate security on their terms. Focus on the problems and technologies that they care about and give them tools and ideas on how to use them better by providing simple and effective ways to do things in a secure way.

At the same time we have to keep reaching out to developers because they’re the ones who care enough and are smart enough to make a difference. We can’t change what they care about but maybe we can work together so they care more than 3% of the time.

 

[1] I can't find the 2011 JavaOne content catalog online and can't get exact figures. However, I do remember there being a fair number of security talks, so I put a low estimate of 400 speaking slots and kept the number of security sessions the same as the 2012 count.

[2] One of the two talks was not security focused but touched on security so I added it here.

[3] The schedule is not final so the counts are estimates based on the currently published agenda.

Real and useful security help for software developers

There’s lots of advice on designing and building secure software. All you need to do is: Think like an attacker. Minimize the Attack Surface. Apply the principles of Least Privilege and Defense in Depth and Economy of Mechanism. Canonicalize and validate all input. Encode and escape output within the correct context. Use encryption properly. Manage sessions in a secure way….
But how are development teams actually supposed to do all of this? How do they know what’s important, and what’s not? What frameworks and libraries should they use? Where are code samples that they can review and follow? How can they test the software to see if they did everything correctly?

There aren’t as many resources to help developers answer these questions. Here are the best that I have found so far.

Cheat Sheets

First, there are the OWASP Prevention Cheat Sheets, which provide clear, practical advice from experts on how to do authentication properly, how to prevent CSRF and XSS and SQL injection attacks. How to properly validate input. How to encrypt data at rest and how to implement transport layer security. The right way to do session management and how to write a “forgot password” feature that doesn’t give the password away to bad guys. Practical security advice on HTML5 and application security architecture. And coming soon, more cheat sheets on threat modeling and canonicalization.

Clear (well, mostly: some of the cheat sheets deal with hard technical problems like crypto and DOM-based XSS attacks, which are not easy to explain or easy to take care of), concrete, and free advice that developers can follow to solve hard security problems correctly. This is good stuff.

Checklists that programmers will actually use

I’ve written before about the value of checklists for developing good, secure code. Checklists are important to make sure that you focus on what is important and that you don’t forget small – but critical – details as you work. SD Elements is a SaaS platform that provides customized, active online checklists that your team can follow to build secure software.  You describe your project’s high-level architecture (web app, web services, mobile app, fat client, …) and platform (language and technology stack) and compliance environment, and SD Elements creates a set of checklists and other resources to help the team deal with the security issues that they will face, from secure requirements through to design, coding and testing. You can track related checklist tasks across different SDLC stages, and easily see what has been done and what is still outstanding.

SD Elements doesn’t just tell the team what they need to do – it also explains how to do it. This includes how-to examples and language-specific code snippets that developers can copy and extend, design patterns and coding idioms they can follow, guidelines on what functions and libraries to use and how to use them, and instructions on how to test that your app is secure, including step-by-step videos demonstrating how to use Open Source tools to test for specific issues. You can customize the requirements and project settings, and create your own checklist items and add your own standards and guidance.

SD Elements is like having an experienced security specialist helping your development team and keeping them honest. A lot of thought and work has gone into this product.  I first looked at a beta of SD Elements in April. It’s come a long way since then and is already being used by some big companies. Today it works for Java, .Net and Ruby on Rails apps, with support for more platforms coming. To make it even easier to use, the SD Elements team is getting ready to release an Eclipse plug-in so that developers can see checklists that apply directly to the specific area of code that they are working on.

Secure Coding Standards from CERT

CERT has just released a Secure Coding Standard for Java to go with their earlier C Secure Coding Standard. The Java guide explains input validation and sanitization, concurrency (locking, threading, and thread safety), Java platform and run-time security, and goes over basic rules of good programming. There are lots of code samples showing common coding mistakes and how to write the same code correctly. It won't be easy getting developers to read these guides, but they are good references, offering detailed and clear guidance on how to write secure and reliable code.

I would like to hear from other people: what have you found useful to help teams with the detailed work of building secure software?

Safer Software through Secure Frameworks

We have to make it easier for developers to build secure apps, especially Web apps. We can’t keep forcing everybody who builds an application to understand and plug all of the stupid holes in how the Web works on their own – and to do this perfectly right every time. It’s not just wasteful: it’s not possible.

What we need is for implementation-level security issues to be taken care of at the language and framework level, so that developers can focus on their real jobs: solving design problems and writing code that works.

Security Frameworks and Libraries

One option is to get developers to use secure libraries that take care of application security functions like authentication, authorization, data validation and encryption. There are some good, and free, tools out there to help you.

If you’re a Microsoft .NET developer, there’s Microsoft’s Web Protection Library which provides functions and a runtime engine to protect .NET apps from XSS and SQL Injection.

Java developers can go to OWASP for help. As a starting point, someone at OWASP wrote up a summary of available Java security frameworks and libraries.  The most comprehensive, up-to-date choice for Java developers is OWASP’s ESAPI Enterprise Security API especially now that the 2.0 release has just come out. There are some serious people behind ESAPI, and you can get some support from the OWASP forums, or pay Aspect Security to get help in implementing it. ESAPI is big, and as I’ve pointed out before unfortunately it’s not easy to understand or use properly, it’s not nicely packaged, and the documentation is incomplete and out of date – although I know that people are working on this.

Or there's Apache Shiro (formerly known as JSecurity, and later as Apache Ki), another security framework for Java apps. Although it looks simpler to use and understand than ESAPI and covers most of the main security bases (authentication, authorization, session management, and encryption), it doesn't help take care of important functions like input validation and output encoding. And Spring users have Spring Security (Acegi), a comprehensive but heavyweight authorization and authentication framework.

If you aren’t using ESAPI’s input data validation and output encoding functions to help protect your app from XSS attacks and other injection attacks, you could try OWASP’s Stinger declarative validators (although unfortunately Stinger isn’t an active project any more). Or maybe OWASP’s AntiSamy, an un-cool name for a cool library to sanitize rich HTML input data; or the newer, faster Java HTML Sanitizer built with technology contributed by Google: handy if you’re writing a bulletin board or social network.
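To give you a sense of how lightweight the Java HTML Sanitizer is to use, here's a sketch built from its prepackaged policies (assuming the owasp-java-html-sanitizer jar is on your classpath):

[sourcecode language="java"]
import org.owasp.html.PolicyFactory;
import org.owasp.html.Sanitizers;

public class CommentSanitizer {
    // Allow basic formatting tags and links; everything else, including
    // script tags and event-handler attributes, is stripped out
    private static final PolicyFactory POLICY =
            Sanitizers.FORMATTING.and(Sanitizers.LINKS);

    public static String clean(String untrustedHtml) {
        return POLICY.sanitize(untrustedHtml);
    }
}
[/sourcecode]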

Or you could check out the new auto-escaping technology that Jim Manico presented at this year's SANS AppSec conference, including Facebook's XHP (for PHP programmers), OWASP's JXT Java XML Templates, Google's Context-Sensitive Auto-Sanitization (CSAS), and JavaScript sandboxing from Google (CAJA, used in the new OWASP HTML Sanitizer) or from OWASP (JSReg). And jQuery programmers can use Chris Schmidt's jQuery Encoder to safely escape untrusted data.

To add CSRF protection to your app, try OWASP’s CSRFGuard.  If all you want to do is solve some encryption problems, there’s Jasypt – the Java Simplified Encryption library – and of course The Legion of the Bouncy Castle: a comprehensive encryption library for Java (and C#).

For developers working in other languages like Ruby and PHP and Objective C, there are ports or implementations of ESAPI available, although these aren’t as up-to-date or complete as the Java implementation. Python developers working with the Django framework can check out Django Security a new library from Rohit Sethi at SD Elements.

There are other security frameworks and libraries out there – if you know where to look. If you use this code properly, some or maybe even a lot of your security problems will be taken care of for you. But you need to know that you need it in the first place, then you need to know where to find it, and then you need to figure out how to use it properly, and then test it to make sure that everything works. Fast-moving Agile teams building new apps using simple incremental design approaches or maintenance teams fixing up legacy systems aren’t looking for auto-escaping templating technology or enterprise security frameworks like ESAPI because this stuff isn’t helping them to get code out of the door – and many of them don’t even know that they need something like this in the first place.

Frameworks and Libraries that are Secure

To reach application developers, to reduce risk at scale, we need to build security into the application frameworks and libraries, especially the Web frameworks, that developers already use. Application developers rely on these frameworks to solve a lot of fundamental plumbing problems – and they should be able to rely on these frameworks to solve their problems in a secure way. A good framework should protect developers from SQL Injection and other injection attacks, and provide strong session management including CSRF protection, and auto-escaping protection against XSS. Out of the box. By default.

Ruby on Rails is one example: a general-purpose Web framework that tries to protect developers from CSRF and SQL Injection, and, as of release 3, provides auto-escaping protection against XSS. Ruby and Rails have security problems, but at least the people behind this platform have taken responsible steps to protect developers from making the kind of fundamental and dangerous mistakes that are so common in Web apps.

Solving security problems in frameworks isn’t a new idea. Jeremiah Grossman at Whitehat Security talked about this back in 2006:

The only way I see software security improving significantly is if "security" is baked into the modern development frameworks and be virtually transparent. Remember, the primary developers mission is to pump out code to drive business and that's what they'll do no matter what. When developers find that its WAY easier and WAY better to do RIGHT by security, then we'll get somewhere.

Unfortunately, we haven’t made a lot of progress on this over the last 5 years. A handful of people involved in OWASP’s Developer Outreach are trying to change this, working within some of the framework communities to help make frameworks more secure. It seems hopeless – after all, Java developers alone have dozens of frameworks to choose from for Web apps. But Jim Manico made an important point: to start making a real difference, we don’t need to secure every framework. We just need to make common, important frameworks safer to use, and secure by default: Spring, Struts, Rails, the main PHP and Python frameworks. This will reach a lot of people and protect a lot of software. Hopefully more people will stick with these frameworks because they are feature-rich, AND because they are secure by default. Other teams building other frameworks will respond and build security in, in order to compete. Building safer frameworks could become self-reinforcing as more people get involved and start understanding the problems and solving them. Even if this doesn’t happen, at least developers using some of the more popular frameworks will be protected.

Building secure frameworks and frameworks that are secure isn’t going to solve every application security problem. There are security problems that frameworks can’t take care of. We still need to improve the tools and languages that we use, and the way that we use them, the way that we design and build software. And the frameworks themselves will become targets of attack of course. And we’re all going to have to play catch up. Changing the next release of web application frameworks isn’t going to make a difference to people stuck maintaining code that has already been written. Developers will have to upgrade to take advantage of these new frameworks and learn how to use the new features. But it will go a long way to making the software world safer.

There are only a handful of people engaged in this work so far, most of them on a part-time basis. Such a small group of people, even smart and committed people, aren't going to be able to change the world. This is important work: I think some of the most important work in application security today. If a government or a major corporation (unfortunately, I don't work for either) is really looking for a way to help make the Web safer, this is it: getting the best and brightest and most dedicated people from the appsec and application development communities to work together and solve foundational problems with wide-reaching impact.

To find out more, or to get engaged, join the OWASP Developer Outreach.

Checklists, software and software security

It took me too long to finally read Dr. Atul Gawande's Checklist Manifesto, a surprisingly interesting story of how asking surgeons and nurses to follow a simple list of checks, adding just a little bit of red tape, can save lives and lots of money. It starts with the pioneering work done by Dr. Peter Pronovost at Johns Hopkins, where they used checklists to catch stupid mistakes in essential ICU practices, proving that adding some trivial structure could make a real difference in life-critical situations. The book walks through how checklists are relied on by pilots, and how experts at Boeing and other companies continuously perfect these checklists to handle any problem in the air. The rest of the book focuses on the work that Dr. Gawande led at the World Health Organization to develop and test a simple, 3-section, 19-step, 2-minute checklist for safe surgery to be used in hospitals around the world.

A few points especially stood out for me:

First, that so many problems and complications at hospitals are not caused by doctors and nurses not knowing what to do – but because they forget to do what they already know how to do, because they forget trivial basic details, or "lull themselves into skipping steps even when they remember them". And that this applies to complex problems in many other disciplines.

Second, that following a simple set of basic checks, just writing them down and making sure that people double-check that they are doing what they know they are supposed to be doing, can make such a dramatic difference in the quality of complex work, work that is so dependent on expert skill.

It made me think more about how checklists can help in building and maintaining software. If they can find ways to make checklists work in surgery, then we can sure as hell find ways to make them work in software development.

There are already places where checklists are used or can be used in software development. Steve McConnell’s Code Complete is full of comprehensive checklists that programmers can follow to help make sure that they are doing a professional job at every step in building software. These checklists act as reminders of best practices, a health check on how a developer should do their job – but there’s too much here to use on a day-to-day basis. In his other book Software Estimation: Demystifying the Black Art he also recommends using checklists for creating estimates, and offers a short checklist to help guide developers and managers through choosing an estimation approach.

One place where checklists are used more often is in code reviews, to make sure that reviewers remember to check for what the team (or management, or auditors) have agreed is important. There are a lot of arguments over what should be included in a code review checklist, what’s important to check and what’s not, and how many checks are too many.

In the “Modern Code Review” chapter of Making Software, Jason Cohen reinforces Dr Gawande’s finding that long checklists are a bad idea. If review checklists are too long, programmers will just ignore them. He recommends that code review checklists should be no more than 10 items at the most. What surprised me is that according to Cohen, the most effective checklists are even shorter, with only 2 or 3 items to check: micro-checklists that focus in on the mistakes that the team commonly makes, or the mistakes that have cost them the most. Then, once people on the team stop making these mistakes, or if more important problems are found, you come up with a new checklist.

Cohen says that most code review checklists contain obvious items that are a waste of time like “Does the code accomplish what it is meant to do?” and “Can you understand the code as written?” and so on. Most items on long checklists are unnecessary (of course the reviewer is going to check if the code works and that they can understand it) or fuss about coding style and conventions, which can be handled through static analysis checkers. Checklists should only include common mistakes that cause real problems.

He also recommends that rather than relying on general-purpose checklists, programmers should build their own personal code review checklists, taking an idea from the SEI Personal Software Process (PSP). Since different people make different mistakes, each programmer should come up with their own short checklist of the most serious mistakes that they make on a consistent basis, especially the things that they find that they commonly forget to do, and share this list with the people reviewing their code.

There are still places for longer code review checklists, for areas where reviewers need more hand holding and guidance. Like checking for security vulnerabilities in code. OWASP provides a simple and useful secure coding practices quick reference guide which can be used to build a checklist for secure code reviews. This is work that programmers don’t do every day, so you’re less concerned about being efficient than you are about making sure that the reviewer covers all of the important bases. You need to make sure that security code reviews are comprehensive and disciplined, and you may need to provide evidence of this, making a tool like Agnitio interesting. But even in a detailed secure code review, you want to make sure that every check is clearly needed, clearly understood and essentially important.

Checklists help you check for what isn’t there…

In 11 Best Practices for Peer Code Review, Jason Cohen highlights that checklists are important in code reviews because they help remind reviewers to look beyond what is in the code for what isn't there:

“Checklists are especially important for reviewers, since if the author forgot it, the reviewer is likely to miss it as well”.

But can we take this one step farther (or one step back) in software development? Can we go back to fundamental work where programmers commonly make important mistakes, and come up with a simple checklist that will stop people from making these mistakes in the first place? Like the ICU central intravenous line problem that Dr. Pronovost started with: a basic but important practice where adding simple checks can save big.

The first problem that comes to my mind is data validation, the cause of roughly half of all software security problems according to Michael Howard at Microsoft. Data validation, like a lot of problems in software security, is one of the worst kinds of problems. It’s fundamentally, conceptually simple: everyone understands (or they should) that you need to validate and filter input data, and escape output data. It’s one of the basic issues covered in AppSec training from SANS, in books on secure development. But it’s like surgery: you have to take all of it seriously, and you need to do it right, every little thing, every time. There are a lot of finicky details to get right, a lot of places to go wrong, and you can’t afford to make any mistakes.

This is the kind of work that cries out for a checklist, a clear, concrete set of steps that programmers can follow. Or like pilots, a set of different checklists that programmers can follow in different situations. So that we build software right in the first place. It works in other industries. Why can’t it work in ours?

Taming the Beast – The Floating Point DoS Vulnerability

Originally posted as Taming the Beast

The recent multi-language numerical parsing DoS bug has been named the "Mark of the Beast". Some claim that this bug was first reported as early as 2001. This is a significant bug in (at least) PHP and Java, and similar issues have affected Ruby in the past. The bug has left a number of servers, web frameworks, and custom web applications vulnerable to an easily exploitable denial of service.

Oracle has patched this vulnerability, but several non-Oracle JVMs have yet to release a patch. Tactical patching may be prudent for your environment.

Here are three approaches that may help you tame this beast of a bug.

1) ModSecurity Rules

Ryan Barnett deployed a series of ModSecurity rules and documented several options at http://blog.spiderlabs.com/2011/02/java-floating-point-dos-attack-protection.html

2) Java-based Blacklist Filter

Bryan Sullivan from Adobe came up with the following Java-based blacklist filter.

This filter is actually quite accurate at *rejecting input* in the DoSable JVM numeric range. Note, however, that this simple fix does reject a series of otherwise good values.

[sourcecode language="java"]
public static boolean containsMagicDoSNumber(String s) {
    return s.replace(".", "").contains("2225073858507201");
}
[/sourcecode]

3) Double Parsing Code

The following Java "safe" double parsing code is based on a proof of concept by Brian Chess at HP/Fortify. This approach detects the evil range before calling parseDouble and returns the official IEEE value (2.2250738585072014E-308) for any double in the dangerous range.
[sourcecode language="java"]
// Requires: java.math.BigDecimal and java.security.InvalidParameterException
private static BigDecimal bigBad;
private static BigDecimal smallBad;

static {
    BigDecimal one = new BigDecimal(1);
    BigDecimal two = new BigDecimal(2);
    BigDecimal tiny = one.divide(two.pow(1022));

    // 2^(-1022) - 2^(-1076)
    bigBad = tiny.subtract(one.divide(two.pow(1076)));

    // 2^(-1022) - 2^(-1075)
    smallBad = tiny.subtract(one.divide(two.pow(1075)));
}

public static Double parseSafeDouble(String input) throws InvalidParameterException {

    if (input == null) throw new InvalidParameterException("input is null");

    BigDecimal bd;
    try {
        bd = new BigDecimal(input);
    } catch (NumberFormatException e) {
        throw new InvalidParameterException("can't parse number");
    }

    if (bd.compareTo(smallBad) >= 0 && bd.compareTo(bigBad) <= 0) {
        // if you get here you know you're looking at a bad value; the final
        // value for any double in this range is supposed to be the safe IEEE #
        System.out.println("BAD NUMBER DETECTED - returning 2.2250738585072014E-308");
        return new Double("2.2250738585072014E-308");
    }

    // safe number, return double value
    return bd.doubleValue();
}
[/sourcecode]

Five Key Design Decisions That Affect Security in Web Applications

By Krishna Raja and Rohit Sethi (@rksethi)

Senior developers and architects often make decisions, related to application performance or other areas, that have significant ramifications for the security of the application for years to come. Some decisions are obvious: How do we authenticate users? How do we restrict page access to authorized users? Others, however, are not so obvious. The following list comes from empirical data: from seeing the architecture of many enterprise applications and the downstream security effects.

1. Session Replication

Load balancing is a must-have for applications with a large user base. While serving static content this way is relatively easy, challenges start to arise when your application maintains state information across multiple requests. There are many ways to tackle session replication; here are some of the most common:

  • Allow the client to maintain state so that servers don’t have to
  • Persist state data in the database rather than in server memory
  • Use application server’s built-in session replication technology
  • Use third-party products, such as Terracotta
  • Tie each session to a particular server by modifying the session cookie

Out of these, maintaining state on the client is often the easiest to implement. Unfortunately, this single decision is often one of the most serious you can make for the security of any client-server application. The reason is that clients can modify any data they send to you, and inevitably some of the state data shouldn't be modifiable by an end user, such as the price of a product or the user's permissions. Without sufficient safeguards, client-side state can leave your application open to parameter manipulation on every transaction. Luckily, some frameworks provide protection in the form of client-side state encryption; however, as we've seen with the recent padding oracle attacks, this method isn't always foolproof and can leave you with a false sense of security. Another technique involves hashing and signing read-only state data (i.e. the parameters that the client shouldn't modify); however, deciding which parameters should be modifiable and which shouldn't can be particularly time consuming, often to the point that developers ignore it altogether when deadlines become pressing. If you have the choice, elect to maintain state on the server and use one of the many techniques at your disposal to handle session replication.
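To make the hash-and-sign approach concrete, here's a rough sketch of signing read-only state with an HMAC. The names are illustrative, and a production version would also need real key management and a constant-time comparison:

[sourcecode language="java"]
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class StateSigner {
    private final SecretKeySpec key;

    public StateSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Compute an HMAC over the read-only state before sending it to the client
    public String sign(String readOnlyState) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getEncoder().encodeToString(
                mac.doFinal(readOnlyState.getBytes("UTF-8")));
    }

    // Recompute and compare on every request; reject the request on mismatch.
    // (Use a constant-time comparison in production code.)
    public boolean verify(String readOnlyState, String signature) throws Exception {
        return sign(readOnlyState).equals(signature);
    }
}
[/sourcecode]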

2. Authorization Context

Many senior developers and architects we’ve spoken to understand that authorization is a challenging topic. Enterprise applications often perform a basic level of authorization: ensuring that the user has sufficient access rights to view a certain page. The problem is that authorization is a multi-layer, domain-specific problem that you can’t easily delegate to the application server or access management tools. For example, an accounting application user has access to the “accounts payable” module but there’s no server-side check to see which accounts the user should be able to issue payments for. Often the code that has sufficient context to see a list of available accounts is so deep in the call stack that it doesn’t have any information about the end user. The workarounds are often ugly: for example, tightly coupling presentation & business logic such that the application checks the list of accounts in a view page where it does have context information about the end user.

A more elegant solution is to anticipate the need for authorization far into the call stack and design appropriately. In some cases this means explicitly passing user context several layers deeper than you normally would; other approaches include having some type of session / thread-specific lookup mechanism that allows any code to access session-related data. The key is to think about this problem upfront so that you don’t waste unnecessary time down the road trying to hack together a solution. See our pattern-level security analysis of Application Controller for more details on this idea.
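One common shape for that session- or thread-specific lookup mechanism is a thread-local user context, populated by a servlet filter at the start of each request. A minimal sketch, with illustrative names:

[sourcecode language="java"]
public final class UserContextHolder {

    // Hypothetical value object holding the authenticated user's identity
    public static class UserContext {
        public final String userId;
        public UserContext(String userId) { this.userId = userId; }
    }

    private static final ThreadLocal<UserContext> CONTEXT = new ThreadLocal<UserContext>();

    // Set by a filter or interceptor when the request arrives
    public static void set(UserContext ctx) { CONTEXT.set(ctx); }

    // Available to any layer of the call stack, no parameter passing required
    public static UserContext get() { return CONTEXT.get(); }

    // Clear in a finally block to avoid leaking context across pooled threads
    public static void clear() { CONTEXT.remove(); }
}
[/sourcecode]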

3. Tags vs. Code in Views

Over the years, most web application development frameworks have made it practical to code entire views/server-pages completely with tags. .NET's ASPX and Java's JSF pages are examples of this. Building exclusively with tags can sometimes be frustrating when you need to quickly add functionality inside a view and don't have a ready-made tag for that function at your disposal. Some architects and lead developers impose a strict rule that all views must be composed entirely of tags; others are more liberal in their approach. Inevitably, the applications that allow developers to write in-line code (e.g. PHP, classic ASP, or scriptlets in Java) have an incredibly tough time eradicating Cross-Site Scripting. Rather than augmenting tags with output encoding, developers need to manually escape every form of output in every view. A single decision can lead to tedious, error-prone work for years to come. If you do elect to offer the flexibility of one-off coding, make sure you use static analysis tools to find potential exposures as early as possible.

4. Choice of Development Framework

Call us biased, but we really believe that the framework you choose will dramatically affect the speed at which you can prevent and remediate security vulnerabilities. Building anti-CSRF controls into Django is a matter of adding "@csrf_protect" to your view method. In most Java frameworks you need to build your own solution or use a third-party library such as OWASP's CSRFGuard. Generally speaking, the more security features built into the framework, the less time you have to spend adding those features to your own code or trying to integrate third-party components. Choosing a development framework that takes security seriously will lead to savings down the road. The Secure Web Application Framework Manifesto is an OWASP project designed to help you make that decision.

5. Logging and Monitoring Approach

Most web applications implement some level of diagnostic logging.  From a design perspective, however, it is important to leverage logging as a measure of self defense rather than purely from a debugging standpoint.  The ability to detect failures and retrace steps can go a long way towards first spotting and then diagnosing a breach.  We’ve found that security-specific application logging is not standardized and, as a result, any security-relevant application logging tends to be done inconsistently.  When designing your logging strategy, we highly recommend differentiating security events from debugging or standard error events to expedite the investigative process in the event of compromise.  We also recommend using standard error codes for security events in order to facilitate monitoring. OWASP’s  ESAPI logging allows for event types to distinguish security events from regular logging events.  The AppSensor project allows you to implement intrusion detection and automated responses into your application.