How to Prevent XSS Without Changing Code

To address security defects, developers typically resort to fixing design flaws and security bugs directly in their code. Finding and fixing security defects can be a slow, painstaking, and expensive process. While development teams work to incorporate security into their development processes, issues like Cross-Site Scripting (XSS) continue to plague many commonly used applications.

In this post we’ll discuss two HTTP header-related protections that can be used to mitigate the risk of XSS without having to make large code changes.

One of the most common XSS attacks is the theft of cookies (especially session IDs). The HttpOnly flag was created to mitigate this threat by ensuring that cookie values cannot be accessed by client-side scripts like JavaScript. This is accomplished by simply appending “; HttpOnly” to a cookie value. All modern browsers support this flag and will enforce that the cookie cannot be accessed by JavaScript, thereby preventing session hijacking attacks.
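For example, a protected session cookie might be delivered with a response header like this (the cookie value is illustrative):

```
Set-Cookie: JSESSIONID=2A0FA74949EE075DA16B; Path=/; Secure; HttpOnly
```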

Older versions of the Java Servlet specification did not provide a standard way to define the JSESSIONID as “HttpOnly”. As of Servlet 3.0, the HttpOnly flag can be enabled in web.xml as follows:
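The relevant configuration is the standard session-config element from the Servlet 3.0 specification:

```xml
<session-config>
    <cookie-config>
        <http-only>true</http-only>
    </cookie-config>
</session-config>
```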


Aside from this approach in Servlet 3.0, older versions of Tomcat allowed the HttpOnly flag to be set with the vendor-specific “useHttpOnly” attribute in server.xml. This attribute was disabled by default in Tomcat 5.5 and 6. But, starting with Tomcat 7, the “useHttpOnly” attribute is enabled by default. So, even if you set the flag to “false” in web.xml under Tomcat 7, your JSESSIONID will still be HttpOnly unless you change the default behavior in server.xml as well.

Alternatively, you can programmatically add an HttpOnly cookie directly to the response using the following code:
String cookie = "mycookie=test; Secure; HttpOnly";
response.addHeader("Set-Cookie", cookie);

Fortunately, the Servlet 3.0 API was also updated with a convenience method called setHttpOnly for adding the HttpOnly flag:
Cookie cookie = new Cookie("mycookie", "test");
cookie.setSecure(true);
cookie.setHttpOnly(true);
response.addCookie(cookie);

Content Security Policy
Content Security Policy (CSP) is a mechanism to help mitigate injection vulnerabilities like XSS by defining a list of approved sources of content. For example, approved sources of JavaScript can be defined to be from certain origins and any other JavaScript would not be allowed to execute. The official HTTP header used to define CSP policies is Content-Security-Policy and can be used like this:

default-src 'none';
script-src https://cdn.example.com https://www.google-analytics.com;
style-src 'self';
img-src https://cdn.example.com;
font-src https://cdn.example.com

In this example you can see that CSP can be used to define a whitelist of allowable content sources using default-src 'none' and the following directives:
script-src – scripts can only be loaded from a specific content delivery network (CDN) or from Google Analytics
style-src – styles can only be loaded from the current origin
img-src and font-src – images and fonts can only be loaded from the CDN

By using the Content-Security-Policy header with directives like this you can easily harden your application against XSS. It sounds easy enough but there are some important limitations. CSP requires that there are no inline scripts or styles in your application. This means that all JavaScript in your application has to be externalized into .js files. Basically, you can’t have inline <script> tags or call any functions which allow JavaScript execution from strings (e.g. eval, setTimeout, setInterval). This can be a problem for legacy applications that need to be refactored.

As a result, the CSP specification also provides a Content-Security-Policy-Report-Only header that does not block any scripts in your application from running but simply sends JSON formatted reports so that you can be alerted when something might have broken as a result of CSP. By enabling this header, you can test and update your app slowly over time while gaining visibility into areas that need to be refactored.
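For example, a report-only policy can include a report-uri directive pointing at an endpoint you control (both the policy and the endpoint below are illustrative):

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-reports
```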

If you’d like to learn more about protecting your Java and web-based applications sign up for DEV541: Secure Coding in Java being held in Reston, VA starting March 9.

HTML5: Risky Business or Hidden Security Tool Chest?

I was lucky to be able to present on using HTML5 to improve security at the recent OWASP AppSec USA Conference in New York City. OWASP has now made a video of the talk available on YouTube for anybody interested.


The Security Impact of HTTP Caching Headers

[This is a cross post from ]

Earlier this week, an update for MediaWiki fixed a bug in how it used caching headers [2]. The headers allowed authenticated content to be cached, which could lead to sessions being shared between users behind the same proxy server. I think this is a good reason to talk a bit about caching in web applications and why it matters for security.

First of all: if your application is HTTPS-only, this may not apply to you. Browsers do not typically cache HTTPS content, and proxies will not inspect it. However, HTTPS-inspecting proxies are available and common in some corporate environments, so this *may* apply to them, even though I hope they do not cache HTTPS content.

The goal of properly configured caching headers is to keep personalized information from being stored in proxies. The server needs to include appropriate headers to indicate whether the response may be cached.

Caching Related Response Headers


Cache-Control

This is probably the most important header when it comes to security. There are a number of options associated with this header. Most importantly, the page can be marked as “private” or “public”. A proxy will not cache a page if it is marked as “private”.

Other options are sometimes used inappropriately. For example, the “no-cache” option just means that the proxy should verify each time the page is requested whether the page is still valid, but it may still store the page. A better option to add is “no-store”, which prevents the request and response from being stored by the cache.

The “no-transform” option may be important for mobile users. Some mobile providers will compress or alter content, in particular images, to save bandwidth when re-transmitting content over cellular networks. This could break digital signatures in some cases. “no-transform” will prevent that (but again: it doesn’t matter for SSL, only if you rely on digital signatures transmitted to verify an image, for example).

The “max-age” option can be used to indicate how long a response can be cached. Setting it to “0” will prevent caching.

A “safe” Cache-Control header would be:

Cache-Control: private, no-cache, no-store, max-age=0


Expires

Modern browsers tend to rely less on the Expires header. However, it is best to stay consistent. An expiration time in the past, or just the value “0”, will work to prevent caching.


ETag

The ETag will not prevent caching, but will indicate whether content has changed. The ETag can be understood as a serial number that provides a more granular identification of stale content. In some cases the ETag is derived from information, like file inode numbers, that some administrators don’t like to share. A nice way to come up with an ETag would be to just send a random number, or not to send it at all, although I am not aware of a built-in server option to randomize the ETag.
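If your server derives the ETag from inode data and you would rather not share it, one option (in line with the random-number suggestion above) is to set a random, opaque ETag from application code. A minimal sketch in Java; the class and method names are my own:

```java
import java.security.SecureRandom;

public class RandomETag {
    private static final SecureRandom RNG = new SecureRandom();

    // Build a random, opaque ETag value that leaks no file metadata
    public static String next() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        StringBuilder sb = new StringBuilder("\"");
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.append('"').toString();
    }

    public static void main(String[] args) {
        // In a servlet you would apply it with response.setHeader("ETag", next())
        System.out.println(next());
    }
}
```

Note that a per-response random ETag also defeats conditional requests, which is exactly what you want when the response must not be cached.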


Pragma

This is an older header that has been replaced by the “Cache-Control” header. “Pragma: no-cache” is equivalent to “Cache-Control: no-cache”.


Vary

The “Vary” header tells caches which request header fields a response depends on. A cache will index all stored responses based on the content of the request, which consists not just of the URL requested but also of other headers, like the User-Agent field. Sending “Vary: User-Agent” instructs the proxy to store a separate copy of the response for each user agent; leaving User-Agent out of Vary means you deliver the same content regardless of the user agent. For our discussion this doesn’t really matter, because we never want the request or response to be cached, so it is best to have no Vary header.

In summary, a safe set of HTTP response headers may look like:

Cache-Control: private, no-cache, no-store, max-age=0, no-transform
Pragma: no-cache
Expires: 0

The “Cache-Control” header is probably overdone in this example, but should cover various implementations.
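To keep these headers consistent across an application, they can be built in one place and applied from a servlet filter. A minimal sketch in Java; the class and method names are my own, and in a real filter you would loop over the map calling response.setHeader(name, value):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {

    // Build the set of response headers that keeps personalized
    // responses out of browser and proxy caches
    public static Map<String, String> safeSet() {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Cache-Control",
                "private, no-cache, no-store, max-age=0, no-transform");
        headers.put("Pragma", "no-cache");
        headers.put("Expires", "0");
        return headers;
    }

    public static void main(String[] args) {
        // Print the headers as they would appear on the wire
        safeSet().forEach((name, value) -> System.out.println(name + ": " + value));
    }
}
```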

A nice tool for testing this is ratproxy, which will identify inconsistent cache headers [3]. For example, ratproxy will alert you if a “Set-Cookie” header is sent with a cacheable response.

Anything I missed? Any other suggestions for proper cache control?



WhatWorks in AppSec: Log Forging

Help!!! Developers are going blind from Log Files!

This is a post by Sri Mallur, an instructor with the SANS Institute for SANS DEV541: Secure Coding in Java EE: Developing Defensible Applications. Sri is a security consultant at a major healthcare provider who has over 15 years of experience in software development and information security. He has designed and developed applications for large companies in the insurance, chemical, and healthcare industries. He has extensive consulting experience from working with one of the Big 5. Sri currently focuses on security in the SDLC by working with developers, performing security code reviews, and consulting on projects. Sri holds a Masters in industrial engineering from Texas Tech University, Lubbock, TX and an MBA from Cal State University-East Bay, Hayward, CA.

Recently one of my consultants called me frantically, saying that a log forging fix had rendered the log files useless. Apparently the developers were going blind trying to read them.

First, let us at least get a baseline on what log forging is and why it’s important. Most developers roll their eyes when a security consultant talks to them about log forging. Trust me, log forging is important: maybe not to you (read: developer), but it is to the business. By the end of this post, hopefully you will see why.

What is Log Forging – and Why is it Important?

Simply put, log forging is the ability of an adversary to exploit an application vulnerability and inject bogus lines into application logs. “Why would he want to do this?”, you ask. Let’s put the security issue aside for a minute. Imagine you, a developer, just got assigned a bug in code running in production. Where would you go first to start your triage process? The logs. But what if the logs you are relying on aren’t real and have bogus entries? You see random lines about Database Down and then Database Up and other events that don’t make sense. You get the picture. It’s possible that you’ll bark up the wrong tree trying to solve the production mystery, or pursue such a convoluted path that it leads you to hitting the bottle to bury your ineptitude. You know this to be true 😉

Now let’s get serious. A hacker will create log entries for the same reasons that I just spoke of: to confuse the security police (read: forensics, incident handlers, and so on). Security folks refer to logs as a “detective control”: they use logs to discover bad things, to figure out what happened and the mechanics of how it happened so we can plug the hole. Without the ability to discover the mode of attack, our applications will remain vulnerable.

There’s another thing that we security folks like to do once in a while, and we call that “attribution”. We would like to catch the person(s) who did something bad and seek justice. And yes, log files are legally admissible documents that lawyers use to argue the case. But log files that have been tampered with do not stand up to the intense scrutiny of defense lawyers; the evidence is thrown out, weakening the case. Adversaries often go unpunished because we failed to protect our log files.

So, I hope I have convinced the developer in you that log forging is not just some obscure security issue that security consultants approach you about so they can stay busy. They in fact approach you because it matters to you and the business (remember the triage and the false entries). We want to make sure that when you use log files to triage an issue, you can depend on them.

Preventing Log Forging.. and Going a Little Too Far

Now that we have covered what log forging is and why it matters, let’s continue with my story. An engineer had decided to write his own adapter to prevent log forging (it’s a long story why he had to write his own; I won’t go into it here, but needless to say, he had to). Keep in mind that to prevent log forging, all he has to do is strip new line characters. Preventing new line characters ensures that the adversary cannot create new lines of forged entries. Generally, this is what I would expect to see in code to prevent a new line character:

[sourcecode language="java"]

String clean = message.replace( '\n', '_' ).replace( '\r', '_' );

[/sourcecode]

The above line of code sanitizes the input by replacing CR and LF characters with underscores. This is enough to prevent log forging. (BTW: OWASP ESAPI’s logging classes already have this built in.)
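To see why stripping CR and LF is enough, here is a self-contained sketch (class and method names are my own) showing a forged message being neutralized:

```java
public class LogSanitizer {

    // Replace CR and LF so attacker-controlled input cannot start a new log line
    public static String clean(String message) {
        if (message == null) return "";
        return message.replace('\n', '_').replace('\r', '_');
    }

    public static void main(String[] args) {
        // An attacker smuggles a fake entry through a user-supplied field
        String forged = "bob\n[INFO] 04-09-2013 Database Down";
        // The fake "Database Down" entry stays on the same line as the real one
        System.out.println("[INFO] Login failed for user: " + clean(forged));
    }
}
```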

But the developer got a little fancy and added the following line to his class:


[sourcecode language="java"]

clean = ESAPI.encoder().encodeForHTML(message);

[/sourcecode]

In addition to sanitizing the input, the above line of code also HTML-encoded it before writing to the log file. Security-conscious developers know that we encode our output to prevent Cross-Site Scripting. This extra line of code would have worked well if the developers were viewing the log files in a browser and the browser interpreted the HTML-encoded data appropriately. But in our case, those poor developers were viewing the log files directly as text, and we were asking them to mentally decode all those HTML entities! No wonder they were going blind.

Imagine a line like this:
[Error] 04-09-2013 Login failed for Admin

&#x5b;Error&#x5d; 04-09-2013 Login failed for Admin (encoded)

You can already see that this simple line becomes almost unreadable; now imagine a file full of such HTML-encoded entries.

The fix, of course, was to comment out the line that was encoding the input before writing to the file. I want to make sure I clarify the fix: we would want to encode the input if the file was being viewed in a browser (like some log aggregators do) to prevent cross-site scripting (running scripts from the log file while viewing it). But in our case developers were viewing the log files as text, so the encoding was not required.

This was a case of incomplete understanding of the issue, and therefore the fix was incorrect. Our engineer is of course a smart guy and meant well, but this was a great learning moment for all of us.

Ask the Expert – Jim Manico

Jim Manico is the VP of Security Architecture for WhiteHat Security, a web security firm. Jim is a participant and project manager of the OWASP Developer Cheatsheet series. He is also the producer and host of the OWASP Podcast Series.

1. Although SQL Injection continues to be one of the most commonly exploited security vulnerabilities in the wild, Cross Site Scripting (XSS) is still the most common security problem in web applications. Why is this still the case? What makes XSS so difficult for developers to understand and to protect themselves from?

Mitigation of SQL Injection, from a developer point of view, is very straightforward. Parameterize your queries and bind your variables!

Unfortunately, mitigating XSS can be very difficult. You need to do contextual output encoding in five or more contexts as you dynamically create HTML documents on the server. You also need to validate untrusted HTML that is submitted from widgets like TinyMCE. You also need to parse JSON using safe APIs such as JSON.parse. And then you need to deal with the very challenging issue of DOM-based XSS, a problem that even tools have trouble discovering. And this problem is getting worse in the era of rich internet application development.
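To illustrate just one of those contexts, here is a deliberately minimal encoder for HTML element content. This is a sketch for explanation only; in practice you would use a vetted library such as the OWASP Java Encoder rather than rolling your own:

```java
public class HtmlEncode {

    // Encode the five characters that are significant in HTML
    // element content and quoted attributes
    public static String forHtml(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A script injection attempt rendered harmless as text
        System.out.println(forHtml("<script>alert(1)</script>"));
    }
}
```

Each output context (HTML attribute, JavaScript string, URL, CSS) needs its own encoder, which is why hand-rolling this per context is so error-prone.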

2. What’s the real risk of XSS – what can attackers do if they find an XSS vulnerability? How seriously should developers take XSS?

Attackers can use XSS to set up keyloggers, deface a website, steal session cookies or other sensitive data, redirect the user to an untrusted website, and circumvent CSRF protections … just to get started. If developers want to build secure web applications they NEED to take XSS defense seriously. And it can be quite difficult to accomplish this – especially for modern RIA/AJAX applications.

3. Where can developers and testers and security analysts go to understand XSS? What tools can people use to prevent XSS today and where can they find them?

One of the best XSS prevention guides is the OWASP XSS Prevention Cheat Sheet. Over 500,000 hits and counting.

For advanced practitioners there is the OWASP DOM XSS Prevention Cheatsheet as well.

4. When is XSS going to be solved for good, or will we have to keep on living with the risk of XSS exploits for a long time?

If developers are forced to manually output encode every variable, I feel XSS will always be with us.

But there is hope in standards.

Content Security Policy 1.1 is a W3C draft which promises to make XSS defense a great deal easier on developers. There is only mixed browser support for CSP today, but in 2-3 years, when all browsers fully support the CSP standard, there will be a highly effective browser-based anti-XSS methodology available to all.

I’m also fond of the HTML5 iframe sandboxing mechanism for XSS defense.

How much do developers care about security?


3%

That’s about how much developers care about security.

Starting last year I made a concerted effort to speak at developer conferences. The idea was to go directly to people who write actual code and help spread the word about application security. By speaking at technical conferences that appeal to top developers the goal was to reach out to people who really care about development and want to learn and apply everything they can. By getting these developers interested in security my hope was that they would, in some small way, lead by example since many of them are the ones that build the tools and frameworks that other developers rely upon.

It started last year at ÜberConf, which is held in the Denver area. I knew it would be a great show when I heard that, on some nights, sessions are attended until 11pm by scores of exhausted developers. Not only that, the conference organizer, Jay Zimmerman, makes sure there’s plenty of great food on hand for breakfast, lunch, and dinner. Highly recommended.

Then there’s Devoxx which is held in Antwerp, Belgium and is known as the Java Community Conference. It’s a large show with over 3000 attendees. Plus the cost is low enough that it sells out every year. The organizer, Stephan Janssen, does an incredible job.

And of course there’s JavaOne which is held every year in San Francisco. As far as I can tell it’s still the largest Java development conference in terms of attendance. Unfortunately, the folks from Google only speak at Devoxx given the recent legal issues between Oracle and Google.

As you can see I really avoided any security specific events in an attempt to get outside of the security echo chamber. And I think (or hope anyway) that the results were very good. I’ve had hundreds of developers in my talks who were extremely excited to be learning new things and about security in general. The feedback has been overwhelmingly positive.

But, if the feedback from speaking at developer conferences has been so great then why do developers, in general, not seem to care that much about security? Why do I say that developers only care about security 3% of the time?

Well, this percentage comes from a comparison of the number of security sessions to the total number of speaking slots at all the developer conferences I’ve spoken at (or am scheduled to speak at). As you can see below, on average, only about 3% of the talks are security related.

Conference      # of sessions   # of security sessions   % about security
ÜberConf 2011   150             3                        2.0%
Jazoon 2011     107             3                        2.8%
JavaOne 2011    400             15                       3.75%  (see note [1])
Devoxx 2011     88              2                        2.3%
ÜberConf 2012   135             4                        2.96%
JAX Conf 2012   40              2                        5%     (see note [2])
JavaOne 2012    442             15                       3.4%
Øredev 2012     120             2                        1.67%  (see note [3])
Total           1482            46                       3.1%

While developers do find security interesting, they just don’t care about it as much as other stuff. This isn’t a big surprise. In many organizations they’re not incentivized to care about security. And while security is an interesting domain, there are lots of other things that interest most developers even more. Developers like learning new things, using new tools and APIs, and building something that matters. Usually, security doesn’t “matter”. Heck, when I was focused solely on building web apps in the first half of my career, I didn’t care that much about security either!

Looking back over the conference agendas, the vast majority of the sessions covered cool new technologies like mobile, NoSQL, cloud, REST, functional programming, Scala, new APIs, and the list goes on. There just isn’t as much room for security in the list of things that “matters”. There’s nothing wrong with this. In fact, this is as it should be.

We have to recognize that security isn’t going to come first. We need to understand and care about the things that development teams care about and advocate security on their terms. Focus on the problems and technologies that they care about and give them tools and ideas on how to use them better by providing simple and effective ways to do things in a secure way.

At the same time we have to keep reaching out to developers because they’re the ones who care enough and are smart enough to make a difference. We can’t change what they care about but maybe we can work together so they care more than 3% of the time.


[1] I can’t find the 2011 JavaOne content catalog online and can’t get exact figures. However, I do remember there being a fair number of security talks, so I put a low estimate of 400 speaking slots and kept the number of security sessions the same as in 2012.

[2] One of the two talks was not security focused but touched on security so I added it here.

[3] The schedule is not final so the counts are estimates based on the currently published agenda.

Seven Tips for Picking a Static Analysis Tool

Stephen J, who is a member of our software security mailing list, asked a while back, “Do you have any recommendations on static source code scanners?” James Jardine and I started talking and came up with the following tips.

There are so many commercial static analysis tools from vendors like Armorize, Checkmarx, Coverity, Fortify (HP), Klocwork, IBM, and Veracode that it’s hard to recommend a specific product. Instead we’d like to focus on seven tips that can help you get the most out of your selection process.

1) Test before you buy

This probably sounds obvious but, assuming you haven’t purchased anything yet, definitely do a bake-off and have the vendor run the tool against your actual apps. Do *not* simply run the tool on a vendor-supplied sample app, as the quality of the results, surprisingly, can vary quite a bit across different tools and code bases. Just keep in mind that some vendors will try to avoid this so they can better control the test or, simply, to keep you from getting a free scan.

2) Determine your budget

It’s really important to understand your budget. The cost of commercial tools varies greatly between vendors, and you should be aware of how much you are willing to spend. This includes the cost of the tool, ongoing maintenance, and even things like additional RAM for your build servers (yes, it can take a lot of RAM). And don’t forget to budget for the time people spend setting up, configuring, and reviewing results from the tool.

3) Prepare to spend a lot of time on it

Usually, anything worth doing takes time. Test-driven development and writing thorough unit tests don’t happen on their own. Similarly, using a static analysis tool to find security issues is an investment. It needs both management and developer buy-in to succeed when time consuming issues like false positives, tweaking rules, and product upgrades come up.

4) Cloud or Internal

Are you OK with uploading your source code, or binaries, to a third-party site for analysis? This is a question only you and your company can answer, but it is an important one. Not only do you have to consider sharing your code; if the solution is in the cloud, you also have to think about integration with your build process. Do you want a solution that can be built into your nightly build process? Tools that are hosted internally usually lend themselves better to customization. Do you have the staff and capabilities to extend the tool so that it works better for your applications?

5) What type of support do you need?

Some tools come with much more support than others. One tool may have support personnel available to go over the results of your scan with you so that you understand them, whereas others may not offer this type of support. Depending on your in-house skill set, this could be a big deciding factor.

6) Check references

Ask other colleagues (not the vendors) who use the tool and get their feedback. They are the ones who will give you honest answers. Keep in mind that even though a specific vendor works well for one application or company, it doesn’t mean that it’s the right fit for your situation.

7) Research the products

Make sure that the solution you are considering is the best fit for you. You will definitely want to make sure that the vendor/product you choose supports your development platform and your development process.

Every application is different and it’s impossible for an outsider to recommend the correct static analysis tool without having intimate knowledge of your application and processes. Given the investment in time and money, choosing a vendor and product is a difficult yet important decision. Do your due diligence and research before making a choice.

Take some time to lay out your current landscape and answer some of these questions:

– What language(s) do you develop in?
– What source control system do you use?
– What other types of technologies are used in your application (Ajax, jQuery, Silverlight, Flash, mobile, etc)?
– What skill sets do you have in house when it comes to application security?
– Do you have people that can quickly learn how to operate the scanner? Can they understand the output?
– What type of support do you want from the vendor?
– Are you willing to submit your code to an external site, or does your policy require that it not be shared outside the company boundaries?
– What is your budget for a static analyzer?
– How will the technology work with the people and processes that you have in house?

Hopefully, all these questions don’t change your opinion about needing a static analysis tool, and they make it a little easier to actually select one to use.

Thanks for reading!

Frank Kim & James Jardine

Safer Software through Secure Frameworks

We have to make it easier for developers to build secure apps, especially Web apps. We can’t keep forcing everybody who builds an application to understand and plug all of the stupid holes in how the Web works on their own – and to do this perfectly right every time. It’s not just wasteful: it’s not possible.

What we need is for implementation-level security issues to be taken care of at the language and framework level, so that developers can focus on their real jobs: solving design problems and writing code that works.

Security Frameworks and Libraries

One option is to get developers to use secure libraries that take care of application security functions like authentication, authorization, data validation and encryption. There are some good, and free, tools out there to help you.

If you’re a Microsoft .NET developer, there’s Microsoft’s Web Protection Library which provides functions and a runtime engine to protect .NET apps from XSS and SQL Injection.

Java developers can go to OWASP for help. As a starting point, someone at OWASP wrote up a summary of available Java security frameworks and libraries. The most comprehensive, up-to-date choice for Java developers is OWASP’s ESAPI (Enterprise Security API), especially now that the 2.0 release has just come out. There are some serious people behind ESAPI, and you can get some support from the OWASP forums, or pay Aspect Security for help implementing it. ESAPI is big, and as I’ve pointed out before, unfortunately it’s not easy to understand or use properly, it’s not nicely packaged, and the documentation is incomplete and out of date – although I know that people are working on this.

Or there’s Apache Shiro (formerly known as JSecurity and later as Apache Ki), another security framework for Java apps. Although it looks simpler to use and understand than ESAPI and covers most of the main security bases (authentication, authorization, session management and encryption), it doesn’t take care of important functions like input validation and output encoding. And Spring users have Spring Security (formerly Acegi), a comprehensive but heavyweight authentication and authorization framework.

If you aren’t using ESAPI’s input data validation and output encoding functions to help protect your app from XSS attacks and other injection attacks, you could try OWASP’s Stinger declarative validators (although unfortunately Stinger isn’t an active project any more). Or maybe OWASP’s AntiSamy, an un-cool name for a cool library to sanitize rich HTML input data; or the newer, faster Java HTML Sanitizer built with technology contributed by Google: handy if you’re writing a bulletin board or social network.

Or you could check out the new auto-escaping technology that Jim Manico presented at this year’s SANS AppSec conference, including Facebook’s XHP (for PHP programmers), OWASP’s JXT Java XML Templates, Google’s Context-Sensitive Auto-Sanitization (CSAS), and JavaScript sandboxing from Google (CAJA, used in the new OWASP HTML Sanitizer) or from OWASP (JSReg). And jQuery programmers can use Chris Schmidt’s jQuery Encoder to safely escape untrusted data.

To add CSRF protection to your app, try OWASP’s CSRFGuard.  If all you want to do is solve some encryption problems, there’s Jasypt – the Java Simplified Encryption library – and of course The Legion of the Bouncy Castle: a comprehensive encryption library for Java (and C#).

For developers working in other languages like Ruby and PHP and Objective C, there are ports or implementations of ESAPI available, although these aren’t as up-to-date or complete as the Java implementation. Python developers working with the Django framework can check out Django Security a new library from Rohit Sethi at SD Elements.

There are other security frameworks and libraries out there – if you know where to look. If you use this code properly, some or maybe even a lot of your security problems will be taken care of for you. But you need to know that you need it in the first place, then you need to know where to find it, and then you need to figure out how to use it properly, and then test it to make sure that everything works. Fast-moving Agile teams building new apps using simple incremental design approaches or maintenance teams fixing up legacy systems aren’t looking for auto-escaping templating technology or enterprise security frameworks like ESAPI because this stuff isn’t helping them to get code out of the door – and many of them don’t even know that they need something like this in the first place.

Frameworks and Libraries that are Secure

To reach application developers, to reduce risk at scale, we need to build security into the application frameworks and libraries, especially the Web frameworks, that developers already use. Application developers rely on these frameworks to solve a lot of fundamental plumbing problems – and they should be able to rely on these frameworks to solve their problems in a secure way. A good framework should protect developers from SQL Injection and other injection attacks, and provide strong session management including CSRF protection, and auto-escaping protection against XSS. Out of the box. By default.

Ruby on Rails is one example: a general-purpose Web framework that protects developers from CSRF and SQL Injection, and, as of release 3, provides auto-escaping protection against XSS. Ruby and Rails have had their own security problems, but at least the people behind this platform have taken responsible steps to protect developers from making the kind of fundamental and dangerous mistakes that are so common in Web apps.

Solving security problems in frameworks isn’t a new idea. Jeremiah Grossman at Whitehat Security talked about this back in 2006:

The only way I see software security improving significantly is if “security” is baked into the modern development frameworks and is virtually transparent. Remember, the primary developer’s mission is to pump out code to drive business and that’s what they’ll do no matter what. When developers find that it’s WAY easier and WAY better to do RIGHT by security, then we’ll get somewhere.

Unfortunately, we haven’t made a lot of progress on this over the last 5 years. A handful of people involved in OWASP’s Developer Outreach are trying to change this, working within some of the framework communities to help make frameworks more secure. It seems hopeless – after all, Java developers alone have dozens of frameworks to choose from for Web apps. But Jim Manico made an important point: to start making a real difference, we don’t need to secure every framework. We just need to make common, important frameworks safer to use, and secure by default: Spring, Struts, Rails, the main PHP and Python frameworks. This will reach a lot of people and protect a lot of software. Hopefully more people will stick with these frameworks because they are feature-rich, AND because they are secure by default. Other teams building other frameworks will respond and build security in, in order to compete. Building safer frameworks could become self-reinforcing as more people get involved and start understanding the problems and solving them. Even if this doesn’t happen, at least developers using some of the more popular frameworks will be protected.

Building secure frameworks and frameworks that are secure isn’t going to solve every application security problem. There are security problems that frameworks can’t take care of. We still need to improve the tools and languages that we use, the way that we use them, and the way that we design and build software. And the frameworks themselves will become targets of attack, of course. And we’re all going to have to play catch-up. Changing the next release of web application frameworks isn’t going to make a difference to people stuck maintaining code that has already been written, and developers will have to upgrade to take advantage of these new frameworks and learn how to use the new features. But it will go a long way toward making the software world safer.

There are only a handful of people engaged in this work so far – most of them on a part-time basis. Such a small group of people, even smart and committed people, aren’t going to be able to change the world. This is important work: I think some of the most important work in application security today. If a government or a major corporation (unfortunately, I don’t work for either) is really looking for a way to help make the Web safer, this is it – getting the best and brightest and most dedicated people from the appsec and application development communities to work together and solve foundational problems with wide-reaching impact.

To find out more, or to get engaged, join the OWASP Developer Outreach.

Agile Security for Product Owners – Requirements

Much of our cumulative application security knowledge and tooling is aimed at detection, rather than prevention, of vulnerabilities. This is a natural consequence of the fact that the primary job of many information security analysts is to look for security vulnerabilities and provide high-level remediation suggestions rather than be involved in detailed remediation efforts. Another reason is that most organizations want to get a grip on what security exposures they currently have before focusing their efforts on preventing future exposures.

The consequence for application owners is that many of the tools that we have at our disposal are focused on vulnerability detection:

  • Static analysis tools
  • Runtime vulnerability scanning tools
  • Verification standards such as the ASVS

Luckily, application security experts have long argued the case for focusing on prevention rather than relying solely on detection of security vulnerabilities. Building security in early in the SDLC is a matter of cost effectiveness. Efforts such as OWASP’s Enterprise Security API deal with this issue head-on at the code level. At a higher level, some organizations have developed Secure Software Development Life Cycle models such as the Software Assurance Maturity Model, Building Security In Maturity Model, or Microsoft’s SDL.

Comprehensiveness Bias

Most SDLC security initiatives treat security requirements engineering as a foundational activity. The challenge in an Agile/Scrum context is that requirements (i.e. the product backlog) are constantly shifting. Iterations are short and intense, and there is little time to introduce new processes such as architectural analysis, threat risk assessments, or week-long penetration testing. Moreover, if we start with a minimal-functionality system, as Agile methodologies suggest, then we can rarely, in practice, introduce all security requirements at once. As an agile product owner, you are confronted with mountains of secure SDLC best practices that work best when you have a complete set of requirements up front: the antithesis of agile processes.

The Security Burden

Suppose you’re near the inception of a new product or major re-release and you have short iterations. Looking at your product backlog, you likely have several months of tickets/defects/stories that represent mostly functional requirements. Ideally you’ve carefully groomed the backlog and prioritized items according to their importance in building a minimally functional system and their perceived benefit to end users.

Now let’s say you want to add security requirements. Based on personal experience consulting at Security Compass and researching for SD Elements, I’d estimate that most large-scale enterprise web applications will have security requirements with a cumulative total of several months of effort. Simply put, you’ll be unable to fulfill all security requirements in one shot. In addition, excessive focus on a non-functional concern like security makes it challenging to show incremental value to stakeholders (unless your stakeholders care deeply about security). As a result, you’ll be faced with a somewhat ugly reality of agile development on new projects: Given sufficient visibility into security requirements, an agile application will have known security gaps in its early iterations.

You could defy the odds by focusing on security and not releasing the product until all known security gaps have been addressed. The reality, particularly for anyone operating in a competitive environment where security is not one of the top organizational priorities, is that time-to-market pressures make this infeasible.

Incremental Security

Like everything else in the product backlog, you must prioritize security requirements. The Software Assurance Maturity Model spells this out explicitly:

“It is important to not attempt to bring in too many best-practice requirements into each development iteration since there is a time trade-off with design and implementation. The recommended approach is to slowly add best-practices over successive development cycles to bolster the software’s overall assurance profile over time.”

There are a couple of caveats you should keep in mind when prioritizing security requirements:

  1. Choose risk reduction over perceived customer utility: When developers demonstrate their increment of progress at the end of the iteration, the security requirements will generally be about as visible as refactoring class names unless you actually perform penetration testing. I say generally because there are some cases where security is visible. For example, you might be tempted to add OAuth support in the name of both security and usefulness for end users. The problem is that while OAuth might help improve your security posture, you may have a much more pressing security risk, such as ensuring your confidential data is sent over SSL. If you’ve resolved to build security into your product, choose risk reduction over customer utility for security requirements.
  2. Differentiate between flaws and bugs: In the nomenclature of security defects, Dr. Gary McGraw provides a useful distinction between flaws and bugs: “Bug (implementation). A software security defect that can be detected locally through static analysis. Flaw (design). A software security defect at the architecture or design level. Flaws may not be apparent given only source code of a software system.” The way you treat the corresponding controls is important:

    • a) Bug mitigations generally don’t belong on your product backlog because they don’t have one-time acceptance criteria; these mitigations have to be addressed in every iteration. You ought to be thinking of them at all times. These are issues like proper output escaping, safe library usage, variable binding in SQL statements, or making sure you don’t log confidential data; they are cross-cutting concerns that amount to following coding conventions. Developer education and code review / static analysis are the best ways to eradicate bugs.
    • b) Flaw mitigations do belong in a product backlog. They have acceptance criteria and they need to be prioritized amongst all other requirements. These include issues such as authentication, centralized authorization routines, and session management. Once you’ve figured out which requirements are flaw mitigations, you can perform a second level of differentiation: which requirements prevent an exploitable vulnerability versus which provide defense in depth. Blocking brute force attacks prevents an exploitable vulnerability; hiding verbose error messages is a measure of defense in depth. You should set your sights on the first group as a priority. Strive for your application not to be vulnerable to any remotely exploitable vulnerability as soon as it stores, processes, or transmits production data. Depending on your risk profile, you may also need to consider locally exploitable vulnerabilities on the server and/or compliance-driven requirements that may not accurately reflect risk. If you’re lucky enough to start with non-production data, then you have the latitude to build these controls across several iterations. However, once your application has production data, you are knowingly putting your organization’s and (if applicable) your clients’ data at risk if the application lacks known security controls. If you must accept that you’ll have exploitable security vulnerabilities, always compare the levels of risk that each security control mitigates and prioritize for the most significant risk reduction.
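To make the flaw-mitigation idea concrete, here is a minimal sketch in plain Java (class and method names are illustrative) of the kind of control mentioned above: blocking brute force attacks with a failed-login tracker. A production version would also need persistence, counter expiry, and thought about lockout-based denial of service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory failed-login tracker: refuses further login
// attempts for an account after too many consecutive failures.
class LoginThrottle {
    private static final int MAX_FAILURES = 5;
    private final Map<String, Integer> failures = new ConcurrentHashMap<>();

    // True once the account has hit the failure threshold.
    public boolean isLocked(String username) {
        return failures.getOrDefault(username, 0) >= MAX_FAILURES;
    }

    // Called after each failed authentication attempt.
    public void recordFailure(String username) {
        failures.merge(username, 1, Integer::sum);
    }

    // A successful login resets the counter.
    public void recordSuccess(String username) {
        failures.remove(username);
    }
}
```

Note that this control has crisp acceptance criteria (“after 5 consecutive failures, further attempts are rejected”), which is exactly why it belongs on the backlog as a discrete item rather than as a coding convention.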

In Summary

As a product owner you have scores of tools and processes to help you find vulnerabilities after they’ve been coded. Adopting security throughout the SDLC is cost effective, but difficult in practice to implement for agile shops. Security requirements, in particular, are difficult to prioritize or scope in early iterations. You can make the task more manageable by prioritizing requirements that prevent exploitable flaws, sorted by risk.

Five Key Design Decisions That Affect Security in Web Applications

By Krishna Raja and Rohit Sethi (@rksethi)

Senior developers and architects often make decisions related to application performance or other areas that have significant ramifications on the security of the application for years to come. Some decisions are obvious: How do we authenticate users? How do we restrict page access to authorized users? Others, however, are not so obvious. The following list comes from empirical data, after seeing the architecture of many enterprise applications and the downstream effects on security.

1. Session Replication

Load balancing is a must-have for applications with a large user base. While serving static content this way is relatively easy, challenges arise when your application maintains state information across multiple requests. There are many ways to tackle session replication – here are some of the most common:

  • Allow the client to maintain state so that servers don’t have to
  • Persist state data in the database rather than in server memory
  • Use the application server’s built-in session replication technology
  • Use third-party products, such as Terracotta
  • Tie each session to a particular server by modifying the session cookie

Out of these, maintaining state on the client is often the easiest to implement. Unfortunately, this single decision is often one of the most serious you can make for the security of any client-server application. The reason is that clients can modify any data that they send to you. Inevitably, some of the state data shouldn’t be modifiable by an end user, such as the price of a product or user permissions. Without sufficient safeguards, client-side state can leave your application open to parameter manipulation at every transaction. Luckily, some frameworks provide protection in the form of client-side state encryption; however, as we’ve seen with recent padding oracle attacks, this method isn’t always foolproof and can leave you with a false sense of security. Another technique involves hashing and signing read-only state data (i.e. the parameters that the client shouldn’t modify); however, deciding which parameters should be modifiable and which shouldn’t can be particularly time-consuming – often to the point that developers just ignore it altogether when deadlines become pressing. If you have the choice, elect to maintain state on the server and use one of the many techniques at your disposal to handle session replication.
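As an illustration of the hashing-and-signing technique mentioned above, here is a hedged sketch, assuming a server-held secret key and illustrative names, of HMAC-protecting read-only client-side state so that tampering is detectable:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Signs read-only client-side state with an HMAC. The secret key must
// never leave the server; the client echoes the signed value back.
class StateSigner {
    private final SecretKeySpec key;

    StateSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    private String mac(String state) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] tag = mac.doFinal(state.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(tag);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Value to send to the client: state plus its authentication tag.
    public String sign(String state) {
        return state + "|" + mac(state);
    }

    // Verify a value the client sent back; MessageDigest.isEqual gives a
    // constant-time comparison so the tag can't be guessed via timing.
    public boolean verify(String signed) {
        int sep = signed.lastIndexOf('|');
        if (sep < 0) return false;
        String state = signed.substring(0, sep);
        String tag = signed.substring(sep + 1);
        return MessageDigest.isEqual(
            tag.getBytes(StandardCharsets.UTF_8),
            mac(state).getBytes(StandardCharsets.UTF_8));
    }
}
```

Note that this protects integrity only, not confidentiality – anything signed this way is still readable by the client – and the hard part remains deciding which parameters need it.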

2. Authorization Context

Many senior developers and architects we’ve spoken to understand that authorization is a challenging topic. Enterprise applications often perform a basic level of authorization: ensuring that the user has sufficient access rights to view a certain page. The problem is that authorization is a multi-layer, domain-specific problem that you can’t easily delegate to the application server or access management tools. For example, an accounting application user has access to the “accounts payable” module, but there’s no server-side check of which accounts the user should be able to issue payments for. Often the code that has sufficient context to see a list of available accounts is so deep in the call stack that it doesn’t have any information about the end user. The workarounds are often ugly: for example, tightly coupling presentation and business logic so that the application checks the list of accounts in a view page, where it does have context information about the end user.

A more elegant solution is to anticipate the need for authorization far into the call stack and design appropriately. In some cases this means explicitly passing user context several layers deeper than you normally would; other approaches include having some type of session- or thread-specific lookup mechanism that allows any code to access session-related data. The key is to think about this problem upfront so that you don’t waste time down the road trying to hack together a solution. See our pattern-level security analysis of Application Controller for more details on this idea.
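One way to sketch the thread-specific lookup mechanism described above, assuming a servlet-style one-thread-per-request model (the class and method names are illustrative):

```java
// Thread-local user context so that code deep in the call stack can find
// out who the current user is without the identity being passed through
// every intermediate layer.
final class UserContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // A request filter would call this when the request is authenticated.
    public static void set(String userId) { CURRENT.set(userId); }

    // Any layer can ask who the current user is; failing loudly here is
    // safer than silently running authorization checks with no identity.
    public static String get() {
        String user = CURRENT.get();
        if (user == null) {
            throw new IllegalStateException("no user bound to this thread");
        }
        return user;
    }

    // Must be called when the request ends (e.g. in a filter's finally
    // block), or pooled threads will leak one user's identity into
    // another user's request.
    public static void clear() { CURRENT.remove(); }
}
```

The design trade-off is implicitness: deep code gains access to the user without plumbing changes, at the cost of a hidden dependency that breaks if work is handed off to another thread.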

3. Tags vs. Code in Views

Over the years, most web application development frameworks have made it practical to code entire views/server-pages completely with tags. .NET’s ASPX and Java’s JSF pages are examples of this. Building exclusively with tags can sometimes be frustrating when you need to quickly add functionality inside of a view and you don’t have a ready-made tag for that function at your disposal. Some architects and lead developers impose a strict decision that all views must be composed entirely of tags; others are more liberal in their approach. Inevitably, the applications that allow developers to write in-line code (e.g. PHP, classic ASP, or scriptlets in Java) have an incredibly tough time eradicating Cross-Site Scripting. Rather than augmenting tags with output encoding, developers need to manually escape every form of output in every view. A single decision can lead to tedious, error-prone work for years to come. If you do elect to offer the flexibility of one-off coding, make sure you use static analysis tools to find potential exposures as early as possible.
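For comparison, this is roughly what developers must apply by hand to every piece of output in every view when tags don’t encode for them – a minimal HTML-escaping helper (illustrative only; in practice you would use a vetted encoder such as the OWASP Java Encoder, and different contexts like JavaScript or URLs need their own encoders):

```java
// Minimal HTML-body/attribute escaping of the kind an auto-escaping tag
// library applies automatically. Replaces the characters that enable XSS
// in HTML content with their entity equivalents.
final class HtmlEncoder {
    public static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

The point is not that this code is hard to write – it is that remembering to call it at every one of thousands of output points is exactly the kind of cross-cutting discipline that auto-escaping tags make unnecessary.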

4. Choice of Development Framework

Call us biased, but we really believe that the framework you choose will dramatically affect the speed at which you can prevent and remediate security vulnerabilities. Building anti-CSRF controls into Django is a matter of adding the “@csrf_protect” decorator to your view method. In most Java frameworks you need to build your own solution or use a third-party library such as OWASP’s CSRFGuard. Generally speaking, the more security features are built into the framework, the less time you have to spend adding those features to your own code or trying to integrate third-party components. Choosing a development framework that takes security seriously will lead to savings down the road. The Secure Web Application Framework Manifesto is an OWASP project designed to help you make that decision.
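To give a sense of what “building your own solution” involves in Java, here is a hedged, framework-agnostic sketch of per-session CSRF token handling (names are illustrative; a library like CSRFGuard also handles injecting the token into forms and AJAX requests for you):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Per-session CSRF token: generate an unguessable token, store it in the
// session, embed it in forms, and reject state-changing requests whose
// submitted token doesn't match the session's.
final class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate once per session; store server-side and embed in each form.
    public static String generate() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Compare the submitted token against the session's token in constant
    // time; missing tokens are rejected outright.
    public static boolean isValid(String expected, String submitted) {
        if (expected == null || submitted == null) return false;
        return MessageDigest.isEqual(expected.getBytes(), submitted.getBytes());
    }
}
```

Even this small sketch omits the tedious parts – wiring the check into every state-changing endpoint via a filter and getting the token into every form – which is the work a framework-level control saves you.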

5. Logging and Monitoring Approach

Most web applications implement some level of diagnostic logging. From a design perspective, however, it is important to leverage logging as a measure of self-defense rather than purely from a debugging standpoint. The ability to detect failures and retrace steps can go a long way towards first spotting and then diagnosing a breach. We’ve found that security-specific application logging is not standardized and, as a result, any security-relevant application logging tends to be done inconsistently. When designing your logging strategy, we highly recommend differentiating security events from debugging or standard error events to expedite the investigative process in the event of a compromise. We also recommend using standard error codes for security events in order to facilitate monitoring. OWASP’s ESAPI logging allows for event types that distinguish security events from regular logging events. The AppSensor project allows you to build intrusion detection and automated responses into your application.
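A minimal sketch of the recommendation above – security events kept separate from diagnostic logging, with stable event codes for monitoring. The logger name and the codes here are illustrative assumptions, not a standard:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Security events get their own logger and stable event codes, so
// operators can route them to a separate handler or SIEM feed and write
// monitoring rules against the codes rather than free-form messages.
final class SecurityLog {
    private static final Logger LOG = Logger.getLogger("app.security");

    // Illustrative event catalog; real deployments would standardize
    // their own codes and keep them stable across releases.
    enum Event {
        AUTH_FAILURE("SEC-001"),
        ACCESS_DENIED("SEC-002"),
        INPUT_VALIDATION_FAILURE("SEC-003");

        final String code;
        Event(String code) { this.code = code; }
    }

    // A fixed, parseable line format makes post-incident correlation easier.
    public static String format(Event event, String user, String detail) {
        return event.code + " user=" + user + " " + detail;
    }

    public static void log(Event event, String user, String detail) {
        LOG.log(Level.WARNING, format(event, user, detail));
    }
}
```

Keeping security events on a dedicated logger name (here, the assumed "app.security") means an investigator never has to sift debugging noise out of the audit trail.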