Securing the SDLC: Dynamic Testing Java Web Apps

Editor's Note: Today's post is from Gregory Leonard. Gregory is an application security consultant at Optiv Security, Inc. and a SANS instructor for DEV541 Secure Coding in Java/JEE.

Introduction
The creation and integration of a secure development lifecycle (SDLC) can be an intimidating, even overwhelming, task. There are so many aspects that need to be covered, including performing risk analysis and threat modeling during design, following secure coding practices during development, and conducting proper security testing during later stages.

Established Development Practices
Software development has begun evolving to account for the reality of security risks inherent in web application development, while also adapting to the increased pressure of performing code drops in the accelerated world of DevOps and other continuous delivery methodologies. Major strides have been made to integrate static analysis tools into the build process, giving development teams instant feedback about any vulnerabilities detected in the latest check-in, and giving teams the option to prevent builds from being deployed if new issues are detected.

Being able to perform static analysis on source code before performing a deployment build gives teams a strong edge in creating secure applications. However, static analysis does not cover the entire realm of possible vulnerabilities. Static analysis scanners are excellent at detecting obvious cases of some forms of attack, such as basic Cross-Site Scripting (XSS) or SQL injection attacks. Unfortunately, they are not capable of detecting more complex vulnerabilities or design flaws. If a design decision is made that results in a security vulnerability, such as the decision to use a sequence generator to create session IDs for authenticated users, it is highly likely that a static analysis scan won’t catch it.
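
To make the session ID example concrete, here is a minimal sketch in Python; both classes are hypothetical illustrations rather than code from any real application, but they show why a scanner has little to go on:

    # Minimal sketch (hypothetical classes) contrasting a predictable session ID
    # generator with a cryptographically random one.
    import secrets

    class SequentialSessionIds:
        """Flawed design: IDs are predictable, so seeing one lets an attacker guess others."""
        def __init__(self):
            self._counter = 1000

        def new_session_id(self):
            self._counter += 1
            return str(self._counter)

    class RandomSessionIds:
        """Sound design: IDs come from a cryptographically secure random source."""
        def new_session_id(self):
            return secrets.token_urlsafe(32)  # roughly 256 bits of entropy

    if __name__ == '__main__':
        print('Predictable:', SequentialSessionIds().new_session_id())
        print('Random:     ', RandomSessionIds().new_session_id())

A scanner sees two syntactically clean classes; only the design context reveals that the first one hands attackers predictable session tokens.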

Dynamic Testing Helps Close the Gaps
Dynamic testing, the process of exploring a running application to find vulnerabilities, helps cover the gaps left by static analysis scanners. Teams with a mature SDLC integrate dynamic testing near the end of the development process to catch any remaining vulnerabilities. Testing is generally limited to a single period near the end of the process for two reasons:

  • It is generally ineffective to perform dynamic testing on an application that is only partially completed, as many of the vulnerabilities reported may be due to unimplemented code. These vulnerabilities are essentially false positives, as the fixes may already be planned for an upcoming sprint.
  • Testing resources tend to be limited due to budget or other constraints. Most organizations, especially larger ones with dozens or even hundreds of development projects, can’t afford to have enough dedicated security testers to provide testing through the entire lifecycle of every project.

Putting the Power into Developers' Hands
One way to help relieve the second issue highlighted above is to put some of the power to perform dynamic testing directly into the hands of the developers. Just as static analysis tools have become an essential part of the automated build process that developers work with on a daily basis, wouldn’t it be great if we could allow developers to perform some perfunctory dynamic scanning on their applications before making a delivery?

The first issue that needs to be addressed is which tool to start with. There is a plethora of testing tools out there, with multiple options for every imaginable type of test, from network scanners to database testers to web application scanners. Trying to select the right tool, learn how to configure and use it, and interpret the results it generates can be a daunting task.

Fortunately, many testing tools include helpful APIs that allow you to harness the power of the tool and integrate it into a different program. Often these APIs allow security testers to create Python scripts that perform multiple scans with a single script execution.
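
OWASP ZAP is a good example: it exposes its scanning engine through an API with an official Python client. Here is a minimal sketch, assuming a local ZAP instance listening on 127.0.0.1:8080, the python-owasp-zap-v2.4 client library, and placeholder values for the API key and target URL:

    # Minimal sketch: drive ZAP's spider and active scanner from a script.
    # Assumes a local ZAP instance on 127.0.0.1:8080 and the python-owasp-zap-v2.4
    # client; the API key and target URL below are placeholders.
    import time
    from zapv2 import ZAPv2

    TARGET = 'http://localhost:8000/myapp'  # hypothetical application under test
    zap = ZAPv2(apikey='changeme',
                proxies={'http': 'http://127.0.0.1:8080',
                         'https': 'http://127.0.0.1:8080'})

    zap.urlopen(TARGET)                      # make sure the target is in ZAP's site tree

    spider_id = zap.spider.scan(TARGET)      # crawl the application
    while int(zap.spider.status(spider_id)) < 100:
        time.sleep(2)

    scan_id = zap.ascan.scan(TARGET)         # actively scan what the spider found
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=TARGET):
        print('{risk:8} {alert} -> {url}'.format(**alert))

A script like this can run unattended as a nightly job, or be kicked off by hand before a delivery.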

Taking the First Step
With so many tools to choose from, where should a developer who wants to start performing dynamic testing begin? To help answer that question, Eric Johnson and I set out to create a plugin that would allow a developer to utilize some of the basic scanning functionality of OWASP's Zed Attack Proxy (ZAP) within the more familiar confines of their IDE. Since I am most familiar with the Eclipse IDE for Java, I decided to tackle that plugin first.

The plugin is designed to perform two tasks: spider a URL provided by the developer, then run an active scan of all of the in-scope items detected during the spider. After the spider and scan are complete, the results are saved into a SecurityTesting project within Eclipse. More information about the SecurityTesting plugin, including installation and execution instructions, can be found at the project's GitHub repository:

https://github.com/polyhedraltech/SecurityTesting

Using this plugin, developers can begin performing some basic security scans against their application while they are still in development. This gives the developer feedback about any potential vulnerabilities that may exist, while reducing the workload for the security testers later on.

Moving Forward
Once you have downloaded the ZAP plugin and started getting comfortable with what the scans produce and how ZAP interacts with your web application, I encourage you to begin working more directly with ZAP as well as other security tools. When used correctly, dynamic testing tools can provide a wealth of information. Just be aware that dynamic testing tools are prone to the same false positives and false negatives that affect static analysis tools. Use the vulnerability reports generated by ZAP and other dynamic testing tools as a guide to determine what may need to be fixed in your application, but be sure to validate that a finding is legitimate before proceeding.

Hopefully you or your development team can find value in this plugin. We are in the process of enhancing it to support more dynamic testing tools as well as introducing support for more IDEs. Keep an eye on the GitHub repository for future progress!

To learn more about secure coding in Java using a Secure Development Lifecycle, sign up for DEV541: Secure Coding in Java/JEE!

About the Author
Gregory Leonard (@appsecgreg) has more than 17 years of experience in software development, with an emphasis on writing large-scale enterprise applications. Greg's responsibilities over the course of his career have included application architecture and security, infrastructure design and implementation, security analysis, security code review, web application penetration testing, and security tool development. Greg is also co-author of the SANS DEV541 Secure Coding in Java/JEE course and a contributing author for the security awareness modules of the SANS Securing the Human Developer training program. He is currently employed as an application security consultant at Optiv Security, Inc.

Security Testing: Less, but More Often, Can Make a Big Difference

Late last year SANS conducted a survey on application security practices in enterprises. One of the questions asked in the survey was how often organizations are doing security testing. The responses were:

  • No security testing policy for critical apps: 13.5%
  • Only when applications are updated, patched or changed: 21.3%
  • Annually: 14.3%
  • Every 3 months: 18.0%
  • Once a month: 9.5%
  • Ongoing: 23.3%

What was most interesting to me was that almost ¼ of organizations are doing security testing on an ongoing, near-continuous basis – testing applications as they are being developed or changed.

The only way to test this frequently, and the only effective way to scale security testing in large enterprises with thousands of applications and hundreds of web sites, is to rely heavily on automation. That includes automated functional testing of security features (authentication, access control, password management, and activity auditing can all be unit tested like any other code), together with Dynamic Application Security Testing (DAST) and/or Static Application Security Testing (SAST) tools.
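
To make the unit-testing point concrete, here is a minimal sketch; the can_view_payroll rule and the User class are hypothetical stand-ins for whatever access control code your application actually has:

    # Minimal sketch: an access control rule unit tested like any other code.
    # The User class and can_view_payroll rule are hypothetical examples.
    import unittest

    class User:
        def __init__(self, roles):
            self.roles = set(roles)

    def can_view_payroll(user):
        # Hypothetical rule: only HR staff and admins may view payroll data.
        return bool(user.roles & {'hr', 'admin'})

    class PayrollAccessTest(unittest.TestCase):
        def test_admin_allowed(self):
            self.assertTrue(can_view_payroll(User(['admin'])))

        def test_regular_employee_denied(self):
            self.assertFalse(can_view_payroll(User(['employee'])))

        def test_no_roles_denied(self):
            self.assertFalse(can_view_payroll(User([])))

    if __name__ == '__main__':
        unittest.main()

Tests like these run in every build, so a regression in a security feature is caught the same way as any other broken behavior.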

Frequent, automated security testing is made easier through on-demand, testing-as-a-service offerings in the Cloud. These services, available from companies like NT OBJECTives, Qualys, Veracode, WhiteHat, as well as HP, IBM and others, are attractive to mid-size companies and even enterprises that don't have the skilled people and testing infrastructure to do the work themselves.

There's still an important place for manual security testing, including penetration testing. Even with proper setup and customization, dynamic testing tools won't find the kinds of problems that a good pen tester will, but automated testing will find important security problems quickly. Deeper and more expensive manual testing can then be done on a risk basis, focusing on critical apps and critical issues, which means that companies can get more value out of both kinds of testing.

Finding security bugs faster means that they will get fixed

For most organizations the challenge isn’t finding all of the vulnerabilities, it’s getting the ones that you already know about fixed and preventing them from happening in the future.

Continuous, or at least frequent, automated testing can make it easier for developers to understand and fix security bugs, by providing developers with fast feedback on their work. Fixing a bug in code that you just wrote is a no-brainer. It's much cheaper and easier to understand and fix this kind of bug than it is to figure out a bug in code that you wrote a month ago or somebody else wrote a year ago. Nobody needs to make a business case for opening up the code and figuring out what needs to be fixed. There's less chance of breaking something and introducing a regression if you're working on code that you know well. And with fast feedback, you can get confirmation quickly that you made the right change and fixed the problem properly so that you can move on.

This means that security vulnerabilities are more likely to get fixed, and security vulnerability windows shortened. But even more important over the long term, if developers can get this kind of continuous feedback, if they can see the results of their work quickly, they will learn more and learn faster. If you write some code and find out quickly that there is a bug, it’s easier to understand what kind of code you should be writing. You will be less likely to make the same mistakes going forward, which means that your code will be more secure. It’s a self-reinforcing cycle, and a good one.

If automated security testing can be built in from the beginning, as software is being developed, then the returns can be significant. Developers will learn quickly and start writing more secure software naturally, especially if they are using Agile methods, building and delivering software continuously. Automated testing is what these teams depend on already, and the only way for appsec to keep up with the rapid pace of change in Agile development.

The challenges for automated security testing vendors

For this to really work, automated testing vendors have to offer solutions that:

  1. keep up with changing technology – RIA, HTML5, new protocols, Cloud and Mobile apps (for DAST) and new languages and frameworks (for SAST)
  2. provide high quality feedback – low false positives, clear context for what needs to get fixed and how severe the problem is
  3. integrate nicely into existing development workflows, into the testing platforms and bug reporting platforms that developers already use
  4. run fast, really fast. Static analysis can be added easily to Continuous Integration, which means that developers can get feedback the same day or the next day. But this isn't fast enough. What is really needed is to create automated testing pipelines, starting with incremental testing that provides immediate feedback on what a developer just changed, followed by less frequent but more comprehensive scanning (a minimal sketch of this idea follows this list). Grammatech (a spin-off from Cornell), for example, does incremental static analysis, and some static analysis vendors such as Coverity and Klocwork are trying to provide close-to-immediate feedback directly in the developer's IDE as developers write code. Dynamic testing takes more work to set up and is more expensive and time-consuming to run, but if this testing can be targeted and run incrementally based on what developers last changed, it can radically reduce the time and cost of fixing bugs.
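
Here is that sketch: assuming a git checkout and using Bandit as an example static analysis tool for Python code (substitute whatever scanner fits your stack), a CI step can scan only the files touched by the latest commit and leave the comprehensive scan to a scheduled job.

    # Minimal sketch of an incremental CI step: run a static analysis tool only
    # against the files changed in the last commit. Assumes a git checkout and
    # uses Bandit as an example scanner; full scans run in a separate scheduled job.
    import subprocess
    import sys

    def changed_python_files():
        # Files touched by the most recent commit
        out = subprocess.run(['git', 'diff', '--name-only', 'HEAD~1', 'HEAD'],
                             capture_output=True, text=True, check=True)
        return [f for f in out.stdout.splitlines() if f.endswith('.py')]

    def main():
        files = changed_python_files()
        if not files:
            print('No Python files changed; skipping incremental scan.')
            return 0
        # Scanning only what just changed keeps feedback down to seconds
        return subprocess.run(['bandit', '-q', *files]).returncode

    if __name__ == '__main__':
        sys.exit(main())
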
When security testing can be done automatically and continuously, it becomes just another part of the way that developers work. Security bugs are found and tracked in the same way as other bugs, and fixed alongside them. Every step in this direction will help developers take ownership of software security, and change how we build secure software.

What’s the point of application pen testing?

Penetration testing is one of the bulwarks of an application security program: get an expert tester to simulate an attack on your system, and see if they can hack their way in. But how effective is application penetration testing, and what should you expect from it?

Gary McGraw in Software Security: Building Security In says that

Passing a software penetration test provides very little assurance that an application is immune to attack…

This is because

It’s easy to test whether a feature works or not, but it is very difficult to show whether or not a system is secure enough under malicious attack. How many tests do you do before you give up and decide “secure enough”?

Just as you can’t “test quality in” to a system, you can’t “test security in” either. It’s not possible to exhaustively pen test a large system – it would take too long and it wouldn’t be cost effective to even try. The bigger the system, the larger the attack surface, the more end points and paths through the code, the less efficient pen testing is – there’s too much ground to cover, even for a talented and experienced tester with a good understanding of the app and the help of automated tools (spiders and scanners and fuzzers and attack proxies). Because most pen testers are hired in and given only a few weeks to get the job done, they may not even have enough time to come up to speed on the app and really understand how it works – as a result they are bound to miss testing important paths and functions, never mind complete a comprehensive test.

This is not to say that pen testing doesn't find real problems – especially if a system has never been tested before. As Dr. McGraw points out, pen testing is especially good for finding problems with configuration (like failing to restrict URL access in a web app) and with the deployment stack. Pen tests are also good for finding weaknesses in error handling, information leaks, and naïve mistakes in authentication and session management, using SSL incorrectly, that kind of thing. Pen testers have a good chance of finding SQL injection vulnerabilities and other kinds of injection problems, and authorization bypass and privilege escalation problems if they test enough of the app.

One clear advantage of pen testing is that whatever problems the pen tester finds are real. You don’t have to fight the “exploitability” argument with developers for bugs found in pen testing – a pen tester found them, ergo they are exploitable, so now the argument will turn to “how hard it was to exploit”…. But there are many kinds of problems that are much harder to test for at the system level like this – flaws in secure design, back-end problems with encryption or privacy, concurrency bugs. The best way (maybe the only way) to find these problems is through a design review, code review, or static analysis.

In the end, no matter how good the test was, there’s no way to know what problems might still be there and weren’t found – you can only hope that the pen testers found at least some of the important ones. So if pen testing by itself can’t prove that the system is safe and secure, what’s the point?

Is it more important to pass the test?

Pen testing is often done as a late-stage release gate – an operational acceptance test required before launching a new system. Everyone is under pressure to get the system live. Teams work hard to pass the test and get the checkmark in order to make everyone happy and get the job done. But, just like in school, focusing on just passing the test doesn’t always encourage the right behavior or results.

In this situation, it's natural for a development team to offer as little cooperation and information as possible to the pen testers – it's clearly not in the development team's interest to make it easy for a pen tester to break the system. While this may more closely mimic the "real world" – you wouldn't give real attackers information on how to attack your system, at least not knowingly – you lose most of the value of pen testing the app. Passing a time-boxed black box pen test like this a few weeks before launch doesn't mean much. It's what Gary McGraw calls "pretend security" – going through the motions, making it look like you've done the responsible thing.

Or really learn…?

The real point of pen testing is to learn: to get expert help in understanding more about the security strengths and weaknesses in your system of course, but more importantly, to learn more about the security posture of your development team and the effectiveness of the security controls and practices in the SDLC that you are following.

You’ve done everything that you can think of to design and build a secure system. Now you want to see what will happen when the system is under attack. You want to learn more about the bad guys, the black hats: how they think, what tools they use, what information and problems they look for, the soft spots. What they see when they attack your system. And what you see when the system is under attack – can you detect that an attack is in progress, do your defenses hold up? You also want to learn more about changes in the threat landscape – new vulnerabilities, new tools, and new attacks that you need to protect against.

To do this properly, you need to get the best pen testing help that you can find and afford – whether from inside your company or from a consultancy. Then be completely transparent. Don't waste their time or your money. Give the pen testers whatever information and help they need: information on the architecture and platform, the deployment configuration, and how the app works. Train them so that they understand the core application functions and navigation. Show them the ways in and where to go when they get in. Share information on past pen test results if you have them, on problems found in your own testing and reviews, and on the parts of the app that you are worried about. If they get stopped by upfront protection (a WAF or IPS or…), good for you. Then disable this protection for them if you can, to see what weaknesses they can find behind it.

Some pen testers want the source code too, so that they can review the code and look for other problems. This can be a problem for many organizations (it’s your, or your customers’, intellectual property and you need to protect it), but if you can’t trust the pen testers with the source code, what are you doing letting them try to break into your system?

Look carefully at the problems that they find

Whatever problems they find are real, exploitable vulnerabilities – bugs that need to be taken seriously. Make sure that you understand the findings – what they mean, how and where they found the problems, how to test for them or check for them yourself. How easy was it – did you fail basic checks in an automated scanner, or did they have to do some super-ninja karate to break the code? What findings surprised you? Scared you?

Then get the pen testers and development team to agree on risks. If you think of risks in DREAD terms (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) the pen tester can explain Exploitability, Reproducibility and Discoverability – after all, they discovered and exploited the bugs, and they should know how easy the problems are to reproduce. Then you need to understand and assess Damage and the impact on Affected Users to decide how important each bug is and what needs to be fixed.
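
As a minimal sketch of how such a rating might be rolled up – the 1-10 scale, the simple average, and the sample ratings are illustrative assumptions rather than a prescribed formula:

    # Minimal sketch of a DREAD-style rating: rate each factor 1-10 and average.
    # The scale, the averaging, and the sample numbers are illustrative assumptions.
    from statistics import mean

    def dread_score(damage, reproducibility, exploitability,
                    affected_users, discoverability):
        return mean([damage, reproducibility, exploitability,
                     affected_users, discoverability])

    # Pen tester supplies Reproducibility, Exploitability and Discoverability;
    # the development team assesses Damage and Affected Users.
    finding = {
        'damage': 8,            # e.g. read access to customer records
        'reproducibility': 9,   # works every time with a canned request
        'exploitability': 6,    # needs an authenticated session
        'affected_users': 7,    # everyone using the reporting module
        'discoverability': 5,   # found with a scanner plus manual follow-up
    }

    print('Risk rating: {:.1f} / 10'.format(dread_score(**finding)))
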

Track everything that the pen testers found – preferably in the same bug tracking system that you use to track the other bugs that the team needs to fix. Fix serious vulnerabilities of course as soon as you can, as you would with any other serious bug. But take some time to think about why the problem was there to be found in the first place – what did you miss in selecting and deploying the platform, in design, coding, reviews and testing. And don’t just look at each specific finding and stop there – after all, you know that the pen tester didn’t and couldn’t find every bug in the system. If they found one or two or more cases of a vulnerability, there are bound to be more in the code: it’s your job to find them.

Feed all of this back into how you design and build and deploy software. Not just this system, but all the systems that you are responsible for. You’ve been given a chance to get better, so make the best of it. Before an attack happens for real.

Exchanging and sharing of assessment results

[Cross posted from SANS ISC]

Penetration tests and vulnerability assessments are becoming more common across the whole industry as organizations find it necessary to prove a certain level of security for their infrastructure and applications. The need to exchange test result information is also increasing substantially. External parties ranging from business partners and clients to regulators may ask for proof that tests were done, along with the test results (a "clean bill of health").

The sharing of pentest information can create a huge debate: just how much do you want to share? There are at least a couple of ways to get this done. The seemingly easiest way is to share the whole report, including the summary and the detailed findings. While this seems easy, the party sharing the report may be exposing too much information. Pentest reports can be a treasure map for attacking an infrastructure and/or application. The detailed report usually includes ways to reproduce the attack, effectively documenting a potential attack path step by step. It is true that vulnerabilities should be fixed as soon as possible after the pentest is done, but consider this scenario: the day after the pentest is done, the regulators show up and ask for the most recent test result. If you are not above the law, you will be handing over the latest report, which is full of unfixed flaws.

Another way to share pentest results is to share only the executive summary portion. This portion of the report usually gives a good overall view of what was done in the test and what sort of overall security posture the test subject is in. While this protects the party sharing the results, it may not give the reviewer the right kind of information. Some executive summaries do not contain sufficient information, especially those written by less competent testers. Aside from that, one trend I am noticing is that the less experienced the recipient of the test results, the more he or she wants to see the whole report; they just don't know how to determine the security posture from the executive summary alone.

There is no current industry standard for this kind of communication; all the exchanging and sharing currently seems to be done on an ad-hoc basis. Some like it one way and others like it another. My current baseline for this kind of communication is a well-written executive summary containing an actual summary of the test, the methodologies used, and a high-level view of the vulnerabilities found – that is sufficient to give a decent view of the overall security posture. This can obviously escalate into sharing the full report if the quality of the executive summary just isn't there.

If you have any opinions or tips on how to communicate this kind of information, let us know.

Response: Pentesting Coverage

The following is a response to a previous article I wrote on pentesting coverage. The person I had the IM discussion with was Daniel Miessler. He responded in his own blog, and sent me the excerpt below as a response. Thanks for the offline and online comments so far. Certainly an interesting topic to discuss!


Pentesting: Do you need “coverage” ?

Last week, I had a discussion via instant messenger. The discussion essentially revolved around the need for coverage in a pentest. It has always been my conviction that a pentest should find as many problems as possible. In my opinion, the pentest is not over once I get root on the system. In many ways, that is where it starts.

I think it is a matter of pentesting philosophy. I see myself primarily as a coder. A pentest is, for me, part of my software testing regimen. As with my other tests, code coverage is of the utmost importance. I don't consider my application working until it has been fully and thoroughly tested. A pentest is just another, very special, test I subject my code to. As a result, I am not a big fan of "black box" testing, or "early dawn raids" as I call them. They may find a problem, but the tester will typically waste a lot of time finding and exploiting vulnerabilities that may have been trivial to exploit with a bit of source code, or after talking to the developer. I would rather have the pentester spend their precious time going after yet another possible vulnerability.

I do understand that a pentest does not find all bugs, but the aim should be to do just that. I strongly believe that a pentester should be held to the same standards as a developer.  Would you accept software that works once?  Code that only has one bug fixed?  Just because it is hard, doesn’t mean that we shouldn’t attempt it.  In many ways, this is what I enjoy about being a developer:  Hard problems.

Is it possible to come close to that goal? I think it is. It all comes down to having a good framework and applying it thoroughly. You do your reconnaissance, and you don't stop at the first web application you find, unless the scope of the project limits you. Next, you map out the web application and try to find not just all URLs or "pages," but all features. A feature you miss in the mapping phase is a feature you will miss in your test. Next, you discover vulnerabilities. And finally, you try to exploit them. Once you manage to exploit a vulnerability, you are likely to find more content, and your process starts all over again.

Sounds boring and tedious?  Yes it is!  If you don’t know how to script.  I say this a lot:  If you don’t script, you will soon be replaced by a script.  If you want to distinguish yourself as a pentester, you will have to understand your tools well enough to extend them and build upon them.
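
As a minimal sketch of what scripting the mapping phase can look like – assuming the requests and beautifulsoup4 libraries, a placeholder starting URL, and an application you are authorized to crawl – a few dozen lines can enumerate the pages and forms you would otherwise have to track by hand:

    # Minimal sketch: crawl an in-scope application and collect pages and forms,
    # so that features missed during mapping become visible. Assumes the requests
    # and beautifulsoup4 libraries; the starting URL is a placeholder.
    from urllib.parse import urljoin, urlparse
    import requests
    from bs4 import BeautifulSoup

    START = 'http://localhost:8000/myapp/'  # hypothetical in-scope application

    def crawl(start, limit=200):
        seen, queue, forms = set(), [start], []
        host = urlparse(start).netloc
        while queue and len(seen) < limit:
            url = queue.pop()
            if url in seen or urlparse(url).netloc != host:
                continue  # stay in scope, don't revisit
            seen.add(url)
            try:
                soup = BeautifulSoup(requests.get(url, timeout=5).text, 'html.parser')
            except requests.RequestException:
                continue
            # Forms are the features you cannot afford to miss
            forms.extend((url, f.get('action'), f.get('method', 'get'))
                         for f in soup.find_all('form'))
            queue.extend(urljoin(url, a['href']) for a in soup.find_all('a', href=True))
        return seen, forms

    if __name__ == '__main__':
        pages, forms = crawl(START)
        print('Mapped %d pages and %d forms' % (len(pages), len(forms)))

The point is not the crawler itself, but that extending or replacing your tools this way turns a tedious manual checklist into something repeatable.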

Web application penetration testing vs. vulnerability assessment

I deal with infrastructure and application security testing on a regular basis. On the infrastructure/network side, the consulting and testing market is much more mature, and the definitions of a pentest and a vulnerability assessment are industry accepted. It is easy to communicate with other folks about the work involved. On the application side, things are not as well defined. It will be at least a couple more years before the definition of an "application pentest" is accepted.

What is vulnerability assessment?

According to Wikipedia, "A vulnerability Assessment is the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system." In short, it involves doing whatever is needed to determine whether there is a weakness or vulnerability in the system under assessment, and then reporting on it. For application testing, you would throw some test input at the application or try a number of test cases and see if it is vulnerable to any of the issues you are testing for.

In general real-world terms, the tester for a VA (vulnerability assessment) is expected to perform a reconnaissance phase, which allows the tester to understand the application well enough to determine whether there are any shortcuts to compromising the system, and to gather enough data about the application (such as the platform it is running on or what other virtual hosts are present) to support the later testing phases.

Then, the tester is expected to map out the application and understand the application flow and the relationships between objects in the application. Some vulnerabilities, such as business logic flaws, may also be revealed during this phase. Mapping is followed by the discovery of vulnerabilities; for input-related flaws this might involve automated tools and manual validation tests. There are also various other test cases that need to be tested manually, especially session-related and access control-related flaws that are not easily automated.

In general, testers follow a common testing framework, such as the OWASP Testing Guide, to ensure sufficient coverage of vulnerabilities during the process. After discovery, the vulnerabilities are evaluated and usually manually verified again. A risk rating is then assigned to each vulnerability for inclusion in the report.

Running a vulnerability scanner against a web application is a form of vulnerability assessment. It is also a form of assessment that is not very complete or thorough; in general, an automated scanner covers about 50-70% of the vulnerabilities in a given application.

What is penetration testing?

Penetration testing, or "pentesting," includes the entire vulnerability assessment process plus an important extra step: exploiting the vulnerabilities found during discovery. You may ask, "Just a one step difference?" Pretty much, but this one step separates the boys from the men. I often tell the students in my pentest class that it is common for a pentester to spend 20% of his or her time locating a single vulnerability and 80% exploiting it. The process of exploitation usually involves a lot of trial and error and may not work the first time. Depending on the type of vulnerability being exploited, additional general system knowledge may be required to aid the exploitation process.

The better pentesters don't usually stop at exploiting one single vulnerability. For example, a single CSRF vulnerability can be somewhat limited, but bundle it with an XSS vulnerability and you have a much bigger problem on your hands. In a lot of cases, an expert pentester can leverage two or three low- to medium-risk vulnerabilities and turn them into a critical exposure.

The added benefit of a pentest is being able to see the vulnerabilities put into active exploitation and to show their actual maximum impact. Due to the nature of pentesting, exploitation does not really have an established framework; it is highly dependent on the skill set of the individual or team performing the test.

An example to show the difference

Let's use an example to illustrate the difference. Say the tester is testing for SQL injection and puts a single quote (') into every input field. For one particular field, the quote generates a SQL error in the resulting page, such as "You have an error in your SQL syntax near '\'0' at line 1". This is a tell-tale sign of error-based SQL injection. A vulnerability assessment might do a bit of further validation, such as trying to dump the current user name to confirm the vulnerability, and then go straight to reporting.
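
A minimal sketch of that VA-level probe might look like the following; the endpoint, parameter names and error signatures are illustrative assumptions, and it should only ever be pointed at an application you are authorized to test:

    # Minimal sketch: submit a single quote to each parameter and look for
    # database error signatures in the response. The endpoint, parameters and
    # signatures are illustrative; only run this against apps you may test.
    import requests

    TARGET = 'http://localhost:8000/myapp/search'  # hypothetical endpoint
    PARAMS = ['q', 'category', 'sort']             # hypothetical parameters

    ERROR_SIGNATURES = [
        'you have an error in your sql syntax',  # MySQL
        'unclosed quotation mark',               # SQL Server
        'sqlite error',                          # SQLite
        'syntax error at or near',               # PostgreSQL
    ]

    def probe(url, params):
        findings = []
        for name in params:
            # Single quote in one parameter, benign values in the rest
            payload = {p: ("'" if p == name else 'test') for p in params}
            body = requests.get(url, params=payload, timeout=5).text.lower()
            if any(sig in body for sig in ERROR_SIGNATURES):
                findings.append(name)
        return findings

    if __name__ == '__main__':
        for param in probe(TARGET, PARAMS):
            print('Possible SQL injection via parameter: %s' % param)
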

A pentest, on the other hand, would likely spend a lot more time on this error alone. The pentester would figure out how to tack extra logic or command structures onto the current SQL statement so that they can control the database. If possible, the tester will enumerate the database structure and perhaps dump the entire database contents. If permissions are not set properly, the pentester may also be able to jump into an OS command context and start executing commands on the host. Obviously, all of these attacks require patience and take a lot of time to succeed.

What’s more popular?

If there is such a difference and pentesting demonstrates so much more, why don't we just do pentesting? Well, there is a cost difference: pentesting is significantly more expensive than a vulnerability assessment. In fact, the market is currently leaning towards pentesting; those who are concerned about web application security are willing to spend the money to get what they think is the best (it costs more, so it must be better). In the next few years, as the general public becomes more educated about security testing for web applications, I am sure the market will adopt both services – vulnerability assessment and penetration testing. Until then, I have to be very careful when writing requirements and reviewing quotations for security testing consulting work.