Exploring the DevSecOps Toolchain

 

The authors of the SANS Institute’s DEV540 Secure DevOps & Cloud Application Security course created the Secure DevOps Toolchain poster to help security teams create a methodology for integrating security into the DevOps workflow. The poster breaks DevOps down into 5 key phases and includes a massive list of open source tools to help DevOps teams level up their security.

In a new presentation titled Continuous Security: Exploring the DevSecOps Toolchain (Phases 1-2), I spend the entire hour walking the attendees through the first 2 phases of the DevOps workflow:

  • Pre-Commit: Security controls that take place as code is written and before code is checked into version control.
  • Commit: Fast, automated security controls that are invoked by continuous integration tools during the build.

I had the opportunity to present this talk at the Australia Information Security Association’s National Cyber Conference 2018 event this month in Melbourne, as well as a local SANS Community event in Sydney. For those that asked, the presentation slides can be found here:

Continuous Security: Exploring the DevSecOps Toolchain (Phases 1-2)

In the next round, we’ll pick up where we left off and explore Phases 3-4. Until then, cheers!

To learn more about DevSecOps and Cloud Security, check out the DEV540: Secure DevOps and Cloud Application Security course!
 

About the Author

 
Eric Johnson is a co-founder and principal security engineer at Puma Security focusing on modern static analysis product development and DevSecOps automation. His experience includes application security automation, cloud security reviews, static source code analysis, web and mobile application penetration testing, secure development lifecycle consulting, and secure code review assessments.

Previously, Eric spent 5 years as a principal security consultant at an information security consulting firm helping companies deliver secure products to their customers, and another 10 years as an information security engineer at a large US financial institution performing source code audits.

As a Certified Instructor with the SANS Institute, Eric authors information security courses on DevSecOps, cloud security, secure coding, and defending mobile apps. He serves on the advisory board for the SANS Security Awareness Developer training program, delivers security training around the world, and presents security research at conferences including SANS, BlackHat, OWASP, BSides, JavaOne, UberConf, and ISSA.

Eric completed a bachelor’s degree in computer engineering and a masters degree in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, GSSP-Java, and AWS Developer certifications.
 

Your Secure DevOps Questions Answered

 

As SANS prepares for the 2nd Annual Secure DevOps Summit, Co-Chairs Frank Kim and Eric Johnson are tackling some of the common questions they get from security professionals who want to understand how to inject security into the DevOps pipeline, leverage leading DevOps practices, and secure DevOps technologies and cloud services.

If you are considering attending the Secure DevOps Summit or have already registered, this post is a good warmup, as these questions will be covered in-depth at the Summit on October 22nd-23rd in Denver, CO.
 

1. What do security professionals need to do to implement Secure DevOps processes?

SecDevOps requires security and compliance teams to integrate their processes, policies, and requirements into the same workflows used by development and operations. This cannot happen without the security team understanding the current development and operations engineering processes, technologies, and tools.

The Continuous Integration (CI) tool, one of the most important pieces in the DevOps process, manages the workflow, executes build steps, and ties the entire toolchain together. Common examples include Jenkins, Visual Studio Team Services (VSTS), Travis CI, Bamboo, and TeamCity.

Leveraging the CI tool, SecDevOps teams will start to integrate important security touch points such as:

  • Pre-Commit Security Hooks – Code checkers to scan for secrets (private keys, system passwords, cloud access keys, etc.) before committing code to version control.
  • Static Source Code Scanning – Scanning source code files (infrastructure templates, application source code) for security vulnerabilities in the build pipeline.
  • Security Unit Tests – Custom security unit tests written by the security and engineering teams to provide coverage for abuser stories and negative test cases.
  • Dynamic Application Security Testing – Scanning a running application for common security vulnerabilities (OWASP Top 10) and misconfigurations and enforcing acceptance criteria using tools such as Gauntlt / BDD-Security.
  • Secrets Management – Automating the provisioning and storage of secrets using tools such as Vault, Conjur, AWS KMS, and Azure Key Vault.
  • Continuous Monitoring – Monitoring the production environment for vulnerabilities, anomalies, and in-progress attacks using tools such as StatsD, Graphite, and Grafana.
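As a concrete illustration of the first touch point, a pre-commit hook can be as simple as a script that scans staged files for secret patterns before allowing the commit. The sketch below is a minimal Python example; the patterns are illustrative assumptions, and real tools (git-secrets, talisman, etc.) ship far more comprehensive rule sets:

```python
import re

# Illustrative secret patterns only -- not any specific tool's rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
}

def scan_text(text):
    """Return a list of (pattern name, line number) findings."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Wired into `.git/hooks/pre-commit`, a script like this would scan the staged diff and exit non-zero when any findings are returned, blocking the commit before the secret ever reaches version control.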

 

2. What Secure DevOps tools should I use?

The authors of the SANS Institute’s DEV540 Secure DevOps & Cloud Application Security course put together a Secure DevOps Toolchain poster. Our SecDevOps Toolchain breaks the SecDevOps process down into 5 key phases with a detailed list of associated tools.

  • Pre-Commit: Security controls that can take place before code is checked into version control.
  • Commit: Fast, automated security checks that are invoked by continuous integration tools during the build.
  • Acceptance: Automated security acceptance tests, functional tests, and out of band security scans that are invoked by continuous integration tools as artifacts are delivered to testing / staging environments.
  • Production: Security smoke tests and configuration checks that are invoked by continuous integration tools as artifacts are delivered to the production environment.
  • Operations: Continuous security monitoring, testing, auditing, and compliance checks running in production and providing feedback to SecDevOps teams.

 

3. How important is culture in an organization moving towards SecDevOps?

John Willis, Damon Edwards, and Jez Humble came up with the CALMS model to understand and describe DevOps:

  • Culture – People come first
  • Automation – Rely on tools for efficiency + repeatability
  • Lean – Apply Lean engineering practices to continuously improve
  • Measurement – Use data to drive decisions and improvements
  • Sharing – Share ideas, information and goals across organizational silos

There is a reason that Culture is first on the list. DevOps creates an open, diverse working environment where teams work together to solve problems, are empowered to act, fail fast, and continuously improve.

Here is the biggest challenge integrating the “Sec” into “DevOps”: traditional security cultures are always ready to say “no”, fail to share information across the organization, and do not tolerate failure.

This is also evidenced in how CISOs lead their organizations and interact with development teams and the rest of the business. There are three different types of CISOs: the dictator, the collaborator, and the partner.

By reflexively saying “no”, the dictator fails to consider the unnecessary friction security introduces into existing development and business processes. For SecDevOps to be successful, security leaders need to become collaborators by understanding engineering practices, processes, and tools. Even better, they can become partners by understanding business goals and earning a seat at the executive table.
 

4. How do you measure the success of SecDevOps?

SecDevOps teams often measure success by monitoring change frequency, deployment success / failure rates, and the mean time to recover (MTTR) from a failure.

For the “Sec” component, assume that a vulnerability assessment or penetration test uncovers a critical patch missing from a system. How long does it take the feedback from the security team to enter the operations workflow, build a new gold image, run automated acceptance tests to validate the changes, and roll the gold image into production? Mean time to recover (MTTR) is a critical metric for measuring success.

Measuring the number of vulnerabilities discovered via automated testing in the pipeline versus the number escaping to production shows the effectiveness of the pipeline and its improvement over time.

Tracking the window of exposure, or how long vulnerabilities remain open in production, provides important metrics for management to see progress.

These metrics also help address the managerial concerns discussed above, because they show that the process is working and is making the organization more secure.
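The window-of-exposure and MTTR metrics above reduce to simple date arithmetic over the vulnerability tracker. The sketch below assumes a hypothetical record format (vulnerability id mapped to discovered/closed dates), not any particular scanner’s output:

```python
from datetime import date

def window_of_exposure(findings, today):
    """Days each vulnerability has been (or was) exposed in production.

    `findings` maps a vulnerability id to (discovered, closed) dates,
    with closed=None for issues still open.
    """
    return {
        vuln_id: ((closed or today) - discovered).days
        for vuln_id, (discovered, closed) in findings.items()
    }

def mean_time_to_recover(findings):
    """MTTR in days, computed over closed findings only."""
    durations = [(c - d).days for d, c in findings.values() if c is not None]
    return sum(durations) / len(durations) if durations else None
```

Trending these numbers per sprint or per release is what gives management visibility into whether the pipeline is actually shrinking the time vulnerabilities stay open.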
 

5. What does “Security as code” mean?

Automation requires all steps to be written as code and checked into version control. Development teams have been writing code since the beginning. Modern operations teams now write “infrastructure as code” using tools like Chef, Puppet, and Ansible to create and configure cloud infrastructure, on-premise infrastructure, gold images, and network devices.

Security as code takes this approach a step further by converting manual security and compliance steps into automated, repeatable scripts that can be executed inside a CI pipeline. Security tools are quickly evolving to have APIs and command line interfaces to support “security as code” instead of manually configuring a scanner and pressing a button.
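A typical “security as code” step is a script in the pipeline that runs a scanner CLI, parses its report, and decides whether the build should fail. The sketch below assumes a hypothetical JSON report format (a list of findings, each with a `severity` field); real scanners each have their own schema, so the parsing would need to be adapted:

```python
import json

def evaluate_scan(report_json, max_severity="medium"):
    """Return (passed, blocking_findings) for a scanner report.

    Findings above the `max_severity` threshold block the build.
    The report schema here is an illustrative assumption.
    """
    order = ["low", "medium", "high", "critical"]
    threshold = order.index(max_severity)
    findings = json.loads(report_json)
    blocking = [f for f in findings if order.index(f["severity"]) > threshold]
    return len(blocking) == 0, blocking
```

In a CI job, the wrapper would invoke the scanner, feed its output to `evaluate_scan`, and exit non-zero when the build should not proceed, which is exactly the automation hook that a click-a-button scanner cannot provide.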

In a SecDevOps world, security tools that cannot be automated through an API or from the command line are not worth purchasing.
 

Summary

Many security teams jump in too quickly and disrupt engineering workflows. It is important to start slowly and tackle one step at a time. Security teams must ensure new security steps are working properly, evaluate results, fine-tune scanners, and minimize false positives before even considering letting a security step block the workflow.

To learn more about DevOps and Cloud Security, check out DEV540: Secure DevOps and Cloud Application Security!
 

About the Authors
 
Eric Johnson is a co-founder and principal security engineer at Puma Security focusing on modern static analysis product development and DevSecOps automation. His experience includes application security automation, cloud security reviews, static source code analysis, web and mobile application penetration testing, secure development lifecycle consulting, and secure code review assessments.

Previously, Eric spent 5 years as a principal security consultant at an information security consulting firm helping companies deliver secure products to their customers, and another 10 years as an information security engineer at a large US financial institution performing source code audits.

As a Certified Instructor with the SANS Institute, Eric authors information security courses on DevSecOps, cloud security, secure coding, and defending mobile apps. He serves on the advisory board for the SANS Security Awareness Developer training program, delivers security training around the world, and presents security research at conferences including SANS, BlackHat, OWASP, BSides, JavaOne, UberConf, and ISSA.

Eric completed a bachelor’s degree in computer engineering and a masters degree in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.
 


Frank Kim is the founder of ThinkSec, a security consulting and CISO advisory firm. Previously, as CISO at the SANS Institute, Frank led the information risk function for the most trusted source of computer security training and certification in the world. With the SANS Institute, Frank continues to lead the management and software security curricula, helping to develop the next generation of security leaders.

Frank was also executive director of cybersecurity at Kaiser Permanente where he built an innovative security program to meet the unique needs of the nation’s largest not-for-profit health plan and integrated health care provider with annual revenue of $60 billion, 10 million members, and 175,000 employees.

Frank holds degrees from the University of California at Berkeley and is the author and instructor of popular courses on strategic planning, leadership, application security, and DevOps. For more, visit frankkim.net.

2017 Application Security Survey is Live!

 
Our 2016 application security survey, led by Dr. Johannes Ullrich, saw AppSec Programs continuously improving. In this year’s 2017 survey led by Jim Bird, we will be looking at how AppSec is keeping up with rapidly increasing rates of change as organizations continue to adopt agile development techniques and DevOps.

The survey is officially live and will be open until July 21st. Take the survey now, and enter a drawing for a $400 Amazon gift card OR a free pass to the SANS Secure DevOps Summit in Denver, CO this October:

http://www.surveymonkey.com/r/2017SANSAppSecSurvey

The survey results will be presented by Eric Johnson in a webcast on Tuesday, October 24, 2017. Register for the webcast to be among the first to receive a copy of the 2017 Application Security Survey report: https://www.sans.org/webcasts/application-security-go-survey-results-105210

Taking Control of Your Application Security

 
Application security is hard. Finding the right people to perform application security work and manage the program is even harder. The application security space has twice as many job openings as candidates. Combine that with the fact that for every 200 software engineers there is only 1 security professional: how do we staff a secure software group and integrate software security across an organization?

There are many approaches to solving this problem across the various industry verticals. Most involve hiring expensive consultants or conducting long job searches for candidates who rarely exist. Coming from a software development background and working with software engineers in training and consulting engagements across the world has led me to this conclusion: capable software security engineers are already working on your engineering teams. Management just needs to put a program in place to educate, empower, and build strong application security champions.

Over the years, I’ve had this conversation with several folks from many different organizations. With many having the same questions and looking for guidance, it became clear to me that this topic would make a great talk.

I had the privilege of presenting the “Taking Control of Your Application Security” talk during an evening session at SANS Orlando 2017 last week. Of course, we had a little fun with a case study and discussed some actionable takeaways for building a culture of secure software engineering. But most importantly, I walked the audience through my journey from software engineer to application security consultant.

A number of attendees have asked for a copy of the slides, which can be found here: Taking Control of Your Application Security

For more information on application security training, check out the following resources:

In-Depth Online / Classroom Training: SANS Application Security Curriculum

Developer Security Awareness Training: STH.Developer Training
 
About the Author
Eric Johnson is a Principal Security Consultant at Cypress Data Defense. At Cypress, he leads web and mobile application penetration testing, secure development lifecycle consulting, secure code review assessments, static source code analysis, security research, and security tool development. Eric has presented his security research at conferences around the world including SANS, BlackHat, OWASP AppSecUSA, BSides, JavaOne, UberConf, and ISSA. He has contributed to several open source projects including Puma Scan, AWS Critical Security Control Automation, and the OWASP Secure Headers project. Eric is also a Certified Instructor with the SANS Institute where he authors several application security courses, serves on the advisory board for the SANS Securing the Human Developer awareness training program, and delivers security training around the world. Eric completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

Continuous Integration: Live Static Analysis with Roslyn

Early in 2016, I had a conversation with a colleague about the very, very limited free and open-source .NET security static analysis options. We discussed CAT.NET, which was released back in 2009 and hasn’t been updated since. Next came FxCop, which has a few security rules looking for SQL injection and cross-site scripting included in the Visual Studio Code Analysis Microsoft Security Rules. Finally, there are the various configuration analyzer tools that run checks against web.config files, such as the Web Configuration Security Analyzer (WCSA) tools written by Troy Hunt and James Jardine.

Combining all of these scans still leaves .NET development teams with a few glaring problems:

  • From past experience, I know that CAT.NET and FxCop have very poor coverage of the secure coding issues catalogued by OWASP, the SANS / CWE Top 25, etc. See the presentation slides below for stats from CAT.NET and FxCop scans against a vulnerable .NET target app. The results are very concerning.
     
  • Aside from the Visual Studio Code Analysis rules, none of the tools run inside Visual Studio, where engineers are actually writing code. This means more add-on tools that must be run by security / development leads, resulting in more reports to manage and import into bug-tracking systems. For this reason, it is very unlikely these scans are run before code is committed, which in my opinion is too late in the development process.

There has to be a better way, right?
 
Roslyn – The .NET Compiler Platform
At the end of the conversation, my colleague asked me if I had tried writing any Roslyn rules yet. Wait, what’s Roslyn? After some quick searching, I discovered that Roslyn, aka the .NET Compiler Platform, is a set of code analysis APIs for the C# and Visual Basic compilers. Using Roslyn, Microsoft provides a way to query and report diagnostics on just about any piece of code. These diagnostic warnings are displayed in two different places:

  • Inside a code document, diagnostic warnings show as spell check style squiggles as code is being written! See the following screenshot for an example:
    Puma Scan diagnostic warning in code
     
  • In the Error List, diagnostic warnings are added to the standard list of compiler errors. Roslyn provides rule authors the ability to create custom warnings (or errors if you want to be THAT disruptive security team that slows down dev teams) in the list. See the following screenshot for an example:
    Puma Scan diagnostics added to the error list.
     

After reading Alex Turner’s MSDN Magazine article, I realized that the Roslyn team at Microsoft had provided me exactly what I needed to build a powerful, accurate secure coding engine that runs inside of Visual Studio. During development. As code is written. Before code enters the repository. Bingo!

With that, I decided to help address the lack of free .NET static analysis tools.
 
The Puma Scanner
Once we figured out the Roslyn basics, my team and I spent a small part of our consulting hours over the past 4 months learning about the various types of analyzers and building security rules. Of course we had some help, as the folks on the Roslyn team at Microsoft and the Roslyn Gitter channel are very helpful with newbie questions.

Last week, I had the pleasure of presenting our progress at the OWASP AppSec USA conference in Washington, DC. While there is a lot of work left to be done, we are happy to say that Puma Scan has roughly 40 different rules looking for insecure configuration, cross-site scripting, SQL injection, unvalidated redirects, password management, and cross-site request forgery. At this point, I am confident it is worthwhile running the scanner in your development environment and MSBuild pipeline to gain more advanced static analysis compared to the free options that exist today.

If you were unable to attend AppSec USA 2016, don’t worry. The hard working volunteers at OWASP record all of the sessions, which are posted on the OWASP YouTube channel. You can watch the presentation here: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API session.

For now, as I promised those that attended the presentation, the presentation slides are available on Slideshare. Additionally, links to the Puma Scan web site, installation instructions, rules documentation, and github repository are available below. Let me know if you’re interested in contributing to the project!

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!
 
References
Puma Scan Web Site: https://www.pumascan.com/
AppSec USA Presentation: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API
Github Repository: https://github.com/pumasecurity
Visual Studio Extension: https://visualstudiogallery.msdn.microsoft.com/80206c43-348b-4a21-9f84-a4d4f0d85007
NuGet Package: https://www.nuget.org/packages/Puma.Security.Rules/
 
About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is an author for DEV544 Secure Coding in .NET and DEV531 Defending Mobile App Security Essentials, as well as an instructor for DEV541 Secure Coding in Java/JEE and DEV534: Secure DevOps. Eric’s experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

HTTP Verb Tampering in ASP.NET

We’re only a few days into 2016, and it didn’t take long for me to see a web application vulnerability that has been documented for over 10 years: HTTP Verb Tampering. This vulnerability occurs when a web application responds to more HTTP verbs than necessary for the application to properly function. Clever attackers can exploit this vulnerability by sending unexpected HTTP verbs in a request (e.g. HEAD or a FAKE verb). In some cases, the application may bypass authorization rules and allow the attacker to access protected resources. In others, the application may display detailed stack traces in the browser and provide the attacker with internal information about the system.

This issue has been recognized in the J2EE development stack for many years, which is well supported by most static analysis tools. But, this particular application wasn’t using J2EE. I’ve been doing static code analysis for many years, and I must admit – this is the first time I’ve seen this issue in an ASP.NET web application. Let’s explore this verb tampering scenario and see what the vulnerability looks like in ASP.NET.

Authorization Testing
Consider the following example. A web page named “DeleteUser.aspx” accepts one URL parameter called “user”. Logged in as an “Admin” user, I sent the simple GET request shown in the following snippet to delete the user account for “bob”. In this example, the application correctly returns a 200 OK response code, telling me the request was successful.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: .ASPXAUTH=0BD6E325C92D9E7A6FF307BBE3B4BA7E97FB49B44B518D9391A43837DE983F6E7C5EC42CD6AB5
Connection: keep-alive
[/sourcecode]

Obviously, this is an important piece of functionality that should be restricted to privileged users. Let’s test the same request again, this time without the Cookie header. This makes an anonymous (i.e. unauthenticated) request to delete the user account for “bob”. As expected, the application correctly returns a 302 response code redirecting to the login page, which tells me the request was denied.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

Similarly, I logged in as a non-admin user and tried the same request again with the non-admin’s cookie. As expected, the application responds with a 302 response code to the login page. At this point, the access control rules seem to be working perfectly.

Hunt the Bug
During the security assessment, I noticed something very interesting in the application’s web.config file. The following code example shows the authorization rule for the delete user page. Can you find the vulnerability in this configuration?

[sourcecode]
<location path="DeleteUser.aspx">
<system.web>
<authorization>
<allow roles="Admin" verbs="GET" />
<deny users="*" verbs="GET" />
</authorization>
</system.web>
</location>
[/sourcecode]

Recognizing this vulnerability requires an understanding of how the ASP.NET authorization rules work. The framework starts at the top of the list and checks each rule until the first “true” condition is met. Any rules after the first match are ignored. If no matching rules are found, the framework’s default allow all rule is used (e.g. <allow users="*" />).

In the example above, the development team intended to configure the following rules:

  • Allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • Deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

But, the “verbs” attribute accidentally creates a verb tampering issue. The “verbs” attribute is a rarely used feature in the ASP.NET framework that restricts each rule to a set of HTTP verbs. The configuration above actually enforces the following rules:

  • If the verb is a GET, then allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • If the verb is a GET, then deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

Verb Tampering

Now that we know what the authorization rules actually enforce, are you wondering the same thing I am? Will this web site really allow me to delete a user if I submit a verb other than GET? Let’s find out by submitting the following unauthenticated request using the HEAD verb. Sure enough, the server responds with a 200 OK response code and deletes Bob’s account.

[sourcecode]
HEAD /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

For what it’s worth, we could also submit the following POST request with any parameter and achieve the same result. Both verbs bypass the broken authorization rules above.

[sourcecode]
POST /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 3

x=1
[/sourcecode]

Secure Configuration
Luckily for our development team, the HTTP verb tampering issue is very simple to correct. By removing the “verbs” attributes in the new configuration shown below, ASP.NET enforces the access control constraints for all HTTP verbs (e.g. GET, POST, PUT, DELETE, HEAD, TRACE, DEBUG).

[sourcecode]
<location path="DeleteUser.aspx">
<system.web>
<authorization>
<allow roles="Admin" />
<deny users="*" />
</authorization>
</system.web>
</location>
[/sourcecode]

It is also a best practice to disable HTTP verbs that are not needed for the application to function. For example, if the application does not need the PUT or DELETE methods, configure the web server to return a 405 Method Not Allowed status code for them. Doing so reduces the likelihood of HTTP verb tampering vulnerabilities being exploited.
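The verb whitelist amounts to a simple dispatch check in front of the application. The sketch below is a generic illustration of the idea; the allowed verb set is an assumption for this example, not a recommendation for every application:

```python
# Assumption for illustration: this app only needs these three verbs.
ALLOWED_VERBS = {"GET", "POST", "HEAD"}

def dispatch(verb):
    """Return an HTTP status code for the request method.

    Anything outside the whitelist gets 405 Method Not Allowed,
    shrinking the surface available for verb tampering.
    """
    if verb.upper() not in ALLOWED_VERBS:
        return 405
    return 200
```

In practice this check belongs in the web server or framework configuration rather than application code, but the effect is the same: unexpected verbs never reach the authorization logic.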

It should be noted that the sample tests in this blog were run against ASP.NET 4.5 and IIS 8.0. Older versions may react differently, but the fundamental problem remains the same.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!


About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. Eric serves on the advisory board for the SANS Securing the Human Developer awareness training program and is a contributing author for the developer security awareness modules. Eric’s previous experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

Breaking CSRF: Spring Security and Thymeleaf

As someone who spends half of their year teaching web application security, I tend to give a lot of presentations that include live demonstrations, mitigation techniques, and exploits. When preparing for a quality assurance presentation earlier this year, I decided to show the group a demonstration of Cross-Site Request Forgery (CSRF) and how to fix the vulnerability.

A CSRF Refresher
If you’re not familiar with Cross-Site Request Forgery (CSRF), check out the article Steve Kosten wrote earlier this year about the attack, how it works, and how to defend your applications using synchronization tokens.

The Demo
My favorite way to demonstrate the power of CSRF is by exploiting a vulnerable change password screen to take over a user’s account. The attack goes like this:

  • Our unsuspecting user signs into a web site, opens a new tab, and visits my evil web page
  • My evil web page submits a forged request
  • The server changes the user’s password to a value that I know, and
  • I own their account!

I know, I know. Change password screens are supposed to ask for the user’s previous password, and the attacker wouldn’t know what that is! Well, you’re absolutely right. But, you’d be shocked at the number of web applications that I test each year that are missing this simple security mechanism.

Now, let’s create our vulnerable page.

The Change Password Page
My demo applications in the Java world use Spring MVC, Spring Security, Thymeleaf, and Spring JPA. If you’re not familiar with Thymeleaf, see the References section, and check out the documentation. This framework provides an easy-to-use HTML template engine that allows us to quickly create views for a web application. I created the following page called changepassword.html containing inputs for new password and confirm password, and a submit button:

[sourcecode]
<form method="post" th:action="@{/content/changepassword}"
th:object="${changePasswordForm}">
<table cellpadding="0">
<tr>
<td align="right">New Password:</td>
<td>
<input id="newPassword" type="password" th:field="*{newPassword}" />
</td>
</tr>
<tr>
<td align="right">Confirm Password:</td>
<td>
<input id="confirmPassword" type="password" th:field="*{confirmPassword}" />
</td>
</tr>
<tr>
<td>
<input type="submit" value="Change Password" id="btnChangePassword" />
</td>
</tr>
</table>
</form>
[/sourcecode]

The Change Password HTML
After creating the server-side controller to change the user’s password, I fired up the application, signed into my demo site, and browsed to the change password screen. In the HTML source I found something very interesting. A mysterious hidden field, “_csrf”, had been added to my form:

[sourcecode light="true"]
<input type="hidden" name="_csrf" value="aa1688de-3984-448a-a56e-43bfa027790c" />
[/sourcecode]
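The token value has the shape of a random UUID. Conceptually, the framework mints one unguessable token per session and embeds it in every form. A minimal sketch of that generation step (my own illustration, not Spring Security's actual internals):

```java
import java.util.UUID;

public class CsrfTokenDemo {
    // Mint an unguessable per-session token. UUID.randomUUID() is backed by a
    // cryptographically strong random source, so an attacker crafting a forged
    // request has no practical way to predict the value.
    public static String newToken() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        // Render the hidden field the same way it appears in the form above.
        System.out.printf("<input type=\"hidden\" name=\"_csrf\" value=\"%s\" />%n", newToken());
    }
}
```

The server keeps a copy of the token in the user's session; because an attacker's page cannot read the victim's form, it cannot supply the matching value in a forged request.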

When submitting the request to change my password, the raw HTTP request sent the _csrf request parameter to the server and I received a successful 200 response code.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=bd729b66-8d90-46c8-94ff-136cd1188caa
[/sourcecode]

“Interesting,” I thought to myself. Is this token really being validated on the server-side? I immediately fired up Burp Suite, captured a change password request, and sent it to the repeater plug-in.

For the first test, I removed the “_csrf” parameter, submitted the request, and watched it fail.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler
[/sourcecode]

For the second test, I tried again with an invalid “_csrf” token.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
[/sourcecode]

Both test cases returned the same 403 Forbidden HTTP response code:

[sourcecode light="true"]
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 2464
Date: Thu, 27 Aug 2015 11:48:08 GMT
[/sourcecode]

Spring Security
For an attacker trying to show how powerful CSRF attacks can be, this is troubling. As a defender, this tells me that my application’s POST requests are protected from CSRF attacks by default! According to the Spring documentation, CSRF protection has been baked into Spring Security since v3.2.0 was released in August 2013. For applications using the Java configuration, it is enabled by default, which explains why my change password demo page was already being protected.
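Conceptually, the server-side check that produced those 403 responses compares the token stored in the user's session with the one submitted in the request. A simplified sketch of such a check (my own illustration, not Spring's actual CsrfFilter code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CsrfCheck {
    // Return true only when a token was submitted and it matches the session's
    // token. MessageDigest.isEqual compares in constant time, avoiding a timing
    // side channel that could leak the token byte by byte.
    public static boolean isValidToken(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false; // missing token -> reject with 403 Forbidden
        }
        return MessageDigest.isEqual(
                sessionToken.getBytes(StandardCharsets.UTF_8),
                requestToken.getBytes(StandardCharsets.UTF_8));
    }
}
```

Both failed tests above, the missing parameter and the bogus token, fall through a check like this the same way, which is consistent with the identical 403 responses we observed.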

However, applications using XML-based configuration on Spring Security versions before v4.0 still need to enable the feature in the http configuration:

[sourcecode light="true"]
<http>
<csrf />
</http>
[/sourcecode]

Applications on Spring Security v4.0+ have this protection enabled by default, regardless of how they are configured, making it easy for code reviewers and static analyzers to find instances where it has been explicitly disabled. Of course, disabling the protection is exactly what I had to do to make my presentation demo work, but since it’s a security anti-pattern, we won’t discuss how to do that here. See the References section for more details on Spring’s CSRF protections.
 
Caveats
Before we run around the office saying our applications are safe because we’re using Thymeleaf and Spring Security, I’m going to identify a few weaknesses in this defense.

  • Thymeleaf and Spring Security only place CSRF tokens into “form” elements. This means that only form actions submitting POST parameters are protected. If your application makes data modifications via GET requests or AJAX calls, you still have work to do.
  • Cross-Site Scripting (XSS) attacks can easily extract CSRF tokens to an attacker’s remote web server. If you have XSS vulnerabilities in your application, this protection does you no good.
  • My demo application is using Thymeleaf 2.1.3 and Spring 4.1.3. Older versions of these frameworks may not provide the same protection. As always, make sure you perform code reviews and security testing on your applications to ensure they are secure before deploying them to production.

References:

To learn more about securing your Java applications, sign up for DEV541: Secure Coding in Java/JEE!
 

2015 State of Application Security: Closing the Gap

The 2015 SANS State of Application Security Analyst Paper and webcasts are complete. This year, Jim Bird, the lead author of the SANS Application Security Survey series, Frank Kim, and I all participated in writing the questions, analyzing the results, drafting the paper, and preparing the webcast material.

For 2015, we split the survey into two tracks: defenders and builders. The first track focused on the challenges facing the defenders who are responsible for risk management, vulnerability assessment, and monitoring. The second track focused on the challenges facing the builders responsible for application development, peer reviews, and production support.

Overall, we had 435 respondents, 65% representing the defenders and 35% representing the builders. Based on the results, the communication barriers between defenders and builders are shrinking. But, there is still work that needs to be done:

Defenders and builders are focused on where the greatest security risks are today: 79% web applications, 62% mobile applications, and 53% private cloud applications.

Managers are becoming more aware of how important – and how hard – it is to write secure software. Today, application security experts are reaching out to builders and speaking at their conferences. As a result, builders are more aware of risks inherent in the same applications that defenders are concerned with.

Management needs to walk the talk and provide developers with the time, tools and training to do a proper job of building secure systems.

For more analysis, the webcasts and analyst paper can be found below:

2015 State of Application Security Analyst Paper: Closing the Gap

Webcast Part 1: Defender Issues

Webcast Part 2: Builder Issues

Thank you to all of the sponsors for bringing this content to the SANS community: HP, Qualys, Veracode, Waratek, and WhiteHat Security.

Also, a special thank you goes out to our webcast panel: Will Bechtel (Qualys), Robert Hanson (WhiteHat Security), Bruce Jenkins (HP Fortify), Maria Loughlin (Veracode), and Brian Maccaba (Waratek).

Happy reading!


Secure Software Development Lifecycle Overview

In a previous post, we received a question asking, “What is a secure software development lifecycle?” This is an excellent question, and one that I receive quite often from organizations during an application security assessment.

Let’s quickly review the Software Development Lifecycle, also known as the SDLC. The goal of an SDLC is to provide a process for project teams to follow when developing software. A series of steps are completed, each one with a different deliverable, eventually leading to the deployment of functioning software to the client.

Several different SDLC models exist, including Waterfall, Spiral, Agile, and many more. While each of these models is very different, they were all designed without software security in mind. Failing to include software security in the development lifecycle has many consequences:

  • Releasing critical vulnerabilities to production
  • Putting customer data at risk
  • Costly follow-up releases to secure the application
  • Development teams believing security is someone else’s job
  • And the list goes on…

Requirements:

The requirements phase traditionally allows time for meeting with the customer, understanding the expectations for the product, and documenting the software requirements.  In a Secure SDLC, the requirements phase is where we start building security into the application. Start by selecting a security expert to make security-related decisions for the project team. Send the entire project team through application security awareness training. Ensure the software requirements include security requirements for the development team. Identify any privacy or compliance laws that may impact the product.

Design:

The design phase traditionally allows time to choose the platform and programming language to meet the product requirements. The software is broken up into modules, system interfaces are documented, and the overall system architecture is created. In a Secure SDLC, more security-specific steps must be completed. Perform attack surface analysis and threat modeling to identify potential vulnerabilities, the high-risk interfaces, and any countermeasures. Specify how system authentication will be performed and where secure communication is required.

Construction:

The construction phase is where we actually start building the product. Development teams build components and libraries based on the design and requirements documents. In a Secure SDLC, provide secure coding guidelines to the development team. Ensure that the development team uses the security libraries available in the framework for security-specific tasks (e.g. encoding, validation, data access, authentication, authorization, etc.). As each component is completed, scan the source code with a code analyzer that searches for vulnerabilities, also known as a static analysis tool. The scans give immediate feedback to developers, allowing them to correct vulnerabilities as code is written.
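To make this concrete, consider output encoding. Rather than concatenating untrusted input directly into a page, developers should route it through an encoding routine. The sketch below is a deliberately simplified illustration; in real code, use a vetted library from your framework rather than rolling your own:

```java
public class HtmlEncoder {
    // Minimal HTML entity encoding for the characters that change meaning in
    // an HTML context. Illustration only; production code should rely on a
    // maintained security library.
    public static String encodeForHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

A static analysis tool run against each completed component can then flag any code path that writes user input to a response without passing through a routine like this.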

Verification:

The verification phase is where quality assurance and customers test the product to ensure it meets the requirements. The test plans typically cover unit testing, integration testing, stress testing, and user acceptance testing. In a Secure SDLC, perform testing to identify vulnerabilities in the live, running application. Dynamic analysis, also known as penetration testing, submits malicious parameters to the application in an attempt to compromise the system. Again, provide this feedback to the development team so identified issues can be corrected before product delivery.

Maintenance:

Once verification is complete, the product is delivered to the customer. The maintenance phase is where product support and monitoring takes place. Future enhancements are sent back to the requirements phase, and the process repeats itself. In a Secure SDLC, continuously monitor the application for security-specific events and provide feedback directly to the development team. This allows them to detect and react quickly to an attack, taking further action if necessary.

Conclusion:

While this post describes a very high-level Secure SDLC, the intent is to show how integrating security into each phase helps the project team understand the importance of security. This understanding keeps team members engaged during security discussions because they recognize the threats facing their applications. As the software is built, security issues are identified early in the process, ultimately delivering more secure products to the customer.

We encourage you to continue exploring the Secure SDLC and visit a number of more in-depth resources on this topic.

The STH.Developer awareness program (http://www.securingthehuman.org/developer) contains over 30 minutes of Secure SDLC awareness training content.

Jim Bird, an expert on secure software development, has written a few in-depth posts on the SANS Software Security blog covering Secure Agile and DevOps processes.

http://software-security.sans.org/blog/2012/07/09/what-appsec-can-learn-from-devops

http://software-security.sans.org/blog/2012/02/22/agile-development-teams-can-build-secure-software


Developer Security Awareness: How To Measure

In the previous post (What Topics To Cover), we laid the foundation for your developer security awareness-training program. Now let’s talk about the metrics we can collect to help improve our program.

It’s all about the metrics

As we previously mentioned, establishing a common baseline for the entire development team would be helpful. A comprehensive application security assessment should be performed before awareness training begins. For example, the SANS Software Security team has a free web-based security assessment knowledge check: http://software-security.sans.org/courses/assessment. A knowledge check such as this allows you to create a baseline, establish core strengths and weaknesses, and steer the types of training to be provided.

As the awareness training takes place, knowledge checks are helpful at the end of each module along the way. These short quizzes ensure each student understands the topic that was presented and provide immediate feedback on any problem areas. The organization can use these metrics to track each team member’s progress and understanding, generate reports showing how the program is progressing, and even feed into employee reviews.

The final step is to repeat the comprehensive security assessment. This time, use a similar exam with different questions covering the same topics. Ideally, the results will show improvement over the original baseline, demonstrating the effectiveness of the developer awareness program and helping justify its funding as additional team members are onboarded. Most importantly, the knowledge gained from the training program will help your organization deliver secure software moving forward.

We hope this series has helped form a vision for your developer security training. For more information, check out the SANS Securing The Human Developer program: http://www.securingthehuman.org/developer/index
