Securing the SDLC: Dynamic Testing Java Web Apps

 
Editor's Note: Today's post is from Gregory Leonard. Gregory is an application security consultant at Optiv Security, Inc. and a SANS instructor for DEV541: Secure Coding in Java/JEE.

Introduction
The creation and integration of a secure development lifecycle (SDLC) can be an intimidating, even overwhelming, task. There are so many aspects that need to be covered, including performing risk analysis and threat modeling during design, following secure coding practices during development, and conducting proper security testing during later stages.

Established Development Practices
Software development has begun evolving to account for the security risks inherent in web application development, while also adapting to the increased pressure of performing code drops in the accelerated world of DevOps and other continuous delivery methodologies. Major strides have been made to integrate static analysis tools into the build process, giving development teams instant feedback about any vulnerabilities detected in the latest check-in, and giving teams the option to block deployment of builds when new issues are detected.

Being able to perform static analysis on source code before performing a deployment build gives teams a strong edge in creating secure applications. However, static analysis does not cover the entire realm of possible vulnerabilities. Static analysis scanners are excellent at detecting obvious cases of some forms of attack, such as basic Cross-Site Scripting (XSS) or SQL injection attacks. Unfortunately, they are not capable of detecting more complex vulnerabilities or design flaws. If a design decision is made that results in a security vulnerability, such as the decision to use a sequence generator to create session IDs for authenticated users, it is highly likely that a static analysis scan won’t catch it.

Dynamic Testing Helps Close the Gaps
Dynamic testing, the process of exploring a running application to find vulnerabilities, helps cover the gaps left by static analysis scanners. Teams with a mature SDLC will integrate dynamic testing near the end of the development process to shore up any remaining vulnerabilities. Testing is generally limited to a single period near the end of the process for two reasons:

  • It is generally ineffective to perform dynamic testing on an application that is only partially complete, as many of the vulnerabilities reported may be due to unimplemented code. These findings are essentially false positives, as the fixes may already be planned for an upcoming sprint.
  • Testing resources tend to be limited due to budget or other constraints. Most organizations, especially larger ones with dozens or even hundreds of development projects, can’t afford to have enough dedicated security testers to provide testing through the entire lifecycle of every project.

Putting the Power into Developers' Hands
One way to help alleviate the second issue highlighted above is to put some of the power to perform dynamic testing directly into the hands of developers. Just as static analysis tools have become an essential part of the automated build process that developers work with daily, wouldn't it be great if developers could run some preliminary dynamic scans against their applications before making a delivery?

The first question to address is which tools to start with. There is a plethora of testing tools out there, with multiple options for every imaginable type of test, from network scanners to database testers to web application scanners. Selecting the right tool, learning how to configure and use it, and interpreting the results it generates can be a daunting task.

Fortunately, many testing tools include helpful APIs that allow you to harness their power and integrate it into a different program. These APIs often allow security testers to create Python scripts that perform multiple scans with a single script execution.

Taking the First Step
With so many tools to choose from, where should a developer who wants to begin dynamic testing start? To help answer that question, Eric Johnson and I set out to create a plugin that allows a developer to use some of the basic scanning functionality of OWASP's Zed Attack Proxy (ZAP) within the more familiar confines of their IDE. Being most familiar with the Eclipse IDE for Java, I decided to tackle that plugin first.

The plugin performs two tasks: it spiders a URL provided by the developer, then runs an active scan of all in-scope items detected during the spider. After the spider and scan are complete, the results are saved into a SecurityTesting project within Eclipse. More information about the SecurityTesting plugin, including installation and execution instructions, can be found at the project's GitHub repository:

https://github.com/polyhedraltech/SecurityTesting

Using this plugin, developers can begin performing some basic security scans against their application while they are still in development. This gives the developer feedback about any potential vulnerabilities that may exist, while reducing the workload for the security testers later on.

Moving Forward
Once you have downloaded the ZAP plugin and started getting comfortable with what the scans produce and how it interacts with your web application, I encourage you to begin working more directly with ZAP as well as other security tools. When used correctly, dynamic testing tools can provide a wealth of information. Just be aware, dynamic testing tools are prone to the same false positives and false negatives that affect static analysis tools. Use the vulnerability reports generated by ZAP and other dynamic testing tools as a guide to determine what may need to be fixed in your application, but be sure to validate that a finding is legitimate before proceeding.

Hopefully you or your development team can find value in this plugin. We are in the process of enhancing it to support more dynamic testing tools as well as introducing support for more IDEs. Keep an eye on the GitHub repository for future progress!

To learn more about secure coding in Java using a Secure Development Lifecycle, sign up for DEV541: Secure Coding in Java/JEE!

About the Author
Gregory Leonard (@appsecgreg) has more than 17 years of experience in software development, with an emphasis on writing large-scale enterprise applications. Greg's responsibilities over the course of his career have included application architecture and security, infrastructure design and implementation, security analysis, security code reviews, web application penetration testing, and security tool development. Greg is also co-author of the SANS DEV541 Secure Coding in Java/JEE course as well as a contributing author for the security awareness modules of the SANS Securing the Human Developer training program. He is currently employed as an application security consultant at Optiv Security, Inc.

Static Analysis and Code Reviews in Agile and DevOps

 
Editor's Note: Today's post is from Jim Bird. Jim is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company's technology organization and information security program. In this post, Jim covers how to perform secure code analysis in an Agile development lifecycle.

More and more organizations are adopting Agile development and DevOps in order to speed up delivery of projects and systems. The faster these teams move and the more changes they make, the harder it is for InfoSec to keep up with, understand, and manage security risks. In extreme cases, where teams follow practices like One Piece Continuous Flow with rapid delivery of individual features, or Continuous Deployment, automatically pushing out 10, 100, or more changes every day, well-established security controls like manual audits and pen testing are no longer useful.

Security has to be done in a completely different way in these environments, by shifting security controls earlier into the lifecycle, and integrating security directly into engineering workflows.

A key part of this is focusing on code security, through static analysis testing (SAST) and code reviews.

Static Analysis in Agile/DevOps

Self-service, automated code checking with static analysis tools can be wired directly into how engineers write code. Static analysis checking can be plugged into each developer’s IDE to catch problems while they are coding. Fast incremental static analysis checking can be included in Continuous Integration to catch problems immediately after code has been checked in, and deeper static analysis checks can be run as a stage in a Continuous Delivery pipeline to ensure that code won’t be deployed with any serious security vulnerabilities.

Different kinds of static analysis tools provide different kinds of value:

  • Style checking tools like PMD help to ensure that code is readable and follows good practice. These tools won't make code more secure, but they will make it easier to review, easier to understand, and safer to change.
  • Correctness tools like FindBugs catch common coding mistakes and logic bugs which could be exploited or could cause run-time failures.
  • Security analysis tools like Find Security Bugs or commercial tools like HP Fortify can identify serious security vulnerabilities like Cross-Site Scripting (XSS) and SQL injection.

To be successful with static analysis, you need to:

  • Ensure that scans run quickly, and find a way to feed problems directly back to developers – into their IDEs or into their development backlog or bug tracking system. Tools like ThreadFix make this easy.
  • Select tools which provide clear, unambiguous information to developers on where the problem is, why it is a problem, and how to fix it.
  • Take time to configure the tools and tune out false positives and low-priority noise – so that developers can focus on real vulnerabilities and other bugs which need to be fixed.

Code Reviews
Code reviews are – or should be – a common practice in many Agile development shops, and are fundamental to the engineering culture in leading DevOps organizations like Google, Facebook and Etsy.

Peer reviews generally focus on making sure that the code is understandable and maintainable, that it follows code conventions and makes proper use of libraries and frameworks. Peer reviewers may also suggest ways to improve the efficiency of code, or ways to make it simpler. These reviews provide a way for developers to learn how to write good code and share best practices. And they can find important logic and functional bugs.

Code reviews can also be leveraged to improve security by finding problems early in development.

Through increased transparency, peer reviews discourage malicious insiders from trying to plant a back door or logic bomb in code. Reviewers can be trained to check for defensive coding (data validation, error and exception handling, and logging) and secure coding practices.

While all code changes should be peer reviewed, high-risk code needs special attention – this is code where even small mistakes can have high consequences for security or reliability. In organizations like Google or Twitter, developers post a request for a member of the security team to review code changes when they assess that a code change is risky. At other organizations like Etsy, the security team works with engineers to identify high-risk code (like security libraries and security features, or public network-facing APIs), and are automatically alerted if this code changes so that they can conduct a review.

Because most changes in an Agile or DevOps environment are small and incremental, code reviews can be done quickly – especially if the code has already passed through static analysis to catch the low-hanging fruit.

As DevOps teams adopt code-driven infrastructure management tools like Puppet and Chef, code reviews can be extended to catch mistakes and oversights in Chef recipes and Puppet manifests, helping to ensure that the run-time is reliable and secure.

Scaling through Continuous Security Checking
Adding automated static analysis checking into a team's CI/CD workflow and extending code reviews to include security checks will go a long way toward improving the quality, safety, and security of systems with minimal friction and without adding significant costs or delays. Most of this can be introduced in a way that is almost transparent to the team, without changing how they work or the tools that they rely on.

Instead of depending on point-in-time assessments and control gates late in a project, security checking in DevOps and Agile should be done continuously and incrementally – the same way that these teams build and deliver systems. This will ensure that security can keep up with the rapid pace of change, and will help to scale up your security capability by enlisting the engineering organization to take on more of the responsibility for code security.

The SANS Software Security curriculum is also excited to announce a new course Secure DevOps: A Practical Introduction co-authored by Jim Bird (@jimrbird) and Ben Allen (@mr_secure). Follow us @sansappsec for more information on this course and training locations in your area.

About the Author
Jim Bird, SANS analyst and lead author of the SANS Application Security Survey series, is an active contributor to the Open Web Application Security Project (OWASP) and a popular blogger on agile development, DevOps and software security at his blog, “Building Real Software.” He is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program. Jim is an experienced software development professional and IT manager, having worked with high-integrity and high-reliability systems at stock exchanges and banks in more than 30 countries. He holds PMP, PMI-ACP, CSM, SCPM and ITIL certifications.

Breaking CSRF: ASP.NET MVC

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will discuss using anti-forgery tokens in the ASP.NET MVC framework.

CSRF (Cross-Site Request Forgery) has been #8 in the OWASP Top 10 for many years. For a little background, CSRF is comprehensively explained in this article showing the attack, how it works, and how to defend your applications using synchronization tokens. Due to its prevalence, popular frameworks are building in CSRF protection for development teams to use in their web applications.

Protecting the ASP.NET MVC Web Application

This post will demonstrate how easily the ASP.NET MVC framework components can be used to protect your web application from CSRF attacks without any additional coding logic.

Generally, the protection from CSRF is built on top of two components:

  • @Html.AntiForgeryToken(): Special HTML helper that generates an anti-forgery token on the web page of the application (more details can be found here).
  • ValidateAntiForgeryToken: Action attribute that validates the anti-forgery token (more information can be found here).

They say practice makes perfect, so let's walk through a real-life demo. First, create a simple "Hello World" ASP.NET MVC web application in Visual Studio. If you need help with this, all the necessary steps are described here.

The Client Side
After creating the web application in Visual Studio, go to the Solution Explorer panel and navigate to the main login page located here: Views/Account/Login.cshtml. We'll use the built-in CSRF prevention libraries to protect the login page of the web application.

The markup of the file is quite simple: a standard form tag with input fields for the username and password. The important piece for preventing CSRF appears on line 4 of the following snippet. Notice the inclusion of the AntiForgeryToken() HTML helper inside the form tag:

[sourcecode]
@using (Html.BeginForm("Login", "Account", new { ReturnUrl = ViewBag.ReturnUrl },
    FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
    @Html.AntiForgeryToken()
    <h4>Use a local account to log in.</h4>
    <hr />
    @Html.ValidationSummary(true, "", new { @class = "text-danger" })
    <div class="form-group">
        @Html.LabelFor(m => m.Email, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Email, "", new { @class = "text-danger" })
        </div>
    </div>
    <div class="form-group">
        @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Password, "", new { @class = "text-danger" })
        </div>
    </div>
}
[/sourcecode]

Let’s run the application and see what the anti-forgery token helper actually does. Browse to the login page via the menu bar. You should see a similar window on your screen:

Sample Login page

Right click in the page, and view the page source. You will see a hidden field added by the anti-forgery token helper called “__RequestVerificationToken”:

[sourcecode]
<input name="__RequestVerificationToken" type="hidden" value="WhwU7C1dj4UHGEbYowHP9HZqWTDJDD8-VKAfl3f2vhMut3tvZY8OZIlMBOBL3NONMHlwUTZctE7BlrGKITG8tEEHVpTsV4Ul6Z6ijTVK_8M1"/>
[/sourcecode]

The Server Side
Next, let's shift our focus to the server side. Pressing the Log In button will POST the form fields, including the request verification token, to the server. To view the data submitted, turn on the browser debugging tool in Internet Explorer (or any other network debugging tool) by pressing F12. Enter a username and password, then log in. The data model of the form request contains the anti-forgery token generated earlier on the server side:

Request verification token post parameter

The request reaches the server and will be processed by the account controller's Login action. You can view the implementation by opening the Controllers/AccountController.cs file. As you can see, the ValidateAntiForgeryToken attribute on line 4 is in place above the action. This tells the framework to validate the request verification token that was sent to the browser in the view.

[sourcecode]
// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
    // ... sign-in logic ...
}
[/sourcecode]

For demo purposes, let's remove the HTML helper @Html.AntiForgeryToken() from the Login.cshtml view to simulate a malicious request from an attacker's web site. Try to log in again. The server responds with an error because the ValidateAntiForgeryToken filter runs and cannot find the anti-forgery token in the request. This causes the application to stop processing the request before any important functionality is executed by the server. The result looks as follows:

AntiForgery validation error

Now that we’ve shown you how easy it is to set up CSRF protection in your ASP.NET MVC applications, make sure you protect all endpoints that make data modifications in your system. This simple, effective solution protects your MVC applications from one of the most common web application vulnerabilities.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the Author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

HTTP Verb Tampering in ASP.NET

We’re only a few days into 2016, and it didn’t take long for me to see a web application vulnerability that has been documented for over 10 years: HTTP Verb Tampering. This vulnerability occurs when a web application responds to more HTTP verbs than necessary for the application to properly function. Clever attackers can exploit this vulnerability by sending unexpected HTTP verbs in a request (e.g. HEAD or a FAKE verb). In some cases, the application may bypass authorization rules and allow the attacker to access protected resources. In others, the application may display detailed stack traces in the browser and provide the attacker with internal information about the system.

This issue has been recognized in the J2EE development stack for many years, and its detection is well supported by most static analysis tools. But this particular application wasn't using J2EE. I've been doing static code analysis for many years, and I must admit this is the first time I've seen this issue in an ASP.NET web application. Let's explore this verb tampering scenario and see what the vulnerability looks like in ASP.NET.

Authorization Testing
Consider the following example. A web page named "DeleteUser.aspx" accepts one URL parameter called "user". Logged in as an "Admin" user, I sent the simple GET request shown in the following snippet to delete the user account for "bob". In this example, the application correctly returns a 200 OK response code, telling me the request was successful.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: .ASPXAUTH=0BD6E325C92D9E7A6FF307BBE3B4BA7E97FB49B44B518D9391A43837DE983F6E7C5EC42CD6AB5
Connection: keep-alive
[/sourcecode]

Obviously, this is an important piece of functionality that should be restricted to privileged users. Let's test the same request again, this time without the Cookie header. This makes an anonymous (i.e. unauthenticated) request to delete the user account for "bob". As expected, the application correctly returns a 302 response code redirecting to the login page, which tells me the request was denied.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

Similarly, I logged in as a non-admin user and tried the same request again with the non-admin’s cookie. As expected, the application responds with a 302 response code to the login page. At this point, the access control rules seem to be working perfectly.

Hunt the Bug
During the security assessment, I noticed something very interesting in the application’s web.config file. The following code example shows the authorization rule for the delete user page. Can you find the vulnerability in this configuration?

[sourcecode]
<location path="DeleteUser.aspx">
  <system.web>
    <authorization>
      <allow roles="Admin" verbs="GET" />
      <deny users="*" verbs="GET" />
    </authorization>
  </system.web>
</location>
[/sourcecode]

Recognizing this vulnerability requires an understanding of how the ASP.NET authorization rules work. The framework starts at the top of the list and checks each rule until the first "true" condition is met. Any rules after the first match are ignored. If no matching rules are found, the framework's default allow-all rule is used (e.g. <allow users="*" />).

In the example above, the development team intended to configure the following rules:

  • Allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • Deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

But, the “verbs” attribute accidentally creates a verb tampering issue. The “verbs” attribute is a rarely used feature in the ASP.NET framework that restricts each rule to a set of HTTP verbs. The configuration above actually enforces the following rules:

  • If the verb is a GET, then allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • If the verb is a GET, then deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

Verb Tampering

Now that we know what the authorization rules actually enforce, are you wondering the same thing I am? Will this web site really allow me to delete a user if I submit a verb other than GET? Let’s find out by submitting the following unauthenticated request using the HEAD verb. Sure enough, the server responds with a 200 OK response code and deletes Bob’s account.

[sourcecode]
HEAD /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

For what it’s worth, we could also submit the following POST request with any parameter and achieve the same result. Both verbs bypass the broken authorization rules above.

[sourcecode]
POST /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 3

x=1
[/sourcecode]

Secure Configuration
Luckily for our development team, the HTTP verb tampering issue is very simple to correct. By removing the "verbs" attributes in the new configuration shown below, ASP.NET enforces the access control constraints for all HTTP verbs (e.g. GET, POST, PUT, DELETE, HEAD, TRACE, DEBUG).

[sourcecode]
<location path="DeleteUser.aspx">
  <system.web>
    <authorization>
      <allow roles="Admin" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
[/sourcecode]

It is also a best practice to disable HTTP verbs that are not needed for the application to function. For example, if the application does not need the PUT or DELETE methods, configure the web server to reject them (ideally with a 405 Method Not Allowed status code). Doing so reduces the likelihood of HTTP verb tampering vulnerabilities being exploited.
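For example, in IIS 7 and later, request filtering in web.config can whitelist the verbs an application accepts. The following is a minimal sketch (adjust the verb list to your application's needs); note that IIS rejects unlisted verbs with a 404.6 response by default, which some teams remap to 405:

[sourcecode]
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Reject any HTTP verb not explicitly listed below -->
      <verbs allowUnlisted="false">
        <add verb="GET" allowed="true" />
        <add verb="POST" allowed="true" />
      </verbs>
    </requestFiltering>
  </security>
</system.webServer>
[/sourcecode]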

It should be noted that the sample tests in this blog were run against ASP.NET 4.5 and IIS 8.0. Older versions may react differently, but the fundamental problem remains the same.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

References:
[1] http://www.kernelpanik.org/docs/kernelpanik/bme.eng.pdf
[2] http://www.aspectsecurity.com/research-presentations/bypassing-vbaac-with-http-verb-tampering
[3] http://www.hpenterprisesecurity.com/vulncat/en/vulncat/java/http_verb_tampering.html
[4] https://msdn.microsoft.com/en-us/library/8d82143t(vs.71).aspx

About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. Eric serves on the advisory board for the SANS Securing the Human Developer awareness training program and is a contributing author for the developer security awareness modules. Eric’s previous experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

ASP.NET MVC: Secure Data Storage

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will review secure data storage in the ASP.NET MVC framework.

Secure Logging
The system should not log any sensitive data (e.g. PCI, PHI, PII) to unprotected log storage. Let's look at an example from a healthcare application. The following snippet logs a failed attempt to store updates to a particular patient's record. In this case, the logger is writing the patient's Social Security number and prescription (PHI) to an unprotected log file.

[sourcecode]
log.Info("Save failed for user: " + user.SSN + "," + user.Prescription);
[/sourcecode]

Instead, the application can write a surrogate key (i.e. the ID of the patient's record) to the log file, which can be used to look up the patient's information.

[sourcecode]
log.Info("Save failed for user: " + user.Identifier);
[/sourcecode]

Secure Password Storage
Passwords must be salted and hashed for storage. For additional protection, it is recommended that an adaptive hashing algorithm be used to provide a number of iterations or a "work factor" to slow down the computation. If the password hashes are somehow exposed to the public, adaptive hashing algorithms slow down password cracking attacks because they require many more computations. By default, the ASP.NET Identity framework uses the Crypto class shown below to hash its passwords. Notice it uses PBKDF2 with the HMAC-SHA1 algorithm, a 128-bit salt, and 1,000 iterations to produce a 256-bit password hash.

[sourcecode]
using System;
using System.Security.Cryptography;

internal static class Crypto
{
    private const int PBKDF2IterCount = 1000; // default for Rfc2898DeriveBytes
    private const int PBKDF2SubkeyLength = 256/8; // 256 bits
    private const int SaltSize = 128/8; // 128 bits

    public static string HashPassword(string password)
    {
        byte[] salt;
        byte[] subkey;
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, PBKDF2IterCount))
        {
            salt = deriveBytes.Salt;
            subkey = deriveBytes.GetBytes(PBKDF2SubkeyLength);
        }

        // Combine a leading version marker byte (0x00), the salt, and the
        // PBKDF2 subkey into a single Base64-encoded string for storage
        byte[] outputBytes = new byte[1 + SaltSize + PBKDF2SubkeyLength];
        Buffer.BlockCopy(salt, 0, outputBytes, 1, SaltSize);
        Buffer.BlockCopy(subkey, 0, outputBytes, 1 + SaltSize, PBKDF2SubkeyLength);
        return Convert.ToBase64String(outputBytes);
    }
}
[/sourcecode]
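Application code normally doesn't call this internal class directly; ASP.NET Identity exposes it through the PasswordHasher class. A quick sketch of the hash-and-verify round trip:

[sourcecode]
using Microsoft.AspNet.Identity;

public class PasswordHashingExample
{
    public static bool HashAndVerify(string password)
    {
        var hasher = new PasswordHasher();

        // Produces a Base64 string containing the version marker, salt, and subkey
        string hash = hasher.HashPassword(password);

        // Returns PasswordVerificationResult.Success when the password matches
        return hasher.VerifyHashedPassword(hash, password)
            == PasswordVerificationResult.Success;
    }
}
[/sourcecode]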

Secure Application Configuration
System credentials providing access to data storage areas should be accessible to authorized personnel only. All connection strings, system accounts, and encryption keys in the configuration files must be encrypted. Luckily, ASP.NET provides an easy way to protect configuration settings using the aspnet_regiis.exe utility. The following example shows the connection strings section of a configuration file with a cleartext system password:

[sourcecode]
<connectionStrings>
  <add name="sqlDb" connectionString="data source=10.100.1.1;user id=username;
    password=$secretvalue!"/>
</connectionStrings>
[/sourcecode]

After deploying the file, run the following command on the web server to encrypt the connection strings section of the file using the Microsoft Data Protection API (DPAPI) machine key from the Windows installation:

[sourcecode]
aspnet_regiis.exe -pef "connectionStrings" . -prov "DataProtectionConfigurationProvider"
[/sourcecode]

After the command completes, you will see the following snippet in the connection strings section.

[sourcecode]
<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
  <EncryptedData>
    <CipherData>
      <CipherValue>OpWQgQbq2wBZEGYAeV8WF82yz6q5WNFIj3rcuQ8gT0MP97aO9S…</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>
[/sourcecode]

The best part about this solution is that you don’t need to change any source code to read the original connection string value. The framework automatically decrypts the connection strings section on application start, and places the values in memory for the application to read.
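For example, a line like the following (using the "sqlDb" name from the sample above) returns the decrypted value at runtime, whether or not the section has been encrypted:

[sourcecode]
using System.Configuration;

// The framework transparently decrypts the protected section on application start
string connStr = ConfigurationManager.ConnectionStrings["sqlDb"].ConnectionString;
[/sourcecode]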

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Secure Data Transmission

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will review secure data transmission in the ASP.NET MVC framework.

Secure data transmission is a critical step towards securing our customer information over the web. In fact, many of our SoftServe applications are regulated by HIPAA, which has the following secure data transmission requirements:

  • Client-server communication should be performed via a secured channel (TLS/HTTPS)
  • The client (front-end application) should not pass any PHI data in URL parameters when sending requests to the server
  • All data transmission outside of the system should be performed via a secure protocol (HTTPS, Direct Protocol, etc.)

To satisfy these requirements, let's examine how to secure data transmission in an ASP.NET MVC application.

Enable HTTPS Debugging
One of my favorite features introduced in Visual Studio 2013 is the ability to debug applications over HTTPS using IIS Express. This eliminates the need to configure IIS, set up virtual directories, create self-signed certificates, etc.

Let’s begin by modifying our SecureWebApp from the ASP.NET MVC: Audit Logging blog entry to use HTTPS. First, select the SecureWebApp project file in the Solution Explorer. Then, view the Properties window. Notice we have an option called “SSL Enabled”. Flip this to true, and you will see an SSL URL appear:

Visual Studio HTTPS configuration

Now, our SecureWebApp can be accessed at this endpoint: https://localhost:44301/.

RequireHttps Attribute
With HTTPS enabled, we can use the ASP.NET MVC framework to require transport layer encryption for our application. By applying the RequireHttps attribute, our application forces an insecure HTTP request to be redirected to a secure HTTPS connection. This attribute can be applied to an individual action, an entire controller (e.g. AccountController), or across the entire application. In our case, we will apply the RequireHttps attribute globally across the application. To do this, add one line of code to App_Start\FilterConfig.cs:

[sourcecode highlight="8"]
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());

        // Apply HTTPS globally
        filters.Add(new RequireHttpsAttribute());
    }
}
[/sourcecode]

After applying the require HTTPS filter, the application protects itself from insecure HTTP requests. Notice that browsing to http://localhost:8080/ results in a redirection to a secure https://localhost connection.

Now, it should be noted that the redirect uses the standard HTTPS port number 443. In Visual Studio, you will see a 404 error because the IIS Express instance is using port 44301. Don’t be alarmed, on a production system this will work as long as you are hosting your application on the standard HTTPS port 443.

Secure Cookies
Another important step for secure data transmission is protecting authentication cookies from being transmitted insecurely over HTTP. Because our entire site requires HTTPS, start by setting the requireSSL attribute to true on the httpCookies element in the web.config file:

[sourcecode]
<system.web>
  <httpCookies httpOnlyCookies="true" requireSSL="true" lockItem="true"/>
</system.web>
[/sourcecode]

If you are using ASP.NET Identity, set the CookieSecure option to Always to ensure the secure flag is set on the .AspNet.ApplicationCookie cookie. This line of code can be added to the Startup.Auth.cs file:

[sourcecode highlight="5"]
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    CookieHttpOnly = true,
    CookieSecure = CookieSecureOption.Always,
    LoginPath = new PathString("/Account/Login"),
    // ...
});
[/sourcecode]

If you are using forms authentication, don't forget to set requireSSL to true on the forms element to protect the .ASPXAUTH cookie.

[sourcecode]
<authentication mode="Forms">
  <forms loginUrl="~/Account/Login" timeout="2880" requireSSL="true"/>
</authentication>
[/sourcecode]

After making these changes, login to the application and review the set-cookie headers returned by the application. You should see something similar to this:

[sourcecode]
HTTP/1.1 200 OK

Set-Cookie: ASP.NET_SessionId=ztr4e3; path=/; secure; HttpOnly
Set-Cookie: .AspNet.ApplicationCookie=usg7rt; path=/; secure; HttpOnly
[/sourcecode]

HTTP Strict Transport Security
Finally, we need to add HTTP Strict Transport Security (HSTS) protection to the application. This protection instructs the browser to always communicate with a domain over HTTPS, even if the user attempts to downgrade the request to HTTP. The HSTS header also happens to prevent HTTP downgrade attack tools, such as sslstrip, from performing man-in-the-middle attacks. More information on this tool can be found in the references below.

With IE finally adding support for the HSTS header in IE11 and Edge 12, all major browsers now support the Strict-Transport-Security header. Once again, one line of code is added to the Global.asax.cs file:

[sourcecode]
protected void Application_EndRequest()
{
    // Per RFC 6797, the HSTS header should only be sent over secure transport
    if (Request.IsSecureConnection)
    {
        Response.AddHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    }
}
[/sourcecode]

The max-age option specifies the number of seconds for which the browser should upgrade all requests to HTTPS (31,536,000 seconds = 1 year), and the includeSubDomains option tells the browser to upgrade all sub-domains as well.

Alternatively, you can preload your web site’s domain into browser installation packages by registering in the Chrome pre-load list: https://hstspreload.appspot.com/

For more information on security-specific response headers for ASP.NET applications, there is an open source tool that you can use to automatically set the HSTS header. The Security Header Injection Module (SHIM) can be downloaded via NuGet. More information can be found in the link below.

Making these minor configuration and code changes will ensure your data is secure in transit! To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

References

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

Breaking CSRF: Spring Security and Thymeleaf

As someone who spends half of their year teaching web application security, I tend to give a lot of presentations that include live demonstrations, mitigation techniques, and exploits. When preparing for a quality assurance presentation earlier this year, I decided to show the group a demonstration of Cross-Site Request Forgery (CSRF) and how to fix the vulnerability.

A CSRF Refresher
If you're not familiar with Cross-Site Request Forgery (CSRF), check out the article Steve Kosten wrote earlier this year about the attack, how it works, and how to defend your applications using synchronization tokens.

The Demo
My favorite way to demonstrate the power of CSRF is by exploiting a vulnerable change password screen to take over a user’s account. The attack goes like this:

  • Our unsuspecting user signs into a web site, opens a new tab, and visits my evil web page
  • My evil web page submits a forged request
  • The server changes the user’s password to a value that I know, and
  • I own their account!

I know, I know. Change password screens are supposed to ask for the user’s previous password, and the attacker wouldn’t know what that is! Well, you’re absolutely right. But, you’d be shocked at the number of web applications that I test each year that are missing this simple security mechanism.

Now, let’s create our vulnerable page.

The Change Password Page
My demo applications in the Java world use Spring MVC, Spring Security, Thymeleaf, and Spring JPA. If you’re not familiar with Thymeleaf, see the References section, and check out the documentation. This framework provides an easy-to-use HTML template engine that allows us to quickly create views for a web application. I created the following page called changepassword.html containing inputs for new password and confirm password, and a submit button:

[sourcecode]
<form method="post" th:action="@{/content/changepassword}"
      th:object="${changePasswordForm}">
    <table cellpadding="0">
        <tr>
            <td align="right">New Password:</td>
            <td>
                <input id="newPassword" type="password" th:field="*{newPassword}" />
            </td>
        </tr>
        <tr>
            <td align="right">Confirm Password:</td>
            <td>
                <input id="confirmPassword" type="password" th:field="*{confirmPassword}" />
            </td>
        </tr>
        <tr>
            <td>
                <input type="submit" value="Change Password" id="btnChangePassword" />
            </td>
        </tr>
    </table>
</form>
[/sourcecode]

The Change Password HTML
After creating the server-side controller to change the user’s password, I fired up the application, signed into my demo site, and browsed to the change password screen. In the HTML source I found something very interesting. A mysterious hidden field, “_csrf”, had been added to my form:

[sourcecode light="true"]
<input type="hidden" name="_csrf" value="aa1688de-3984-448a-a56e-43bfa027790c" />
[/sourcecode]

When submitting the request to change my password, the raw HTTP request sent the _csrf request parameter to the server and I received a successful 200 response code.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=bd729b66-8d90-46c8-94ff-136cd1188caa
[/sourcecode]

“Interesting,” I thought to myself. Is this token really being validated on the server-side? I immediately fired up Burp Suite, captured a change password request, and sent it to the repeater plug-in.

For the first test, I removed the “_csrf” parameter, submitted the request, and watched it fail.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler
[/sourcecode]

For the second test, I tried again with an invalid “_csrf” token.

[sourcecode light="true"]
POST /csrf/content/changepassword HTTP/1.1
Host: localhost:8080
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: JSESSIONID=2A801D28758DFF67FCDEC7BE180857B8
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded

newPassword=Stapler&confirmPassword=Stapler&_csrf=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
[/sourcecode]

Both test cases returned the same 403 Forbidden HTTP response code:

[sourcecode light="true"]
HTTP/1.1 403 Forbidden
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html;charset=UTF-8
Content-Language: en-US
Content-Length: 2464
Date: Thu, 27 Aug 2015 11:48:08 GMT
[/sourcecode]

Spring Security
As an attacker trying to show how powerful CSRF attacks can be, I find this troubling. As a defender, it tells me that my application's POST requests are protected from CSRF attacks by default! According to the Spring documentation, CSRF protection has been baked into Spring Security since v3.2.0 was released in August 2013. For applications using the Java configuration, it is enabled by default, which explains why my change password demo page was already protected.

However, applications using XML-based configuration on versions before Spring Security 4.0 still need to enable the feature in the http configuration:

[sourcecode light="true"]
<http>
    <!-- ... -->
    <csrf />
</http>
[/sourcecode]

Applications on Spring Security 4.0+ have this protection enabled by default, regardless of how they are configured. Disabling it requires an explicit configuration entry, which makes it easy for code reviewers and static analyzers to find instances where it has been turned off. Of course, disabling the protection is exactly what I had to do for my presentation demo to work, but it's a security anti-pattern, so we won't discuss how to do that here. See the References section for more details on Spring's CSRF protections.
 
Caveats
Before we run around the office saying our applications are safe because we’re using Thymeleaf and Spring Security, I’m going to identify a few weaknesses in this defense.

  • Thymeleaf and Spring Security only place CSRF tokens into "form" elements. This means that only form actions submitting POST parameters are protected. If your application makes data modifications via GET requests or AJAX calls, you still have work to do (see the sketch after this list).
  • Cross-Site Scripting (XSS) attacks can easily extract CSRF tokens to an attacker’s remote web server. If you have XSS vulnerabilities in your application, this protection does you no good.
  • My demo application is using Thymeleaf 2.1.3 and Spring 4.1.3. Older versions of these frameworks may not provide the same protection. As always, make sure you perform code reviews and security testing on your applications to ensure they are secure before deploying them to production.
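For the AJAX case in the first caveat above, the Spring Security documentation suggests exposing the token to scripts through meta tags and sending it back as a request header. The following is a minimal sketch of that approach using the same Thymeleaf setup (the endpoint URL comes from the demo above):

[sourcecode light="true"]
<head>
    <!-- Expose the token and header name to client-side scripts -->
    <meta name="_csrf" th:content="${_csrf.token}"/>
    <meta name="_csrf_header" th:content="${_csrf.headerName}"/>
</head>

<script>
    // Read the token from the meta tags and attach it to an AJAX request
    var token = document.querySelector("meta[name='_csrf']").content;
    var header = document.querySelector("meta[name='_csrf_header']").content;

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/csrf/content/changepassword");
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.setRequestHeader(header, token);
    xhr.send("newPassword=Stapler&confirmPassword=Stapler");
</script>
[/sourcecode]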

References:

To learn more about securing your Java applications, sign up for DEV541: Secure Coding in Java/JEE!
 
About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. Eric serves on the advisory board for the SANS Securing the Human Developer awareness training program and is a contributing author for the developer security awareness modules. Eric’s previous experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

ASP.NET MVC: Audit Logging

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at creating an audit logging action filter in the ASP.NET MVC framework.

Audit logging is a critical step for adding security to your applications. Oftentimes, audit logs are used to trace an attacker's steps, provide evidence in legal proceedings, and detect and prevent attacks as they occur. If you're not convinced yet, many regulatory compliance laws, such as HIPAA, also require security-specific audit logs to be kept. With that said, let's take a look at some high-level things to consider as you build out your audit logging functionality.

Events to Log:
The first step is deciding which events require logging. While regulatory compliance laws, such as HIPAA and PCI, may specify exactly which actions should be logged, each application is different. Here are some general actions to consider logging in your applications:

  • Administrative actions, such as adding, editing, and deleting user accounts / permissions.
  • Authentication attempts
  • Authorized and unauthorized access attempts
  • Reading or writing sensitive information, such as PHI (HIPAA), cardholder data (PCI), encryption keys, authentication tokens, API keys, etc.
  • Validation failures that could indicate an attack against the application

Details to Log:
Each audit log entry must provide enough information to identify the user and their action in the application. Here are some details to consider:

  • Timestamp identifying when the operation/action was performed
  • Who performed the action? This is often achieved with a unique user identifier, session token hash value, and source IP address.
  • What operation/action was performed? Include the URL, action/method, and action result.
  • Which record(s) were impacted?
  • What values were changed?
  • An event severity level to assist the operations team with incident response

Threshold Monitoring
In addition to the above, it is also helpful to monitor the application for unusual activity. Capturing high volumes of requests beyond normal usage can help detect automated attacks, such as password guessing, SQL injection, or access control attempts. Blocking IP addresses performing unusual activity and notifying the operations team will go a long way towards incident detection and response.
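As a rough illustration of the idea, the sketch below (hypothetical names, in-memory storage) counts failed logins per source IP; a production implementation would use a shared store with expiring counters and wire the result into blocking and alerting:

[sourcecode]
using System.Collections.Concurrent;

public static class ThresholdMonitor
{
    // Hypothetical in-memory counter; use a distributed store with
    // time-windowed expiration in production
    private static readonly ConcurrentDictionary<string, int> FailedLogins =
        new ConcurrentDictionary<string, int>();

    public static bool IsSuspicious(string sourceIp, int threshold = 10)
    {
        int count = FailedLogins.AddOrUpdate(sourceIp, 1, (ip, c) => c + 1);
        return count > threshold;
    }
}
[/sourcecode]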

Creating an Audit Logging Action Filter
Now that we know what to do, let's discuss how to do it using ASP.NET MVC. The framework provides a centralized feature, known as Action Filters, which can be used to implement audit logging in your applications. ASP.NET Action Filters are custom classes that provide both declarative and programmatic means to add pre-action and post-action behavior to controller action methods. More information about Action Filters may be found here.

Let’s create an example action filter and apply it to the AccountController.Login() and HomeController.Contact() methods from the SecureWebApp project that was described in the previous article ASP.NET MVC: Data Validation Techniques.

First, create a new class called AuditLogActionFilter.cs, as shown in the following code snippet:

[sourcecode]
using System;
using System.Diagnostics;
using System.Web.Mvc;
using System.Web.Routing;

namespace SecureWebApp.Audit
{
    public class AuditLogActionFilter : ActionFilterAttribute
    {
        public string ActionType { get; set; }

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Retrieve the information we need to log
            string routeInfo = GetRouteData(filterContext.RouteData);
            string user = filterContext.HttpContext.User.Identity.Name;

            // Write the information to the "Audit Log"
            Debug.WriteLine(String.Format("ActionExecuting - {0} ActionType: {1}; User:{2}",
                routeInfo, ActionType, user), "Audit Log");

            base.OnActionExecuting(filterContext);
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            // Retrieve the information we need to log
            string routeInfo = GetRouteData(filterContext.RouteData);
            bool isActionSucceeded = filterContext.Exception == null;
            string user = filterContext.HttpContext.User.Identity.Name;

            // Write the information to the "Audit Log"
            Debug.WriteLine(String.Format("ActionExecuted - {0} ActionType: {1}; Executed successfully:{2}; User:{3}",
                routeInfo, ActionType, isActionSucceeded, user), "Audit Log");

            base.OnActionExecuted(filterContext);
        }

        private string GetRouteData(RouteData routeData)
        {
            var controller = routeData.Values["controller"];
            var action = routeData.Values["action"];

            return String.Format("Controller:{0}; Action:{1};", controller, action);
        }
    }
}
[/sourcecode]

The newly created custom action filter AuditLogActionFilter overrides the OnActionExecuting() and OnActionExecuted() methods to write audit information to the debug output window (for testing purposes). In a real scenario, use a logging framework (e.g. NLog, log4net, etc.) to write the details to a secure audit logging repository.
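For example, assuming log4net is configured with a dedicated appender writing to a protected location, the Debug.WriteLine calls above could be replaced with something like this sketch (the "AuditLog" logger name is an assumption):

[sourcecode]
using log4net;

// Declared once on the filter class; the "AuditLog" logger is routed by
// log4net configuration to a protected, append-only audit appender
private static readonly ILog AuditLog = LogManager.GetLogger("AuditLog");

// Replaces the Debug.WriteLine call in OnActionExecuting:
AuditLog.InfoFormat("ActionExecuting - {0} ActionType: {1}; User:{2}",
    routeInfo, ActionType, user);
[/sourcecode]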

It's very easy to apply the logging filter to the Login() method using our new attribute. With just one line of code, add the AuditLogActionFilter with a custom ActionType argument.
[sourcecode]
[HttpPost]
[AuditLogActionFilter(ActionType = "UserLogIn")]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
[/sourcecode]

And to the Contact() method:
[sourcecode]
[AuditLogActionFilter(ActionType = "AccessContactInfo")]
public ActionResult Contact()
{
[/sourcecode]

The ActionType parameter is just an example showing that it’s possible to pass data into the action filter. In this particular case, the ActionType identifies the action being taken for the audit log. Once the value is passed to the action filter, any logic can be added to handle special cases.

When debugging the application, logging in and visiting the contact page cause the following lines to be written to the console output.
[sourcecode]
Audit Log: ActionExecuting - Controller:Account; Action:Login; ActionType: UserLogIn; User:
Audit Log: ActionExecuted - Controller:Account; Action:Login; ActionType: UserLogIn; Executed successfully:True; User:
Audit Log: ActionExecuting - Controller:Home; Action:Contact; ActionType: AccessContactInfo; User:taras.kholopkin@gmail.com
Audit Log: ActionExecuted - Controller:Home; Action:Contact; ActionType: AccessContactInfo; Executed successfully:True; User:taras.kholopkin@gmail.com
[/sourcecode]

This shows how easy it is to create a custom-logging filter and apply it to important security-specific actions in your applications.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

Cloud Encryption Options – Good for Compliance, Not Great for Security

Guest Editor: Today’s post is from David Hazar. David is a security engineer focusing on cloud security architecture, application security, and security training. In this post, David will take a look at the encryption options for applications hosted in the cloud.

Over the last decade, due to new compliance requirements or contractual obligations, many, if not most, companies have been implementing encryption to better protect the sensitive data they are storing and to avoid having to report a breach if an employee loses a laptop or if backup media is lost in the mail. One of the more popular ways of adding this additional protection is to implement some form of volume-based, container-based, or whole-disk encryption. It would be difficult to argue that there is an easier, more cost-effective method to achieve compliance than to utilize this type of encryption. Also, although there are potential weaknesses to some implementations of the technology, it is pretty effective at protecting data from incidental exposure or non-targeted attacks such as the ones listed above. So, why does the steady rise of the use of this type of encryption concern me?

As referenced above, there are many legitimate uses for this type of encryption, and if you are looking to protect against those types of threats, it is a good choice. However, many companies and cloud vendors are now utilizing this type of encryption in the datacenter. Let's review what type of threat we are protecting against when we use this type of encryption in the datacenter. If someone were to walk into a datacenter and walk out with some of the equipment, they wouldn't be able to read the data on the disks stored within that equipment without breaking the encryption or brute forcing the key (this assumes they were not able to walk out with the equipment still running and the encryption unlocked). More realistically, if the datacenter decommissioned some older equipment and did not properly erase the data from that system before selling or disposing of the equipment, this type of encryption would provide some additional protection. Although, even if it were not encrypted, in most datacenters the data is probably striped across so many disks that the process of mining that data would be painful.

I am sure there are a few other examples of threats that would be mitigated by this type of encryption, but the point I am trying to make is that when the system is running and the disk, volume, or container is unlocked, the data is unprotected. So, while you may be able to check "Yes" on your next audit, you would have a tough time claiming data was not exposed if unauthorized parties were able to get access to that system while it was running. The most interesting use of this encryption technology I have seen is to satisfy the need to encrypt database management systems (DBMS) that don't offer some form of Transparent Data Encryption (TDE). This is especially true in the Cloud.

I will discuss TDE in more detail later, but first let’s focus on encrypting the database using one of the techniques we have already discussed. To shorten the discussion, I will focus on the encryption offered by Amazon RDS for their MySQL and PostgreSQL RDS instances (technically it is offered for Oracle and MS SQL as well, but you can also choose TDE for these types of instances). Here is some text from the Amazon Relational Database Service documentation:

“Amazon RDS encrypted instances provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption.”

Now the concern here is that the statement is vague enough that most individuals and smaller companies would read the statement, follow the simple enablement guide, and sleep soundly at night assuming that the data is now unreadable. However, if that is the case, how is your application not totally broken? Well, here is another statement from Amazon’s website:

“You don’t need to make any changes to your code or to your operating model in order to benefit from this important data protection feature.”

So I don’t have to do anything, and now I am protected. Sounds great, right? However, the documentation fails to explain the extent of that protection. The reason your application is not totally broken is that, while the system is running, you have no additional protection. Therefore, you are not protected if someone gains access to the running server or, in the more likely scenario, if your application has a SQL injection flaw.

With TDE, you get some additional protection because the DBMS will only decrypt the data as it is being used. Both Microsoft and Oracle offer the ability to encrypt the entire database or tablespace or to implement column-level encryption. However, at some point this data is rendered readable in memory and possibly on disk. While this may be a small concern for some, the larger issue is that, while the database is running, the application has access to all of the plain-text data. The process of decrypting the data is by name and definition transparent to the application. Therefore, we gain no additional protection from injection flaws unless we are following the principle of least privilege when establishing and using database connections within our applications.

Historically, we have used one connection per database and granted this connection all of the rights the application needs to function. In some cases, for ease of setup and maintenance, we have granted these connections even more privileges than necessary. This concept of having a single database connection per application is very common and almost universally accepted. One example of this is the recommended “Integrated Security” model of Microsoft SQL Server. This method of connecting to the database is architected in such a way that making multiple connections to the same database is difficult, since the connection is made using the credentials assigned to the “Application Pool”. With this model, if the application must read and decrypt sensitive, encrypted data, then that data must be available to be read and to be decrypted for any database call using that connection.

To solve this problem, there are a few options. You can either manage the decryption of sensitive data at the application layer (more to come on the complexities of this at a later date), you can eliminate all current and future SQL injection vulnerabilities, or you can start following the principle of least privilege when connecting to the database (in a future post and upcoming webcast I will give some specific examples of this technique). While eliminating all current and future SQL injection vulnerabilities would be ideal, it is not always best to put all your eggs in one basket. Therefore, being able to reduce the access provided to the database by the application when making specific database calls can provide additional protection and additional benefits.

For example, if a SQL injection flaw is found in one of your endpoints, but that endpoint uses a connection whose database login does not have the rights to transparently decrypt data, or does not have rights to the sensitive columns or tables you are protecting with whole-disk or volume encryption, an attacker exploiting the flaw would not be able to access this data. Another benefit of this technique is that it allows you to focus your code reviews and development efforts on securing the code which uses the connections that provide access to your most sensitive data. In some cases, this may only be usernames and password hashes. In other cases, your database may contain credit card numbers, national identifiers, personal health information, or any number of other sensitive data types. Maybe you are not as concerned about data disclosure as you are about data integrity. In that case, you could limit access to state-changing SQL commands and focus your efforts on securing the connections that allow these types of operations.
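
To illustrate the idea, here is a minimal sketch of what least-privilege connections might look like in a .NET application. This is not code from any specific application; the connection string names and the logins behind them are hypothetical, and the assumption is that each named connection string maps to a database login with only the rights it needs:

[sourcecode]
using System.Configuration;
using System.Data.SqlClient;

// Sketch of least-privilege connections. "AppGeneral" would map to a
// database login with no access to sensitive columns or decryption rights;
// "AppSensitive" would map to a login that can read the protected data.
public static class DbConnections
{
    // Used by the bulk of the application's database calls.
    public static SqlConnection OpenGeneral()
    {
        var cs = ConfigurationManager.ConnectionStrings["AppGeneral"].ConnectionString;
        var conn = new SqlConnection(cs);
        conn.Open();
        return conn;
    }

    // Used only by the few, heavily reviewed code paths that
    // genuinely need the sensitive data.
    public static SqlConnection OpenSensitive()
    {
        var cs = ConfigurationManager.ConnectionStrings["AppSensitive"].ConnectionString;
        var conn = new SqlConnection(cs);
        conn.Open();
        return conn;
    }
}
[/sourcecode]

With this kind of split, a SQL injection flaw in code using OpenGeneral() cannot reach the sensitive tables, and your reviews can concentrate on the callers of OpenSensitive().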

I am not saying we should stop using the forms of encryption provided by cloud providers or built into the DBMS. They provide value, they do not require a lot of effort to implement and maintain, and they are an essential part of your defense-in-depth strategy. However, as the use of cloud services increasingly removes us from the responsibility of implementing these controls ourselves, let’s take some time to focus on other, potentially more realistic threats to our data and ensure we have protections in place for those as well.

References
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
https://aws.amazon.com/blogs/aws/new-encryption-options-for-amazon-rds/

About the author
David Hazar has been involved in information security since he began his career in 2000. In 2007, he took over the infrastructure and security teams for a business process-outsourcing firm in Utah to help them meet their own compliance requirements and contractual obligations, as well as those of their customers. This is where he began dedicating his time to security, and he has not looked back. David has worked in a wide variety of industries including government, healthcare, technology, banking, retail, and transportation. Over the last few years he has focused on cloud security architecture, application security, and security training while working for the cloud services divisions within Aetna and Oracle. David also moonlights as an application security consultant, providing web and mobile application penetration testing.

David holds a master’s degree from Brigham Young University and the following certifications: CISSP, GCIH, GWAPT, GMOB, GCIA, GCUX, Certified FAIR Risk Analyst, ITIL v3 Foundation, and Microsoft Certified Database Administrator (MCDBA).

ASP.NET MVC: Data Validation Techniques

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at the data validation features built into the ASP.NET MVC framework.

Data validation is one of the most important aspects of web app development. Investing effort into data validation makes your applications more robust and significantly reduces the risk of losing data integrity.

Out of the box, the ASP.NET MVC framework provides validation components and mechanisms on both the client side and the server side.

Client-Side Validation
When unobtrusive JavaScript validation is enabled, the ASP.NET MVC HTML helper extensions generate special markup to perform validation on the client side before sending data to the server. The feature is controlled by the “UnobtrusiveJavaScriptEnabled” Boolean setting in the appSettings section of web.config.
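
For reference, this is what the configuration typically looks like in web.config (a minimal sketch using the standard ASP.NET MVC appSettings keys):

[sourcecode]
<appSettings>
  <!-- Both settings enable unobtrusive client-side validation -->
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>
[/sourcecode]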

Let’s have a look at the Register page from the SecureWebApp project in the previous article.

SecureWebApp registration page

The Register.cshtml view contains the following code declaring the text box field for entering a user’s email:

[sourcecode]
@model SecureWebApp.Models.RegisterViewModel

@Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
[/sourcecode]

With unobtrusive JavaScript validation enabled, the framework transforms this code into the following HTML markup:

[sourcecode]
<input class="form-control valid" data-val="true"
data-val-email="The Email field is not a valid e-mail address."
data-val-required="The Email field is required." id="Email"
name="Email" type="text" value=""
>
[/sourcecode]

You can see the validation attributes, data-val, data-val-email, and data-val-required, generated by the framework. When a user enters an invalid email address, the client-side validation executes and displays the error message to the user. More information about input extensions and available methods can be found here.

SecureWebApp client-side validation

Server-side Validation

While client-side data validation is user friendly, it is important to realize that it does not provide any security for our application. Browsers can be manipulated and web application proxies can be used to bypass all client-side validation protections.

For this reason, performing server-side validation is critical to the application’s security. The first step is using the data model validation annotations in the System.ComponentModel.DataAnnotations namespace, which provides a set of validation attributes that can be applied declaratively to the data model. Looking at the RegisterViewModel class, observe the attributes on each property. For example, the Email property uses the [Required] and [EmailAddress] annotations, which tell the server that a value must be entered by the user and that the value must be a valid email address.

[sourcecode]
public class RegisterViewModel
{
    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }
}
[/sourcecode]

Next, the server-side action that processes the model must check the ModelState.IsValid property before accepting the request. The following example shows the AccountController Register method checking if the model is valid before registering the user’s account.

[sourcecode]
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
        var result = await UserManager.CreateAsync(user, model.Password);
        if (result.Succeeded)
        {
            await SignInManager.SignInAsync(user, isPersistent: false, rememberBrowser: false);
            return RedirectToAction("Index", "Home");
        }
        AddErrors(result);
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}
[/sourcecode]

To demonstrate server-side validation in action, let’s disable the “UnobtrusiveJavaScriptEnabled” option in web.config and try to register a user with an invalid email address and password. Thanks to the ModelState.IsValid check, the server-side validation produces the same error messages we saw with client-side validation enabled.

SecureWebApp server-side validation

ASP.NET provides several other built-in validation annotations that can be used in your applications, including attributes for file extensions, credit card numbers, maximum and minimum length, phone numbers, numeric ranges, URLs, and regular expressions. Or, you can derive from the ValidationAttribute class to create a custom annotation for your model classes. We will explore creating custom validation attributes in a future post; a brief preview follows below.
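
As a quick preview, a minimal custom attribute might look like the following sketch. The AllowedDomainAttribute name and its domain rule are hypothetical examples, not part of the SecureWebApp project:

[sourcecode]
using System;
using System.ComponentModel.DataAnnotations;

// Hypothetical example: restricts an email field to a single allowed domain.
public class AllowedDomainAttribute : ValidationAttribute
{
    private readonly string _domain;

    public AllowedDomainAttribute(string domain)
    {
        _domain = domain;
        ErrorMessage = "The {0} field must be an address in the " + domain + " domain.";
    }

    public override bool IsValid(object value)
    {
        // Let [Required] handle missing values; only validate non-empty input.
        var email = value as string;
        if (String.IsNullOrEmpty(email)) return true;

        return email.EndsWith("@" + _domain, StringComparison.OrdinalIgnoreCase);
    }
}
[/sourcecode]

The attribute would then be applied like any built-in annotation, for example [AllowedDomain("example.com")] on the Email property of a view model, and the framework would invoke it automatically during the ModelState.IsValid check.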

To learn more about securing your .NET applications with ASP.NET MVC validation, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).