Continuous Integration: Live Static Analysis with Roslyn

Early in 2016, I had a conversation with a colleague about the very, very limited free and open-source .NET security static analysis options. We discussed CAT.NET, which was released back in 2009 and hasn’t been updated since. Next came FxCop, which includes a few security rules looking for SQL Injection and Cross-Site Scripting in the Visual Studio Code Analysis Microsoft Security Rules. Then there are the various configuration analyzer tools that run checks against web.config files, such as the Web Configuration Security Analyzer (WCSA) tools written by Troy Hunt and James Jardine.

Even combining all of these scans leaves .NET development teams with a few glaring problems:

  • From past experience, I know that CAT.NET and FxCop have very poor coverage of the secure coding issues described by the OWASP Top 10, SANS / CWE Top 25, etc. See the presentation slides below for stats from CAT.NET and FxCop scans against a vulnerable .NET target app. The results are very concerning.
     
  • Aside from the Visual Studio Code Analysis rules, none of the tools actually run inside Visual Studio, where engineers are writing code. This means more add-on tools that need to be run by the security / development leads, resulting in more reports to manage and import into bug tracking systems. For this reason, it is very unlikely these scans are run before committing code, which in my opinion is too late in the development process.

There has to be a better way, right?
 
Roslyn – The .NET Compiler Platform
At the end of the conversation, my colleague asked me if I had tried writing any Roslyn rules yet. Wait, what’s Roslyn? After some quick searching, I discovered that Roslyn, aka the .NET Compiler Platform, is a set of code analysis APIs for the C# and Visual Basic compilers. Using Roslyn, Microsoft provides a way to query and report diagnostics on just about any piece of code. These diagnostic warnings are displayed in two different places:

  • Inside a code document, diagnostic warnings show as spell check style squiggles as code is being written! See the following screenshot for an example:
    Puma Scan diagnostic warning in code
     
  • In the Error List, diagnostic warnings are added to the standard list of compiler errors. Roslyn provides rule authors the ability to create custom warnings (or errors if you want to be THAT disruptive security team that slows down dev teams) in the list. See the following screenshot for an example:
    Puma Scan diagnostics added to the error list.
     

After reading Alex Turner’s MSDN Magazine article, I realized that the Roslyn team at Microsoft had provided me exactly what I needed to build a powerful, accurate secure coding engine that runs inside of Visual Studio. During development. As code is written. Before code enters the repository. Bingo!
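To make that idea concrete, here is a minimal sketch of what a Roslyn diagnostic analyzer looks like. The rule ID, message, and the weak-hash check below are hypothetical examples (not actual Puma Scan rules), but the DiagnosticAnalyzer plumbing is the standard Roslyn analyzer pattern:

[sourcecode]
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

// Hypothetical example rule: flag object creations that reference MD5
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class WeakHashAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "EX0001",                       // hypothetical rule ID
        title: "Weak hashing algorithm",
        messageFormat: "MD5 is a weak hashing algorithm",
        category: "Security",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // Ask Roslyn to call us back for every object creation expression as code is typed
        context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.ObjectCreationExpression);
    }

    private static void AnalyzeNode(SyntaxNodeAnalysisContext context)
    {
        var creation = (ObjectCreationExpressionSyntax)context.Node;

        // Naive type-name match for illustration only; a real rule would use the semantic model
        if (creation.Type.ToString().Contains("MD5"))
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, creation.GetLocation()));
        }
    }
}
[/sourcecode]

Package the analyzer as a Visual Studio extension or NuGet package and the squiggles and Error List entries shown above appear automatically.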

With that, I decided to help fill the gap in free .NET static analysis tooling.
 
The Puma Scanner
Once we figured out the Roslyn basics, my team and I spent a small part of our consulting hours over the past 4 months learning about the various types of analyzers and building security rules. Of course we had some help, as the folks on the Roslyn team at Microsoft and the Roslyn Gitter channel are very helpful with newbie questions.

Last week, I had the pleasure of presenting our progress at the OWASP AppSec USA conference in Washington, DC. While there is a lot of work left to be done, we are happy to say that Puma Scan has roughly 40 different rules looking for insecure configuration, cross-site scripting, SQL injection, unvalidated redirects, password management, and cross-site request forgery issues. At this point, I am confident it is worthwhile running the scanner in your development environment and MSBuild pipeline to gain more advanced static analysis compared to the free options that exist today.

If you were unable to attend AppSec USA 2016, don’t worry. The hard working volunteers at OWASP record all of the sessions, which are posted on the OWASP YouTube channel. You can watch the presentation here: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API session.

For now, as I promised those that attended the presentation, the presentation slides are available on Slideshare. Additionally, links to the Puma Scan web site, installation instructions, rules documentation, and github repository are available below. Let me know if you’re interested in contributing to the project!

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!
 
References
Puma Scan Web Site: https://www.pumascan.com/
AppSec USA Presentation: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API
Github Repository: https://github.com/pumasecurity
Visual Studio Extension: https://visualstudiogallery.msdn.microsoft.com/80206c43-348b-4a21-9f84-a4d4f0d85007
NuGet Package: https://www.nuget.org/packages/Puma.Security.Rules/
 
About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is an author for DEV544 Secure Coding in .NET and DEV531 Defending Mobile App Security Essentials, as well as an instructor for DEV541 Secure Coding in Java/JEE and DEV534: Secure DevOps. Eric’s experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

Breaking CSRF: ASP.NET MVC

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will discuss using anti-forgery tokens in the ASP.NET MVC framework.

CSRF (Cross-Site Request Forgery) has been #8 in the OWASP Top 10 for many years. For a little background, CSRF is comprehensively explained in this article showing the attack, how it works, and how to defend your applications using synchronization tokens. Due to its prevalence, popular frameworks are building in CSRF protection for development teams to use in their web applications.

Protecting the ASP.NET MVC Web Application

This post will demonstrate how easily the ASP.NET MVC framework components can be used to protect your web application from CSRF attacks without any additional coding logic.

Generally, the protection from CSRF is built on top of two components:

  • @Html.AntiForgeryToken(): Special HTML helper that generates an anti-forgery token on the web page of the application (more details can be found here).
  • ValidateAntiForgeryToken: Action attribute that validates the anti-forgery token (more information can be found here).

They say practice makes perfect, so let’s walk through a real-life demo. First, create a simple “Hello World” ASP.NET MVC web application in Visual Studio. If you need help with this, all the necessary steps are described here.

The Client Side
After creating the web application in Visual Studio, go to the Solution Explorer panel and navigate to the main login page located here: Views/Account/Login.cshtml. We’ll use the built-in CSRF prevention libraries to protect the login page of the web application.

The markup of the file is quite simple. A standard form tag with input fields for the username and password. The important piece for preventing CSRF can be seen on line 4 in the following snippet. Notice the inclusion of the AntiForgeryToken() HTML helper inside the form tag:

[sourcecode]
@using (Html.BeginForm("Login", "Account", new { ReturnUrl = ViewBag.ReturnUrl },
    FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
    @Html.AntiForgeryToken()
    <h4>Use a local account to log in.</h4>
    <hr />
    @Html.ValidationSummary(true, "", new { @class = "text-danger" })
    <div class="form-group">
        @Html.LabelFor(m => m.Email, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Email, "", new { @class = "text-danger" })
        </div>
    </div>
    <div class="form-group">
        @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Password, "", new { @class = "text-danger" })
        </div>
    </div>
}
[/sourcecode]

Let’s run the application and see what the anti-forgery token helper actually does. Browse to the login page via the menu bar. You should see a similar window on your screen:

Sample Login page

Right click in the page, and view the page source. You will see a hidden field added by the anti-forgery token helper called “__RequestVerificationToken”:

[sourcecode]
<input name="__RequestVerificationToken" type="hidden" value="WhwU7C1dj4UHGEbYowHP9HZqWTDJDD8-VKAfl3f2vhMut3tvZY8OZIlMBOBL3NONMHlwUTZctE7BlrGKITG8tEEHVpTsV4Ul6Z6ijTVK_8M1"/>
[/sourcecode]

The Server Side
Next, let’s shift our focus to the server side. Pressing the Log In button will POST the form fields, including the request verification token, to the server. To view the data submitted, turn on the browser debugging tool in Internet Explorer (or any other network debugging tool) by pressing F12. Enter a username and password, then log in. The data model of the form request contains the anti-forgery token generated earlier on the server side:

Request verification token post parameter

The request reaches the server and will be processed by the account controller’s Login action. You can view the implementation by opening the Controllers/AccountController.cs file. As you can see, the ValidateAntiForgeryToken attribute on line 4 is in place above the action. This tells the framework to validate the request verification token that was sent to the browser in the view.

[sourcecode]
// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{

}
[/sourcecode]

For demo purposes, let’s remove the HTML helper @Html.AntiForgeryToken() from the Login.cshtml view to simulate a malicious request from an attacker’s web site. Try to log in again. The server responds with an error because the ValidateAntiForgeryToken filter runs and cannot find the anti-forgery token in the request. This causes the application to stop processing the request before any important functionality is executed by the server. The result will look as follows:

AntiForgery validation error

Now that we’ve shown you how easy it is to set up CSRF protection in your ASP.NET MVC applications, make sure you protect all endpoints that make data modifications in your system. This simple, effective solution protects your MVC applications from one of the most common web application vulnerabilities.
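If you want a safety net beyond decorating individual actions, one possible approach (a sketch, not part of the default MVC template) is a global authorization filter that calls AntiForgery.Validate() for every state-changing request, registered in FilterConfig alongside the other global filters:

[sourcecode]
using System;
using System.Web.Helpers;
using System.Web.Mvc;

// Sketch: validate the anti-forgery token on every POST request in the application
public class GlobalAntiForgeryFilter : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;

        // Only state-changing verbs need a token; GET requests are left alone
        if (string.Equals(request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
        {
            // Throws HttpAntiForgeryException when the token is missing or invalid
            AntiForgery.Validate();
        }
    }
}
[/sourcecode]

Keep in mind that a global filter like this will also fire for POST endpoints that intentionally do not render a form token (for example, public API callbacks), so those would need to be exempted.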

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the Author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

HTTP Verb Tampering in ASP.NET

We’re only a few days into 2016, and it didn’t take long for me to see a web application vulnerability that has been documented for over 10 years: HTTP Verb Tampering. This vulnerability occurs when a web application responds to more HTTP verbs than necessary for the application to properly function. Clever attackers can exploit this vulnerability by sending unexpected HTTP verbs in a request (e.g. HEAD or a FAKE verb). In some cases, the application may bypass authorization rules and allow the attacker to access protected resources. In others, the application may display detailed stack traces in the browser and provide the attacker with internal information about the system.

This issue has been recognized in the J2EE development stack for many years, which is well supported by most static analysis tools. But, this particular application wasn’t using J2EE. I’ve been doing static code analysis for many years, and I must admit – this is the first time I’ve seen this issue in an ASP.NET web application. Let’s explore this verb tampering scenario and see what the vulnerability looks like in ASP.NET.

Authorization Testing
Consider the following example. A web page named “DeleteUser.aspx” accepts one URL parameter called “user”. After logging in as an “Admin”, I sent the simple GET request shown in the following snippet to delete the user account for “bob”. In this example, the application correctly returns a 200 OK response code, telling me the request was successful.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: .ASPXAUTH=0BD6E325C92D9E7A6FF307BBE3B4BA7E97FB49B44B518D9391A43837DE983F6E7C5EC42CD6AB5
Connection: keep-alive
[/sourcecode]

Obviously, this is an important piece of functionality that should be restricted to privileged users. Let’s test the same request again, this time without the Cookie header. This makes an anonymous (i.e. unauthenticated) request to delete the user account for “bob”. As expected, the application correctly returns a 302 response code redirecting to the login page, which tells me the request was denied.

[sourcecode]
GET /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

Similarly, I logged in as a non-admin user and tried the same request again with the non-admin’s cookie. As expected, the application responds with a 302 response code to the login page. At this point, the access control rules seem to be working perfectly.

Hunt the Bug
During the security assessment, I noticed something very interesting in the application’s web.config file. The following code example shows the authorization rule for the delete user page. Can you find the vulnerability in this configuration?

[sourcecode]
<location path="DeleteUser.aspx">
  <system.web>
    <authorization>
      <allow roles="Admin" verbs="GET" />
      <deny users="*" verbs="GET" />
    </authorization>
  </system.web>
</location>
[/sourcecode]

Recognizing this vulnerability requires an understanding of how the ASP.NET authorization rules work. The framework starts at the top of the list and checks each rule until the first “true” condition is met. Any rules after the first match are ignored. If no matching rules are found, the framework’s default allow all rule is used (e.g. <allow users="*" />).

In the example above, the development team intended to configure the following rules:

  • Allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • Deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

But, the “verbs” attribute accidentally creates a verb tampering issue. The “verbs” attribute is a rarely used feature in the ASP.NET framework that restricts each rule to a set of HTTP verbs. The configuration above actually enforces the following rules:

  • If the verb is a GET, then allow users in the “Admin” role access to the “DeleteUser.aspx” page
  • If the verb is a GET, then deny all users access to the “DeleteUser.aspx” page
  • Otherwise, the default rule allows all users

Verb Tampering

Now that we know what the authorization rules actually enforce, are you wondering the same thing I am? Will this web site really allow me to delete a user if I submit a verb other than GET? Let’s find out by submitting the following unauthenticated request using the HEAD verb. Sure enough, the server responds with a 200 OK response code and deletes Bob’s account.

[sourcecode]
HEAD /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
[/sourcecode]

For what it’s worth, we could also submit the following POST request with any parameter and achieve the same result. Both verbs bypass the broken authorization rules above.

[sourcecode]
POST /account/deleteuser.aspx?user=bob HTTP/1.1
Host: localhost:3060
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 3

x=1
[/sourcecode]

Secure Configuration
Luckily for our development team, the HTTP verb tampering issue is very simple to correct. By removing the “verbs” attributes in the new configuration shown below, ASP.NET enforces the access control constraints for all HTTP verbs (e.g. GET, POST, PUT, DELETE, HEAD, TRACE, DEBUG).

[sourcecode]
<location path="DeleteUser.aspx">
  <system.web>
    <authorization>
      <allow roles="Admin" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
[/sourcecode]

It is also a best practice to disable HTTP verbs that are not needed for the application to function. For example, if the application does not need the PUT or DELETE methods, configure the web server to return a 405 Method Not Allowed status code. Doing so will reduce the likelihood of HTTP verb tampering vulnerabilities being exploited.
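One way to do this in IIS (a sketch, assuming IIS 7 or later with request filtering enabled) is to deny the unused verbs in web.config. Note that request filtering returns its own “verb denied” error code rather than a plain 405, so verify the behavior in your environment:

[sourcecode]
<system.webServer>
  <security>
    <requestFiltering>
      <verbs allowUnlisted="true">
        <!-- Reject verbs the application never uses -->
        <add verb="PUT" allowed="false" />
        <add verb="DELETE" allowed="false" />
        <add verb="TRACE" allowed="false" />
      </verbs>
    </requestFiltering>
  </security>
</system.webServer>
[/sourcecode]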

It should be noted that the sample tests in this blog were run against ASP.NET 4.5 and IIS 8.0. Older versions may react differently, but the fundamental problem remains the same.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

References:
[1] http://www.kernelpanik.org/docs/kernelpanik/bme.eng.pdf
[2] http://www.aspectsecurity.com/research-presentations/bypassing-vbaac-with-http-verb-tampering
[3] http://www.hpenterprisesecurity.com/vulncat/en/vulncat/java/http_verb_tampering.html
[4] https://msdn.microsoft.com/en-us/library/8d82143t(vs.71).aspx

About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. Eric serves on the advisory board for the SANS Securing the Human Developer awareness training program and is a contributing author for the developer security awareness modules. Eric’s previous experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

ASP.NET MVC: Secure Data Storage

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will review secure data storage in the ASP.NET MVC framework.

Secure Logging
The system should not log any sensitive data (e.g. PCI, PHI, PII) into unprotected log storage. Let’s look at an example from a healthcare application. The following snippet logs an error when the application is not able to successfully store updates to a particular patient. In this case, the logger is writing the patient’s SSN, prescription, and other PHI to an unprotected log file.

[sourcecode]
log.Info("Save failed for user: " + user.SSN + "," + user.Prescription);
[/sourcecode]

Instead, the application can write the surrogate key (i.e. the ID of the patient’s record) to the log file, which can be used to look up the patient’s information.

[sourcecode]
log.Info("Save failed for user: " + user.Identifier);
[/sourcecode]

Secure Password Storage
Passwords must be salted and hashed for storage. For additional protection, it is recommended that an adaptive hashing algorithm is used to provide a number of iterations or a “work factor” to slow down the computation. If the password hashes are somehow exposed to the public, adaptive hashing algorithms slow down password cracking attacks because they require many more computations. By default, the ASP.NET Identity framework uses the Crypto class shown below to hash its passwords. Notice it uses PBKDF2 with the HMAC-SHA1 algorithm, a 128-bit salt, and 1,000 iterations to produce a 256-bit password hash.

[sourcecode]
internal static class Crypto
{
    private const int PBKDF2IterCount = 1000; // default for Rfc2898DeriveBytes
    private const int PBKDF2SubkeyLength = 256/8; // 256 bits
    private const int SaltSize = 128/8; // 128 bits

    public static string HashPassword(string password)
    {
        byte[] salt;
        byte[] subkey;
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, PBKDF2IterCount))
        {
            salt = deriveBytes.Salt;
            subkey = deriveBytes.GetBytes(PBKDF2SubkeyLength);
        }

        // Combine a format marker byte, the salt, and the derived subkey into a single Base64 string
        var outputBytes = new byte[1 + SaltSize + PBKDF2SubkeyLength];
        Buffer.BlockCopy(salt, 0, outputBytes, 1, SaltSize);
        Buffer.BlockCopy(subkey, 0, outputBytes, 1 + SaltSize, PBKDF2SubkeyLength);
        return Convert.ToBase64String(outputBytes);
    }
}
[/sourcecode]

Secure Application Configuration
System credentials providing access to data storage areas should be accessible to authorized personnel only. All connection strings, system accounts, and encryption keys in the configuration files must be encrypted. Luckily, ASP.NET provides an easy way to protect configuration settings using the aspnet_regiis.exe utility. The following example shows the connection strings section of a configuration file with cleartext system passwords:

[sourcecode]
<connectionStrings>
  <add name="sqlDb" connectionString="data source=10.100.1.1;user id=username;
       password=$secretvalue!"/>
</connectionStrings>
[/sourcecode]

After deploying the file, run the following command on the web server to encrypt the connection strings section of the file using the Microsoft Data Protection API (DPAPI) machine key from the Windows installation:

[sourcecode]
aspnet_regiis.exe -pef "connectionStrings" . -prov "DataProtectionConfigurationProvider"
[/sourcecode]

After the command completes, you will see the following snippet in the connection strings section.

[sourcecode]
<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
  <CipherData>
    <CipherValue>OpWQgQbq2wBZEGYAeV8WF82yz6q5WNFIj3rcuQ8gT0MP97aO9S…</CipherValue>
  </CipherData>
</connectionStrings>
[/sourcecode]

The best part about this solution is that you don’t need to change any source code to read the original connection string value. The framework automatically decrypts the connection strings section on application start, and places the values in memory for the application to read.
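For example, the same code you would write against a cleartext configuration file continues to work unchanged (a minimal sketch, assuming the connection string is named sqlDb as above):

[sourcecode]
using System.Configuration;

// The framework decrypts the protected section transparently;
// this returns the original cleartext connection string at runtime.
string connectionString =
    ConfigurationManager.ConnectionStrings["sqlDb"].ConnectionString;
[/sourcecode]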

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Secure Data Transmission

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will review secure data transmission in the ASP.NET MVC framework.

Secure data transmission is a critical step towards securing our customer information over the web. In fact, many of our SoftServe applications are regulated by HIPAA, which has the following secure data transmission requirements:

  • Client-server communication should be performed via secured channel (TLS/HTTPS)
  • Client (front-end application) should not pass any PHI data in URL parameters when sending requests to the server
  • All data transmission outside of the system should be performed via secure protocol (HTTPS, Direct Protocol, etc.)

To satisfy this requirement, let’s examine how to secure data transmission in an ASP.NET MVC application.

Enable HTTPS Debugging
One of my favorite features introduced in Visual Studio 2013 is the ability to debug applications over HTTPS using IIS Express. This eliminates the need to configure IIS, set up virtual directories, create self-signed certificates, etc.

Let’s begin by modifying our SecureWebApp from the ASP.NET MVC: Audit Logging blog entry to use HTTPS. First, select the SecureWebApp project file in the Solution Explorer. Then, view the Properties window. Notice we have an option called “SSL Enabled”. Flip this to true, and you will see an SSL URL appear:

Visual_Studio_HTTPS

Now, our SecureWebApp can be accessed at this endpoint: https://localhost:44301/.

RequireHttps Attribute
With HTTPS enabled, we can use the ASP.NET MVC framework to require transport layer encryption for our application. By applying the RequireHttps attribute, our application forces an insecure HTTP request to be redirected to a secure HTTPS connection. This attribute can be applied to an individual action, an entire controller (e.g. AccountController), or across the entire application. In our case, we will apply the RequireHttps attribute globally across the application. To do this, add one line of code to App_Start\FilterConfig.cs:

[sourcecode highlight="8"]
public class FilterConfig
{
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new HandleErrorAttribute());

//Apply HTTPS globally
filters.Add(new RequireHttpsAttribute());
}
}
[/sourcecode]

After applying the require HTTPS filter, the application protects itself from insecure HTTP requests. Notice that browsing to http://localhost:8080/ results in a redirection to a secure https://localhost connection.

Now, it should be noted that the redirect uses the standard HTTPS port number 443. In Visual Studio, you will see a 404 error because the IIS Express instance is using port 44301. Don’t be alarmed; on a production system this will work as long as you are hosting your application on the standard HTTPS port 443.
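If the port mismatch bothers you during local debugging, one workaround (a sketch, not part of the framework) is to subclass RequireHttpsAttribute and rebuild the redirect URL with the IIS Express SSL port. Remove or disable it for production builds:

[sourcecode]
using System;
using System.Web.Mvc;

// Sketch: redirect insecure GET requests to a non-standard SSL port while debugging
public class RequireHttpsOnPortAttribute : RequireHttpsAttribute
{
    private readonly int _sslPort;

    public RequireHttpsOnPortAttribute(int sslPort)
    {
        _sslPort = sslPort;
    }

    protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;

        // Fall back to the base behavior (which rejects non-GET requests)
        if (!string.Equals(request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase))
        {
            base.HandleNonHttpsRequest(filterContext);
            return;
        }

        var url = "https://" + request.Url.Host + ":" + _sslPort + request.RawUrl;
        filterContext.Result = new RedirectResult(url);
    }
}
[/sourcecode]

Registering it globally would look like filters.Add(new RequireHttpsOnPortAttribute(44301)); where 44301 is the IIS Express SSL port shown in the project properties.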

Secure Cookies
Another important step for secure data transmission is protecting authentication cookies from being transmitted insecurely over HTTP. Because our entire site requires HTTPS, start by setting the requireSSL attribute to true on the httpCookies element in the web.config file:

[sourcecode]
<system.web>
  <httpCookies httpOnlyCookies="true" requireSSL="true" lockItem="true"/>
</system.web>
[/sourcecode]

If you are using ASP.NET Identity, set the CookieSecure option to Always to ensure the secure flag is set on the .AspNet.Application cookie. This line of code can be added to the Startup.Auth.cs file:

[sourcecode highlight="5"]
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
CookieHttpOnly = true,
CookieSecure = CookieSecureOption.Always,
LoginPath = new PathString("/Account/Login"),

});
[/sourcecode]

If you are using forms authentication, don’t forget to set requireSSL to true on the forms element to protect the .ASPXAuth cookie.

[sourcecode]
<authentication mode="Forms">
  <forms loginUrl="~/Account/Login" timeout="2880" requireSSL="true"/>
</authentication>
[/sourcecode]

After making these changes, login to the application and review the set-cookie headers returned by the application. You should see something similar to this:

[sourcecode]
HTTP/1.1 200 OK

Set-Cookie: ASP.NET_SessionId=ztr4e3; path=/; secure; HttpOnly
Set-Cookie: .AspNet.ApplicationCookie=usg7rt; path=/; secure; HttpOnly
[/sourcecode]

HTTP Strict Transport Security
Finally, we need to add HTTP Strict Transport Security (HSTS) protection to the application. This protection instructs the browser to always communicate with a domain over HTTPS, even if the user attempts to downgrade the request to HTTP. The HSTS header also happens to prevent HTTP downgrade attack tools, such as sslstrip, from performing man-in-the-middle attacks. More information on this tool can be found in the references below.

With IE finally adding support for the HSTS header in IE11 and Edge 12, all major browsers now support the Strict-Transport-Security header. Once again, one line of code is added to the Global.asax.cs file:

[sourcecode]
protected void Application_EndRequest()
{
Response.AddHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
}
[/sourcecode]

The max-age option specifies the number of seconds for which the browser should upgrade all requests to HTTPS (31536000 seconds = 1 year), and the includeSubDomains option tells the browser to upgrade all sub-domains as well.
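One refinement worth considering (a hedged variation on the snippet above, not a requirement): only emit the header on secure responses, since browsers ignore HSTS received over plain HTTP anyway.

[sourcecode]
protected void Application_EndRequest()
{
    // Only send HSTS on HTTPS responses; browsers ignore it over plain HTTP
    if (Request.IsSecureConnection)
    {
        Response.AddHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    }
}
[/sourcecode]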

Alternatively, you can preload your web site’s domain into browser installation packages by registering in the Chrome pre-load list: https://hstspreload.appspot.com/

For more information on security-specific response headers for ASP.NET applications, there is an open source tool that you can use to automatically set the HSTS header. The Security Header Injection Module (SHIM) can be downloaded via NuGet. More information can be found in the link below.

Making these minor configuration and code changes will ensure your data is secure in transit! To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

References

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Audit Logging

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at creating an audit logging action filter in the ASP.NET MVC framework.

Audit logging is a critical step for adding security to your applications. Oftentimes, audit logs are used to trace an attacker’s steps, provide evidence in legal proceedings, and detect and prevent attacks as they are occurring. If you’re not convinced yet, many regulatory compliance laws, such as HIPAA, also require security-specific audit logs to be kept. With that said, let’s take a look at some high-level things to consider as you build out your audit logging functionality.

Events to Log:
The first step is deciding which events require logging. While regulatory compliance laws, such as HIPAA and PCI, may specify exactly which actions should be logged, each application is different. Here are some general actions to consider logging in your applications:

  • Administrative actions, such as adding, editing, and deleting user accounts / permissions.
  • Authentication attempts
  • Authorized and unauthorized access attempts
  • Reading or writing to sensitive information, such as HIPAA, PCI, encryption keys, authentication tokens, API keys, etc.
  • Validation failures that could indicate an attack against the application

Details to Log:
Each audit log entry must provide enough information to identify the user and their action in the application. The following list provides some details to consider:

  • Timestamp identifying when the operation/action was performed
  • Who performed the action? This is often achieved with a unique user identifier, session token hash value, and source IP address.
  • What operation/action was performed? Include the URL, action / method, and action result.
  • Which record(s) were impacted?
  • What values were changed?
  • Determine an event severity level to assist the operations team with incident response

Threshold Monitoring
In addition to the above, it is also helpful to monitor the application for unusual activity. Capturing high volumes of requests beyond normal usage can help detect automated attacks, such as password guessing, SQL injection, or access control attempts. Blocking IP addresses performing unusual activity and notifying the operations team will go a long way towards incident detection and response.

Creating an Audit Logging Action Filter
Now that we know what to do, let’s discuss how to do it using ASP.NET MVC. The framework provides a centralized feature, known as Action Filters, which can be used to implement audit logging in your applications. ASP.NET Action Filters are custom classes that provide both declarative and programmatic means to add pre-action and post-action behavior to controller action methods. More information about Action Filters may be found here.

Let’s create an example action filter and apply it to the AccountController.Login() and HomeController.Contact() methods from the SecureWebApp project that was described in the previous article ASP.NET MVC: Data Validation Techniques.

login

First, create a new class called AuditLogActionFilter.cs, as shown in the following code snippet:

[sourcecode]
using System;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace SecureWebApp.Audit
{
public class AuditLogActionFilter : ActionFilterAttribute
{
public string ActionType { get; set; }

public override void OnActionExecuting(ActionExecutingContext filterContext)
{
// Retrieve the information we need to log
string routeInfo = GetRouteData(filterContext.RouteData);
string user = filterContext.HttpContext.User.Identity.Name;

// Write the information to “Audit Log”
Debug.WriteLine(String.Format("ActionExecuting – {0} ActionType: {1}; User:{2}"
, routeInfo, ActionType, user), "Audit Log");

base.OnActionExecuting(filterContext);
}

public override void OnActionExecuted(ActionExecutedContext filterContext)
{
// Retrieve the information we need to log
string routeInfo = GetRouteData(filterContext.RouteData);
bool isActionSucceeded = filterContext.Exception == null;
string user = filterContext.HttpContext.User.Identity.Name;

// Write the information to “Audit Log”
Debug.WriteLine(String.Format("ActionExecuted – {0} ActionType: {1}; Executed successfully:{2}; User:{3}"
, routeInfo, ActionType, isActionSucceeded, user), "Audit Log");

base.OnActionExecuted(filterContext);
}

private string GetRouteData(RouteData routeData)
{
var controller = routeData.Values["controller"];
var action = routeData.Values["action"];

return String.Format("Controller:{0}; Action:{1};", controller, action);
}
}
}
[/sourcecode]

The newly created custom action filter AuditLogActionFilter overrides the OnActionExecuting() and OnActionExecuted() methods to write audit information to the debug output window (for testing purposes). In a real scenario, use a logging framework (e.g. NLog, log4net, etc.) to write the details to a secure audit logging repository.

It’s very easy to apply the logging filter to the Login() method using our new annotation. With just one line of code, add the AuditLogActionFilter with a custom ActionType argument.
[sourcecode]
[HttpPost]
[AuditLogActionFilter(ActionType="UserLogIn")]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
[/sourcecode]

And to the Contact() method:
[sourcecode]
[AuditLogActionFilter(ActionType="AccessContactInfo")]
public ActionResult Contact()
{
[/sourcecode]

The ActionType parameter is just an example showing that it’s possible to pass data into the action filter. In this particular case, the ActionType identifies the action being taken for the audit log. Once the value is passed to the action filter, any logic can be added to handle special cases.

When debugging the application, logging in and visiting the contact page cause the following lines to be written to the console output.
[sourcecode]
Audit Log: ActionExecuting – Controller:Account; Action:Login; ActionType: UserLogIn; User:
Audit Log: ActionExecuted – Controller:Account; Action:Login; ActionType: UserLogIn; Executed successfully:True; User:
Audit Log: ActionExecuting – Controller:Home; Action:Contact; ActionType: AccessContactInfo; User:taras.kholopkin@gmail.com
Audit Log: ActionExecuted – Controller:Home; Action:Contact; ActionType: AccessContactInfo; Executed successfully:True; User:taras.kholopkin@gmail.com
[/sourcecode]

This shows how easy it is to create a custom-logging filter and apply it to important security-specific actions in your applications.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Data Validation Techniques

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at the data validation features built into the ASP.NET MVC framework.

Data validation is one of the most important aspects of web app development. Investing effort into data validation makes your applications more robust and significantly reduces potential loss of data integrity.

Out of the box, the ASP.NET MVC framework provides full support of special components and mechanisms on both the client side and the server side.

Client-Side Validation
Enabling unobtrusive JavaScript validation allows ASP.NET MVC HTML helper extensions to generate special markup to perform validation on the client side, before sending data to the server. The feature is controlled by the “UnobtrusiveJavaScriptEnabled” Boolean setting in the <appSettings> section of the web.config file.
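For reference, the relevant settings typically look like this (a sketch of the standard MVC template defaults):

[sourcecode]
<appSettings>
  <!-- Enable client-side validation and unobtrusive JavaScript validation -->
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>
[/sourcecode]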

Let’s have a look at the Register page from the SecureWebApp project in the previous article.

SecureWebApp registration page

The Register.cshtml view contains the following code declaring the text box field for entering a user’s email:

[sourcecode]
@model SecureWebApp.Models.RegisterViewModel

@Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
[/sourcecode]

With unobtrusive JavaScript validation enabled, the framework transforms this code into the following HTML markup:

[sourcecode]
<input class="form-control valid" data-val="true"
    data-val-email="The Email field is not a valid e-mail address."
    data-val-required="The Email field is required." id="Email"
    name="Email" type="text" value=""
>
[/sourcecode]

You can see the validation attributes, data-val, data-val-email, and data-val-required, generated by the framework. When an invalid email address is entered, the client-side validation executes and displays the error message to the user. More information about input extensions and available methods can be found here.

SecureWebApp client-side validation

Server-side Validation

While client-side data validation is user friendly, it is important to realize that it does not provide any security for our application. Browsers can be manipulated and web application proxies can be used to bypass all client-side validation protections.

For this reason, performing server-side validation is critical to the application’s security. The first step is using the data model validation annotations in the System.ComponentModel.DataAnnotations namespace, which provides a set of validation attributes that can be applied declaratively to the data model. Looking at the RegisterViewModel class, observe the attributes on each property. For example, the Email property uses the [Required] and [EmailAddress] annotations, which tell the server a value must be entered by the user and that value must be a valid email address.

[sourcecode]
public class RegisterViewModel
{
[Required]
[EmailAddress]
[Display(Name = "Email")]
public string Email { get; set; }

[Required]
[StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long."
, MinimumLength = 6)]
[DataType(DataType.Password)]
[Display(Name = "Password")]
public string Password { get; set; }
[/sourcecode]

Next, the server-side action that processes the model must check the ModelState.IsValid property before accepting the request. The following example shows the AccountController Register method checking if the model is valid before registering the user’s account.

[sourcecode]
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Register(RegisterViewModel model)
{
if (ModelState.IsValid)
{
var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
var result = await UserManager.CreateAsync(user, model.Password);
if (result.Succeeded)
{
await SignInManager.SignInAsync(user, isPersistent:false, rememberBrowser:false);
return RedirectToAction("Index", "Home");
}
AddErrors(result);
}

// If we got this far, something failed, redisplay form
return View(model);
}
[/sourcecode]

To demonstrate server-side validation in action, let’s disable the “UnobtrusiveJavaScriptEnabled” option in web.config and try to register a user with an invalid email address and password. Using the ModelState.IsValid check, the server-side validation results in the same error message as we had with client-side validation enabled.

SecureWebApp server-side validation

ASP.NET provides several built-in validation annotations that can be used in your applications including file extensions, credit card numbers, max length, min length, phone number, numeric ranges, URL, and regular expressions. Or, you can use the ValidationAttribute class to create a custom annotation for your model classes. We will explore creating custom validation attributes in a future post.
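As a quick preview, a custom annotation is simply a class deriving from ValidationAttribute. The rule below is a hypothetical example (not part of the SecureWebApp project) that rejects values containing angle brackets:

[sourcecode]
using System.ComponentModel.DataAnnotations;

// Hypothetical custom validation attribute: reject values containing angle brackets
public class NoAngleBracketsAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        var text = value as string;

        // Null or empty values are left for [Required] to handle
        if (string.IsNullOrEmpty(text))
        {
            return true;
        }

        return !(text.Contains("<") || text.Contains(">"));
    }
}
[/sourcecode]

Applying [NoAngleBrackets] to a model property then participates in the same ModelState.IsValid check shown above.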

To learn more about securing your .NET applications with ASP.NET MVC validation, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Using Identity for Authentication and Authorization

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at the authentication and authorization security features built into the ASP.NET MVC framework.

Implementing authentication and authorization mechanisms in a web application has become a trivial task with the powerful ASP.NET Identity system. The ASP.NET membership system was originally created to satisfy membership requirements, covering Forms Authentication with a SQL Server database for user names, passwords, and profile data. It now includes a more substantial range of web application data storage options.

One of the advantages of the ASP.NET Identity system is its two-fold usage: it may be either added to an existing project or configured during the creation of an application. ASP.NET Identity libraries are available through NuGet Packages, so they may be added to an existing project via the NuGet Package Manager by simply searching for Microsoft ASP.NET Identity.

Configuration during creation of an ASP.NET project is easy. Here’s an example:

Creating a new ASP.NET web application

In Visual Studio 2013, all the authentication options are available on the “New Project” screen. Simply select the ‘Change Authentication’ button, and you are presented with the following options: Individual User Accounts, Organizational Accounts, or Windows Authentication. The detailed description of each authentication type can be found here.

Authentication Options

In this case, we opt for ‘Individual User Accounts.’ With ‘Individual User Accounts,’ the default behavior of ASP.NET Identity provides local user registration by creating a username and password, or by signing in with social network providers such as Google+, Facebook, Twitter, or their Microsoft account. The default data storage for user profiles in ASP.NET Identity is a SQL Server database, but can also be configured for MySQL, Azure, MongoDB, and many more.

Authentication

The auto-generated project files contain several classes that handle authentication for the application:

App_Start/Startup.Auth.cs – Contains the authentication configuration setup during bootstrap of ASP.NET MVC application. See the following ConfigureAuth snippet:

[sourcecode]
public void ConfigureAuth(IAppBuilder app)
{
// Configure the db context, user manager and signin manager to use a single
//instance per request
app.CreatePerOwinContext(ApplicationDbContext.Create);
app.CreatePerOwinContext(ApplicationUserManager.Create);
app.CreatePerOwinContext(ApplicationSignInManager.Create);

// Enable the application to use a cookie to store information for the signed
// in user and to use a cookie to temporarily store information about a user
// logging in with a third party login provider
// Configure the sign in cookie
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/Account/Login"),
Provider = new CookieAuthenticationProvider
{
// Enables the application to validate the security stamp when the user
// logs in. This is a security feature which is used when you change
// a password or add an external login to your account.
OnValidateIdentity = SecurityStampValidator.OnValidateIdentity(
validateInterval: TimeSpan.FromMinutes(30),
regenerateIdentity: (manager, user) =>
user.GenerateUserIdentityAsync(manager))
}
});

app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

// Enables the application to temporarily store user information when they are
// verifying the second factor in the two-factor authentication process.
app.UseTwoFactorSignInCookie(DefaultAuthenticationTypes.TwoFactorCookie
, TimeSpan.FromMinutes(5));

// Enables the application to remember the second login verification factor such
// as phone or email.
// Once you check this option, your second step of verification during the login
// process will be remembered on the device where you logged in from.
// This is similar to the RememberMe option when you log in.
app.UseTwoFactorRememberBrowserCookie(DefaultAuthenticationTypes.TwoFactorRememberBrowserCookie);

// Enable login in with third party login providers
app.UseTwitterAuthentication(
consumerKey: "[Never hard-code this]",
consumerSecret: "[Never hard-code this]");
}
[/sourcecode]

A few things in this class are very important to the authentication process:

  • The CookieAuthenticationOptions class controls the authentication cookie’s HttpOnly, Secure, and timeout options.
  • Two-factor authentication via email or SMS is built into ASP.NET Identity
  • Social logins via Microsoft, Twitter, Facebook, or Google are supported.

Additional details regarding configuration of authentication can be found here.

App_Start/IdentityConfig.cs – Configuring and extending ASP.NET Identity:

ApplicationUserManager – This class extends UserManager from ASP.NET Identity. In the Create snippet below, password construction rules, account lockout settings, and two-factor messages are configured:

[sourcecode]
public class ApplicationUserManager : UserManager<ApplicationUser>
{
public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
{
var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(context.Get<ApplicationDbContext>()));
// Configure validation logic for usernames
manager.UserValidator = new UserValidator<ApplicationUser>(manager)
{
AllowOnlyAlphanumericUserNames = false,
RequireUniqueEmail = true
};

// Configure validation logic for passwords
manager.PasswordValidator = new PasswordValidator
{
RequiredLength = 6,
RequireNonLetterOrDigit = true,
RequireDigit = true,
RequireLowercase = true,
RequireUppercase = true,
};

// Configure user lockout defaults
manager.UserLockoutEnabledByDefault = true;
manager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(5);
manager.MaxFailedAccessAttemptsBeforeLockout = 5;

// Register two factor authentication providers. This application uses Phone
// and Emails as a step of receiving a code for verifying the user
// You can write your own provider and plug it in here.
manager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<ApplicationUser>
{
MessageFormat = "Your security code is {0}"
});

manager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<ApplicationUser>
{
Subject = "Security Code",
BodyFormat = "Your security code is {0}"
});
manager.EmailService = new EmailService();
manager.SmsService = new SmsService();
var dataProtectionProvider = options.DataProtectionProvider;
if (dataProtectionProvider != null)
{
manager.UserTokenProvider =
new DataProtectorTokenProvider<ApplicationUser>(dataProtectionProvider.Create("ASP.NET Identity"));
}
return manager;
}
}
[/sourcecode]

You’ll notice a few important authentication settings in this class:

  • The PasswordValidator allows 6 character passwords by default. This should be modified to fit your organization’s password policy
  • The password lockout policy disables user accounts for 5 minutes after 5 invalid attempts
  • If two-factor is enabled, the email or SMS services and messages are configured for the application user manager

Models/IdentityModels.cs – This class defines the user identity that is used in authentication and stored to the database (ApplicationUser extends IdentityUser from ASP.NET Identity). The “default” database schema provided by ASP.NET Identity is limited out of the box. It is generated by Entity Framework using the Code First approach during the first launch of the web application. More information on how the ApplicationUser class can be extended with new fields can be found here.

[sourcecode]
public class ApplicationUser : IdentityUser
{
//Add custom user properties here

public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager)
{
// Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType
var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie);

// Add custom user claims here
return userIdentity;
}
}
[/sourcecode]

Controllers/AccountController.cs – Finally, we get to the class that is actually using the configuration for performing registration and authentication. The initial implementation of login is shown below:

[sourcecode]
// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
if (!ModelState.IsValid)
{
return View(model);
}

var result = await SignInManager.PasswordSignInAsync(model.Email
, model.Password, model.RememberMe, shouldLockout: false);
switch (result)
{
case SignInStatus.Success:
return RedirectToLocal(returnUrl);
case SignInStatus.LockedOut:
return View("Lockout");
case SignInStatus.RequiresVerification:
return RedirectToAction("SendCode", new { ReturnUrl = returnUrl
, RememberMe = model.RememberMe });
case SignInStatus.Failure:
default:
ModelState.AddModelError("", "Invalid login attempt.");
return View(model);
}
}
[/sourcecode]

It is important to realize that the Login action sets the shouldLockout parameter to false by default. This means that account lockout is disabled, leaving the application vulnerable to brute-force password attacks out of the box. Make sure you modify this code and set shouldLockout to true if your application requires the lockout feature.

Looking at the code above, we can see that the social login functionality and two-factor authentication are automatically handled by ASP.NET Identity and do not require additional code.
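For reference, the lockout fix mentioned above is a one-argument change to the same Login action (sketch):

[sourcecode]
// Enable account lockout so repeated failed logins trip the lockout policy
var result = await SignInManager.PasswordSignInAsync(model.Email
, model.Password, model.RememberMe, shouldLockout: true);
[/sourcecode]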

Authorization

Having created an account, proceed with setting up your authorization rules. The Authorize attribute ensures that the identity of the user accessing a particular resource is known and is not anonymous.

[sourcecode]
[Authorize]
public class HomeController : Controller
{
public ActionResult Index()
{
return View();
}

[Authorize(Roles="Admin")]
public ActionResult About()
{
ViewBag.Message = "Your application description page.";
return View();
}

public ActionResult Contact()
{
ViewBag.Message = "Your contact page.";
return View();
}
}
[/sourcecode]

Adding the Authorize attribute to the HomeController class guarantees authenticated access to all the actions in the controller. Role-based authorization is done by adding the Authorize attribute with the Roles parameter. The example above shows Roles="Admin" on the About() action, meaning that access to the About() action can be performed only by an authorized user that has the Admin role.

To learn more about securing your .NET applications with ASP.NET Identity, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

WhatWorks in AppSec: ASP.NET – Defend Against Cross-Site Scripting Using The HTML Encode Shortcuts

Eric Johnson is an instructor with the SANS Institute for DEV544: Secure Coding in .NET: Developing Defensible Applications, and an information security engineer at a financial institution, where he is responsible for secure code review assessments of Internet facing web applications. Eric has spent nine years working in software development with over five years focusing on ASP .NET web application security. His experience includes software development, secure code review, risk assessment, static source code analysis, and security research. Eric completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University. He currently holds the CISSP and GSSP-.NET certifications and is located in Las Vegas, NV.

The .NET 4.0 & 4.5 frameworks introduced new syntax shortcuts to HTML encode dynamic content being rendered to the browser. These shortcuts provide an easy way to protect against Cross-Site Scripting (XSS) attacks in the newer versions of the .NET framework.

All Frameworks – Vulnerable Code Example

First, let’s review the two vulnerable instances of XSS in all versions of the .NET framework shown in the code snippet below. The first exploitable instance on line 1 is writing the dynamic server side variable, ProductType, to the browser without HTML encoding. The second exploitable instance on line 7 is writing the dynamic server side variable, Name, to the browser during the data binding of the products grid view. If an attacker had the ability to edit these fields, then a malicious value, such as <script>alert(document.cookie);</script>, could be used to inject content into the page.
[sourcecode]

<h1><%= ProductType %></h1>
<asp:GridView ID="gvProducts" runat="server" AutoGenerateColumns="false"
    ItemType="Data.Product">
    <Columns>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate>
                <%# Item.Name %>
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>
[/sourcecode]

.NET 3.5 – Secure Code Example

Next, let’s review how these instances would be mitigated in .NET 3.5 (as well as any version prior to .NET 3.5). In earlier versions of the framework, both exploitable instances would be modified to HTML encode the dynamic server side variables using the Microsoft Web Protection Library (formerly known as the AntiXSS library). If an attacker supplied the same malicious content described above in the vulnerable code example, then the HTML encoded value, &lt;script&gt;alert(document.cookie);&lt;/script&gt;, would not break out of the HTML context and execute in the browser.
[sourcecode]

<h1><%= Microsoft.Security.Application.Encoder.HtmlEncode(ProductType) %></h1>
<asp:GridView ID="gvProducts" runat="server" AutoGenerateColumns="false"
    ItemType="Data.Product">
    <Columns>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate>
                <%# Microsoft.Security.Application.Encoder.HtmlEncode(Item.Name) %>
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

[/sourcecode]

.NET 4.5 – Secure Code Example

Finally, let’s review how these instances would be mitigated in .NET 4.5 using the HTML encoding shortcuts provided by the framework. The instance on line 1 uses the HTML encode rendering shortcut (<%: %>) to HTML encode the dynamic ProductType value. The instance on line 7 uses the HTML encode binding shortcut (<%#: %>) to HTML encode the dynamic Name value being bound by the grid view.
[sourcecode]

<h1><%: ProductType %></h1>
<asp:GridView ID="gvProducts" runat="server" AutoGenerateColumns="false"
    ItemType="Data.Product">
    <Columns>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate>
                <%#: Item.Name %>
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

[/sourcecode]

Default Encoding Library

Developers can further increase the strength of their default encoding library by overriding the default encoder to use the AntiXSS Library built into the .NET 4.5 framework.
[sourcecode]

<httpRuntime encoderType="System.Web.Security.AntiXss.AntiXssEncoder,
    System.Web, Version=4.0.0.0, Culture=neutral,
    PublicKeyToken=b03f5f7f11d50a3a" />

[/sourcecode]

Summary

The HTML rendering shortcut (<%: %>) and binding shortcut (<%#: %>) provide a quick and simple way for developers to protect their web applications from XSS attacks when writing dynamic data to HTML contexts.

However, it should be noted that these shortcuts only provide XSS protection within an HTML context. Dynamic data being written to HTML attribute, JavaScript, CSS, and other contexts will each require the specific encoding algorithm provided by the AntiXSS Library.
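For those other contexts, the Encoder class exposes context-specific methods. A brief sketch, assuming the Microsoft Web Protection Library (Microsoft.Security.Application) is referenced as in the .NET 3.5 example above, and with an illustrative untrusted value:

[sourcecode]
using Microsoft.Security.Application;

// Illustrative untrusted value written into different output contexts
string userInput = "<untrusted data>";

string attributeValue = Encoder.HtmlAttributeEncode(userInput); // HTML attribute context
string scriptValue = Encoder.JavaScriptEncode(userInput);       // JavaScript context
string cssValue = Encoder.CssEncode(userInput);                 // CSS context
string urlValue = Encoder.UrlEncode(userInput);                 // URL query string context
[/sourcecode]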

Ask the Expert – James Jardine

James Jardine is a senior security consultant at Secure Ideas and the founder of Jardine Software. James has spent over twelve years working in software development with over seven years focusing on application security. His experience includes penetration testing, secure development lifecycle creation, vulnerability management, code review, and training.  He has worked with mobile, web, and Windows development with the Microsoft .NET framework. James is a mentor for the Air Force Association’s Cyber Patriot competition. He currently holds the GSSP-NET, CSSLP, MCAD, and MCSD certifications and is located in Jacksonville, Florida.
This is the second in a series of interviews with appsec experts about threat modeling.
1. Threat Modeling is supposed to be one of the most effective and fundamental practices in secure software development. But a lot of teams that are trying to do secure development find threat modeling too difficult and too expensive. Why is threat modeling so hard – or do people just think it is hard because they don’t understand it?
I believe there is a lack of understanding when it comes to threat modeling. Developers hear the term thrown around, but it can be difficult to learn from a book or an online article how to actually create a threat model. Without a good understanding of the process, people get frustrated and tend to stay away from it. In addition to the lack of understanding of threat modeling, I believe that sometimes it is a lack of understanding of the application. If the person or team working on the threat model doesn’t understand the application and the components it talks to, it can severely impact the threat model.

Threat modeling is time-consuming, especially the first iteration of a threat model, tied to the first time the team has created one. In the development world, that time equates to cost with no direct return on investment. Developers are under tight timelines to get software out the door, and many teams are not given the time to include threat modeling. The time factor depends on your experience, the size of the application, and your development process, and it will get quicker as you start doing threat modeling on a regular basis. The cost of doing the threat modeling, and adhering to the mitigations during development, will be less than the time spent going back and fixing the vulnerabilities later.

2. What are the keys to building a good threat model? What tools should you use, what practices should you follow? How do you know when you have built a good threat model – when are you done?

Motivation is the biggest key to building a good threat model. If you are forced into threat modeling, you are not going to take the time to do a good job. In addition, you need an understanding of what you want to accomplish with your threat model. Some threat models use data flow diagrams, while others do not. You need to identify what works for your situation. As long as the threat model is helping identify the potential risk areas, threats, and mitigations, then it is doing its job. It is important to set aside some time for threat modeling and focus on it. Too often, I see it treated as a task that is buried with other tasks and handed off to one developer. This should be a collaborative effort with a specific focus.

There are many different tools available to help with threat modeling. Microsoft has the SDL Threat Modeling Tool, which is a great tool to use if you don’t have any threat modeling experience. Microsoft also has the Elevation of Privilege card game, which is meant to help developers become familiar with threat modeling in a fun way. You could also just use Microsoft Excel or some other spreadsheet software. It is all up to what works for you to get the results that you need.

Threat models are living documents. They should not really ever be finished, because each update to the application could affect the threat model. You can usually tell you are done with the current threat model when you are out of threats and mitigations. Think of making popcorn: when the time between pops gets longer, the popcorn is probably done. The same goes for threat modeling; when you haven’t identified a threat or mitigation in a while, it is probably finished.

3. Where should you start with threat modeling? Where’s your best pay-back, are there any quick wins?

In theory, threat modeling should be done before you start actually writing code for an application. However, in most cases, threat modeling is implemented either after the code is started, or after the application is already in production. It is never too late to do threat modeling for your applications. If you are still in the design phase, then you are at an advantage for starting a threat model following best practices. If you are further along in the development lifecycle, then start high level and work your way down as you can. Oftentimes, the threat model will go a few levels deep, getting more detailed. Each level may add more actors and more time. You can start with just the top level, identifying inputs/outputs and trust boundaries, which will be a big security gain. Then as time progresses, you can add a level or additional detail to the current level.

If your company has a bunch of applications that are somewhat similar, you can create a template for your company threat models that can help fill in some generic stuff. For example, protecting data between the client-server, or server-server.   This should be in all of your apps, so you can have a base template where you don’t need to do all the legwork, but just verify it is correct for the current application.  This can help increase the speed of threat modeling going forward.