ASP.NET MVC: Using Identity for Authentication and Authorization

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will take a look at the authentication and authorization security features built into the ASP.NET MVC framework.

Implementing authentication and authorization in a web application has become a trivial task thanks to the powerful ASP.NET Identity system. The original ASP.NET membership system was created to satisfy membership requirements, covering Forms Authentication with a SQL Server database for user names, passwords, and profile data. ASP.NET Identity now supports a much broader range of web application data storage options.

One advantage of ASP.NET Identity is that it can be adopted in two ways: added to an existing project, or configured during creation of a new application. The ASP.NET Identity libraries are available as NuGet packages, so they can be added to an existing project via the NuGet Package Manager by simply searching for Microsoft ASP.NET Identity.

Configuration during creation of an ASP.NET project is easy. Here’s an example:

Creating a new ASP.NET web application

In Visual Studio 2013, all the authentication options are available on the “New Project” screen. Simply select the ‘Change Authentication’ button, and you are presented with the following options: Individual User Accounts, Organizational Accounts, or Windows Authentication. The detailed description of each authentication type can be found here.

Authentication Options

In this case, we opt for 'Individual User Accounts.' With 'Individual User Accounts,' the default behavior of ASP.NET Identity provides local user registration by creating a username and password, or by signing in with social network providers such as Google+, Facebook, or Twitter, or with a Microsoft account. The default data store for user profiles in ASP.NET Identity is a SQL Server database, but it can also be configured for MySQL, Azure, MongoDB, and many more.


The auto-generated project files contain several classes that handle authentication for the application:

App_Start/Startup.Auth.cs – Contains the authentication configuration, set up during bootstrap of the ASP.NET MVC application. See the following ConfigureAuth snippet:

public void ConfigureAuth(IAppBuilder app)
{
    // Configure the db context, user manager and signin manager to use a single
    // instance per request
    app.CreatePerOwinContext(ApplicationDbContext.Create);
    app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);
    app.CreatePerOwinContext<ApplicationSignInManager>(ApplicationSignInManager.Create);

    // Enable the application to use a cookie to store information for the signed
    // in user and to use a cookie to temporarily store information about a user
    // logging in with a third party login provider
    // Configure the sign in cookie
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
        LoginPath = new PathString("/Account/Login"),
        Provider = new CookieAuthenticationProvider
        {
            // Enables the application to validate the security stamp when the user
            // logs in. This is a security feature which is used when you change
            // a password or add an external login to your account.
            OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
                validateInterval: TimeSpan.FromMinutes(30),
                regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
        }
    });
    app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);

    // Enables the application to temporarily store user information when they are
    // verifying the second factor in the two-factor authentication process.
    app.UseTwoFactorSignInCookie(DefaultAuthenticationTypes.TwoFactorCookie, TimeSpan.FromMinutes(5));

    // Enables the application to remember the second login verification factor such
    // as phone or email.
    // Once you check this option, your second step of verification during the login
    // process will be remembered on the device where you logged in from.
    // This is similar to the RememberMe option when you log in.
    app.UseTwoFactorRememberBrowserCookie(DefaultAuthenticationTypes.TwoFactorRememberBrowserCookie);

    // Enable logging in with third party login providers
    app.UseTwitterAuthentication(
        consumerKey: "[Never hard-code this]",
        consumerSecret: "[Never hard-code this]");
}

A few things in this class are very important to the authentication process:

  • The CookieAuthenticationOptions class controls the authentication cookie’s HttpOnly, Secure, and timeout options.
  • Two-factor authentication via email or SMS is built into ASP.NET Identity.
  • Social logins via Microsoft, Twitter, Facebook, or Google are supported.

Additional details regarding configuration of authentication can be found here.

App_Start/IdentityConfig.cs – Configuring and extending ASP.NET Identity:

ApplicationUserManager – This class extends UserManager from ASP.NET Identity. In the Create snippet below, password construction rules, account lockout settings, and two-factor messages are configured:

public class ApplicationUserManager : UserManager<ApplicationUser>
{
    public ApplicationUserManager(IUserStore<ApplicationUser> store) : base(store) { }

    public static ApplicationUserManager Create(IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
    {
        var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(context.Get<ApplicationDbContext>()));
        // Configure validation logic for usernames
        manager.UserValidator = new UserValidator<ApplicationUser>(manager)
        {
            AllowOnlyAlphanumericUserNames = false,
            RequireUniqueEmail = true
        };

        // Configure validation logic for passwords
        manager.PasswordValidator = new PasswordValidator
        {
            RequiredLength = 6,
            RequireNonLetterOrDigit = true,
            RequireDigit = true,
            RequireLowercase = true,
            RequireUppercase = true,
        };

        // Configure user lockout defaults
        manager.UserLockoutEnabledByDefault = true;
        manager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(5);
        manager.MaxFailedAccessAttemptsBeforeLockout = 5;

        // Register two factor authentication providers. This application uses Phone
        // and Emails as a step of receiving a code for verifying the user
        // You can write your own provider and plug it in here.
        manager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<ApplicationUser>
        {
            MessageFormat = "Your security code is {0}"
        });
        manager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<ApplicationUser>
        {
            Subject = "Security Code",
            BodyFormat = "Your security code is {0}"
        });
        manager.EmailService = new EmailService();
        manager.SmsService = new SmsService();

        var dataProtectionProvider = options.DataProtectionProvider;
        if (dataProtectionProvider != null)
        {
            manager.UserTokenProvider =
                new DataProtectorTokenProvider<ApplicationUser>(dataProtectionProvider.Create("ASP.NET Identity"));
        }
        return manager;
    }
}
You’ll notice a few important authentication settings in this class:

  • The PasswordValidator allows 6-character passwords by default. This should be modified to fit your organization's password policy.
  • The lockout policy disables user accounts for 5 minutes after 5 invalid login attempts.
  • If two-factor authentication is enabled, the email or SMS services and messages are configured for the application user manager.

Models/IdentityModels.cs – This class defines the user identity that is used in authentication and stored in the database (ApplicationUser extends IdentityUser from ASP.NET Identity). The default database schema provided by ASP.NET Identity is limited out of the box. It is generated by Entity Framework using the Code First approach during the first launch of the web application. More information on how the ApplicationUser class can be extended with new fields can be found here.

public class ApplicationUser : IdentityUser
{
    // Add custom user properties here

    public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager)
    {
        // Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType
        var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie);
        // Add custom user claims here
        return userIdentity;
    }
}

Controllers/AccountController.cs – Finally, we get to the class that is actually using the configuration for performing registration and authentication. The initial implementation of login is shown below:

// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
    if (!ModelState.IsValid)
    {
        return View(model);
    }

    var result = await SignInManager.PasswordSignInAsync(model.Email,
        model.Password, model.RememberMe, shouldLockout: false);
    switch (result)
    {
        case SignInStatus.Success:
            return RedirectToLocal(returnUrl);
        case SignInStatus.LockedOut:
            return View("Lockout");
        case SignInStatus.RequiresVerification:
            return RedirectToAction("SendCode", new { ReturnUrl = returnUrl,
                RememberMe = model.RememberMe });
        case SignInStatus.Failure:
        default:
            ModelState.AddModelError("", "Invalid login attempt.");
            return View(model);
    }
}

It is important to realize that the Login action sets the shouldLockout parameter to false by default. This means that account lockout is disabled, leaving the application vulnerable to brute-force password attacks out of the box. Make sure you modify this code and set shouldLockout to true if your application requires the lockout feature.

Looking at the code above, we can see that social login functionality and two-factor authentication are handled automatically by ASP.NET Identity and do not require additional code.


Having created an account, proceed with setting up your authorization rules. The Authorize attribute ensures that the identity of the user operating on a particular resource is known and is not anonymous.

[Authorize]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    [Authorize(Roles = "Admin")]
    public ActionResult About()
    {
        ViewBag.Message = "Your application description page.";
        return View();
    }

    public ActionResult Contact()
    {
        ViewBag.Message = "Your contact page.";
        return View();
    }
}
Adding the Authorize attribute to the HomeController class guarantees authenticated access to all actions in the controller. Role-based authorization is done by adding the Authorize attribute with a Roles parameter. The example above shows Roles = "Admin" on the About() action, meaning that the About() action can be invoked only by an authenticated user in the Admin role.

To learn more about securing your .NET applications with ASP.NET Identity, sign up for DEV544: Secure Coding in .NET!

About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

LinkedIn OAuth Open Redirect Disclosure

During a recent mobile security engagement, I discovered an Insecure Redirect vulnerability in the LinkedIn OAuth 1.0 implementation that could allow an attacker to conduct phishing attacks against LinkedIn members. This vulnerability could be used to compromise LinkedIn user accounts, and gather sensitive information from those accounts (e.g. personal information and credit card numbers). The following describes this security vulnerability in detail and how I discovered it.

The Vulnerability
Section 4.7 of the OAuth 1.0 specification (RFC 5849) warns of possible phishing attacks, depending on the implementation. A vulnerable OAuth implementation could enable phishing attacks via user-agent redirection. The stated emphasis, reiterated in OAuth 2.0 (RFC 6749, via the "redirect_uri" parameter), is intended to raise awareness of open redirection as a security vulnerability that should be avoided.

LinkedIn's implementation lacked validation between the value of its "redirect_uri" parameter and the origin (base) of the registered client's endpoint. In other words, if a redirect_uri (an optional value) was entered when the OAuth integration was established, that redirect_uri value was never checked to ensure it belonged to a valid client. Manipulating this parameter to point at an unexpected location caused the application to redirect the user away from the intended web site.
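The missing check can be sketched as follows. This is an illustrative Python sketch with hypothetical names (is_valid_redirect is not LinkedIn code); the point is simply that the scheme and host of the supplied redirect_uri must match the URI registered for the client:

```python
from urllib.parse import urlparse

def is_valid_redirect(registered_uri: str, supplied_uri: str) -> bool:
    """Accept a redirect_uri only if its scheme and host match the
    URI registered for the OAuth client (hypothetical helper)."""
    registered = urlparse(registered_uri)
    supplied = urlparse(supplied_uri)
    return (supplied.scheme == registered.scheme == "https"
            and supplied.netloc == registered.netloc)

# The vulnerable behavior: any_url was accepted without this check.
print(is_valid_redirect("https://client.example.com/cb",
                        "https://evil.example.net/phish"))  # False
```

Only the origin is compared, so a client can still vary the path and query of its callback; an attacker-controlled host is rejected.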

The Attack
According to LinkedIn API documentation, the steps involved in obtaining an OAuth authorization token are as follows (prior to the fix recently put in place by LinkedIn):

First, a GET request specifying the desired access permissions is made:

,rw_nus,r_network&state=some_random_string_of_your_choice&redirect_uri=any_url

Upon successful authorization, a code is generated by LinkedIn and the end user's browser (user-agent) is redirected to the URL specified in the redirect_uri parameter. Notice that the domain specified in the "redirect_uri" parameter of the request above is the payload supplied by our hypothetical attacker, which redirects the end user to the malicious site.

Next, the browser redirects the user to an impersonated LinkedIn login page via the “redirect_uri” parameter, which attempts to gather the LinkedIn user’s credentials.

Finally, the victim then enters their credentials, trusting that the site is the official LinkedIn web application.

The proof-of-concept demonstrated the ability to phish for users' credentials, but the impact could be larger if the attack were used against LinkedIn Premium members, whose credit card numbers are stored with their personal account information.

LinkedIn Remediation
I contacted LinkedIn about the vulnerability in October 2013, and working with the LinkedIn Information Security team was a pleasant experience. They confirmed the vulnerability within two days. From confirmation of the vulnerability to deployment of its remediation, LinkedIn spent approximately seven months (October 2013 to May 2014) developing the patch. LinkedIn took advantage of this opportunity to upgrade its OAuth implementation to v2.0, while still supporting its legacy v1.0 API for compatibility with existing applications. Based on the information provided by LinkedIn, the remediation required deep collaboration with several partners on how OAuth tokens are handled.

At the time of writing of this document, a patch for the disclosed vulnerability had already been deployed. For a detailed disclosure, please visit to obtain a free electronic copy (PDF).

About the Author
Phillip Pham is a security engineer with a strong passion for Information Security. Phillip has worked in various sectors and for organizations of various sizes, ranging from small start-ups to Fortune 500 companies. Prior to transitioning into an Information Security career, Phillip spent ten years in software engineering. His last five years of Information Security experience have been focused on Application Security. Some of his application security assessment strengths (both on-premise and cloud-based) include web applications, web services, APIs, Windows client applications, and mobile applications. He currently holds a GCIH certification and is co-founder of APT Security Solutions. Follow Phillip on Twitter @philaptsec.

HTML5: Risky Business or Hidden Security Tool Chest?

I was lucky enough to present on using HTML5 to improve security at the recent OWASP AppSec USA Conference in New York City. OWASP has now made a video of the talk available on YouTube for anybody interested.


Ask the Expert – Johannes Ullrich

Johannes Ullrich is the Chief Research Officer for the SANS Institute, where he is responsible for the SANS Internet Storm Center (ISC) and the GIAC Gold program. Prior to working for SANS, Johannes worked as a lead support engineer for a Web development company and as a research physicist. Johannes holds a PhD in Physics from SUNY Albany and is located in Jacksonville, Florida.

1. There have been so many reports of passwords being stolen lately. What is going on? Is the password system that everyone is using broken?

Passwords are broken. A password is supposed to be a secret you share with a site to authenticate yourself. For this to work, the secret may only be known to you and that particular site, which is no longer true if you use the same password with more than one site. A password also has to be hard to guess but easy to remember, and it is virtually impossible to come up with numerous passwords that meet both criteria. As a result, users reuse passwords, or use simple passwords that are easy to guess. To make the password system work, most users would have to remember several hundred hard-to-guess passwords.

2. What are the best practices today for managing and storing passwords for applications?

As recent breaches have shown, passwords may be leaked via various vulnerabilities. As a first step, you need to ensure that your applications are well written and as secure as possible. Code with access to passwords needs to be reviewed particularly carefully, and only a limited set of database users should have access to the password data, so that a SQL injection flaw in a part of your code that does not need password data cannot be escalated to expose it.

Password data should not be stored in clear text, but encrypted or hashed. Most applications never need to know the clear text form of the password, making cryptographic one-way hashes an ideal storage mechanism for passwords. But it is important to recognize that any encryption or hashing scheme is still vulnerable to offline brute-force attacks once the data is leaked. After a password database is leaked, you can only delay the attack; you cannot prevent it.

A proper hash algorithm should delay the attacker as much as possible without being so expensive that it causes performance problems for the application. Any modern hash algorithm, like SHA-256, will work, but you may need to apply it multiple times in order to slow down the attacker sufficiently. Algorithms like PBKDF2 and bcrypt are deliberately slower and may be preferred over fast, efficient hash algorithms like SHA. It is important, however, to use an algorithm that is well supported by your development environment. Applying a fast algorithm like SHA-256 many times can, in the end, be as effective as applying a more expensive algorithm once (or fewer times).

In addition, data should be added to the password before it is hashed. This data is frequently referred to as "salt", and if possible, each password should use a different salt. The salt serves two purposes. First, multiple users tend to pick the same password; if those passwords are hashed as-is, the hashes will be identical, and an attacker breaking one hash gains access to multiple accounts. If a unique salt is added first, the password hashes will differ.

Second, a sufficiently large salt makes brute forcing harder. One of the most efficient brute-force methods is the use of "rainbow tables": pre-computed hash databases compressed to allow very efficient storage and queries. Currently, these tables are widely available for strings up to 8 characters long, and larger tables exist for common dictionary terms. By adding a long unique salt, the string to be hashed exceeds the length commonly found in rainbow tables, and by picking a non-dictionary salt, an attacker would have to create specific rainbow tables for each salt value, making them impractical.

The salt is stored in the clear with the account information; sometimes it is simply prepended to the hash. A random salt is ideal, but in many cases a simple salt may be derived from information like the user id or the timestamp of when the account was created.

As a basic summary of secure password storage: take the user's password, prepend a salt unique to this account, and hash the result as many times as you can afford using a commonly available hashing algorithm.

Over time, you should expect CPUs to become more powerful, and as a result, the number of iterations required to protect your password will increase. Design your application in such a way as to make it easy to change the number of iterations.
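As an illustration of the advice above (a sketch, not production code), the Python standard library's PBKDF2 implementation covers the salt and iteration steps, and storing the iteration count alongside the salt and hash makes it easy to raise later:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # raise this as CPUs get faster

def hash_password(password: str, iterations: int = ITERATIONS) -> str:
    salt = os.urandom(16)  # unique random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Store the iteration count and salt next to the hash so both can evolve
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(digest.hex(), digest_hex)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("wrong guess", record))                   # False
```

Because each record carries its own iteration count, old hashes keep verifying while new accounts can be created with a higher count.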


3. How can people test or review their apps to make sure that they are handling passwords in a secure way? What should they look for?

Quick answer: when reviewing an application, make sure that authentication is implemented only once and can easily be changed later to adopt a more complex hashing scheme. Developers should be instructed to use these authentication libraries and refrain from writing code that accesses the password database directly.

Password Tracking in Malicious iOS Apps

In this article, John Bielich and Khash Kiani introduce OAuth, and demonstrate one type of approach in which a malicious native client application can compromise sensitive end-user data.

Earlier this year, Khash posted a paper entitled Four Attacks on OAuth – How to Secure Your OAuth Implementation, which introduced a common protocol flow with specific examples and a few insecure implementations. For more information about the protocol, its use cases, and key concepts, please refer to that post and other freely available OAuth resources on the web.

This article assumes that the readers are familiar with the detailed principles behind OAuth, and that they know how to make GET and POST requests over HTTPS.  However, we will still provide a high-level introduction to the protocol.


OAuth is a user-centric authorization protocol that is gaining popularity within the social networking space, and amongst other resource providers such as Google and SalesForce. Furthermore, sites such as Facebook are increasingly positioning themselves as Single Sign-On (SSO) providers for other sites using OAuth as the platform. This makes credentials from such providers extremely high-value targets for attackers.

OAuth is a permission-based authorization scheme.

It allows a user to grant access to a third-party application to access his or her protected content hosted by a different site.  This is done by having the user redirected to the OAuth provider and authenticating on that server, thereby bypassing the need for the client application to know the credentials.  The main goal this protocol attempts to achieve is defeating the password anti-pattern in which users share their credentials with various third-party applications for their service offerings.

OAuth and Embedded Mobile Applications

In mobile operating systems such as iOS, there are controls (in this example, UIWebView) that can be embedded into the application so that the login page is served up within the application itself. Client applications often do this to provide a good user experience. However, this approach violates the trust context by not using a web browser (Safari) external to the app, and several of the browser's controls and the OAuth redirect schemes are then unavailable to facilitate user authorization.

The ideal application flow typically consists of the following steps:

1. The client application requests a specific scope of access.

2. The client application switches the user to an external web browser, and displays the OAuth authentication page, asking for consent and user permission to grant access to the client application.

3. Upon user authentication and approval, the client application receives an Access Token that can be leveraged to access protected resources hosted by the Service Provider.

4. The callback feature of OAuth is typically leveraged to call the local client app in this scenario.

Building a Malicious Google OAuth Client

As mentioned, the main purpose behind OAuth is to defeat the password anti-pattern, preventing identity attacks through a delegation model where users do not share their Server credentials with third-party Client applications.  However, by using embedded web components with OAuth in native mobile applications, we can provide an attack landscape for malicious rich clients to easily compromise user credentials. This compromise would be undetected by either the user or the service provider, and is especially sneaky, as it provides the user with a false sense of safety in the authentication workflow.

In this section, we will demonstrate a malicious iOS application based on Google's own OAuth sample iOS application. Our simple modifications steal users' credentials as they log in through Google's authorization endpoint.

The Attack

This sample exploit consists of the following overall approach:

1. Download the native OAuthSampleTouch mobile Google application.

2. Using Xcode and the iOS SDK, modify the UIWebViewDelegate's webView:shouldStartLoadWithRequest:navigationType: callback method to intercept the login page before the POST request is sent.

3. Capture the credentials out of the HTTP body.

4. Use the captured credentials. In this case, we simply output the values; we could just as easily store the credentials or send them in the background to another destination.

The Attack Details

The first step is to download, compile, and run OAuthSampleTouch, which comes with the gtm-oauth code. This is the sample app, built on the Google framework, that we are compromising.

Figure 1 – Install OAuthSampleTouch

Second, find GTMOAuthViewControllerTouch. This class handles the interaction for the sample app, including the UIWebView used for authentication.

Figure 2 – Edit GTMOAuthViewController

Third, insert the malicious code into webView:shouldStartLoadWithRequest:navigationType:, which processes the request before it is sent from your device. This code effectively does the following:

  • Grabs the HTTP BODY and assigns it to a string
  • Creates an array of string components separated by the “&” token to get a name/value pair per array element
  • Iterates through the values, grabbing the “Email” and “Passwd” name/value pairs, which are Google’s username / password fields

Figure 3 – Insert evil code in webView:shouldStartLoadWithRequest:navigationType:
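The interception logic described above can be sketched as follows. The original is Objective-C; this is an illustrative Python rendering of the same string handling:

```python
def extract_credentials(http_body: str) -> dict:
    """Split a form-encoded POST body on '&' into name/value pairs and
    pull out Google's Email and Passwd fields (values stay URL-encoded)."""
    creds = {}
    for pair in http_body.split("&"):
        name, _, value = pair.partition("=")
        if name in ("Email", "Passwd"):
            creds[name] = value
    return creds

body = "continue=https%3A%2F%2Fwww.google.com&Email=victim%40gmail.com&Passwd=s3cret"
print(extract_credentials(body))
```

The same three steps appear in the Objective-C version: grab the HTTP body as a string, split on "&", then filter for the two field names.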

Fourth, we authenticate and grant access on the device.

Figure 4 – OAuth process with an embedded view

And lastly, we finish authenticating, granting access, and view the compromised credentials.

Figure 5 – Outputting credentials


The best solution for client application developers is to follow the recommended flow and use an external browser (in this case, Safari). Although it might diminish the user experience, this approach assures the user that the authentication process takes place outside the context of the application altogether.

From a user's perspective, do not trust native OAuth applications that fail to follow the recommended flow: such applications may be capturing your credentials rather than handing authentication off to a trusted, neutral third-party application such as Safari.

And lastly, the protocol needs to mature its support for native mobile applications: a stricter client authentication process using consumer credentials, better browser API support, and proper notification of OAuth providers upon unauthorized access.

About the Authors

John Bielich is a security consultant with 14 years of experience building and securing web-based software applications, as well as over one year in iOS development and security. He can be reached at:

Khash Kiani is a principal security consultant and researcher with over 13 years of experience in building and securing software applications for large defense, insurance, retail, technology, and health care organizations. He specializes in application security integration, penetration testing, and social-engineering assessments. Khash currently holds the GIAC GWAPT, GCIH, and GSNA certifications, has published papers and articles on various application security concerns, and has spoken at Black Hat USA. He can be reached at:

If you liked this post check out SANS’ new class on Secure iOS App Development.








Four Attacks on OAuth – How to Secure Your OAuth Implementation

This article briefly introduces an emerging open-protocol technology, OAuth, and presents scenarios and examples of how insecure implementations of OAuth can be abused maliciously.  We examine the characteristics of some of these attack vectors, and discuss ideas on countermeasures against possible attacks on users or applications that have implemented this protocol.

An Introduction to the Protocol

OAuth is an emerging authorization standard that is being adopted by a growing number of sites such as Twitter, Facebook, Google, Yahoo!, Netflix, Flickr, and several other Resource Providers and social networking sites.  It is an open-web specification for organizations to access protected resources on each other’s web sites.  This is achieved by allowing users to grant a third-party application access to their protected content without having to provide that application with their credentials.

Unlike OpenID, which is a federated authentication protocol, OAuth (short for Open Authorization) is intended for delegated authorization only; it does not attempt to address user authentication concerns.

There are several excellent online resources, referenced at the end of this article, that provide great material about the protocol and its use. However, we need to define a few key OAuth concepts that will be referenced throughout this article:

Key Concepts

  • Server or the Resource Provider controls all OAuth restrictions and is a web site or web services API where User keeps her protected data
  • User or the Resource Owner is a member of the Resource Provider, wanting to share certain resources with a third-party web site
  • Client or Consumer Application is typically a web-based or mobile application that wants to access User’s Protected Resources
  • Client Credentials are the consumer key and consumer secret used to authenticate the Client
  • Token Credentials are the access token and token secret used in place of User’s username and password


The actual business functionality in this example is real. However, the names of the organizations and users are fictional. provides Avon Barksdale a bill consolidation service based on the trust Avon has established with various Resource Providers, including

In the diagram below, we have demonstrated a typical OAuth handshake (aka OAuth dance) and delegation workflow, which includes the perspectives of the User, Client, and the Server in our example:

Figure 1: The entire OAuth 1.0 Process Workflow

Insecure Implementations and Solutions

In this section, we will examine some of the security challenges and insecure implementations of the protocol, and provide solution ideas. It is important to note that like most other protocols, OAuth does not provide native security nor does it guarantee the privacy of protected data.  It relies on the implementers of OAuth, and other protocols such as SSL, to protect the exchange of data amongst parties. Consequently, most security risks described below do not reside within the protocol itself, but rather its use.

1. Lack Of Data Confidentiality and Server Trust

While analyzing the specification, and the way OAuth leverages keys, tokens, and secrets to establish digital signatures and secure requests, one realizes that much of the effort put into this protocol was aimed at avoiding the need to use HTTPS requests altogether. The specification attempts to provide fairly robust schemes to ensure the integrity of a request using its signatures; however, it cannot guarantee a request's confidentiality, and that can result in several threats, some of which are listed below.

1.1 Brute Force Attacks Against the Server

An attacker with access to the network will be able to eavesdrop on the traffic and gain access to the specific request parameters and attributes such as oauth_signature, consumer_key, oauth_token, signature_method (HMAC-SHA1), timestamp, or custom parameter data. These values could assist the attacker in gaining enough understanding of the request to craft a packet and launch a brute force attack against the Server. Creating tokens and shared-secrets that are long, random and resistant to these types of attacks can reduce this threat.
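Such token generation might be sketched as follows (an illustration using Python's secrets module; the field names follow OAuth 1.0 conventions):

```python
import secrets

def new_token_credentials() -> dict:
    """Generate a long, random, URL-safe token and shared-secret
    that resist brute-force guessing."""
    return {
        "oauth_token": secrets.token_urlsafe(32),         # ~256 bits of entropy
        "oauth_token_secret": secrets.token_urlsafe(32),
    }

creds = new_token_credentials()
print(len(creds["oauth_token"]) >= 40)  # True: 32 random bytes encode to 43 chars
```

The key property is that the values come from a cryptographically secure source and are long enough that exhaustive guessing is infeasible.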

1.2 Lack of Server Trust

This protocol is all about authenticating the Client (consumer key and secret) and the User to the Server, but not the other way around. There is no protocol support for checking the authenticity of the Server during the handshakes. Essentially, through phishing or other exploits, user requests can be directed to a malicious Server where the User can receive malicious or misleading payloads. This could adversely impact not only the Users, but also the Client and Server in terms of credibility and bottom line.

1.3 Solutions

As we have seen, the OAuth signature methods were primarily designed for insecure, non-HTTPS communications. Therefore, TLS/SSL is the recommended approach to prevent any eavesdropping during the data exchange.

Furthermore, Resource Providers can limit the likelihood of a replay attack from a tampered request by implementing the protocol’s Nonce and Timestamp attributes. The value of the oauth_nonce attribute is a randomly generated value used to sign the Client request, and the oauth_timestamp defines the retention timeframe of the Nonce. The following example from Twitter .NET Development for OAuth Integration demonstrates the creation of these attributes by the application:

Figure 2: Sample oauth_nonce and oauth_timestamp methods
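The .NET figure is not reproduced here; the same idea, sketched in Python rather than .NET, is simply a random single-use value paired with the current epoch time:

```python
import secrets
import time

def generate_nonce() -> str:
    # Random, single-use value that signs exactly one Client request
    return secrets.token_hex(16)

def generate_timestamp() -> str:
    # Seconds since the Unix epoch; bounds the nonce's retention window
    return str(int(time.time()))
```

The Server rejects any request whose nonce it has already seen within the timestamp window, which defeats straightforward replay of a captured request.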

2. Insecure Storage of Secrets

The two areas of concern are to protect:

  • Shared-secrets on the Server
  • Consumer secrets on cross-platform clients

2.1 Servers

On the Server, in order to compute the oauth_signature, the Server must be able to access the shared-secrets (a combination of the consumer secret and token secret) in plaintext format, as opposed to a hashed value. Naturally, if the Server and all its shared-secrets were to be compromised via physical access or social-engineering exploits, the attacker would own all the credentials and could act on behalf of any Resource Owner of the compromised Server.
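To see why plaintext storage is unavoidable here: in OAuth 1.0, the HMAC-SHA1 signing key is the consumer secret and token secret joined by an ampersand, so a hashed copy of either secret would be useless for recomputing and verifying signatures. A minimal Python sketch (percent-encoding of the secrets is omitted for brevity, and the key/secret values are illustrative):

```python
import base64
import hashlib
import hmac

def sign_request(base_string: str, consumer_secret: str, token_secret: str) -> str:
    # OAuth 1.0 signing key: consumer secret and token secret joined by '&'
    key = f"{consumer_secret}&{token_secret}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative values only
sig = sign_request("GET&https%3A%2F%2Fapi.example.com%2Fresource",
                   "kd94hf93k423kf44", "pfkkdhi9sl3r4s00")
```

The Server must run the exact same computation over the request, which is why both sides need the raw secrets.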

2.2 Clients

OAuth Clients use the consumer key and consumer secret combination to provide their authenticity to the Server. This allows:

  • Clients to uniquely identify themselves to the Server, giving the Resource Provider the ability to keep track of the source of all requests
  • The Server to let the User know which Client application is attempting to gain access to their account and protected resource

Securing the consumer secret on browser-based web application clients introduces the same challenges as securing shared-secrets on the Server. Installed mobile and desktop applications, however, are much more problematic:

OAuth’s dependency on browser-based authorization creates an inherent implementation problem for mobile or desktop applications that by default do not run in the User’s browser. Moreover, from a pure security perspective, the main concern arises when implementers store and obfuscate the key/secret combination in the Client application itself. This makes key rotation nearly impossible and exposes the consumer secret to anyone with access to the decompiled source code or binary. For instance, to compromise the Client Credentials for Twitter’s Client on Android, an attacker can simply disassemble the classes.dex with the Android disassembler tool, dexdump:

# dexdump -d classes.dex

2.3 The Threats

It is important to understand that the core function of the consumer secret is to let the Server know which Client is making the request. So essentially, a compromised consumer secret does NOT directly grant access to the User’s protected data. However, compromised consumer credentials could lead to the following security threats:

  • In use cases where the Server MUST keep track of all Clients and their authenticity to fulfill a business requirement (charging the Client for each User request, or client application provisioning tasks), safeguarding consumer credentials becomes critical
  • The attacker can use the compromised Client Credentials to imitate a valid Client and launch a phishing attack, in which he submits a request to the Server on behalf of the victim and gains access to sensitive data

Regardless of the use case, whenever the consumer secret of a popular desktop or mobile Client application is compromised, the Server must revoke access for ALL users of the compromised Client application. The Client must then register for a new key (a lengthy process), embed it into the application as part of a new release, and deploy it to all its Users. This nightmarish process could take weeks to restore service, and will surely impact the Client’s credibility and business objectives.

2.4 Solutions

Protecting the integrity of the Client Credentials and Token Credentials works fairly well when it comes to storing them on servers. The secrets can be isolated and stored in a database or file-system with proper access control, file permission, physical security, and even database or disk encryption.

For securing Client Credentials on mobile application clients, follow security best practices for storing sensitive, non-stale data such as application passwords and secrets.

The majority of current OAuth mobile and desktop Client applications embed the Client Credentials directly into the application.  This solution leaves a lot to be desired on the security front.

A few alternative implementations have attempted to reduce the security risks through obfuscation, or by simply shifting the security threat elsewhere:

  • Obfuscate the consumer secret by splitting it into segments or shifting characters by an offset, then embed it in the application
  • Store the Client Credentials on the Client’s backend server. The credentials can then be negotiated with the front-end application prior to the Client/Server handshake. However, nothing stops a rogue client from retrieving the Client Credentials from the application’s back-end server, making this design fundamentally unsound
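To illustrate why the first approach is weak: an obfuscated secret must be reassembled at runtime, so the reassembly routine ships inside the binary and an attacker who decompiles the app simply reuses it. A hypothetical sketch, with a made-up secret split into segments and shifted by one character:

```python
# Hypothetical "obfuscated" consumer secret: segments of the real
# secret with every character shifted up by one
SEGMENTS = ["dpo", "tvn", "fs"]

def deobfuscate() -> str:
    # This exact routine is visible in the decompiled application,
    # so the obfuscation adds no real protection
    return "".join(chr(ord(c) - 1) for seg in SEGMENTS for c in seg)
```

Here `deobfuscate()` recovers the original string; any attacker reading the disassembly can do the same in seconds.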

Let’s consider a better architectural concept that might require some deviation from the typical OAuth flow:

  • The Service Provider could require certain Clients to bind their Consumer Credentials with a device-specific identifier (similar to a session id). Prior to the initial OAuth handshake, the mobile or desktop application can authenticate the User to the Client application via username and password. The mobile Client can then call home to retrieve the Device ID from the Client’s back-end server and store it securely on the device itself (e.g. the iOS Keychain). Once the initial request is submitted to the Server with both the Client Credentials and Device ID, the Service Provider can validate the authenticity of the Device ID against the Client’s back-end server.

The example below illustrates this solution:

Figure 3: Alternative solution for authenticating OAuth Clients

OAuth’s strength is that it never exposes a User’s Server credentials to the Client application. Instead, it provides the Client application with temporary access authorization that the User can revoke if necessary. So ultimately, regardless of the solution, when the Server cannot be sure of the authenticity of the Client’s key/secret, it should not rely solely on these attributes to validate the Client.

3. OAuth Implementation with Flawed Session Management

As described in the first example (see figure 1), during the authorization step the User is prompted by the Server to enter his login credentials to grant permission. The User is then redirected back to the Client application to complete the flow. The main issue lies with specific OAuth Server implementations, such as Twitter’s, where the User remains logged in on the Server even after leaving the Client application.

This session management issue, in conjunction with the Server’s implementation of OAuth Auto Processing (automatically processing authorization requests from clients that have been previously authorized by the Server), presents serious security concerns.

Let’s take a closer look at Twitter’s implementation:

There are numerous third-party Twitter applications to read or send Tweets; and as of August of 2010, all third-party Twitter applications had to exclusively use OAuth for their delegation-based integration with Twitter APIs.  Here is an example of this flawed implementation:

twitterfeed is a popular application that publishes blog feeds to a user’s Twitter account.

Avon Barksdale registers and signs into his twitterfeed client.  He then selects a publishing service such as Twitter to post his blog.

Figure 4: client authorization page

Avon is directed to Twitter’s authorization endpoint where he signs into Twitter and grants access.

Figure 5: Twitter’s authorization page for third-party applications

Now, Avon is redirected back to twitterfeed, where he completes the feed setup; he then signs out of twitterfeed and walks away.

Figure 6: twitterfeed’s workflow page post server authorization

A malicious user with access to the unattended browser can now fully compromise Avon’s Twitter account, leaving Avon to deal with the consequences!

Figure 7: Twitter session remains valid in the background after the user signs out of twitterfeed

3.1 The Threat

The User might not be aware that he has a Server session open with the Resource Provider in the background. He could simply log out of his Client session and step away from his browser. The threat is elevated in use cases where leaving a session unattended becomes inevitable, such as when public computers are used to access OAuth-enabled APIs. Imagine if Avon were a student who frequently accessed public computers for his daily tweet feeds: he could literally leave a Twitter session open on every computer he uses.

3.2 Solutions

Resource Providers such as Twitter should always log the User out after handling the third-party OAuth authorization flow in situations where the User was not already logged into the Server before the OAuth initiation request.

Auto Processing should be turned off; that is, servers should not automatically process requests from clients that have previously been authorized by the resource owner. Otherwise, if the consumer secret is compromised, a rogue Client can gain ongoing unauthorized access to protected resources without the User’s explicit approval.

4. Session Fixation Attack with OAuth

OAuth key contributor Eran Hammer-Lahav has published a detailed post, referenced at the end of this article, about this specific attack, which caused a major disruption to many OAuth consumers and providers. For instance, Twitter had to turn off its third-party integration APIs for an extended period of time, impacting its users as well as all the third-party applications that depended on Twitter APIs.

In summary, the Session Fixation flaw makes it possible for an attacker to use social-engineering tactics to lure users into exposing their data via a few simple steps:

  • Attacker uses a valid Client app to initiate a request to the Resource Provider to obtain a temporary Request Token. He then receives a redirect URI containing this token

  • At a later time, the attacker uses social-engineering and phishing tactics to lure a victim to follow the redirect link with the server-provided Request Token
  • The victim follows the link, and grants client access to protected resources. This process authorizes the Request Token and associates it with the Resource Owner

The above steps demonstrate that the Resource Provider has no way of knowing whose Request Token is being authorized, and cannot distinguish between the two users.

  • The Attacker constructs the callback URI with the “authorized” Access Token and returns to the Client: …/oauth_token=XyZ&…

  • If constructed properly, the Attacker’s client account is now associated with the victim’s authorized Access Token

The vulnerability is that there is no way for the Server to know whose key it is authorizing during the handshake. The author describes the attack in detail and proposes risk-reducing solutions. Also, the new version of the protocol, OAuth 2.0, attempts to remediate this issue by validating the Redirect URI during the authorization key exchange.
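The redirect-URI validation that OAuth 2.0 introduces can be sketched as a strict comparison of the supplied callback against a pre-registered value; the client IDs and URIs below are made up for illustration:

```python
# Redirect URIs registered out-of-band at client registration time
REGISTERED_REDIRECT_URIS = {
    "client-123": "https://client.example.com/callback",
}

def is_valid_redirect(client_id: str, redirect_uri: str) -> bool:
    # Exact-match validation: an attacker cannot swap in his own
    # callback to capture the authorized token
    return REGISTERED_REDIRECT_URIS.get(client_id) == redirect_uri
```

With exact matching, the attacker’s crafted callback in the session fixation flow simply fails validation, so the authorized token is never delivered to him.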


As the web grows, more and more sites rely on distributed services and cloud computing.  And in today’s integrated web, users demand more functionality in terms of usability, cross-platform integration, cross-channel experiences and so on.  It is up to the implementers and security professionals to safeguard user and organizational data and ensure business continuity. The implementers should not rely on the protocol to provide all security measures, but instead they should be careful to consider all avenues of attack exposed by the protocol, and design their applications accordingly.


Protocol Specification
OAuth Extensions and Code Sample
Google Provider Specification
Twitter API References
OAuth session fixation

About the Author

Khash Kiani is a senior security consultant at a large health care organization. He specializes in security architecture, application penetration testing, PCI, and social-engineering assessments.  Khash currently holds the GIAC GWAPT, GCIH, and GSNA certifications.  He can be reached at

ASP.NET Padding Oracle Vulnerability

A very serious vulnerability in ASP.NET was revealed this past month that allows attackers to completely compromise ASP.NET Forms Authentication, among other things. When things like this happen, as developers it’s important to see what lessons can be learned in order to improve the defensibility of our software.

Source: ‘Padding Oracle’ Crypto Attack Affects Millions of ASP.NET Apps

This vulnerability illustrates a couple of important lessons that all developers should take note of:

  • Doing cryptography correctly is challenging
  • Don’t store sensitive information on the client

Microsoft fixed the first issue with an out-of-band patch, which shows they took this vulnerability as a very real and serious threat to ASP.NET. If you’re not convinced of the seriousness of this attack, watch the YouTube video of POET (Padding Oracle Exploit Tool) vs. DotNetNuke (which uses Forms Authentication) here.

Doing Cryptography Correctly Is Challenging

As developers, we must be masters of many different domains, and it’s impossible to stay on top of all the different technologies, platforms, frameworks, and security issues that apply to each domain. Case in point: cryptography and this Padding Oracle vulnerability. How many developers know what a side-channel attack is, let alone how to protect against one?

This is why we rely on frameworks: we stand on the shoulders of others who were able to sit down, focus on a problem, and come up with a safe, stable, reusable way of doing a common task. This helps us save time on development and focus more specifically on the functional requirements. The other benefit of using a widely adopted framework is that in the event of a foible or huge screw-up, there will be a fix. Using obscure libraries doesn’t actually provide more protection; it just means that the problems that do exist may be taken advantage of by attackers without ever coming to light, though for some, ignorance is bliss.

A good example of this is authentication. Home-grown web authentication mechanisms are typically fraught with mistakes that often lead to the compromise of web applications. There are many things that can be done wrong in web authentication. With ASP.NET Forms Authentication, the common task of authenticating users is abstracted into reusable objects, controls, a datastore, and configuration settings. Now, in theory, developers don’t need to think about authentication very much and can instead rely on someone else who took the time to worry about the intricacies of doing this correctly. That doesn’t mean developers shouldn’t understand Forms Authentication, because there are still many ways to incorrectly configure and use it that can leave sites vulnerable to attack. Using frameworks is a good thing, since it’s impractical for every developer to keep re-inventing the authentication ‘wheel’ every time they build a web site that needs to authenticate users. However, the benefit of having someone else implement a feature such as authentication can also be to the detriment of the system if it’s done incorrectly. ASP.NET Forms Authentication has handled this job pretty well for many years, and Microsoft should be commended for it, even though it hasn’t been without mistakes. The latest misstep isn’t limited to Forms Authentication in ASP.NET; JavaServer Faces, Ruby on Rails, and OWASP’s Java ESAPI are affected as well, and ASP.NET is just one of many places where this vulnerability manifests itself.

ASP.NET encrypts information and stores it in several places on the client (in the URL, in a cookie, or in a hidden form field) when using features such as WebResources, ViewState, Forms Authentication, Anonymous Identification, and the Role Provider. ASP.NET typically uses AES (or 3DES, depending on configuration) to encrypt this data, using a common key called the Machine Key. AES and 3DES are block ciphers, or symmetric key ciphers, that perform a transform on a fixed number of bytes of input (the plaintext) and output encrypted data (the ciphertext) of the same number of bytes. Since the block size is fixed (64 or 128 bits) and the plaintext size can vary, the transform needs to be able to encrypt data longer than the initial block size. This is accomplished using a ‘mode of operation’ (ASP.NET uses Cipher-block chaining, or CBC) combined with a padding scheme (PKCS5 Padding) that is applied before the transformation so that the input size matches the block size.
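The padding scheme itself is simple: a PKCS5/PKCS7-style pad appends N bytes, each of value N, so the plaintext length becomes a multiple of the block size. A Python sketch of the scheme (not ASP.NET’s actual code):

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # Bytes needed to reach the next block boundary (always 1..block_size)
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(padded: bytes) -> bytes:
    pad_len = padded[-1]
    if pad_len < 1 or pad_len > len(padded) \
            or padded[-pad_len:] != bytes([pad_len]) * pad_len:
        # This is exactly the error condition a Padding Oracle leaks
        raise ValueError("bad padding")
    return padded[:-pad_len]
```

Whether this `ValueError` (or its ASP.NET equivalent) is observable by an untrusted client is precisely what the attack hinges on.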

Because of the way these block ciphers are designed, if an attacker can change the encrypted data, send it back up to the server, and get a SUCCESS or FAILED message as to whether the padding is correct (hence ‘Padding Oracle’), he can actually use this information to decrypt the data. It turns out he can also encrypt new data using a different technique.

If a crypto system leaks just one bit of information about the encryption/decryption process – that’s all that may be needed for an attacker to beat the crypto system.

What’s interesting about this vulnerability is that it can easily occur through very legitimate use of the .NET Crypto API. If you aren’t aware of this sort of vulnerability and you allow an untrusted client to bounce data that’s been encrypted with a block cipher against a service of some sort, and the service returns error messages, then this vulnerability exists.

One solution you may arrive at is to use a single generic custom error page that doesn’t reveal the .NET cryptographic exception, but the attacker only needs to INFER that a padding error occurred; he doesn’t require the actual padding error message. The HTTP status may also give this away. So what if the web site just returns a 404 or 200 status code and an error page? An attacker can use a timing attack to determine whether a padding error occurred in the decryption process. This is why one of the recommended workarounds from Microsoft was to put a random timer loop in the error page, to throw off the timing attacks used to detect whether a Padding Oracle exists.

To cryptographers, this was an obvious mistake: verifying a MAC before decrypting the data, or verifying a digital signature of a hash of the encrypted data, would have provided the protection needed to prevent tampering with the data, and thus would have silenced the Padding Oracle. This is exactly what Microsoft has done in its patch, so the vulnerability has been fixed. The other important thing to consider is that encryption algorithms have a limited shelf-life; it’s always only a matter of time before they can be circumvented. I would prefer to see ASP.NET put a large, cryptographically pseudo-randomly generated ID in a cookie or URL and keep the actual sensitive data related to authentication and roles on the server. As it turns out, ASP.NET does apply a MAC in almost every case: for ViewState, Forms Authentication tickets, Anonymous Identification, and Role cookies. However, the MAC is checked AFTER decryption, which is too late to protect against this vulnerability; it doesn’t silence the Padding Oracle. As for the WebResource that has encrypted data in the URL query string, no MAC is used at all.
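The verify-before-decrypt pattern described above can be sketched in a few lines; this is an illustrative Python sketch with a made-up key, independent of ASP.NET’s actual internals:

```python
import hashlib
import hmac

MAC_KEY = b"made-up-server-side-mac-key"  # illustrative key only

def protect(ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def unprotect(blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    # Constant-time check BEFORE any decryption or padding check,
    # so a tampered message never reaches the padding logic
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered ciphertext")
    return ciphertext  # decryption would only happen after this point
```

Because tampered ciphertext is rejected before decryption, the padding check is never reached and the oracle has nothing to say.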

This is why it’s important to never store sensitive information on the client to begin with – because doing cryptography correctly is not a trivial task.

Some Thoughts About Passwords

Passwords don’t work. Any password has a finite chance of being guessed; a good password is simply less likely to be guessed than a simple one. A password is hard to guess if it is randomly chosen from a large pool of passwords. However, this concept isn’t always well understood. Requiring (instead of just allowing) a password to include special characters and numbers does not actually extend the space of possible passwords; it may actually reduce it, as some passwords are no longer “legal”. Aside from that, the number of distinct passwords that people actually choose may not increase.

Let’s say we require 10 characters, one of which has to be a number and one a special character. The remainder may be any characters (special, number, or upper/lower case letter). There are 10 numbers, 26 letters, and typically another 32 printable special characters that are easily reached on a US keyboard. If we allow any combination of these, at exactly 10 characters long, we obtain 96^10 different possible passwords. If we require at least one number and at least one special character, we only allow 96^10 - 86^10 - 64^10 + 54^10 different passwords (by inclusion-exclusion: all passwords, minus those with no number, minus those with no special character, plus those with neither, which were subtracted twice), or roughly 65% of the full space. A minor difference in password strength. But the more restrictive your password policy, the smaller the space of allowed passwords.
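Counting the restricted set precisely takes inclusion-exclusion: subtract passwords with no number and passwords with no special character, then add back those with neither (subtracted twice). The arithmetic checks out in a few lines:

```python
total      = 96 ** 10   # any 10 printable characters
no_digit   = 86 ** 10   # all 10 drawn from the 86 non-digit characters
no_special = 64 ** 10   # all 10 drawn from the 64 non-special characters
neither    = 54 ** 10   # no digits and no special characters

# At least one digit AND at least one special character
allowed = total - no_digit - no_special + neither
fraction = allowed / total  # roughly 0.65
```

So the policy keeps about two-thirds of the original keyspace: a modest reduction, consistent with the point that restrictive policies shrink rather than grow the space.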

The next part is more psychological and harder to express in a simple equation. The fact is that there are only so many passwords that people are able to memorize. As a result, some passwords are used much more frequently than others, even if specific guidelines are provided (like requirements to include numbers or special characters). For example, if users normally prefer the password ‘12345’, they may just add a ‘!’ if a special character is required. The new most popular password may be ‘12345!’.

Researchers from Microsoft and Harvard recently published a much-discussed paper covering this aspect of passwords [1]. The paper suggests preventing users from picking popular passwords, rather than requiring “difficult” passwords. The difference is subtle but important: if we only require a special character, “password!” would be considered a strong password. The proposed system would instead consider any password weak if it is selected by many users.

In the end, what really matters is not whether a password is long and contains many character classes, but whether it is likely to be included in an attacker’s list of passwords to try. Many times, attackers will only use a very small list (3-10) of passwords, hoping that one of your users picked one of these weak passwords. The real goal of a good password is to not be included in common password lists.

What we need is a database that will tell us which passwords are most common. However, we should not store passwords in the clear, not even if they are not associated with specific accounts. As a result, we need to come up with a “hash” that will represent these passwords sufficiently. This hash should be different from the hash used to store the password. First of all, we mind collisions less. For the actual account data, collisions are a problem, as multiple passwords could be used to access the account. For the list of most popular passwords, collisions merely lead to false positives, in which case a less popular password would be considered popular. That is not a big deal as long as it is reasonably rare. In a limited fashion, these collisions even help improve security, as an attacker will not be able to enumerate the actual most popular passwords. The authors of the above paper propose the use of a “count-min sketch”, which is very efficient for looking up the frequency of values. The passwords themselves are not stored in the clear, only the output of hash functions, and the passwords are not linked to the accounts other than in the (salted and hashed) account table.
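A count-min sketch of the kind the paper proposes fits in a few lines: several hashed rows of counters, incremented per observed password, with the minimum across rows as the (over-)estimate of a password’s popularity. A toy Python sketch (a real deployment would size the table and choose hash functions more carefully):

```python
import hashlib

class CountMinSketch:
    """Toy count-min sketch: estimates item frequency without storing items."""

    def __init__(self, width: int = 1000, depth: int = 4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, item: str):
        # One salted SHA-256-derived index per row; the password itself
        # is never stored, only counters at these positions
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item: str) -> None:
        for row, idx in self._indexes(item):
            self.table[row][idx] += 1

    def estimate(self, item: str) -> int:
        # Collisions can only inflate a counter, so the minimum across
        # rows is the tightest (over-)estimate of the item's frequency
        return min(self.table[row][idx] for row, idx in self._indexes(item))

sketch = CountMinSketch()
for pw in ["12345!", "12345!", "12345!", "correct horse"]:
    sketch.add(pw)
```

A site could then reject any candidate password whose estimated count exceeds some popularity threshold, which is exactly the “ban popular passwords” policy the paper argues for.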

What’s really missing is a strong successor to passwords. There are many options, like various two-factor authentication schemes, but none of them is as easily and cheaply implemented as passwords.


Response to Nielsen’s “Stop Password Masking”

I just ran across Jakob Nielsen’s Alertbox post titled Stop Password Masking and wanted to provide some feedback from a security-versus-usability perspective. I have great respect for Nielsen’s contribution to the usability of the web. Back in the early days of the internet (mid-1990s), his books were gospel at my consulting firm, ATGi.

My initial reaction to his article was ‘that’s a crazy idea’, but after some reflection I really felt it was a good mental exercise to consider what he was saying. If I hadn’t known who Nielsen was, I probably would have dismissed his suggestion outright. Sometimes it is a good exercise to go back and review why we do the things we do, especially as it relates to information security controls. Nielsen questions the real security benefit of password masking, something I haven’t given a second thought to…well, ever.

“Typically, masking passwords doesn’t even increase security, but it does cost you business due to login failures.”

Nielsen’s probably right: it might be costing you business. The question is, how much business? Security shouldn’t be the be-all, end-all goal; it’s there to serve the organization first and foremost. Viewing the cost of security controls with respect to the function they protect is the correct perspective. Enter risk analysis: an important process that helps determine whether the cost of having (or not having) a security control mitigates threats while not adversely affecting the business (too much). It’s about reducing risk to an acceptable level, not about 100% security. It’s a balancing act. Why not target 100% security? Well, if an organization attempted to implement absolute security, it’d go out of business; it’s just too expensive. For example, it doesn’t make sense to spend a million dollars on security solutions to protect a web site that only earns a few thousand dollars a month.

Again, Nielsen pretty much has it right, except for the ‘true loss of security’ statement:

“The more uncertain users feel about typing passwords, the more likely they are to (a) employ overly simple passwords and/or (b) copy-paste passwords from a file on their computer. Both behaviors lead to a true loss of security.”

Next Nielsen gives a nod to the importance of security and provides an option for masks that takes security into consideration:

“Yes, users are sometimes truly at risk of having bystanders spy on their passwords, such as when they’re using an Internet cafe. It’s therefore worth offering them a checkbox to have their passwords masked; for high-risk applications, such as bank accounts, you might even check this box by default. In cases where there’s a tension between security and usability, sometimes security should win.”

Though I would go further than Nielsen: users won’t use a complex password, mask or no mask, unless complexity is enforced. If complexity is enforced, a non-masked textbox would certainly help the user meet the complexity requirements more easily, which is probably what Nielsen means. Likewise, when presented with the option of a masked or unmasked password, users typically choose convenience over security, so in most cases I would want to default the option to show the mask.

Using simple passwords or ‘copy-paste passwords’ is not NECESSARILY a ‘true loss of security’, but the devil’s in the details.

The loss of security in the ‘copy-paste password’ scenario (which assumes users are storing passwords locally in a file of some sort) presupposes a local computer or local network compromise. If so, he’s correct: the jig is up. But that’s not necessarily a true loss of security…well, it IS for the ONE compromised user (or network of users), but not for all users of that web site. The main reason we insist on password complexity is to increase the keyspace of a password, which helps stop brute force and dictionary attacks that originate from attackers “out there” who don’t have access to the user’s computer to begin with. The attacker attempts to find a collision with a password for a known account through a dictionary attack and, given more time, a brute force attack. Additionally, password manager software (KeePass or Password Safe) will encrypt password stores locally and even associate them automatically with the appropriate web site. Though I suspect the use of password management tools is the exception, not the rule.

As for users choosing too simple a password: if the site allows users to pick an insecure password, that’s already a failure of the site. But it’s possible for a password to be both simple and secure. Contrary to common belief, good passwords don’t necessarily have to contain symbols, digits, and upper and lower case characters. These complexity requirements were introduced to protect against brute force and dictionary attacks on shorter passwords. Humans have a natural propensity to use simple dictionary words that are easy to remember (names, meaningful numbers/dates, pet names, etc.) as passwords; attackers take advantage of this seemingly reasonable behavior. Adding complexity (symbols, digits, upper/lower case characters) makes dictionary attacks more expensive (time consuming) for the attacker, since each increase in length and in the number of possible combinations increases the keyspace of the password. So instead of increasing the character set, increase the password length and use a simple phrase or sentence as the password. These long, low-complexity passwords are called passphrases. Dictionary attacks are useless against passphrases because the phrase as a whole is not found in any dictionary. And by requiring a minimum of 30 normal alpha characters (including spaces and maybe an optional period or comma), we’ve increased the keyspace enough to make brute forcing very expensive (time consuming) as well. So my passphrase could be “my name is jason and this is my password.” This is 40+ characters and no more difficult to remember than a dictionary word, but it’s not found in any dictionary, so it will stop dictionary attacks; and it has a very large keyspace, so it will slow down brute force attacks considerably, making them impractical.
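The keyspace claim is easy to check numerically: 40 characters drawn from just 27 symbols (lowercase letters plus space) dwarfs 10 characters drawn from all 96 printables. For instance:

```python
complex_10    = 96 ** 10  # 10-char password, full 96-character printable set
passphrase_40 = 27 ** 40  # 40-char phrase, lowercase letters plus space only

# The passphrase keyspace exceeds even the SQUARE of the complex
# password's keyspace (~1.8e57 vs ~4.4e39)
much_larger = passphrase_40 > complex_10 ** 2  # True
```

Length beats character-set size: each added character multiplies the keyspace, so a memorable sentence outstrips a short jumble of symbols by many orders of magnitude.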

A true loss of security? It doesn’t have to be, at least not for the organization. However, I can already hear Nielsen groaning about the usability of typing 40+ characters blindly into a masked password box. :) This is where two-factor authentication can solve the problem of passwords and passphrases completely, which can really improve usability.

Nielsen’s observation about typing passwords on mobile devices makes the most sense to me. It is more difficult to shoulder surf someone on a mobile device. It’s also much more difficult to type a password with complexity requirements correctly on a mobile device unless you’re a professional texter. There’s probably not as much risk here in unmasking, so we could probably forgo the mask on mobile devices. It’s trivial to detect mobile devices, so simply choosing a non-masked text box in these situations would be okay for most cases.

But before we go running to dump the masked password box, there are a few questions we must ask: what are the unintended side-effects of this change? Are there good workarounds to these issues? How much education do developers need to implement this? Remember, we’ve known about, and had solutions to, SQL Injection for over 10 years now, yet there’s still a constant barrage of developers who don’t know how to protect against it properly. This is an education problem.

Here’s my initial pass of issues I see by displaying (not masking) passwords to end users:

  1. Accidental observation – in the enterprise there are many opportunities where someone else will be at your computer and may accidentally see your password. A great example of this is when using older command line programs that required the password as an argument. I’ve on occasion seen others’ passwords and others have seen mine.
  2. Autocomplete Web Forms – Modern web browsers remember and prefill text boxes with previously used input. This is a usability feature at the expense of privacy/security. It means the password is not only stored somewhere in the OS for later retrieval, but will also pop up anytime a user starts typing in the text box. This has the most serious implications on public systems. Developers, if aware of the issue, can prevent this data from being stored by setting the autocomplete attribute to off on the form. This is a developer education issue.
  3. Compliance – I don’t want to get into all the details here, but bad password management practices can cause compliance issues. Lack of compliance can affect business. Not masking passwords may be in violation of certain security requirements for the organization.
  4. Nuanced Issues – there are potentially a variety of issues for every masked password box implementation – for instance, in .NET for Windows Applications, there’s a way to encrypt passwords in memory (using the System.Security.SecureString object). Displaying the characters in the textbox first may completely defeat the security provided by this control.
  5. What else? – Are there other issues here?
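For issue #2 in the list above, the fix is a one-line markup change. A sketch of a login form with autofill disabled (the field names and action URL are illustrative):

```html
<!-- Setting autocomplete="off" on the form (or on the individual password
     field) tells the browser not to store or prefill the credential. -->
<form action="/login" method="post" autocomplete="off">
  <input type="text" name="username" />
  <input type="password" name="password" autocomplete="off" />
  <input type="submit" value="Log In" />
</form>
```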

Certainly a usability guy isn’t going to, nor does he need to, understand all the ins and outs of security, just as I won’t understand the broad range of usability issues. We should be cautious when recommending broad, sweeping changes that affect security. Additionally, we security folks should not resist usability improvements that affect security just because we’ve always done things another way – we should have good, solid business reasons for why we do things the way we do. We need to make sure changes can continue to protect our customers/users. Changes to security functionality can often cause unintended side-effects that put us at risk. Not that there isn’t a good solution – to me, the most IDEAL solution to improve usability would be to dump passwords as the authentication mechanism altogether. There are some partial solutions to this problem, but identification is a difficult problem to solve – we’re definitely not there yet. Personally, I look forward to the day when authentication, passwords, and password management all become a thing of the past, but that’s a completely different conversation.

Session Attacks and ASP.NET – Part 2

In Session Attacks and ASP.NET – Part 1, I introduced one type of attack against the session, called session fixation, as well as ASP.NET’s session architecture and authentication architecture. In this post, I’ll delve into a couple of specific attack scenarios and cover risk reduction and countermeasures.

Attack Scenario: ASP.NET Session with Forms Authentication

Understanding the decoupled nature of ASP.NET’s session management from the authentication mechanisms outlined in the previous post, let’s walk through two different session fixation attempts against Forms Authentication and session management.

Scenario 1: Attacker does not have a Forms Authentication Account

  1. Attacker browses target web site and fixates a session
  2. Attacker traps victim to use the fixated session
  3. Victim logs in using Forms Authentication and moves to areas of the site protected by forms authorization
  4. Attacker cannot follow, because he has not authenticated.

Not a big deal – UNLESS session reveals information or makes decisions outside of an authenticated area (within the same application, of course), which is very plausible. If the programmer uses Forms Authentication but then strictly uses Session to make security decisions (no IPrincipal role authorization with PrincipalPermission or authorization sections in the web.config), some real issues can result. Since session is decoupled from authentication, any area not controlled by authorization that shows session information can be exploited by the attacker.

Scenario 2: Attacker has their own Forms Authentication account

  1. Attacker browses to web site and fixates a session.
  2. Attacker Logs in to site using Forms Authentication.
  3. Attacker traps victim to use the fixated session.
  4. Victim logs in and moves to areas of the site protected by forms authorization.
  5. Attacker, still logged in under their own credentials, has the victim’s session.

This was odd and even surprising to me at first when I actually pulled it off, but it works because Authentication is completely decoupled from the Session Management – they are tracked separately (different cookies, remember?).

Consider next that session fixation isn’t the only problem – any session attack can take advantage of the decoupled nature of ASP.NET session management and authentication. If the attacker can fixate, hijack, or steal a session from the victim, this type of attack will succeed whether the web site uses Forms Authentication, Windows Authentication, Client Certificates, etc. It doesn’t matter. As long as the attacker has an account in the system and the victim’s Session ID, they can take over the session…that is, unless the developer adds some additional checks – more on this later.

The take-away point is this – out of the box, authentication within ASP.NET does not provide protection from session attacks. This means that ASP.NET’s session implementation is not concerned with the security of session as it relates to authentication. Session is a feature that developers can provide to any user, authenticated or not.

This means that developers must understand the implications of the asymmetrical relationship between session and authentication and protect against it or the ASP.NET sites will be vulnerable to various session attacks.


The session infrastructure in ASP.NET is okay, but not as robust from a security perspective as it could (or, in my opinion, should) be. The ASP.NET session keyspace is decent – 120 bits with good entropy – and there is a default 20-minute sliding window for expiration (30 minutes for Forms Authentication).

Comparing the configuration features APPLICABLE to securing the cookie for Forms Authentication vs. Session, it’s clear great care was taken to provide a broad range of options to protect the Forms Authentication ticket cookie. The Session configuration options pale in comparison.

Web.Config Option            Session Cookie   Forms Cookie
Cookieless                         +               +
Timeout                            +               +
Require SSL                                        +
Cookie Domain                                      +
Cookie Path                                        +
Disable Sliding Expiration                         +

Table – 1: Forms vs. Session Options

While not an “apples to apples” comparison (since session and authentication are different animals), the point I want to illustrate is that the cookie protection features for both should be similarly strong.

Cookieless – should be avoided for both Forms Authentication and Session at all costs. It’s just too easy to hijack a session with the ID in the URL unless you really aren’t concerned so much with security.

Timeout – This timeout is typically a sliding value – so as long as the user is active on the site, the cookie remains valid (session OR authentication).

Require SSL – This requires that SSL be enabled for the cookie to be transmitted. Forms Authentication allows this; Session does not (though you can certainly do it programmatically, which is a good idea if your site requires a higher level of protection).
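Since <sessionState> exposes no requireSSL attribute, the session cookie can be secured programmatically instead. A sketch for classic ASP.NET (System.Web) in Global.asax, assuming the default cookie name ASP.NET_SessionId and a site served entirely over HTTPS:

```csharp
// Global.asax.cs - mark the session cookie Secure (and HttpOnly) before the
// response leaves the server. We iterate AllKeys rather than using the
// Response.Cookies["..."] indexer, because the indexer would CREATE an empty
// session cookie on responses that don't already set one.
protected void Application_EndRequest(object sender, EventArgs e)
{
    foreach (string key in Response.Cookies.AllKeys)
    {
        if (key == "ASP.NET_SessionId")
        {
            HttpCookie sessionCookie = Response.Cookies[key];
            sessionCookie.Secure = true;   // only transmit over SSL/TLS
            sessionCookie.HttpOnly = true; // keep client script from reading it
        }
    }
}
```

This is a sketch, not a drop-in implementation; if the site serves any pages over plain HTTP, marking the cookie Secure will break session on those pages.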

Cookie Domain – This allows you to override the default domain on the cookie.

Cookie Path – This allows the cookie to be used only for certain paths. This is important, as it can limit the area of the site where the cookie is valid. Forms Authentication allows this; Session does not, but it could perhaps be set programmatically (I’m not sure what the side effects would be, since ASP.NET may attempt to create a new cookie if the session object is used outside the set path). This could potentially help protect the session if the cookie path is set to only the authorized areas of the site.

Sliding Expiration – By setting this to false, Forms Authentication allows an absolute expiration of the cookie. For example, you can force expiration every 10 minutes, whether the user is active or not. With session, there’s no way to force expiration.
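Taken together, the Forms Authentication side of the table above can be locked down declaratively. A sketch of a web.config fragment using the standard <forms> attributes (the login URL, path, and timeout values are illustrative):

```xml
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Account/Login"
           requireSSL="true"
           slidingExpiration="false"
           timeout="10"
           path="/secure"
           cookieless="UseCookies" />
  </authentication>
</system.web>
```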

While none of the above features is a slam-dunk 100% mitigation, they all certainly reduce risk. If ASP.NET were able to let the Authentication modules collaborate with the Session module internally in certain situations through configuration, that would go a long way toward adding some protection against session attacks in ASP.NET.

So the BEST solution would be for ASP.NET to issue a NEW session ID after any successful authentication, and then transfer the session over to the new session ID. That way, if the attacker has a victim’s session, it will vaporize from the attacker’s perspective once the victim logs in. Or better yet, NEVER deliver session until the user logs in. There’s some code in KB899918 [4] that looks like it may allow a new session ID to be distributed on login, but it’s not presented in any useful or reusable way for the masses. The code is incredibly long and difficult to follow due to the nuances of setting cookies, allowing redirects, etc. The language of the KB article is also not very clear.

A way to mitigate (and even detect) this attack now, for Forms Authentication or any other authentication mechanism, is to simply store something related to the authentication in Session, and then on each subsequent request make sure they match. This couples the authentication mechanism to session management – if they don’t match, a session attack occurred (or something’s out of sync programmatically). So for Forms Authentication, you would store the Forms Authentication ticket name in Session. This value is stored in the forms cookie, which is encrypted and persisted to the client after authentication and can be configured to be transmitted only over SSL/TLS.
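The coupling check can be sketched in Global.asax. Session state first becomes available at PostAcquireRequestState, so that is where the comparison goes; the "AuthOwner" session key is my own illustrative name, not a framework convention:

```csharp
// Global.asax.cs - bind the session to the authenticated identity, then
// verify the binding on every request. A mismatch means the session was
// fixated/hijacked (or something is out of sync programmatically).
protected void Application_PostAcquireRequestState(object sender, EventArgs e)
{
    HttpContext ctx = HttpContext.Current;
    if (ctx.Session == null || !ctx.Request.IsAuthenticated)
        return; // no session this request, or user not logged in yet

    string owner = ctx.Session["AuthOwner"] as string;
    if (owner == null)
    {
        // First authenticated request on this session: record the owner.
        ctx.Session["AuthOwner"] = ctx.User.Identity.Name;
    }
    else if (!string.Equals(owner, ctx.User.Identity.Name, StringComparison.Ordinal))
    {
        // Session attack detected: kill both the session and the login.
        ctx.Session.Abandon();
        FormsAuthentication.SignOut();
        FormsAuthentication.RedirectToLoginPage();
    }
}
```

A sketch under classic System.Web assumptions; it detects the mismatch rather than preventing fixation outright, which is why pairing it with the configuration hardening discussed earlier still matters.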

Once you couple session and the authentication mechanism together and then configure Forms Authentication properly, you can provide adequate session protection. Another important point – don’t use Session anywhere outside of the authenticated areas of your site.

What Should Be Done?

As noted in Part 1, Microsoft closed a Bug submitted for this issue.  To be fair, Microsoft has worked hard to improve the security of their code through the Security Development Lifecycle.  To me, adding this feature in future versions of .NET is a no-brainer and right in step with their current security approach.

So back to the original question:

Should ASP.NET improve the session management module in ASP.NET to allow for more secure configuration by coupling it with Authentication mechanisms in ASP.NET?

I guess it depends on your perspective. The fundamental question is: should Session be coupled to an authentication mechanism? That will depend on what the web site is doing, the level of acceptable risk, etc. If so, the Session HTTP module would need to be aware of the other Authentication HTTP modules (which somewhat breaks the pattern). Another option would be for each HTTP module to implement its own, or share a common, session manager…and of course there are situations when you might want to use session without requiring authentication – should SSL, Path, and Domain be configurable in the web.config for <sessionState>? Or should things remain as they are, with those concerned about the issue just fixing it in code as described above?

Should the onus be on the developer to know about this and mitigate it in code? Without having the feature in the framework and in the MSDN documentation, most developers will remain in the dark about these issues. Remember, we’re still dealing heavily with SQL Injection and XSS issues in web sites…issues that we have known how to protect against for years.

The INFOSEC community will almost always recommend you move to a new session after authentication, meaning session and authentication have to be tied together (if the risk profile is such that you need that level of protection). ASP.NET’s session is decoupled – therein lies the issue…

Love to hear your thoughts as well!


Here’s what a somewhat brief Google search turned up for me on session fixation and ASP.NET – nothing quite seemed to address the issues described above:

[1] Threats and Countermeasures for Web Services
[2] MSDN Magazine March 2009 Security Briefs
[3] MS Employee Blog – reveals the perspective from MS that the only attack is via cookieless sessions (inaccurate)
[4] How and Why SessionIDs are reused in ASP.NET (.NET 1.1) – reveals the assumption that cookieless session is where the problem is (inaccurate)
[5] JoelOnSoftware Forum Discussion on Session Fixation