Continuous Opportunity – DevOps and Security

Thank you to everyone at the Minnesota ISSA chapter for the opportunity to share some background on DevOps and some ideas about how security teams can benefit by adopting DevOps practices & tools. The presentation slides are available here:

Continuous Opportunity- DevOps and Security.

To learn more about DevOps and Cloud Security, check out the new DEV540: Secure DevOps and Cloud Application Security course!

About the Author
At the SANS Institute, Ben Allen works as a member of the Information Security team to protect the world’s most trusted source of computer security training, certification, and research. He applies knowledge gained through over a decade of Information Security experience to problem domains ranging from packet analysis to policy development. Ben has contributed to security best practices for DevOps and has operationalized DevOps techniques for security teams, leading to improvements in release time and stability.

Prior to joining the Information Security team at SANS, Ben worked in both the operations and development teams at SANS, as a Security Engineer and Architect at the University of Minnesota, and long ago as a systems administrator for the LCSE. Ben holds numerous SANS certifications, and a Bachelor’s degree in Electrical Engineering.

DOM-based XSS in the Wild

Editor’s Note: Today’s post is from Phillip Pham. Phillip is a Security Engineer at APT Security Solutions. In this post, Phillip walks through a cross-site scripting vulnerability he identified in the Fry’s web application.

At the time of writing, the stated vulnerability has already been remediated by Fry’s Electronics. Thank you for taking swift action!

Fry’s Electronics is a consumer electronics retail chain with a presence in regions across the country. To promote sales and track consumers’ spending habits, Fry’s Electronics provides personalized “Promo Codes.” When presented online or at a local store, these codes give shoppers additional discounts on their electronics purchases.

Fry’s Electronics’ marketing “bot” periodically emails consumers the aforementioned promo code (as seen in the screenshot below). To view the full list of discounted products, consumers click the provided link (example below), which carries the promo code as a data parameter in the URL.

Promotional product listing URLs
This URL is hosted by Limelight Networks, a Content Delivery Network (CDN) vendor.

When viewing advertising materials on a mobile device.


The Vulnerability
Web applications are often vulnerable to Cross-Site Scripting (XSS), a common attack that takes advantage of missing output encoding and input validation in a web application. The exploit allows a malicious actor to inject JavaScript payloads into end users’ browsers and execute malicious code. Often this results in phishing for user credentials, theft of credit card information, malware distribution, and other malicious activities. Exploiting these vulnerabilities is usually fairly simple, yet yields devastating impact to end users.

Because the Fry’s Electronics marketing web site exposes the promo code via an unfiltered data parameter, “promocode”, it creates an attack vector for XSS exploitation.

Initial Test
To confirm the Cross-Site Scripting vulnerability, a malicious payload was appended to the end of the promotion code parameter as follows:


Immediately, a popup was displayed, as shown in the screenshot below, and the vulnerability was confirmed:


Let’s take a look at the offending code:

var promocode = getUrlParameter('promocode');
var pattern = new RegExp("[0-9]{1,7}");
var result = pattern.test(promocode);
var default_text = "Your PromoCode for this email:" + promocode;


Do you see any problems here? The “promocode” value is only partially validated by the pattern test above, and is then written to the page using jQuery’s raw .html() method:
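To make the risk concrete, here is a small self-contained sketch (the function names are illustrative, not the site’s actual code) contrasting the raw concatenation above with an HTML-encoded version:

```javascript
// Vulnerable pattern: the attacker-controlled value is concatenated
// straight into markup, so any HTML inside it becomes live DOM once
// the string is written with a raw sink like jQuery's .html().
function buildBanner(promocode) {
  return "Your PromoCode for this email:" + promocode;
}

// Encoding first turns the markup characters into inert entities.
function htmlEncode(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#x27;");
}

var payload = "1234567<script>alert(1)</script>";
var unsafe = buildBanner(payload);           // still contains a live <script> tag
var safe = buildBanner(htmlEncode(payload)); // contains only &lt;script&gt; text
```

Written through a text sink (or encoded as above), the same payload renders as visible text instead of executing.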

Phishing Campaign
As previously mentioned, XSS allows malicious actors to conduct phishing campaigns that lure victims into providing personal information. In this context, a proof-of-concept phishing campaign was built to trick users into providing their cell phone numbers, widening the attack surface so that consumer mobile devices could be targeted with exploits (e.g. Stagefright on Android or an iOS 0-day).


Note: The content of the floating panel (red) is loaded from an externally hosted JavaScript file that was embedded as part of the legitimate Fry’s marketing site.

Simple exploitation of the vulnerability (an alert popup) requires very minimal effort. However, such exploitation does not allow full delivery of the phishing campaign to victims, diminishing the success rate. After further research and analysis of the web application’s HTML source code, it was determined that fully delivering the payload requires the following series of events:

  • Allow the legitimate content to load completely
  • Deliver the externally hosted payload
  • Require victims to provide a cell phone number before they can dismiss the dialog (a floating DIV)

Evasion #1: Regular Expression Bypass
The application employs “numbers only” validation of the “promocode” data parameter via client-side JavaScript (with no further server-side validation):

var promocode = getUrlParameter('promocode');
var pattern = new RegExp("[0-9]{1,7}");
var result = pattern.test(promocode);

The regular expression above is unanchored, so it only checks that the value contains a run of one to seven digits; any trailing characters (from the 8th position onward) are never validated and can carry a payload. Hence, the payload executed during our Initial Test.
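The bypass is easy to demonstrate in isolation, and anchoring the same pattern with ^ and $ closes it:

```javascript
// The original pattern passes if ANY run of 1-7 digits appears anywhere
// in the value; the anchored version requires the ENTIRE value to match.
var loose = new RegExp("[0-9]{1,7}");
var strict = new RegExp("^[0-9]{1,7}$");

var payload = "1234567<script>alert(1)</script>";
var looseResult = loose.test(payload);   // true: the leading digits satisfy it
var strictResult = strict.test(payload); // false: the trailing markup is rejected
```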

Evasion #2: The equal sign (=) breaks trailing payloads
The application obtains the name and value of the promo code data parameter via the “getUrlParameter” function (snippet below).

function getUrlParameter(sParam) {
    var sPageURL = decodeURIComponent(window.location.search.substring(1)),
        sURLVariables = sPageURL.split('&'),
        sParameterName,
        i;

    for (i = 0; i < sURLVariables.length; i++) {
        sParameterName = sURLVariables[i].split('=');

        if (sParameterName[0] === sParam) {
            return sParameterName[1] === undefined ? true : sParameterName[1];
        }
    }
}

With the current script implementation, given the payload:


The trailing external URL is nullified because the script detects the equal (=) sign and treats everything after it as another parameter/value pair. This yields the output:

1234567<script src

To defeat this logic, the payload must not contain the equal (=) sign.
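The truncation is easy to reproduce in isolation (the hostname below is hypothetical):

```javascript
// The parser splits each name/value pair on '=' and returns element [1],
// so everything after a second '=' inside the injected value is dropped.
var rawParam = 'promocode=1234567<script src="https://evil.example/x.js">';
var sParameterName = rawParam.split('=');
var value = sParameterName[1];  // "1234567<script src": the payload is cut off
```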

Evasion #3: The Working Payload
Upon further review, the application uses the jQuery library to dynamically render content. jQuery can load an external script (acting as the src) via its $.getScript(url) method, which does not require an equal sign (=). An example of using jQuery to reference an external script is below.


Putting it all together
Once the evasions had been fully developed, the remaining payload is trivial; the following payload renders the phishing content shown below.


The evil JavaScript file (test.js) contains the following content:

var div = document.createElement("div");
div.style.width = "700px";
div.style.height = "150px";
div.style.background = "red";
div.style.color = "white";
div.innerHTML = "CONGRATLUATIONS...";
div.style.position = "absolute"; //floating div
div.style.top = "20%"; //floating div
div.style.left = "20%"; //floating div

As you can see from the screenshot below, the form is injected into the page and asks the user for their phone number:


Then, pressing submit results in the phone number being sent to the evil site:


Mitigation Strategies
As we have seen many times before, XSS vulnerabilities can be fixed using a combination of output encoding and input validation. Some simple mitigations are as follows:

  • This is a classic DOM-based XSS vulnerability. OWASP provides a DOM-based XSS Prevention Cheat Sheet for fixing this class of issue. In short: use an encoding library (e.g. jQuery Encoder) to HTML-encode the promo code before writing it to the page
  • Fix the validation routine to check the entire input value
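A minimal sketch of the validation fix (the function name is illustrative):

```javascript
// Anchored validation: the ENTIRE value must be 1-7 digits;
// anything else is rejected outright.
function getSafePromoCode(raw) {
  return /^[0-9]{1,7}$/.test(raw) ? raw : "";
}

var ok = getSafePromoCode("1234567");                           // "1234567"
var bad = getSafePromoCode("1234567<script>alert(1)</script>"); // ""
```

Pair this with output encoding (or a text sink such as jQuery’s .text()) so that even a future validation gap cannot reach the DOM as markup.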

For more information on defending applications, check out the awesome courses available in SANS Application Security curriculum.

About the Author
Phillip Pham is a security engineer with a strong passion for Information Security. Phillip has worked in various sectors and for organizations of various sizes, ranging from small start-ups to Fortune 500 companies. Prior to transitioning into an Information Security career, Phillip spent ten years in software engineering. His last five years of Information Security experience have been focused on Application Security. Some of his application security assessment strengths (both on-premise and cloud-based) include web applications, web services, APIs, Windows client applications, and mobile applications. He currently holds a GCIH certification and is co-founder of APT Security Solutions. Follow Phillip on Twitter @philaptsec.

Threat Modeling: A Hybrid Approach

Editor’s Note: Today’s post is from Sriram Krishnan. Sriram is a Security Architect at Pegasystems. In this post, Sriram introduces a hybrid threat modeling white paper addressing the limitations in traditional threat modeling methodologies.

In the face of increasing attacks at the application layer and enterprise applications moving towards the cloud, organizations must look at security not just as a function, but as a key business driver. It goes without saying that it is the inherent responsibility of organizations to provide a secure operating environment that protects the interests of their customers and employees. Understanding the threat landscape is an important prerequisite for building secure applications: unless the people developing software applications are aware of the threats their applications are exposed to in production, it is not possible to build secure applications. A SANS survey (2015 State of Application Security: Closing the Gap) indicates that threat assessment (which can also be referred to as threat modeling) is the second leading application security practice (next to penetration testing) for building secure web applications. This helps prove that threat modeling is a fundamental building block for building secure software.

Threat modeling should be viewed as a proactive security practice that identifies security weaknesses in software applications early in the Software Development Lifecycle. This enables software developers to build applications with an understanding of the associated threats. Various techniques have been published for performing threat modeling, such as STRIDE, Attack Trees, and Attack Libraries. Organizations tend to favor a particular technique when performing a threat modeling exercise. However, each of these techniques has its own advantages and limitations in real-world implementations.

The STRIDE technique is good at enumerating threats, but it does not aid in developing countermeasures and mitigation plans. An Attack Tree provides an overview of the attack surface at some level of abstraction, which means it fails to capture data essential to understanding the threat scenario. Finally, an Attack Library may provide information about attack vectors and be suitable as a checklist model, but it does not contribute to a complete threat model. When these techniques are implemented separately, key aspects of threat modeling may be missed, impacting the productivity and comprehensiveness of the exercise.
To reap the complete benefit of threat modeling, I propose utilizing a combination of modeling techniques to perform the various activities or tasks involved in the threat modeling process. This hybrid model eliminates limitations by adopting a structured approach, capturing optimum details, and representing the data in an intelligible way.

The following white paper analyzes the various limitations of the STRIDE, Attack Tree, and Attack Library methodologies and presents a hybrid model that implements the best techniques from each option.

A Hybrid Approach to Threat Modeling

I have also created a github project that contains a Threat Modeling Template for project teams to use as they get started:

Threat Modeling

About the Author
Sriram Krishnan (@sriramk21) is a cyber security professional with 12 years of experience, currently working as a Security Architect at Pegasystems Worldwide. He has worked with a leading internet service provider, a Big 4 consulting firm, and global banks, performing penetration testing and security architecture reviews and advising on security best practices for the global telecom industry, the banking and financial services sector, and government entities.

Automated Hardening Framework

Editor’s Note: Today’s post is from Jim Bird. Jim is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program.

Automated configuration management tools like Ansible, Chef and Puppet are changing the way that organizations provision and manage their IT infrastructure. These tools allow engineers to programmatically define how systems are set up, and automatically install and configure software packages. System provisioning and configuration becomes testable, auditable, efficient, scalable and consistent, from tens to hundreds or thousands of hosts.

These tools also change the way that system hardening is done. Instead of following a checklist or a guidebook like one of the CIS Benchmarks, and manually applying or scripting changes, you can automatically enforce hardening policies or audit system configurations against recognized best practices, using pre-defined hardening rules programmed into code.

An excellent resource for automated hardening is a set of open source templates originally developed at Deutsche Telekom, under the project name “”. The authors have recently had to rename this hardening framework to

It includes Chef recipes and Puppet manifests for hardening base Linux, as well as SSH, MySQL, PostgreSQL, Apache, and Nginx. Ansible support is currently limited to playbooks for base Linux and SSH. The framework works on the Ubuntu, Debian, RHEL, CentOS, and Oracle Linux distros.

For container security, the project team just added an InSpec profile for Chef Compliance against the CIS Docker 1.11.0 benchmark. The framework is comprehensive and at the same time accessible. And it’s open, actively maintained, and free. You can review the rules, adopt them wholesale, or cherry-pick or customize them as needed. It’s definitely worth your time to check it out on GitHub:

To learn more about Dev, Ops, and Security, check out the new SANS Application Security course Secure DevOps: A Practical Introduction co-authored by Jim Bird (@jimrbird) and Ben Allen (@mr_secure).

This post was originally posted by Jim Bird at

About the Author
Jim Bird, SANS analyst and lead author of the SANS Application Security Survey series, is an active contributor to the Open Web Application Security Project (OWASP) and a popular blogger on agile development, DevOps and software security at his blog, “Building Real Software.” He is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program. Jim is an experienced software development professional and IT manager, having worked with high-integrity and high-reliability systems at stock exchanges and banks in more than 30 countries. He holds PMP, PMI-ACP, CSM, SCPM and ITIL certifications.

2016 State of Application Security: Skills, Configurations, and Components

The 2016 SANS State of Application Security Survey analyst paper and webcast are complete. This year, Johannes Ullrich, dean of research at the SANS Technology Institute and instructor for DEV522: Defending Web Applications Security Essentials, led the project by analyzing the survey results, writing the whitepaper, and delivering the webcast.

We had 475 respondents participate in this year’s survey, and Johannes identified the following key findings to discuss in the whitepaper:

  • 38% have a “maturing” AppSec program
  • 40% have documented approaches and policies to which third-party software vendors must adhere
  • 41% name public-facing web apps as the leading cause of breaches

For more details, the webcast and whitepaper can be found here:

2016 State of Application Security: Skills, Configurations and Components

Managing Applications Securely: A SANS Survey

Thank you to all of the sponsors for bringing this content to the SANS community: Checkmarx, Veracode, and WhiteHat Security.

Also, a special thank you goes out to the webcast panel: Amit Ashbel (Checkmarx), Tim Jarrett (Veracode), and Ryan O’Leary (WhiteHat).

We will see you next year for the 2017 State of Application Security Survey!

Securing the SDLC: Dynamic Testing Java Web Apps

Editor’s Note: Today’s post is from Gregory Leonard. Gregory is an application security consultant at Optiv Security, Inc. and a SANS instructor for DEV541: Secure Coding in Java/JEE.

The creation and integration of a secure development lifecycle (SDLC) can be an intimidating, even overwhelming, task. There are so many aspects that need to be covered, including performing risk analysis and threat modeling during design, following secure coding practices during development, and conducting proper security testing during later stages.

Established Development Practices
Software development has begun evolving to account for the reality of security risks inherent with web application development, while also adapting to the increased pressure of performing code drops in the accelerated world of DevOps and other continuous delivery methodologies. Major strides have been made to integrate static analysis tools into the build process, giving development teams instant feedback about any vulnerabilities that were detected from the latest check-in, and giving teams the option to prevent builds from being deployed if new issues are detected.

Being able to perform static analysis on source code before performing a deployment build gives teams a strong edge in creating secure applications. However, static analysis does not cover the entire realm of possible vulnerabilities. Static analysis scanners are excellent at detecting obvious cases of some forms of attack, such as basic Cross-Site Scripting (XSS) or SQL injection attacks. Unfortunately, they are not capable of detecting more complex vulnerabilities or design flaws. If a design decision is made that results in a security vulnerability, such as the decision to use a sequence generator to create session IDs for authenticated users, it is highly likely that a static analysis scan won’t catch it.

Dynamic Testing Helps Close the Gaps
Dynamic testing, the process of exploring a running application to find vulnerabilities, helps cover the gaps left by the static analysis scanners. Teams with a mature SDLC will integrate the use of dynamic testing near the end of the development process to shore up any remaining vulnerabilities. Testing is generally limited to a single time period near the end of the process due to two reasons:

  • It is generally ineffective to perform dynamic testing on an application that is only partially completed as many of the vulnerabilities reported may be due to unimplemented code. These vulnerabilities are essentially false positives as the fixes may already be planned for an upcoming sprint.
  • Testing resources tend to be limited due to budget or other constraints. Most organizations, especially larger ones with dozens or even hundreds of development projects, can’t afford to have enough dedicated security testers to provide testing through the entire lifecycle of every project.

Putting the Power into Developers’ Hands
One way to help relieve the second issue highlighted above is to put some of the power to perform dynamic testing directly into the hands of the developers. Just as static analysis tools have become an essential part of the automated build process that developers work with on a daily basis, wouldn’t it be great if we could allow developers to perform some perfunctory dynamic scanning on their applications before making a delivery?

The first issue that needs to be addressed is which tool to start with. There is a plethora of different testing tools out there. Multiple options exist for every imaginable type of test, from network scanners to database testers to web application scanners. Trying to select the right tool, learn how to configure and use it, and interpret the results it generates can be a daunting task.

Fortunately, many testing tools include helpful APIs that allow you to harness the power of a testing tool and integrate it into a different program. Often these APIs allow security testers to create Python scripts that perform multiple scans with a single script execution.

Taking the First Step
With so many tools to choose from, where should a developer who wants to start performing dynamic testing start? To help answer that question, Eric Johnson and I set out to create a plugin that would allow a developer to utilize some of the basic scanning functionality of OWASP’s Zed Attack Proxy (ZAP) within the more familiar confines of their IDE. Being the most familiar with the Eclipse IDE for Java, I decided to tackle that plugin first.

The plugin is designed to perform two tasks: perform a spider of a URL provided by the developer, followed by an active scan of all of the in-scope items detected during the spider. After the spider and scan are complete, the results are saved into a SecurityTesting project within Eclipse. More information about the SecurityTesting plugin, including installation and execution instructions, can be found at the project’s GitHub repository:

Using this plugin, developers can begin performing some basic security scans against their application while they are still in development. This gives the developer feedback about any potential vulnerabilities that may exist, while reducing the workload for the security testers later on.

Moving Forward
Once you have downloaded the ZAP plugin and started getting comfortable with what the scans produce and how it interacts with your web application, I encourage you to begin working more directly with ZAP as well as other security tools. When used correctly, dynamic testing tools can provide a wealth of information. Just be aware, dynamic testing tools are prone to the same false positives and false negatives that affect static analysis tools. Use the vulnerability reports generated by ZAP and other dynamic testing tools as a guide to determine what may need to be fixed in your application, but be sure to validate that a finding is legitimate before proceeding.

Hopefully you or your development team can find value in this plugin. We are in the process of enhancing it to support more dynamic testing tools as well as introducing support for more IDEs. Keep an eye on the GitHub repository for future progress!

To learn more about secure coding in Java using a Secure Development Lifecycle, sign up for DEV541: Secure Coding in Java/JEE!

About the Author
Gregory Leonard (@appsecgreg) has more than 17 years of experience in software development, with an emphasis on writing large-scale enterprise applications. Greg’s responsibilities over the course of his career have included application architecture and security, performing infrastructure design and implementation, providing security analysis, conducting security code reviews, performing web application penetration testing, and developing security tools. Greg is also co-author for the SANS DEV541 Secure Coding in Java/JEE course as well as a contributing author for the security awareness modules of the SANS Securing the Human Developer training program. He is currently employed as an application security consultant at Optiv Security, Inc.

Static Analysis and Code Reviews in Agile and DevOps

Editor’s Note: Today’s post is from Jim Bird. Jim is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program. In this post, Jim covers how to perform secure code analysis in an Agile development lifecycle.

More and more organizations are adopting Agile development and DevOps in order to speed up delivery of projects and systems. The faster these teams move and the more changes they make, the harder it is for InfoSec to keep up with, understand, and manage security risks. In extreme cases, where teams follow practices like One Piece Continuous Flow with rapid delivery of individual features, or Continuous Deployment, automatically pushing out 10, 100, or more changes every day, well-established security controls like manual audits and pen testing are no longer useful.

Security has to be done in a completely different way in these environments, by shifting security controls earlier into the lifecycle, and integrating security directly into engineering workflows.

A key part of this is focusing on code security, through static analysis testing (SAST) and code reviews.

Static Analysis in Agile/DevOps

Self-service, automated code checking with static analysis tools can be wired directly into how engineers write code. Static analysis checking can be plugged into each developer’s IDE to catch problems while they are coding. Fast incremental static analysis checking can be included in Continuous Integration to catch problems immediately after code has been checked in, and deeper static analysis checks can be run as a stage in a Continuous Delivery pipeline to ensure that code won’t be deployed with any serious security vulnerabilities.

Different kinds of static analysis tools provide different kinds of value:

  • Style checking tools like PMD help to ensure that code is readable and follows good practice. These tools won’t make code more secure, but they will make it easier to review and easier to understand and safer to change.
  • Correctness tools like FindBugs catch common coding mistakes and logic bugs which could be exploited or could cause run-time failures.
  • Security analysis tools like Find Security Bugs or commercial tools like HP Fortify can identify serious security vulnerabilities like XSS injection and SQL injection.

To be successful with static analysis, you need to:

  • Ensure that scans run quickly, and find a way to feed problems directly back to developers – into their IDEs or into their development backlog or bug tracking system. Tools like ThreadFix make this easy.
  • Select tools which provide clear, unambiguous information to developers on where the problem is, why it is a problem, and how to fix it.
  • Take time to configure the tools and tune out false positives and low-priority noise – so that developers can focus on real vulnerabilities and other bugs which need to be fixed.

Code Reviews
Code reviews are – or should be – a common practice in many Agile development shops, and are fundamental to the engineering culture in leading DevOps organizations like Google, Facebook and Etsy.

Peer reviews generally focus on making sure that the code is understandable and maintainable, that it follows code conventions and makes proper use of libraries and frameworks. Peer reviewers may also suggest ways to improve the efficiency of code, or ways to make it simpler. These reviews provide a way for developers to learn how to write good code and share best practices. And they can find important logic and functional bugs.

Code reviews can also be leveraged to improve security by finding problems early in development.

Through increased transparency, peer reviews discourage malicious insiders from trying to plant a back door or logic bomb in code. Reviewers can be trained to check for defensive coding (data validation, error and exception handling, and logging) and secure coding practices.

While all code changes should be peer reviewed, high-risk code needs special attention – this is code where even small mistakes can have high consequences for security or reliability. In organizations like Google or Twitter, developers post a request for a member of the security team to review code changes when they assess that a code change is risky. At other organizations like Etsy, the security team works with engineers to identify high-risk code (like security libraries and security features, or public network-facing APIs), and are automatically alerted if this code changes so that they can conduct a review.

Because most changes in an Agile or DevOps environment are small and incremental, code reviews can be done quickly – especially if the code has already passed through static analysis to catch the low-hanging fruit.

As DevOps teams adopt code-driven infrastructure management tools like Puppet and Chef, code reviews can be extended to catch mistakes and oversights in Chef recipes and Puppet manifests, helping to ensure that the run-time is reliable and secure.

Scaling through Continuous Security Checking
Adding automated static analysis checking into a team’s CI/CD workflow, and leveraging code reviews to include security checks will go a long way to improving the quality, safety and security of systems with minimal friction and without adding significant costs or delays. Most of this can be introduced in a way that is almost transparent to the team, without changing how they work or the tools that they rely on.

Instead of depending on point-in-time assessments and control gates late in a project, security checking in DevOps and Agile should be done continuously and incrementally – the same way that these teams build and deliver systems. This will ensure that security can keep up with the rapid pace of change, and will help to scale up your security capability by enlisting the engineering organization to take on more of the responsibility for code security.

The SANS Software Security curriculum is also excited to announce a new course Secure DevOps: A Practical Introduction co-authored by Jim Bird (@jimrbird) and Ben Allen (@mr_secure). Follow us @sansappsec for more information on this course and training locations in your area.

About the Author
Jim Bird, SANS analyst and lead author of the SANS Application Security Survey series, is an active contributor to the Open Web Application Security Project (OWASP) and a popular blogger on agile development, DevOps and software security at his blog, “Building Real Software.” He is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program. Jim is an experienced software development professional and IT manager, having worked with high-integrity and high-reliability systems at stock exchanges and banks in more than 30 countries. He holds PMP, PMI-ACP, CSM, SCPM and ITIL certifications.


Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will discuss using anti-forgery tokens in the ASP.NET MVC framework.

CSRF (Cross-Site Request Forgery) has been #8 in the OWASP Top 10 for many years. For a little background, CSRF is comprehensively explained in this article showing the attack, how it works, and how to defend your applications using synchronization tokens. Due to its prevalence, popular frameworks are building in CSRF protection for development teams to use in their web applications.

Protecting the ASP.NET MVC Web Application

This post will demonstrate how easily the ASP.NET MVC framework components can be used to protect your web application from CSRF attacks without any additional coding logic.

Generally, the protection from CSRF is built on top of two components:

  • @Html.AntiForgeryToken(): Special HTML helper that generates an anti-forgery token on the web page of the application (more details can be found here).
  • ValidateAntiForgeryToken: Action attribute that validates the anti-forgery token (more information can be found here).

They say practice makes perfect, so let’s walk through a real-life demo. First, create a simple “Hello World” ASP.NET MVC web application in Visual Studio. If you need help with this, all the necessary steps are described here.

The Client Side
After creating the web application in Visual Studio, go to the Solution Explorer panel and navigate to the main login page located at Views/Account/Login.cshtml. We’ll use the built-in CSRF prevention libraries to protect the login page of the web application.

The markup of the file is quite simple. A standard form tag with input fields for the username and password. The important piece for preventing CSRF can be seen on line 4 in the following snippet. Notice the inclusion of the AntiForgeryToken() HTML helper inside the form tag:

@using (Html.BeginForm("Login", "Account", new { ReturnUrl = ViewBag.ReturnUrl },
    FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
    @Html.AntiForgeryToken()
    <h4>Use a local account to log in.</h4>
    <hr />
    @Html.ValidationSummary(true, "", new { @class = "text-danger" })
    <div class="form-group">
        @Html.LabelFor(m => m.Email, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Email, "", new { @class = "text-danger" })
        </div>
    </div>
    <div class="form-group">
        @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
            @Html.ValidationMessageFor(m => m.Password, "", new { @class = "text-danger" })
        </div>
    </div>
}

Let’s run the application and see what the anti-forgery token helper actually does. Browse to the login page via the menu bar. You should see a similar window on your screen:

Sample Login page

Right-click in the page, and view the page source. You will see a hidden field added by the anti-forgery token helper called "__RequestVerificationToken":

<input name="__RequestVerificationToken" type="hidden" value="WhwU7C1dj4UHGEbYowHP9HZqWTDJDD8-VKAfl3f2vhMut3tvZY8OZIlMBOBL3NONMHlwUTZctE7BlrGKITG8tEEHVpTsV4Ul6Z6ijTVK_8M1"/>

The Server Side
Next, let’s shift our focus to the server side. Pressing the Log In button will POST the form fields, including the request verification token, to the server. To view the data submitted, turn on the browser debugging tool in Internet Explorer (or any other network debugging tool) by pressing F12. Enter a username and password, then log in. The data model of the form request contains the anti-forgery token generated earlier on the server side:

Request verification token post parameter

The request reaches the server and will be processed by the account controller’s Login action. You can view the implementation by opening the Controllers/AccountController.cs file. As you can see, the ValidateAntiForgeryToken attribute on line 4 is in place above the action. This tells the framework to validate the request verification token that was sent to the browser in the view.

// POST: /Account/Login
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)

For demo purposes, let’s remove the HTML helper @Html.AntiForgeryToken() on Login.cshtml view to simulate a malicious request from an attacker’s web site. Try to log in again. The server responds with an error because the ValidateAntiForgeryToken filter runs and cannot find the anti-forgery token in the request. This causes the application to stop processing the request before any important functionality is executed by the server. The result will look as follows:

AntiForgery validation error

Now that we’ve shown you how easy it is to set up CSRF protection in your ASP.NET MVC applications, make sure you protect all endpoints that make data modifications in your system. This simple, effective solution protects your MVC applications from one of the most common web application vulnerabilities.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!

About the Author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Secure Data Storage

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe. In this post, Taras will review secure data storage in the ASP.NET MVC framework.

Secure Logging
The system should not log any sensitive data (e.g. PCI, PHI, PII) into unprotected log storage. Let’s look at an example from a healthcare application. The following snippet logs a failure to store updates to a particular patient’s record. In this case, the logger writes the patient’s social security number and prescription information, both PHI, to an unprotected log file.

log.Info("Save failed for user: " + user.SSN + "," + user.Prescription);

Instead, the application can write the surrogate key (i.e. the ID of the patient’s record) to the log file, which can be used to look up the patient’s information.

log.Info("Save failed for user: " + user.Identifier);
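The same idea in a language-neutral sketch (Python, with hypothetical field names): the log line is built only from the surrogate key, so PHI fields can never leak into log storage even if they are present on the record.

```python
def safe_log_line(record):
    # Only the surrogate key (record ID) goes to the log; PHI fields
    # such as the SSN or prescription are deliberately never referenced.
    return "Save failed for user: {}".format(record["id"])

record = {"id": 4217, "ssn": "078-05-1120", "prescription": "..."}
line = safe_log_line(record)
```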

Secure Password Storage
Passwords must be salted and hashed for storage. For additional protection, it is recommended that an adaptive hashing algorithm is used to provide a number of iterations or a “work factor” to slow down the computation. If the password hashes are somehow exposed to the public, adaptive hashing algorithms slow down password cracking attacks because they require many more computations. By default, the ASP.NET Identity framework uses the Crypto class shown below to hash its passwords. Notice it uses PBKDF2 with the HMAC-SHA1 algorithm, a 128-bit salt, and 1,000 iterations to produce a 256-bit password hash.

internal static class Crypto
{
    private const int PBKDF2IterCount = 1000; // default for Rfc2898DeriveBytes
    private const int PBKDF2SubkeyLength = 256 / 8; // 256 bits
    private const int SaltSize = 128 / 8; // 128 bits

    public static string HashPassword(string password)
    {
        byte[] salt;
        byte[] subkey;
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, PBKDF2IterCount))
        {
            salt = deriveBytes.Salt;
            subkey = deriveBytes.GetBytes(PBKDF2SubkeyLength);
        }
        // ... the salt and subkey are then combined and encoded for storage
    }
}
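For illustration only (this is not part of the ASP.NET framework), the same derivation can be reproduced with Python’s standard library. The parameters below mirror the Crypto class defaults above: HMAC-SHA1, a 16-byte (128-bit) salt, 1,000 iterations, and a 32-byte (256-bit) subkey.

```python
import hashlib
import os

def hash_password(password):
    salt = os.urandom(16)  # 128-bit random salt
    subkey = hashlib.pbkdf2_hmac(
        "sha1",                   # HMAC-SHA1, matching Rfc2898DeriveBytes
        password.encode("utf-8"),
        salt,
        1000,                     # iteration count (the "work factor")
        dklen=32,                 # 256-bit derived subkey
    )
    return salt, subkey
```

Because the salt is random, each call produces a different hash for the same password; only the stored salt plus the iteration count lets the server re-derive and compare the subkey.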


Secure Application Configuration
System credentials providing access to data storage areas should be accessible to authorized personnel only. All connection strings, system accounts, and encryption keys in the configuration files must be encrypted. Luckily, ASP.NET provides an easy way to protect configuration settings using the aspnet_regiis.exe utility. The following example shows the connection strings section of a configuration file with cleartext system passwords:

<add name="sqlDb" connectionString="data source=;user id=username;" />

After deploying the file, run the following command on the web server to encrypt the connection strings section of the file using Microsoft Data Protection API (DPAPI) machine key from the Windows installation:

aspnet_regiis.exe -pef "connectionStrings" . -prov "DataProtectionConfigurationProvider"

After the command completes, you will see the following snippet in the connection strings section.

<connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">

The best part about this solution is that you don’t need to change any source code to read the original connection string value. The framework automatically decrypts the connection strings section on application start, and places the values in memory for the application to read.

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!


About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).

ASP.NET MVC: Secure Data Transmission

Guest Editor: Today’s post is from Taras Kholopkin. Taras is a Solutions Architect at SoftServe, Inc. In this post, Taras will review secure data transmission in the ASP.NET MVC framework.

Secure data transmission is a critical step towards securing our customer information over the web. In fact, many of our SoftServe applications are regulated by HIPAA, which has the following secure data transmission requirements:

  • Client-server communication should be performed via secured channel (TLS/HTTPS)
  • Client (front-end application) should not pass any PHI data in URL parameters when sending requests to the server
  • All data transmission outside of the system should be performed via secure protocol (HTTPS, Direct Protocol, etc.)

To satisfy this requirement, let’s examine how to secure data transmission in an ASP.NET MVC application.

Enable HTTPS Debugging
One of my favorite features introduced in Visual Studio 2013 is the ability to debug applications over HTTPS using IIS Express. This eliminates the need to configure IIS or virtual directories, create self-signed certificates, etc.

Let’s begin by modifying our SecureWebApp from the ASP.NET MVC: Audit Logging blog entry to use HTTPS. First, select the SecureWebApp project file in the Solution Explorer. Then, view the Properties window. Notice we have an option called “SSL Enabled”. Flip this to true, and you will see an SSL URL appear:


Now, our SecureWebApp can be accessed at this endpoint: https://localhost:44301/.

RequireHttps Attribute
With HTTPS enabled, we can use the ASP.NET MVC framework to require transport layer encryption for our application. By applying the RequireHttps attribute, our application forces an insecure HTTP request to be redirected to a secure HTTPS connection. This attribute can be applied to an individual action, an entire controller (e.g. AccountController), or across the entire application. In our case, we will apply the RequireHttps attribute globally across the application. To do this, add one line of code to App_Start\FilterConfig.cs:

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());

        // Apply HTTPS globally
        filters.Add(new RequireHttpsAttribute());
    }
}

After applying the require HTTPS filter, the application protects itself from insecure HTTP requests. Notice that browsing to http://localhost:8080/ results in a redirection to a secure https://localhost connection.

Now, it should be noted that the redirect uses the standard HTTPS port number 443. In Visual Studio, you will see a 404 error because the IIS Express instance is using port 44301. Don’t be alarmed; on a production system this will work as long as you are hosting your application on the standard HTTPS port 443.

Secure Cookies
Another important step for secure data transmission is protecting authentication cookies from being transmitted insecurely over HTTP. Because our entire site requires HTTPS, start by setting the requireSSL attribute to true on the httpCookies element in the web.config file:

<httpCookies httpOnlyCookies="true" requireSSL="true" lockItem="true"/>

If you are using ASP.NET Identity, set the CookieSecure option to Always to ensure the secure flag is set on the .AspNet.Application cookie. This line of code can be added to the Startup.Auth.cs file:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    CookieHttpOnly = true,
    CookieSecure = CookieSecureOption.Always,
    LoginPath = new PathString("/Account/Login"),
});


If you are using forms authentication, don’t forget to set requireSSL to true on the forms element to protect the .ASPXAUTH cookie.

<authentication mode="Forms">
  <forms loginUrl="~/Account/Login" timeout="2880" requireSSL="true"/>
</authentication>

After making these changes, login to the application and review the set-cookie headers returned by the application. You should see something similar to this:

HTTP/1.1 200 OK

Set-Cookie: ASP.NET_SessionId=ztr4e3; path=/; secure; HttpOnly
Set-Cookie: .AspNet.ApplicationCookie=usg7rt; path=/; secure; HttpOnly
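A quick way to sanity-check these flags in an automated test is to parse the Set-Cookie header. The minimal sketch below (using the example header values above) treats any value-less attribute after the name=value pair as a flag:

```python
def cookie_flags(set_cookie):
    # Attributes after the first "name=value" pair, lower-cased;
    # value-less attributes like "secure" and "HttpOnly" become flags.
    parts = [p.strip() for p in set_cookie.split(";")[1:]]
    return {p.lower() for p in parts if "=" not in p}

header = "ASP.NET_SessionId=ztr4e3; path=/; secure; HttpOnly"
flags = cookie_flags(header)
```

If either flag is missing from the returned set, the cookie can leak over plain HTTP or be read by script, so a deployment check like this is cheap insurance.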

HTTP Strict Transport Security
Finally, we need to add HTTP Strict Transport Security (HSTS) protection to the application. This protection instructs the browser to always communicate with a domain over HTTPS, even if the user attempts to downgrade the request to HTTP. The HSTS header also happens to prevent HTTP downgrade attack tools, such as sslstrip, from performing man-in-the-middle attacks. More information on this tool can be found in the references below.

With IE finally adding support for the HSTS header in IE11 and Edge 12, all major browsers now support the Strict-Transport-Security header. Once again, one line of code is added to the Global.asax.cs file:

protected void Application_EndRequest()
{
    Response.AddHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
}

The max-age option specifies the number of seconds for which the browser should upgrade all requests to HTTPS (31536000 seconds = 1 year), and the includeSubDomains option tells the browser to upgrade all sub-domains as well.
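As a quick check on that arithmetic, and to make the header value explicit, here is a small hypothetical helper (for illustration only) that builds the header string from a number of days:

```python
def hsts_header(days=365, include_subdomains=True):
    # 365 days * 24 hours * 3600 seconds = 31536000 seconds
    value = "max-age={}".format(days * 24 * 3600)
    if include_subdomains:
        value += "; includeSubDomains"
    return value
```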

Alternatively, you can preload your web site’s domain into browser installation packages by registering your domain in the Chrome HSTS preload list.

If you would like to set the HSTS header (and other security-specific response headers) automatically in your ASP.NET applications, the open source Security Header Injection Module (SHIM) can be downloaded via NuGet. More information can be found in the link below.

Making these minor configuration and code changes will ensure your data is secure in transit! To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!


About the author
Taras Kholopkin is a Solutions Architect at SoftServe, and a frequent contributor to the SoftServe United blog. Taras has worked more than nine years in the industry, including extensive experience within the US IT Market. He is responsible for software architectural design and development of Enterprise and SaaS Solutions (including Healthcare Solutions).