Ask the Expert – John Steven

John Steven is the Internal CTO of Cigital. John’s expertise runs the gamut of software security from threat modeling and architectural risk analysis, through static analysis (with an emphasis on automation), to security testing. As a consultant, John has provided strategic direction to many multi-national corporations, and his keen interest in automation keeps Cigital technology at the cutting edge.

This is the last in a series of interviews with appsec experts about threat modeling.

1. Threat Modeling is supposed to be one of the most effective and fundamental practices in secure software development. But a lot of teams that are trying to do secure development find threat modeling too difficult and too expensive. Why is threat modeling so hard – or do people just think it is hard because they don’t understand it?

“Effective in what regard?” The world’s conception of what threat modeling is, what it produces, and what it accomplishes remains in flux. Many find appeal in thinking through “who may be able to do what to your system where”, but we as a community can’t even agree on the vocabulary. I addressed this issue in a blog post. Microsoft, a security “big spender”, published a threat modeling book and tool and still suffers divergence between the STRIDE+DREAD camp and its security testers on what it means to threat model. Without clear and consistent direction on what threat modeling is, what goes into the process, and how you use its output, it will remain hard.

Difficulty buttresses threat modeling misunderstanding. The security community understands vulnerability discovery fairly well. We’ve systematized techniques and automated them with tools (for a fun distraction, start with Gunnar Peterson’s thoughts on checklists). Unfortunately, some assessment gurus continue to couch threat modeling as the vulnerability discovery box labeled “think”: threat modeling becomes the catch-all for the hard stuff we’ve not yet been able to bake into a tool or checklist.

And, yes, threat modelers operate at distinct disadvantages. Modelers almost never possess adequate source material (requirements, a design diagram, or entitlement data). They’re often forced to operate within an aggressive release schedule as well.

Summarizing: it can be tough, security experts aren’t doing the best job of providing support, and downward pressure on time and level of effort (LoE) squeezes quality.

2. What are the keys to building a good threat model? What tools should you use, what practices should you follow? How do you know when you have built a good threat model – when are you done?

My clients and I have surveyed a plethora of open source and commercial tools. We agree that they often create more drag than lift. So, with deference to the good work that SD Elements has done, I don’t recommend starting with a tool: it’s distracting.

Combat threat modeling ambiguity with a “Threat Traceability Matrix”. Create your own simple spreadsheet tool if you like. Simply document:

  • WHO – threat: an agent capable of doing you harm
  • WHAT – misuse/abuse a threat intends to perpetrate
  • WHERE – attack surface on which a threat will conduct a misuse/abuse
  • HOW – specific attack vectors (what –> how creates an attack tree)
  • IMPACT – negative effect on a business objective or violation of policy
  • MITIGATION – cell to track risk acceptance, transfer, or mitigation

Managing this table throughout the threat modeling, vulnerability discovery, and risk management loop will serve you well while remaining largely agnostic of threat modeling religion. I like Cigital’s threat modeling approach: it’s very software- and design-centric, with roots in risk. Others like a more asset-centric approach. IMPORTANT: use whatever practices effectively add high-value “whos”, “whats”, or “hows” that your vulnerability discovery techniques currently omit.
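The traceability matrix really can be as simple as the spreadsheet John describes. A minimal sketch in Python (the column names follow the list above; the sample row and helper names are illustrative, not part of any standard):

```python
import csv
import io

# Columns of the Threat Traceability Matrix described above.
FIELDS = ["who", "what", "where", "how", "impact", "mitigation"]

def new_row(who, what, where, how, impact, mitigation="open"):
    """One row ties a threat agent to an attack and its disposition."""
    return dict(zip(FIELDS, (who, what, where, how, impact, mitigation)))

matrix = [
    new_row(
        who="anonymous internet user",
        what="hijack another user's session",
        where="login / session-cookie handling",
        how="steal cookie via XSS in the search page",
        impact="account takeover; violates privacy policy",
        mitigation="output-encode search results; HttpOnly cookies",
    ),
]

def to_csv(rows):
    """Dump the matrix as CSV so it can live in a plain spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(matrix))
```

Nothing here is clever by design: the point is that a dictionary per threat, exported to CSV, is enough to start the WHO/WHAT/WHERE/HOW loop without buying or building anything.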

Combat difficulty and LoE/cost constraint by recognizing threat modeling will be an iterative and incremental process. Your context will govern whether you decide to go depth-first (into a threat, or a component) or breadth-first across the whole system/threat-space.

You’re probably never done. As you add to or refactor your system, your threat model’s fidelity decays. Update a threat model when new attacks are discovered (what/how), or when motivation/skill/access amongst your adversaries (the “who”) change. Your risk management framework (RMF) governs when you stop.

3. Where should you start with threat modeling? Where’s your best pay-back, are there any quick wins?

0) Draw a diagram – Give developers & modelers a ‘one-pager’ to work from in concert. Start with the attack surface. Focus on the software, not just systems & network.

0) Organize your work – Start by drawing up the traceability matrix.

1) Cut-and-paste – When you threat model application Y, pull in application X’s threat model and evaluate the applicability of its content.

2) Produce output that someone else consumes – Identify attack vectors (“hows”) that can be explored with other vulnerability discovery techniques (dynamic testing, code review, and even design review).

3) Pass-by-reference – Keep “what” enumeration conceptual. If attack vectors fall into the AppSec canon, don’t spend too much time describing “hows”. There’s little pay-back in re-treading what your tools and penetration testers already discover well.

4) Model what’s changing – Projects start green-field only once. The rest of the time you’re changing or extending existing functionality. Rather than stopping everything to model the complete system, start with what’s changing in this release and those key components on which changes depend. This confines scope and keeps work relevant to current development.

Ask the Expert – James Jardine

James Jardine is a senior security consultant at Secure Ideas and the founder of Jardine Software. James has spent over twelve years working in software development with over seven years focusing on application security. His experience includes penetration testing, secure development lifecycle creation, vulnerability management, code review, and training. He has worked with mobile, web, and Windows development with the Microsoft .NET framework. James is a mentor for the Air Force Association’s Cyber Patriot competition. He currently holds the GSSP-NET, CSSLP, MCAD, and MCSD certifications and is located in Jacksonville, Florida.

This is the second in a series of interviews with appsec experts about threat modeling.

1. Threat Modeling is supposed to be one of the most effective and fundamental practices in secure software development. But a lot of teams that are trying to do secure development find threat modeling too difficult and too expensive. Why is threat modeling so hard – or do people just think it is hard because they don’t understand it?

I believe there is a lack of understanding when it comes to threat modeling. Developers hear the term thrown around, but it can be difficult to learn from a book or an online article how to actually create a threat model. Without a good understanding of the process, people get frustrated and tend to stay away from it. In addition to the lack of understanding of threat modeling, I believe that sometimes it is a lack of understanding of the application. If the person/team working on the threat model doesn’t understand the application and the components it talks to, it can severely impact the threat model.

Threat modeling is time-consuming, especially the first iteration, and even more so the first time a team has ever created one. In the development world, that time equates to cost with no direct return on investment. Developers are under tight timelines to get software out the door, and many teams are not given the time to include threat modeling. The time required depends on your experience, the size of the application, and your development process, and it will shrink as you start doing threat modeling on a regular basis. The cost of doing the threat modeling, and adhering to the mitigations during development, will be less than the time spent going back and fixing the vulnerabilities later.

2. What are the keys to building a good threat model? What tools should you use, what practices should you follow? How do you know when you have built a good threat model – when are you done?

Motivation is the biggest key to building a good threat model. If you are forced into threat modeling, you are not going to take the time to do a good job. You also need an understanding of what you want to accomplish with your threat model. Some threat models use data flow diagrams, while others do not. You need to identify what works for your situation. As long as the threat model is helping identify the potential risk areas, threats, and mitigations, then it is doing its job. It is important to set aside some time for threat modeling to focus on it. Too often, I see it as a task buried among other tasks and handed off to one developer. This should be a collaborative effort with a specific focus.

There are many different tools available to help with threat modeling. Microsoft has the SDL Threat Modeling Tool, which is a great tool to use if you don’t have any threat modeling experience. Microsoft also has the Elevation of Privilege card game, which is meant to help developers become familiar with threat modeling in a fun way. You could just use Microsoft Excel or some other spreadsheet software. It all comes down to what works for you to get the results that you need.

Threat models are living documents. They should not really ever be finished because each update to the application could affect the threat model. You can usually tell that you are done with the current iteration when you run out of threats and mitigations. Think of making popcorn: when the time between pops gets longer, the popcorn is probably done. The same goes for threat modeling; when you haven’t identified a new threat or mitigation in a while, it is probably finished.

3. Where should you start with threat modeling? Where’s your best pay-back, are there any quick wins?

In theory, threat modeling should be done before you start actually writing code for an application. However, in most cases, threat modeling is implemented either after the code is started, or after the application is already in production. It is never too late to do threat modeling for your applications. If you are still in the design phase, then you are at an advantage for starting a threat model following best practices. If you are further along in the development lifecycle, then start high-level and work your way down as you can. Often, the threat model will go a few levels deep, gathering more detail; each level may add more actors and more time. You can start with just the top level, identifying inputs/outputs and trust boundaries, which will be a big security gain. Then, as time progresses, you can add a level or additional detail to the current level.

If your company has a bunch of applications that are somewhat similar, you can create a template for your company’s threat models that fills in some of the generic material. For example, protecting data between client and server, or server and server, should appear in all of your apps, so you can have a base template where you don’t need to redo all the legwork, but just verify it is correct for the current application. This can help increase the speed of threat modeling going forward.

Ask the Expert – Rohit Sethi

Rohit Sethi is a specialist in building security controls into the software development life cycle (SDLC). He has helped improve software security at some of the world’s most security-sensitive organizations in financial services, software, ecommerce, healthcare, telecom and other industries. Rohit has built and taught SANS courses on Secure J2EE development. He has spoken and taught at FS-ISAC, RSA, OWASP, Secure Development Conference, Shmoocon, CSI National, Sec Tor, Infosecurity, CFI-CIRT, and many others. Mr. Sethi has written articles for InfoQ, Dr. Dobb’s Journal, TechTarget, Security Focus and the Web Application Security Consortium (WASC), has appeared on Fox News Live, and has been quoted as an expert in application security for ITWorldCanada and Computer World. He also created the OWASP Design Patterns Security Analysis project.

1.  Threat Modeling is supposed to be one of the most effective and fundamental practices in secure software development. But a lot of teams that are trying to do secure development find threat modeling too difficult and too expensive. Why is threat modeling so hard – or do people just think it is hard because they don’t understand it?

I’ve recently started touring across North American OWASP chapters facilitating discussions with the topic “Is there an end to testing ourselves secure?”. It’s an open dialogue about the problems with successfully scaling early-phase secure SDLC activities, and one of the hottest topics is threat modeling. There are a handful of companies that have had success rolling out threat modeling with appropriate tool support. Unfortunately, most people at these OWASP chapters are saying they just do not have the organizational expertise, motivation, and available cycles to really institute proper threat modeling. Many software shops lack a single architecture diagram, let alone the capacity to build data flow diagrams. They’ve found it difficult to enumerate every use case, to identify interaction between processes, and to enumerate defenses against generalized classes of threats such as spoofing.

For threat modeling to scale, it’s supposed to be something that a development team can do on its own without the support of information security. The reality is that unless you have a development force that’s very well educated in security (i.e. generally places where senior executives see application security as a differentiator),  developer interest is generally low and they often feel that they don’t have the requisite expertise to effectively build a threat model.

2. What are the keys to building a good threat model? What tools should you use, what practices should you follow? How do you know when you have built a good threat model – when are you done?

At Security Compass and SD Elements we tend to advocate a more agile approach to threat modeling, ‘Threat Modeling Express’, which is just a name we gave to something that companies have been doing for years. In the absence of a solid investment in a threat modeling library, we think you need application security expertise to build useful threat models. The approach is basically a 2-4 hour collaborative review session between an app sec expert, a developer, and somebody who represents the business. You enumerate a few major use cases, think about threats from a business process perspective (i.e. “I’m worried that somebody can view somebody else’s shopping cart”), collectively rate the risk of these threats, excuse the business person, and try to hash out the technical attack vectors that somebody might use to enable these threats.

Ideally you spend more time focusing on domain-specific (a.k.a. business logic) threats because the domain-agnostic threats won’t vary much between use cases. It’s an informal approach, it’s not very comprehensive, and it’ll almost certainly miss things, but it’s much easier to get started with until you’ve built up a business case to do comprehensive threat modeling. Most importantly, it requires a relatively low time commitment, which is almost always the biggest blocker to doing threat modeling.

3. Where should you start with threat modeling? Where’s your best pay-back, are there any quick wins?

The earlier you can start the better, and doing it right at application design is most cost effective. The reality is that in most cases you won’t have any choice: you’re brought in to do a threat model and development is either well under way or done. In these cases it’s better to treat threat modeling as a precursor to a security assessment. For example, effective threat modeling should identify and prioritize domain-specific threats, which gives a penetration tester or a source code reviewer guidance on where to focus.

In terms of quick wins, you’ll find that domain-agnostic threats tend to become repetitive for classes of applications. For example, almost all web applications have to build controls for XSS and CSRF. These recurring security issues dovetail into a discussion on security requirements, about which I recently wrote an article for InfoQ. If you can institute a solid library of reusable threats for particular application types, then you can focus threat modeling time on the all-important domain-specific issues.

Seven Tips for Picking a Static Analysis Tool

Stephen J, who is a member of our software security mailing list, asked a while back, “Do you have any recommendations on static source code scanners?” James Jardine and I started talking and came up with the following tips.

There are so many commercial static analysis tools from vendors like Armorize, Checkmarx, Coverity, Fortify (HP), Klocwork, IBM, and Veracode that it’s hard to recommend a specific product. Instead, we’d like to focus on seven tips that can help you make the best selection.

1) Test before you buy

This probably sounds obvious but, assuming you haven’t purchased anything yet, definitely do a bake-off and have the vendors run their tools against your actual apps. Do *not* simply run the tool on a vendor-supplied sample app, as the quality of the results, surprisingly, can vary quite a bit across different tools and code bases. Just keep in mind that some vendors will try to avoid this so they can better control the test or, simply, to keep you from getting a free scan.

2) Determine your budget

It’s really important to understand your budget. The cost of commercial tools varies greatly between vendors, and you should know how much you are willing to spend. This includes the cost of the tool, ongoing maintenance, and even things like additional RAM for your build servers (yes, it can take a lot of RAM). And don’t forget to budget for the time people spend setting up, configuring, and reviewing results from the tool.

3) Prepare to spend a lot of time on it

Usually, anything worth doing takes time. Test-driven development and writing thorough unit tests don’t happen on their own. Similarly, using a static analysis tool to find security issues is an investment. It needs both management and developer buy-in to succeed when time consuming issues like false positives, tweaking rules, and product upgrades come up.

4) Cloud or Internal

Are you ok with uploading your source code, or binaries, to a third party site to do the analysis? This is a question only you and your company can answer, but it is an important one. Not only do you have to consider sharing your code, but if the solution is in the cloud, what about integration with your build process? Do you want a solution that can be built into your nightly build process? Tools that are housed internally usually lend themselves better to customization. Do you have the staff and capabilities to extend the tool so that it works better for your applications?

5) What type of support do you need?

Some tools come with much more support than others. One tool may have support personnel available to go over the results of your scan with you, so you understand them, whereas others may not have this type of support. Depending on your in-house skill set this could be a big deciding factor.

6) Check references

Ask other colleagues (not the vendors) who use the tool and get their feedback. These are the ones that will give you honest answers. Keep in mind that even though a specific vendor works well for one application or company it doesn’t mean that it’s the right fit for your situation.

7) Research the products

Make sure that the solution you are considering is the best fit for you. You will definitely want to make sure that the vendor/product you choose supports your development platform and your development process.

Every application is different and it’s impossible for an outsider to recommend the correct static analysis tool without having intimate knowledge of your application and processes. Given the investment in time and money, choosing a vendor and product is a difficult yet important decision. Do your due diligence and research before making a choice.

Take some time to lay out your current landscape and answer some of these questions:

– What language(s) do you develop in?
– What source control system do you use?
– What other types of technologies are used in your application (Ajax, jQuery, Silverlight, Flash, mobile, etc)?
– What skill sets do you have in house when it comes to application security?
– Do you have people that can quickly learn how to operate the scanner? Can they understand the output?
– What type of support do you want from the vendor?
– Are you willing to submit your code to an external site, or does your policy require that it not be shared outside the company boundaries?
– What is your budget for a static analyzer?
– How will the technology work with the people and processes that you have in house?

Hopefully, all these questions don’t change your opinion about needing a static analysis tool, and instead make it a little easier to actually select one.

Thanks for reading!

Frank Kim & James Jardine

The C14N challenge

Failing to properly validate input data is behind at least half of all application security problems. To properly validate input data, you have to start by first ensuring that all data is in the same standard, simple, consistent format: a canonical form. This is because of all the wonderful flexibility in internationalization, data formatting, and encoding that modern platforms, and especially the Web, offer. These are wonderful capabilities that attackers can take advantage of to hide malicious code inside data in all sorts of sneaky ways.

Canonicalization is a conceptually simple idea: take data inputs, and convert all of it into a single, simple, consistent normalized internal format before you do anything else with it. But how exactly do you do this, and how do you know that it has been done properly? What are the steps that programmers need to take to properly canonicalize data? And how do you test for it? This is where things get fuzzy as hell.
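To pin down the conceptually simple version before surveying what the references actually say, here is a rough sketch in Python. It is illustrative only, not production-grade guidance: decode repeatedly until the input stops changing, normalize the Unicode, and only then validate (the pass limit and the filename check are made-up examples):

```python
import unicodedata
from urllib.parse import unquote

MAX_DECODE_PASSES = 10  # guard against pathological nested encodings

def canonicalize(raw: str) -> str:
    """Reduce a possibly percent-encoded string to one canonical form."""
    s = raw
    for _ in range(MAX_DECODE_PASSES):
        decoded = unquote(s)
        if decoded == s:          # stable: no more encoding layers
            break
        s = decoded
    else:
        raise ValueError("too many encoding layers - likely an attack")
    # Normalize Unicode so look-alike sequences compare equal (NFC here;
    # NFKC is stricter and also folds compatibility characters).
    return unicodedata.normalize("NFC", s)

def is_safe_filename(raw: str) -> bool:
    """Validate AFTER canonicalizing, never before."""
    name = canonicalize(raw)
    return "/" not in name and "\\" not in name and ".." not in name

# A double-encoded traversal attempt: %252e -> %2e -> "."
print(is_safe_filename("%252e%252e%252fetc%252fpasswd"))  # False
```

This only handles percent-encoding; a real canonicalizer also has to deal with HTML entities, UTF-8 oddities, and platform-specific path rules, which is exactly the gap the rest of this post is complaining about.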

First, it doesn’t help that if you google “security canonicalization” the top hits are around file naming problems. Sure, this is a canonicalization problem, but not the root of canonicalization problems. If you dig deeper, you’ll find that the explanations of canonicalization don’t actually explain how to do it.

The Official (ISC)2 Guide to the CSSLP explains that

“It is recommended to decode once and canonicalize input into the internal representation before performing validation to ensure that validation is not circumvented”.

But it doesn’t point you to resources that show how to do this.

Cigital’s Build Security In portal, which has a lot of useful information on software security issues, isn’t helpful in this case either. Stack Overflow explains what canonicalization is and why it’s important, but not how to do it.

OWASP’s summary of canonicalization, locale and Unicode issues in app security describes the issues well,  especially from an attack perspective. To protect your app from Unicode issues, it tells developers:

A suitable canonical form should be chosen and all user input canonicalized into that form before any authorization decisions are performed. Security checks should be carried out after UTF-8 decoding is completed. Moreover, it is recommended to check that the UTF-8 encoding is a valid canonical encoding for the symbol it represents.

Um, ok, that sounds good.
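To be fair, the “valid canonical encoding” point is concrete: UTF-8 permits exactly one shortest byte sequence per character, and overlong alternatives (a classic trick for smuggling “/” or “.” past filters) must be rejected. Python’s built-in decoder happens to enforce this, so a minimal check looks like the sketch below (function names are mine, not from any standard):

```python
def decode_utf8_strictly(data: bytes) -> str:
    """Reject malformed and overlong (non-canonical) UTF-8 outright."""
    # Python's UTF-8 decoder raises UnicodeDecodeError for overlong
    # forms such as b"\xc0\xaf", a famous non-canonical encoding of "/".
    return data.decode("utf-8")  # strict error handling is the default

def looks_like_valid_utf8(data: bytes) -> bool:
    try:
        decode_utf8_strictly(data)
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_valid_utf8(b"caf\xc3\xa9"))  # True: canonical UTF-8
print(looks_like_valid_utf8(b"\xc0\xaf"))     # False: overlong "/"
```

The catch, as the rest of this post argues, is that the advice stops there: it never tells you where in your stack this check belongs or what to do on the platforms whose decoders are more forgiving.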

To handle different input formats:

Determine your application’s needs, and set both the asserted language locale and character set appropriately.

Sure…. And then:

Assert the correct locale and character set for your application. Use HTML entities, URL encoding, and so on to prevent Unicode characters being treated improperly by the many divergent browser, server, and application combinations. Test your code and overall solution extensively.

Good advice, especially that “and so on…” part. But what is the programmer actually supposed to do, and how are they supposed to do it? Where’s the sample code?

Microsoft’s Michael Howard and David LeBlanc almost explain the whole canonicalization problem in this sample chapter from Writing Secure Code, at least for developers working on Windows apps in .NET, and I am sure that the rest of the explanation is somewhere in the book, but there aren’t a lot of developers who are going to read the whole book to find it.

OWASP’s ESAPI documentation explains what canonicalization is all about:

Canonicalization is simply the operation of reducing a possibly encoded string down to its simplest form. This is important, because attackers frequently use encoding to change their input in a way that will bypass validation filters, but still be interpreted properly by the target of the attack.

This starts off simple. But the authors then go on to scare you sleepless with descriptions of nasty attacks based on double encoding, double encoding with multiple schemes, and nested encoding (with multiple schemes). Then they say that canonicalization is incredibly tricky, that you may have to deal with over 100 different character encodings, and with higher-level encodings such as percent-encoding, HTML-entity encoding, and bbcode (whatever that is). And so you really should use ESAPI to take care of it. Which you probably should.
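The double-encoding attack they describe is worth making concrete: a filter that decodes exactly once will pass input that still carries a second encoded layer, and whatever decodes it again downstream receives the live payload. A small Python illustration (the “filter” here is deliberately naive, and both function names are invented for the example):

```python
from urllib.parse import unquote

BLOCKED = "<script"

def naive_filter(raw: str) -> bool:
    """Decodes exactly once, then checks -- the classic mistake."""
    return BLOCKED not in unquote(raw).lower()

def downstream_render(raw: str) -> str:
    """Imagine a component further along that decodes again."""
    return unquote(unquote(raw))

payload = "%253Cscript%253Ealert(1)%253C%252Fscript%253E"  # double-encoded

passes = naive_filter(payload)         # True: filter only sees "%3Cscript..."
rendered = downstream_render(payload)  # "<script>alert(1)</script>"
print(passes, rendered)
```

This is exactly why the ESAPI authors insist on decoding to a fixed point (and flagging input that needed more than one pass) rather than decoding a fixed number of times.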

If you are going to use ESAPI to take care of the problem, you can get an idea of where to start courtesy of the Security Ninja. But suppose you’re already using other frameworks that may or may not take care of canonicalization (how would you know?), or you have a lot of legacy code that you need to maintain, and you want to understand the problem and maybe try to take care of it yourself, without reading through the ESAPI source code to figure out how they did it. Where do you go for answers? Maybe I haven’t looked hard enough, but I haven’t found them yet.

Canonicalization is far too important a problem to leave to programmers to figure out and solve on their own. We need experts at SANS and OWASP to step in and provide clear, simple, actionable guidance to developers on how to properly handle input canonicalization on the different major development platforms. Telling developers that a problem is fundamental and critical, and then not explaining how to deal with it properly (without rewriting the code to use something like ESAPI), just isn’t good enough.

Safer Software through Secure Frameworks

We have to make it easier for developers to build secure apps, especially Web apps. We can’t keep forcing everybody who builds an application to understand and plug all of the stupid holes in how the Web works on their own – and to do this perfectly right every time. It’s not just wasteful: it’s not possible.

What we need is implementation-level security issues taken care of at the language and framework level. So that developers can focus on their real jobs: solving design problems and writing code that works.

Security Frameworks and Libraries

One option is to get developers to use secure libraries that take care of application security functions like authentication, authorization, data validation and encryption. There are some good, and free, tools out there to help you.

If you’re a Microsoft .NET developer, there’s Microsoft’s Web Protection Library which provides functions and a runtime engine to protect .NET apps from XSS and SQL Injection.

Java developers can go to OWASP for help. As a starting point, someone at OWASP wrote up a summary of available Java security frameworks and libraries. The most comprehensive, up-to-date choice for Java developers is OWASP’s ESAPI (Enterprise Security API), especially now that the 2.0 release has just come out. There are some serious people behind ESAPI, and you can get some support from the OWASP forums, or pay Aspect Security to get help in implementing it. ESAPI is big, and as I’ve pointed out before, unfortunately it’s not easy to understand or use properly, it’s not nicely packaged, and the documentation is incomplete and out of date – although I know that people are working on this.

Or there’s Apache Shiro (formerly known as JSecurity and later as Apache Ki), another secure framework for Java apps. Although it looks simpler to use and understand than ESAPI and covers most of the main security bases (authentication, authorization, session management, and encryption), it doesn’t take care of important functions like input validation and output encoding. And Spring users have Spring Security (Acegi), a comprehensive but heavyweight authorization and authentication framework.

If you aren’t using ESAPI’s input data validation and output encoding functions to help protect your app from XSS attacks and other injection attacks, you could try OWASP’s Stinger declarative validators (although unfortunately Stinger isn’t an active project any more). Or maybe OWASP’s AntiSamy, an un-cool name for a cool library to sanitize rich HTML input data; or the newer, faster Java HTML Sanitizer built with technology contributed by Google: handy if you’re writing a bulletin board or social network.

Or you could check out the new auto-escaping technology that Jim Manico presented at this year’s SANS AppSec conference, including Facebook’s XHP (for PHP programmers), OWASP’s JXT Java XML Templates, Google’s Context-Sensitive Auto-Sanitization (CSAS), and Javascript sandboxing from Google (CAJA – used in the new OWASP HTML Sanitizer) or from OWASP (JSReg). And JQuery programmers can use Chris Schmidt’s JQuery Encoder to safely escape untrusted data.

To add CSRF protection to your app, try OWASP’s CSRFGuard.  If all you want to do is solve some encryption problems, there’s Jasypt – the Java Simplified Encryption library – and of course The Legion of the Bouncy Castle: a comprehensive encryption library for Java (and C#).

For developers working in other languages like Ruby, PHP, and Objective C, there are ports or implementations of ESAPI available, although these aren’t as up-to-date or complete as the Java implementation. Python developers working with the Django framework can check out Django Security, a new library from Rohit Sethi at SD Elements.

There are other security frameworks and libraries out there – if you know where to look. If you use this code properly, some or maybe even a lot of your security problems will be taken care of for you. But you need to know that you need it in the first place, then you need to know where to find it, and then you need to figure out how to use it properly, and then test it to make sure that everything works. Fast-moving Agile teams building new apps using simple incremental design approaches or maintenance teams fixing up legacy systems aren’t looking for auto-escaping templating technology or enterprise security frameworks like ESAPI because this stuff isn’t helping them to get code out of the door – and many of them don’t even know that they need something like this in the first place.

Frameworks and Libraries that are Secure

To reach application developers, to reduce risk at scale, we need to build security into the application frameworks and libraries, especially the Web frameworks, that developers already use. Application developers rely on these frameworks to solve a lot of fundamental plumbing problems – and they should be able to rely on these frameworks to solve their problems in a secure way. A good framework should protect developers from SQL Injection and other injection attacks, and provide strong session management including CSRF protection, and auto-escaping protection against XSS. Out of the box. By default.
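The “by default” part is what matters. Compare manual escaping, which every developer must remember at every output point, with a tiny auto-escaping render helper. This is a stdlib-only sketch of the idea, not any real framework’s API; engines like the one in Rails 3 bake the same behavior into their templates:

```python
from html import escape
from string import Template

def render(template: str, **values) -> str:
    """Substitute values into a template, HTML-escaping every one.

    Developers cannot forget to escape, because escaping is no longer
    their job -- the framework applies it on every substitution.
    """
    safe = {k: escape(str(v), quote=True) for k, v in values.items()}
    return Template(template).substitute(safe)

page = render("<p>Hello, $name!</p>", name='<script>alert("xss")</script>')
print(page)  # the payload comes out as inert &lt;script&gt;... text
```

An opt-out (a “raw” marker for trusted HTML) is then the rare, visible, reviewable exception, which is exactly the inversion of defaults this section is arguing for.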

Ruby on Rails is one example: a general-purpose Web framework that tries to protect developers from CSRF, SQL Injection and now auto-escaping protection against XSS in release 3. Ruby and Rails have security problems but at least the people behind this platform have taken responsible steps to protect developers from making the kind of fundamental and dangerous mistakes that are so common in Web apps.
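Auto-escaping is easy to illustrate. The sketch below (Python, standard library only; the `render` helper is hypothetical) mimics what an auto-escaping template engine such as Rails 3 ERB does by default: every substituted value is HTML-escaped unless the developer explicitly opts out.

```python
import html

def render(template: str, **values: str) -> str:
    # Escape every substituted value by default, the way auto-escaping
    # template engines behave; developers must opt OUT, not opt in.
    safe = {key: html.escape(val) for key, val in values.items()}
    return template.format(**safe)

page = render("<p>{comment}</p>", comment='<script>alert("xss")</script>')
# The injected script arrives in the page as inert text, not markup.
```

The design point is the default: when escaping happens unless a developer asks otherwise, forgetting about XSS on one field no longer creates a vulnerability.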

Solving security problems in frameworks isn’t a new idea. Jeremiah Grossman at Whitehat Security talked about this back in 2006:

The only way I see software security improving significantly is if “security” is baked into the modern development frameworks and is virtually transparent. Remember, the primary developer’s mission is to pump out code to drive business, and that’s what they’ll do no matter what. When developers find that it’s WAY easier and WAY better to do RIGHT by security, then we’ll get somewhere.

Unfortunately, we haven’t made a lot of progress on this over the last 5 years. A handful of people involved in OWASP’s Developer Outreach are trying to change this, working within some of the framework communities to help make frameworks more secure. It seems hopeless – after all, Java developers alone have dozens of frameworks to choose from for Web apps. But Jim Manico made an important point: to start making a real difference, we don’t need to secure every framework. We just need to make common, important frameworks safer to use, and secure by default: Spring, Struts, Rails, the main PHP and Python frameworks. This will reach a lot of people and protect a lot of software. Hopefully more people will stick with these frameworks because they are feature-rich, AND because they are secure by default. Other teams building other frameworks will respond and build security in, in order to compete. Building safer frameworks could become self-reinforcing as more people get involved and start understanding the problems and solving them. Even if this doesn’t happen, at least developers using some of the more popular frameworks will be protected.

Building secure frameworks and frameworks that are secure isn’t going to solve every application security problem. There are security problems that frameworks can’t take care of. We still need to improve the tools and languages that we use, and the way that we use them, the way that we design and build software. And the frameworks themselves will become targets of attack of course. And we’re all going to have to play catch up. Changing the next release of web application frameworks isn’t going to make a difference to people stuck maintaining code that has already been written. Developers will have to upgrade to take advantage of these new frameworks and learn how to use the new features. But it will go a long way to making the software world safer.

There are only a handful of people engaged in this work so far – most of them on a part-time basis. Such a small group of people, even smart and committed people, aren’t going to be able to change the world on their own. This is important work: I think some of the most important work in application security today. If a government or a major corporation (unfortunately, I don’t work for either) is really looking for a way to help make the Web safer, this is it – getting the best and brightest and most dedicated people from the appsec and application development communities to work together and solve foundational problems with wide-reaching impact.

To find out more, or to get engaged, join the OWASP Developer Outreach.

Agile Security for Product Owners – Requirements

Much of our cumulative application security knowledge and tooling is aimed at detection, rather than prevention, of vulnerabilities. This is a natural consequence of the fact that the primary job of many information security analysts is to look for security vulnerabilities and provide high-level remediation suggestions rather than be involved in detailed remediation efforts. Another reason is that most organizations want to get a grip on what security exposures they currently have before focusing their efforts on preventing future exposures.

The consequence for application owners is that many of the tools that we have at our disposal are focused on vulnerability detection:

  • Static analysis tools
  • Runtime vulnerability scanning tools
  • Verification standards such as the ASVS

Luckily, application security experts have long argued the case for focusing on prevention rather than relying solely on detection of security vulnerabilities. Building in security early in the SDLC is a matter of cost effectiveness. Efforts such as OWASP’s Enterprise Security API deal with this issue head-on at the code level. At a higher level, some organizations have developed Secure Software Development Life Cycle models such as the Software Assurance Maturity Model, the Building Security In Maturity Model, or Microsoft’s SDL.

Comprehensiveness Bias

Most SDLC security initiatives treat security requirements engineering as a foundational activity. The challenge in an Agile/Scrum context is that requirements (i.e. the product backlog) are constantly shifting. Iterations are short and intense, and there is little time to introduce new processes such as architectural analysis, threat risk assessments, or week-long penetration testing. Moreover, if we start with a minimal-functionality system, as Agile methodologies suggest, then we can rarely, in practice, introduce all security requirements at one time. As an agile product owner, you are confronted with mountains of secure SDLC best practices that work best when you have a complete set of requirements up front: the antithesis of agile processes.

The Security Burden

Suppose you’re near the inception of a new product or major re-release and you have short iterations. Looking at your product backlog, you likely have several months of tickets/defects/stories that represent mostly functional requirements. Ideally you’ve carefully groomed the backlog and prioritized items according to their importance in building a minimally functional system and their perceived benefit to end users.

Now let’s say you want to add security requirements. Based on personal experience consulting at Security Compass and researching for SD Elements, I’d estimate that most large-scale enterprise web applications will have security requirements with a cumulative total of several months of effort. Simply put, you’ll be unable to fulfill all security requirements in one shot. In addition, excessive focus on a non-functional concern like security makes it challenging to show incremental value to stakeholders (unless your stakeholders care deeply about security). As a result, you’ll be faced with a somewhat ugly reality of agile development on new projects: Given sufficient visibility into security requirements, an agile application will have known security gaps in its early iterations.

You could defy the odds by focusing on security and not releasing the product until all known security gaps have been addressed. The reality, particularly for anyone operating in a competitive environment where security is not one of the top organizational priorities, is that time-to-market pressures make this infeasible.

Incremental Security

Like everything else in the product backlog you must prioritize security requirements. The Software Assurance Maturity Model spells this fact out explicitly:

“It is important to not attempt to bring in too many best-practice requirements into each development iteration since there is a time trade-off with design and implementation. The recommended approach is to slowly add best-practices over successive development cycles to bolster the software’s overall assurance profile over time.”

There are a couple of caveats you should keep in mind when prioritizing security requirements:

  1. Choose risk reduction over perceived customer utility: When developers demonstrate their increment of progress at the end of the iteration, the security requirements will generally be about as visible as refactoring class names unless you actually perform penetration testing. I say generally because there are some cases where security is visible. For example, you might be tempted to add OAuth support in the names of both security and usefulness for end users. The problem is that while OAuth might help improve your security posture, you may have a much more pressing security risk such as ensuring your confidential data is sent over SSL. If you’ve resolved to build security into your product, choose risk reduction over customer utility for security requirements.
  2. Differentiate between bugs and flaws: In the nomenclature of security defects, Dr. Gary McGraw provides a useful distinction between flaws and bugs: “Bug (implementation): A software security defect that can be detected locally through static analysis. Flaw (design): A software security defect at the architecture or design level. Flaws may not be apparent given only source code of a software system.” The way you treat the corresponding controls is important:

    • a) Bug mitigations generally don’t belong on your product backlog because they don’t have one-time acceptance criteria; these mitigations have to be addressed in every iteration. You ought to be thinking of them at all times. These are issues like proper output escaping, safe library usage, variable binding in SQL statements, or making sure you don’t log confidential data; they are cross-cutting concerns that amount to following coding conventions. Developer education and code review / static analysis are the best ways to eradicate bugs.
    • b) Flaw mitigations do belong in a product backlog. They have acceptance criteria and they need to be prioritized amongst all other requirements. These include issues such as authentication, centralized authorization routines, and session management. Once you’ve figured out which requirements are flaw mitigations, you can perform a second level of differentiation: which requirements prevent an exploitable vulnerability versus which provide defense in depth. Blocking brute force attacks prevents an exploitable vulnerability; hiding verbose error messages is a measure of defense in depth. Set your sights on the first group as a priority. Strive for your application to be free of remotely exploitable vulnerabilities by the time it stores, processes, or transmits production data. Depending on your risk profile, you may also need to consider locally exploitable vulnerabilities on the server and/or compliance-driven requirements that may not accurately reflect risk. If you’re lucky enough to start with non-production data, then you have the latitude to build these controls across several iterations. However, once your application has production data, you are knowingly putting your organization’s and (if applicable) your clients’ data at risk if the application lacks known security controls. If you must accept that you’ll have exploitable security vulnerabilities, always compare the levels of risk that each security control mitigates and prioritize for the most significant risk reduction.

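To illustrate the “bug” class from (a) above – variable binding in SQL statements – here is a minimal sketch using Python’s standard-library sqlite3 driver. The table and values are illustrative; the point is that binding is a convention followed on every query, not a one-time backlog item.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def find_user(conn, name):
    # The placeholder binds `name` strictly as data, never as SQL,
    # so a hostile value cannot alter the query's structure.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

legit = find_user(conn, "alice")          # matches the stored row
attack = find_user(conn, "' OR '1'='1")   # injection attempt returns nothing
```

Code review and static analysis can then check a single question – “is every query parameterized?” – rather than auditing each query’s string handling individually.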
In Summary

As a product owner you have scores of tools and processes to help you find vulnerabilities after they’ve been coded. Adopting security throughout the SDLC is cost effective, but difficult in practice to implement for agile shops. Security requirements, in particular, are difficult to prioritize or scope in early iterations. You can make the task more manageable by prioritizing requirements that prevent exploitable flaws, sorted by risk.

Four Attacks on OAuth – How to Secure Your OAuth Implementation

This article briefly introduces an emerging open-protocol technology, OAuth, and presents scenarios and examples of how insecure implementations of OAuth can be abused maliciously.  We examine the characteristics of some of these attack vectors, and discuss ideas on countermeasures against possible attacks on users or applications that have implemented this protocol.

An Introduction to the Protocol

OAuth is an emerging authorization standard that is being adopted by a growing number of sites such as Twitter, Facebook, Google, Yahoo!, Netflix, Flickr, and several other Resource Providers and social networking sites.  It is an open-web specification for organizations to access protected resources on each other’s web sites.  This is achieved by allowing users to grant a third-party application access to their protected content without having to provide that application with their credentials.

Unlike OpenID, which is a federated authentication protocol, OAuth (short for Open Authorization) is intended for delegated authorization only and does not attempt to address user authentication concerns.

There are several excellent online resources, referenced at the end of this article, that provide great material about the protocol and its use. However, we need to define a few key OAuth concepts that will be referenced throughout this article:

Key Concepts

  • Server or Resource Provider: the web site or web services API where the User keeps her protected data; it controls all OAuth restrictions
  • User or Resource Owner: a member of the Resource Provider who wants to share certain resources with a third-party web site
  • Client or Consumer Application: typically a web-based or mobile application that wants to access the User’s Protected Resources
  • Client Credentials: the consumer key and consumer secret, used to authenticate the Client
  • Token Credentials: the access token and token secret, used in place of the User’s username and password


The business functionality in this example is real; however, the names of the organizations and users are fictional. A Client application provides Avon Barksdale a bill consolidation service based on the trust Avon has established with various Resource Providers.

In the diagram below, we have demonstrated a typical OAuth handshake (aka OAuth dance) and delegation workflow, which includes the perspectives of the User, Client, and the Server in our example:

Figure 1: The entire OAuth 1.0 Process Workflow

Insecure Implementations and Solutions

In this section, we will examine some of the security challenges and insecure implementations of the protocol, and provide solution ideas. It is important to note that like most other protocols, OAuth does not provide native security nor does it guarantee the privacy of protected data.  It relies on the implementers of OAuth, and other protocols such as SSL, to protect the exchange of data amongst parties. Consequently, most security risks described below do not reside within the protocol itself, but rather its use.

1. Lack Of Data Confidentiality and Server Trust

While analyzing the specification, and the way OAuth leverages keys, tokens, and secrets to establish digital signatures and secure requests, one realizes that much of the effort put into this protocol was to avoid the need to use HTTPS requests altogether. The specification attempts to provide fairly robust schemes to ensure the integrity of a request using its signatures; however, it cannot guarantee a request’s confidentiality, and that can result in several threats, some of which are listed below.
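To make the signing scheme concrete, here is a simplified Python sketch of OAuth 1.0’s HMAC-SHA1 signature: the base string concatenates the HTTP method, the percent-encoded URL, and the percent-encoded normalized parameters, and the signing key joins the consumer secret and token secret. (A real implementation must also handle duplicate parameter names and body parameters; this sketch omits those cases, and the parameter values are illustrative.)

```python
import base64, hashlib, hmac
from urllib.parse import quote

def sign(method, url, params, consumer_secret, token_secret=""):
    enc = lambda s: quote(str(s), safe="")
    # Normalize parameters: percent-encode, sort, join as key=value pairs.
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded URL & encoded parameters.
    base = "&".join([method.upper(), enc(url), enc(norm)])
    # Signing key: encoded consumer secret '&' encoded token secret.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign("GET", "https://api.example.com/resource",
           {"oauth_nonce": "abc123", "oauth_timestamp": "1300000000",
            "oauth_consumer_key": "key"}, "consumer-secret", "token-secret")
```

Note what the signature does and does not buy: it proves the signed fields were not tampered with, but without TLS the request itself still travels in the clear.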

1.1 Brute Force Attacks Against the Server

An attacker with access to the network will be able to eavesdrop on the traffic and gain access to the specific request parameters and attributes such as oauth_signature, consumer_key, oauth_token, signature_method (HMAC-SHA1), timestamp, or custom parameter data. These values could assist the attacker in gaining enough understanding of the request to craft a packet and launch a brute force attack against the Server. Creating tokens and shared-secrets that are long, random and resistant to these types of attacks can reduce this threat.
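A sketch of what “long and random” means in practice, using Python’s secrets module (the helper name is illustrative; any CSPRNG-backed generator serves the same purpose):

```python
import secrets

def new_shared_secret(nbytes: int = 32) -> str:
    # 32 bytes = 256 bits of entropy from the OS CSPRNG: infeasible to
    # brute-force even when an attacker has captured signed requests.
    return secrets.token_urlsafe(nbytes)
```

The contrast to avoid is short or guessable secrets (sequential IDs, timestamps, weak PRNG output), which turn the eavesdropped parameters above into a practical offline attack.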

1.2 Lack of Server Trust

This protocol is all about authenticating the Client (consumer key and secret) and the User to the Server, but not the other way around. There is no protocol support to check the authenticity of the Server during the handshakes. So essentially, through phishing or other exploits, user requests can be directed to a malicious Server where the User can receive malicious or misleading payloads. This could adversely impact not only the Users, but also the Client and Server in terms of their credibility and bottom line.

1.3 Solutions

As we have seen, the OAuth signature methods were primarily designed for insecure communications, mainly non-HTTPS.  Therefore, TLS/SSL is the recommended approach to prevent any eavesdropping during the data exchange.

Furthermore, Resource Providers can limit the likelihood of a replay attack from a tampered request by implementing the protocol’s Nonce and Timestamp attributes. The value of the oauth_nonce attribute is a randomly generated value included in the signed Client request, and the oauth_timestamp defines the retention timeframe for the Nonce. The following example from Twitter .NET Development for OAuth Integration demonstrates the creation of these attributes by the application:

Figure 2: Sample oauth_nonce and oauth_timestamp methods
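Equivalent oauth_nonce and oauth_timestamp helpers in Python might look like the sketch below; the Server is expected to reject any nonce it has already seen within the timestamp window, which is what defeats straight replay.

```python
import secrets
import time

def oauth_nonce() -> str:
    # A random, single-use value included in the signed request; the
    # server remembers recent nonces and rejects duplicates.
    return secrets.token_hex(16)

def oauth_timestamp() -> str:
    # Seconds since the epoch, per the OAuth 1.0 specification; bounds
    # how long the server must remember nonces.
    return str(int(time.time()))
```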

2. Insecure Storage of Secrets

The two areas of concern are to protect:

  • Shared-secrets on the Server
  • Consumer secrets on cross-platform clients

2.1 Servers

On the Server, in order to compute the oauth_signature, the Server must be able to access the shared-secrets (a signed combination of consumer secret and token secret) in plaintext format as opposed to a hashed value. Naturally, if the Server and all its shared-secrets were to be compromised via physical access or social engineering exploits, the attacker could own all the credentials and act on behalf of any Resource Owner of the compromised Server.

2.2 Clients

OAuth Clients use the consumer key and consumer secret combination to provide their authenticity to the Server. This allows:

  • Clients to uniquely identify themselves to the Server, giving the Resource Provider the ability to keep track of the source of all requests
  • The Server to let the User know which Client application is attempting to gain access to their account and protected resource

Securing the consumer secret on browser-based web application clients introduces the same exact challenges as securing shared-secrets on the Server. However, the installed mobile and desktop applications become much more problematic:

OAuth’s dependency on browser-based authorization creates an inherent implementation problem for mobile and desktop applications, which by default do not run in the User’s browser. Moreover, from a pure security perspective, the main concern is when implementers store and obfuscate the key/secret combination in the Client application itself. This makes key rotation nearly impossible and exposes the consumer secret to anyone who can decompile the application’s source code or binary. For instance, to compromise the Client Credentials for Twitter’s Client on Android, an attacker can simply disassemble classes.dex with the Android disassembler tool, dexdump:

# dexdump -d classes.dex

2.3 The Threats

It is important to understand that the core function of the consumer secret is to let the Server know which Client is making the request. So essentially, a compromised consumer secret does NOT directly grant access to User’s protected data.  However, compromised consumer credentials could lead to the following security threats:

  • In use cases where the Server MUST keep track of all Clients and their authenticity to fulfill a business requirement (e.g., charging the Client for each User request, or client application provisioning tasks), safeguarding consumer credentials becomes critical
  • The attacker can use the compromised Client Credentials to imitate a valid Client and launch a phishing attack, where he can submit a request to the Server on behalf of the victim and gain access to sensitive data

Regardless of the use case, whenever the consumer secret of a popular desktop or mobile Client application is compromised, the Server must revoke access for ALL users of the compromised Client application. The Client must then register for a new key (a lengthy process), embed it into the application as part of a new release, and deploy it to all its Users – a nightmarish process that could take weeks, and one that will surely impact the Client’s credibility and business objectives.

2.4 Solutions

Protecting the integrity of the Client Credentials and Token Credentials works fairly well when it comes to storing them on servers. The secrets can be isolated and stored in a database or file-system with proper access control, file permission, physical security, and even database or disk encryption.

For securing Client Credentials on mobile application clients, follow security best practices for storing sensitive, non-stale data such as application passwords and secrets.

The majority of current OAuth mobile and desktop Client applications embed the Client Credentials directly into the application.  This solution leaves a lot to be desired on the security front.

A few alternative implementations have attempted to reduce the security risks through obfuscation, or by simply shifting the security threat elsewhere:

  • Obfuscate the consumer secret by splitting it into segments or shifting characters by an offset, then embed it in the application
  • Store the Client Credentials on the Client’s back-end server.  The credentials can then be negotiated with the front-end application prior to the Client/Server handshake.  However, nothing stops a rogue client from retrieving the Client Credentials from the application’s back-end server, making this design fundamentally unsound

Let’s consider a better architectural concept that might require some deviation from the typical OAuth flow:

  • The Service Provider could require certain Clients to bind their Client Credentials with a device-specific identifier (similar to a session id).  Prior to the initial OAuth handshake, the mobile or desktop application can authenticate the User to the Client application via username and password.  The mobile Client can then call home to retrieve the Device ID from the Client’s back-end server and store it securely on the device itself (e.g. iOS Keychain).  Once the initial request is submitted to the Server with both the Client Credentials and Device ID, the Service Provider can validate the authenticity of the Device ID against the Client’s back-end server.

The example below illustrates this solution:

Figure 3: Alternative solution for authenticating OAuth Clients

OAuth’s strength is that it never exposes a User’s Server credentials to the Client application. Instead, it provides the Client application with temporary access authorization that the User can revoke if necessary.  So ultimately, regardless of the solution, when the Server cannot be sure of the authenticity of the Client’s key/secret, it should not rely solely on these attributes to validate the Client.

3. OAuth Implementation with Flawed Session Management

As described in the first example (see figure 1), during the authorization step, the User is prompted by the Server to enter his login credentials to grant permission.  The user then is redirected back to the Client application to complete the flow.  The main issue is with a specific OAuth Server implementation, such as Twitter’s, where the user remains logged in on the Server even after leaving the Client application.

This session management issue, in conjunction with the Server’s implementation of OAuth Auto Processing – automatically processing authorization requests from clients that have been previously authorized by the Server – presents serious security concerns.

Let’s take a closer look at Twitter’s implementation:

There are numerous third-party Twitter applications for reading or sending Tweets, and as of August 2010, all third-party Twitter applications have had to use OAuth exclusively for their delegation-based integration with Twitter APIs.  Here is an example of this flawed implementation:

twitterfeed is a popular application that publishes blog feeds to users’ Twitter accounts.

Avon Barksdale registers and signs into his twitterfeed client.  He then selects a publishing service such as Twitter to post his blog.

Figure 4: twitterfeed client authorization page

Avon is directed to Twitter’s authorization endpoint where he signs into Twitter and grants access.

Figure 5 : Twitter’s authorization page for third-party applications

Now, Avon is redirected back to twitterfeed, where he completes the feed setup; he then signs out of twitterfeed and walks away.

Figure 6: twitterfeed’s workflow page post server authorization

A malicious user with access to the unattended browser can now fully compromise Avon’s Twitter account, leaving Avon to deal with the consequences.

Figure 7: Twitter session remains valid in the background after the user signs out of twitterfeed

3.1 The Threat

The User might not be aware that he has a Server session open with the Resource Provider in the background.  He might simply log out of his Client session and step away from his browser.  The threat is elevated in use cases where leaving a session unattended is inevitable, such as when public computers are used to access OAuth-enabled APIs.  Imagine if Avon were a student who frequently accessed public computers for his daily tweet feeds: he could literally leave a Twitter session open on every computer he uses.

3.2 Solutions

Resource Providers such as Twitter should always log the User out after handling the third-party OAuth authorization flow in situations where the User was not already logged into the Server before the OAuth initiation request.

Auto Processing should be turned off.  That is, servers should not automatically process requests from clients that have been previously authorized by the resource owner.  If the consumer secret is compromised, a rogue Client can gain ongoing unauthorized access to protected resources without the User’s explicit approval.

4. Session Fixation Attack with OAuth

OAuth’s key contributor, Eran Hammer-Lahav, has published a detailed post, referenced at the end of this article, about this specific attack, which caused a major disruption to many OAuth consumers and providers.  For instance, Twitter had to turn off its third-party integration APIs for an extended period of time, impacting its users as well as all the third-party applications that depended on Twitter APIs.

In summary, the Session Fixation flaw makes it possible for an attacker to use social-engineering tactics to lure users into exposing their data in a few simple steps:

  • The attacker uses a valid Client app to initiate a request to the Resource Provider to obtain a temporary Request Token, and receives a redirect URI containing this token

  • At a later time, the attacker uses social-engineering and phishing tactics to lure a victim into following the redirect link with the server-provided Request Token
  • The victim follows the link and grants the Client access to protected resources. This process authorizes the Request Token and associates it with the Resource Owner

The above steps demonstrate that the Resource Provider has no way of knowing whose Request Token is being authorized, and cannot distinguish between the two users.

  • The attacker constructs the callback URI with the “authorized” Access Token and returns to the Client: …/oauth_token=XyZ&…

  • If constructed properly, the Attacker’s client account is now associated with victim’s authorized Access Token

The vulnerability is that there is no way for the Server to know whose key it is authorizing during the handshake.  The author describes the attack in detail and proposes risk-reducing solutions. The newer version of the protocol, OAuth 2.0, attempts to remediate this issue by validating the redirect URI during the authorization key exchange.
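A minimal sketch (Python; the client registry contents are hypothetical) of the exact-match redirect URI check an OAuth 2.0 authorization server can perform during that exchange:

```python
# Hypothetical registry, populated when clients register with the server.
REGISTERED_REDIRECTS = {
    "client-abc": "https://client.example.com/callback",
}

def redirect_allowed(client_id: str, redirect_uri: str) -> bool:
    # Exact string comparison only: prefix or substring matching can be
    # satisfied by attackers via open redirectors on the client's domain.
    return REGISTERED_REDIRECTS.get(client_id) == redirect_uri
```

Because the callback target is pinned at registration time, an attacker can no longer steer the authorized token to a URI of his choosing.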


As the web grows, more and more sites rely on distributed services and cloud computing.  And in today’s integrated web, users demand more functionality in terms of usability, cross-platform integration, cross-channel experiences and so on.  It is up to the implementers and security professionals to safeguard user and organizational data and ensure business continuity. The implementers should not rely on the protocol to provide all security measures, but instead they should be careful to consider all avenues of attack exposed by the protocol, and design their applications accordingly.


References

Protocol Specification
OAuth Extensions and Code Sample
Google Provider Specification
Twitter API References
OAuth Session Fixation

About the Author

Khash Kiani is a senior security consultant at a large health care organization. He specializes in security architecture, application penetration testing, PCI, and social-engineering assessments.  Khash currently holds the GIAC GWAPT, GCIH, and GSNA certifications.  He can be reached at