Inhibiting Malicious Macros by Blocking Risky API Calls


Microsoft Office macros have been the bane of security analysts’ lives since the late 1990s. Their flexibility and functionality make them ideal for malware authors to use as a primary-stage payload delivery mechanism, and to date the challenge they pose remains unsolved. Many organisations refrain from blocking them completely due to the impact it would have on their users, and instead rely on a combination of detection and mitigation technology to compensate for the risk they pose.

I have long thought that it would be ideal if Microsoft were able to implement granular controls via Group Policy over which activities macros are permitted to perform, for example allowing the blocking of process execution or network activity. In the interim, I have been experimenting with alternative methods to limit which functions a macro can call, either by redefining or by patching the high-risk API calls to prevent their intended outcome.

Below we can see a very basic example of what a malicious macro might look like. The fact that the subroutine is named “AutoOpen” means that it will be run as soon as the document is opened (and, if required, macros are enabled). The command “Shell” at the beginning of the line instructs Word to execute the process specified in the parameter, so in this case the macro will launch the Windows calculator.
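A minimal sketch of such a macro (illustrative code, reconstructed from the description above) looks like this:

```vb
Sub AutoOpen()
    ' Runs automatically as soon as the document is opened
    ' (and, if required, macros are enabled)
    Shell "cmd.exe /c calc.exe"
End Sub
```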




Option 1 – Using global templates to override built-in functions

Shell is the function built into VBA that is used to launch new processes and, as such, can often be found in malicious macros. Malware authors leveraging macros as a primary-stage payload typically want to either download or drop a file and then execute it. Alternatively, they may execute an existing process on the system, such as PowerShell, with parameters that take actions against their intended target. Common to both of these methods is the requirement for something to be run, and this is where Shell comes in. If we were able to somehow disable the Shell function, then we might be able to prevent the malicious macro from achieving its goal.

The first experiment involved redefining the Shell function, with the hope that the newly defined function would override the built-in one. We can achieve this by placing the code in the global template (normal.dotm) or within an add-in (effectively a template that is loaded every time Word is opened); either way, our code becomes part of the malicious macro’s execution.

Below we can see the redefined function used as part of this experiment. This was saved within normal.dotm (the global template), with the intention that when Shell was called, rather than calling the system function to launch the process, our redefined function would be called, resulting in a popup warning the user that the activity had been blocked.
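A redefinition along these lines achieves this (a sketch only; the signature mirrors VBA’s built-in Shell, and the exact message text is an assumption):

```vb
' Placed in normal.dotm so it shadows the built-in VBA Shell function
Public Function Shell(PathName As String, Optional WindowStyle As Integer = 1) As Double
    ' Warn the user instead of launching the requested process
    MsgBox "Blocked attempt to execute: " & PathName, vbExclamation, "Macro activity blocked"
    Shell = 0   ' Return 0, mimicking a failed launch
End Function
```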




Then, using the one-line test macro that attempts to execute ‘cmd.exe /c calc.exe’, we can see that the redefined function worked well and blocked the execution of the process. Unfortunately, it only seemed to work when the malicious code was placed within the document itself and not when it was within a separate module. The reason for this is unknown and warrants further research; if solved, this would be the easiest way to achieve our goal.




Option 2 – Patching the associated API in VBE7.dll to alter behaviour

For a more surgical approach, we can look at API hooking the DLL that is leveraged when macro code is executed. Looking through the DLLs loaded into Word at runtime, we can observe that VBE7.dll includes a large number of exports that appear related to the execution of macro code. Below we can see a snapshot from the list of exports with one in particular highlighted, rtcShell, which warranted further investigation.




Digging into this function, we can see that it is nothing more than a small wrapper around the CreateProcess Windows API call. The particular code block that executes the process is listed below. Looking at the parameters being passed to the API, we can see that the EBX register holds a pointer to the name of the process to be executed.
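Schematically, the call site has the following shape (illustrative x86 only; the register usage follows the description above, but these are not the actual VBE7.dll instructions or offsets):

```
...                  ; lpProcessInformation, lpStartupInfo,
...                  ; lpCurrentDirectory, lpEnvironment pushed first
push 0               ; dwCreationFlags = 0 (the parameter we will patch)
push 0               ; bInheritHandles = FALSE
push 0               ; lpThreadAttributes
push 0               ; lpProcessAttributes
push ebx             ; lpCommandLine -> name of the process to execute
push 0               ; lpApplicationName
call CreateProcessW
```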




At this point, if we want to alter the way the process is launched, we can patch the instructions to change the process creation flags, one of the parameters passed to the CreateProcess API. Alternatively, we could patch out the API call entirely to prevent the execution from occurring.

In this case we are going to leverage the CREATE_SUSPENDED process creation flag to allow the process to launch, but immediately be frozen into a suspended state, preventing the code from running.




The patch sets the ECX register to 4, the numeric value of CREATE_SUSPENDED, which is then pushed as the creation-flags parameter to the CreateProcess API call, resulting in the process being instructed to launch in a suspended state.
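The patched sequence is therefore along these lines (illustrative instructions, not the actual bytes or offsets within VBE7.dll):

```
mov  ecx, 4          ; 4 = CREATE_SUSPENDED
push ecx             ; dwCreationFlags parameter to CreateProcess
```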




Following this patch, a test macro designed to launch PowerShell was executed, and as we can see from the Process Explorer window, the process was placed into a suspended state, mitigating any impact and allowing an opportunity for endpoint protection sensors to scan and identify any threatening behaviour.




These are just proofs of concept, and would require more research and work before being used in a production environment. I still believe that the ideal state would be for Microsoft to implement more granular security controls against macros, allowing organisations to continue using them while empowering them with the ability to limit their capabilities depending on individual risk appetites.


Any comments or feedback would be very welcome!


-Adam (Twitter: @CyberKramer)

Three Steps to Communicate Threat Intelligence to Executives

As the community of security professionals matures, the intel community, incident response professionals, and security operations are merging. One struggle folks have is how to make threat intelligence actionable for the business. You have the large data sets from Recorded Future, yet how do you apply the data in a practical way and communicate it to the business?

The answer is to keep it short and to the point, and in the risk language they use. Bring solutions, not problems, to the table. An executive’s greatest challenge is time. They want smart people who can make the day-to-day decisions to protect the company from attacks, so that they can address large strategic topics.

In addition, the business media channels are talking about security problems. Just browse to The Wall Street Journal and see the latest cyber security news posts. That is good for our profession, so we need to be proactive. How do we do that? There are three steps.

The first is to know what is a business differentiator for the company. Ask the executives what makes the organization competitive. Why is it different? Ask how that difference might be vulnerable to cyber attack.

The second step is to do the analysis. Be proactive and situationally aware; Recorded Future can provide access to that data. Know not only the threats, but how they apply to the organization. Where in the kill chain does the attack occur? Does it already map to an attacker campaign?

I’ll give a fictitious example. You are an analyst at a power company. Reading the latest blogs on exploits and attacks, you see a media release about a new type of malware attacking the power grid. You know from business discussions that power production is critical for the business. Is the threat, the new malware, a risk to the organization?

Using Recorded Future, search for the first time the malware was mentioned in the scorecard. In the case below of the Furtim malware, Recorded Future data shows blogs from a few years ago and a VirusTotal post too. So much for the vendor hype that this is a new threat.

Ask yourself: does the organization have mitigations and controls in place to stop the threat? In this case, at this point in time, the organization’s anti-virus does detect the malware. Are there any other controls in place? Are there mitigations in place? Maybe there is an IPS signature. If not, then run the attack in a test environment and build blue-team solutions. Begin tracking the attack indicators and possible campaign by mapping the attack to the kill chain for the organization.

The third step is to communicate. Executives understand risk, so explaining the threat in terms of risk is effective; if there is not a control in place, find one and communicate when it will be implemented. Below is a canned example:

Dear All,

I’d like to make you aware of an item, in case you are asked about it. It involves a business concern, the power grid. It appears there are a couple of articles on the internet about a malware sample called Furtim that is getting more media attention.

 One of the articles becoming popular is found here:

The article makes a few points to gain media attention, such as:

  • “sophisticated malware campaign specifically targeting at least one European energy company”
  • “potentially shut down an energy grid “
  • “The sample appears to be targeting facilities that not only have software security in place, but physical security as well “

So I took a look to see if we are protected or if there are gaps in the organization. What the authors fail to mention is that for the infection to work, other gaps are needed. For example, the sophisticated malware in the article is a “final payload” and needs to use another, common malware to infect a computer before it can be harmful. After researching the Furtim malware, the VirusTotal results found here show that anti-virus detects the common malware used as part of the infection chain. Granted, the malware can be re-coded, but based on the current information I have today, this is a low risk to the organization’s environment. I’ll continue to monitor logs for specific connections out from our network that are related to command and control.

If you have questions, please feel free to contact me.

In general, follow the three steps to apply threat intelligence. One, know the organization you are defending: what drives it? Two, be technically proactive: do the research, analyze the attack data, and map it to the kill chain. Three, communicate risk and solutions. For more information about practical threat intelligence, see Rob M. Lee’s blog and enroll in the SANS Threat Intelligence class.

CALL FOR PAPERS – SANS Cyber Threat Intelligence Summit 2017



Summit Dates: January 31, 2017 and February 1, 2017

Training Course Dates: January 25-30, 2017

Summit Venue: Renaissance Arlington Capital View Hotel — Arlington, VA

Deadline to Submit is July 29, 2016. To submit, click here




This year the CTI Summit is going old school. CTI is a relatively new field, however the intelligence field itself has been around for centuries. Throughout the summit we will focus on understanding how classic intelligence approaches and techniques apply to cyber threat intelligence – where they align and where they don’t – and will provide use cases of not only classic methods of producing Cyber Threat Intelligence, but of implementing and making decisions based off of intelligence as well.

If you are interested in presenting or participating on a panel, we’d be delighted to consider your CTI-focused proposal with use cases and communicable lessons. We are looking for presentations from both the intelligence producer and the intelligence consumer fields with a focus on topics that will be directly relatable and applicable to the summit audience.

The CTI Summit offers speakers opportunities for exposure and recognition as an industry leader. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

CFP submissions should detail how the proposed CTI case study, tool, or technique can be utilized by attendees to increase their security posture. All CTI Summit presentations will be 30 minutes with 5 minutes for questions.

The CTI Summit Advisory Board would like to encourage first time presenters to respond to the call for presentations. A diverse mix of both new and experienced speakers will raise the bar for the summit. Advisory Board members are willing to provide submission feedback prior to the CFP deadline as well as mentoring and guidance throughout the process.

We are looking for CTI Presentations that focus on:

  • Classic intelligence approaches applied to CTI analysis and production
  • Interesting perspectives or case studies that challenge CTI assumptions and result in a shift in understanding
  • Case studies on the application of cyber threat intelligence to a security or business problem
  • Innovative ways to utilize or analyze CTI with classic techniques
  • New tools developed to support or enable CTI

Benefits of Speaking

  • Promotion of your speaking session and company recognition via the CTI conference website and all printed materials
  • Visibility via the CTI post-Summit on the SANS DFIR Website
  • Top 3 presentations invited to do a full SANS Webcast
  • Full conference badge to attend all Summit sessions
  • Speakers can bring 1 additional attendee at no registration cost
  • Private speakers-only networking lunches
  • Speaker networking reception on evening before Summit
  • Presentations will be recorded and may be made available via the Internet to a wider audience

A Threat Intelligence Script for Qualitative Analysis of Passwords Artifacts

The Verizon Data Breach Report has consistently said over the years that passwords are a big part of breach compromises. Dr. Lorrie Cranor and her team at CMU have done extensive research on how to choose the best password policies versus usability. In addition, Allison Nixon’s research describes techniques to determine valid passwords of an organization you are not a part of (“Vetting Leaks: Finding the Truth when the Adversary Lies“). What about passwords leaked from the organization you are defending? This post covers such a scenario.

Carmen Medina, former Deputy Director of the Center for the Study of Intelligence, says that “analysis in essence is putting things correctly into categories” … “insight is when you come up with a new category scheme that offers more explanatory power than the one you had” (18:35 min.). In essence, your hypothesis filters the data to find value, and you need to constantly re-evaluate your hypothesis.

Early on in my IR forensic career, I learned the hard way: never assume; choose the hypothesis with the fewest assumptions. Know what your biases are, be careful to consistently keep them in check, and never rule out other options.

So how can we check a password dump on the internet containing credentials purporting to be from our organization? Simple: does the password dump match our password policy? Let’s put the data in more specific buckets and improve our threat intelligence process.

The following script asks the user to input their password policy settings (length, minimum capital letters, etc.) and compares them to the password dump file, outputting the percentage of invalid passwords. The script saves time and provides a better category scheme for analysis because it puts the data into “better buckets”. A future improvement could be to strip out the username prior to running the diff. The script allows the analyst to apply threat intelligence in context, making it practical. The script is on GitHub.
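The core comparison can be sketched in a few lines of Python (a minimal sketch only, not the authors’ actual script from GitHub; the policy fields and function names here are assumptions):

```python
def violates_policy(password, min_length=8, min_upper=1, min_digits=1):
    """True if the password could not exist under the stated policy."""
    return (len(password) < min_length
            or sum(c.isupper() for c in password) < min_upper
            or sum(c.isdigit() for c in password) < min_digits)

def percent_invalid(dump_lines, **policy):
    """Percentage of dumped passwords that violate the policy.

    A high percentage is evidence the dump does not really come from
    an organisation that enforces that policy.
    """
    if not dump_lines:
        return 0.0
    bad = sum(violates_policy(p, **policy) for p in dump_lines)
    return 100.0 * bad / len(dump_lines)
```

For example, against a policy of at least 8 characters with one capital and one digit, a dump of ["Passw0rd", "letmein"] scores 50% invalid, which should prompt a closer look at whether the leak is genuine.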

P.S. I’ve taken SANS 578 Cyber Threat Intelligence (beta); I highly recommend it.


# GitHub
# PW Dump Verifier
# Jamie Gambetta and John Franolich

SANS Cyber Threat Intelligence Summit – Call For Papers Now Open

SANS Cyber Threat Intelligence Summit Call For Papers 2015

Send your submissions to by 5 pm EST on Friday, October 24, 2014 with the subject “SANS CTI Summit CFP 2015.”

Summit Dates: February 2 & 3, 2015    Pre‐Summit Course Dates: February 4‐9, 2015

Location:  Washington, DC


Our 3rd annual Cyber Threat Intelligence (CTI) Summit will once again be held in Washington DC.

Summit Co-Chairs: Mike Cloppert and Rick Holland

The goal of this summit is to equip attendees with knowledge of the tools, methodologies, and processes they need to move forward with cyber threat intelligence. Attendees either new to CTI or more mature in their CTI journey should be able to take away content and immediately apply it to their day jobs. The SANS What Works in Cyber Threat Intelligence Summit will bring together attendees who are eager to hear this information and learn about tools, techniques, and solutions that can help address these needs.

Call for Speakers- Now Open
The 3rd annual Cyber Threat Intelligence Summit Call for Speakers is now open. If you are interested in presenting or participating on a panel, we’d be delighted to consider your user-presented case studies with communicable lessons.

This year, we are focusing on submissions that directly discuss the ingestion, analysis, and integration of actionable and accurate CTI. We are also very interested in case studies from groups that use threat intelligence in their security, detection, and response programs.  What works?  What doesn’t?

The CTI Summit offers speakers opportunities for exposure and recognition as an industry leader. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

Benefits of Speaking

  • Promotion of your speaking session and company recognition via the CTI conference website and all printed materials
  • Visibility via the CTI post-Summit on the SANS Website
  • Top presentations invited to do a full SANS Webcast
  • Full conference badge to attend all Summit sessions
  • Speakers can bring 1 additional attendee for free
  • Private speakers-only networking lunches
  • Speaker networking reception on evening before Summit
  • *Speakers may also be recorded and made available via the Internet to a wider audience (at the discretion of SANS).

Submission Requirements

  • Title of Proposed Talk
  • Author Name(s)
  • Author Title
  • Company
  • Speaker Contact Information: Address, phone number, email address
  • Biography
  • Your biography should be approximately 150 words. You may include your current position, titles, areas of professional expertise, experience, awards, degrees, personal information, etc.
  • Abstract
  • The presentation abstract should outline your presentation and what attendees will learn. All content must be strictly educational. The presentation should be relevant to: Media Exploitation Analysts, Legal, Incident Response Teams, Security Operations, and Law Enforcement professionals.
  • Twitter Handle:
  • Google+:
  • Facebook:
  • Blog:
  • YouTube videos featuring you speaking:

Session/panel length: 60 minutes

Presentation: 45 minutes

Question & Answer: 5-10 minutes

Send your submissions to by 5 pm EST on Friday, October 24, 2014 with the subject “SANS CTI Summit CFP 2015”

Cyber Threat Intelligence Full Agenda – Government Pricing Announced

SANS is offering a one-time discount for the Cyber Threat Intelligence Summit to government employees (e.g., federal, state, local, DoD). This offer reduces the registration fee from $895 to $395 and will be available for a limited time only, on a first come, first served basis. Please select – Register Now on the right side of the page and use the code CTIGOV.

Join SANS for this innovative 1-day event as we focus on enabling organizations to build effective cyber threat intelligence capabilities.


 The Cyber Threat Intelligence Summit strives to bring you the most up-to-date thinking on the hottest topics.

As a result, the agenda is dynamic and subject to change.  Please check back for updates.



7:00am – 8:00am


8:20am – 8:30am

 Welcome and Introduction

 Mike Cloppert & Rob Lee – Summit Co-Chairs

Cyber Threat Intelligence is generally understood to be the collection of information about external threat actors and active external threats.

8:30am – 9:30am


 The Evolution of Cyber Threats and Cyber Threat Intelligence

The presentation will address the history of our understanding of the cyber threat landscape and provide perspective on the role of cyber threat intelligence in addressing ever-evolving cyber threats. It will cover the role of tactical threat intelligence in responding to threats, along with the challenges of relating threats to operational risks and conducting strategic estimative intelligence. It will also highlight the value of cyber threat intelligence for enterprises and how to leverage intelligence to improve cyber defense. Attendees will learn why intelligence is a crucial part of cyber defense at all levels.

Speaker: Greg Rattray, Chief Executive Officer, Delta Risk LLC

Presenter Biography:

As CEO and founding partner in Delta Risk, Dr. Rattray brings an exceptional record in establishing strategies for cyber security and risk management for clients across both the government and private sectors. He also serves as the Senior Security Advisor for BITS/The Financial Services Roundtable. During his 23-year Air Force career, he served as the Director for Cyber Security on the National Security Council staff in the White House, where he was a key contributor to the President’s National Strategy to Secure Cyberspace, initiated the first national cyber security exercise program involving government and the private sector, and coordinated the interagency activities related to international engagement on cyber security issues. Greg also commanded the Operations Group of the AF Information Warfare Center and served in other command and staff positions. In this role, he initiated AF and DOD programs for collaboration with defense industrial base partners related to advanced persistent cyber threats and established the AF network warfare training, cyber security tactics, and cyber exercise programs. He also served from 2007-2010 as the Chief Security Advisor to the Internet Corporation for Assigned Names and Numbers (ICANN), establishing the ICANN strategy for enhancing security and resiliency of the domain name system. He was the driving force in the establishment of the Cyber Conflict Studies Association, founded to ensure U.S. and international cyber security thinking are guided by a deeper well of intellectual capital involving private industry, think tanks, government, and academia, and serves as the Association’s President. Dr. Rattray is a Full Member of the Council on Foreign Relations. He received his Bachelor’s Degree in Political Science and Military History from the U.S. Air Force Academy; a Master of Public Policy from the John F. Kennedy School of Government, Harvard University; and his Doctor of Philosophy in International Affairs from the Fletcher School of Law and Diplomacy, Tufts University. He is the author of the seminal book Strategic Warfare in Cyberspace as well as numerous other books and articles related to cyber and national security.

9:30am – 10:30am

If It Bleeds, We Can Kill It: Leveraging Cyber Threat Intelligence to Take the Fight to the Adversary

In perhaps the greatest film ever made, Arnold Schwarzenegger’s elite team of Special Forces operators is pitted against an alien adversary who outmatches them in nearly every encounter. Hopelessly outmatched, Dutch must change his tactics to ultimately defeat the Predator. Enterprises are at a similar point; one look at the headlines makes this painfully clear. Cyber Threat Intelligence (CTI) has emerged as a new capability that enterprises can leverage to become proactive in protecting their environments.

This timely and fast-paced talk will cover:

  • Why you might not be ready for CTI
  • Building versus buying CTI capabilities
  • The vendor landscape for CTI
  • Intelligence sharing and operational security

Speaker: Rick Holland, Senior Analyst, Forrester Research

Presenter Biography:

Rick Holland is a Sr. Analyst at Forrester Research, where he serves security and risk professionals. Rick works with information security leadership, providing strategic guidance on security architecture, operations, and data privacy. His research focuses on incident response, threat intelligence, email and web content security, as well as virtualization security. Prior to joining Forrester, he was a solutions engineer, where he architected enterprise security solutions. Previously, he worked in both higher education and the home building industry, where he focused on intrusion detection, incident handling, and forensics. He is regularly quoted in the media and is a frequent guest lecturer at UT Dallas. Rick holds a B.S. in Business Administration from UT Dallas.

10:30am -10:50am

Networking Break

10:50am – 11:50am

Expert CTI User Panel: Best Practices in creating, delivering, and utilizing Cyber Threat Intelligence for your organization

Moderator: Mike Cloppert, Lockheed Martin


Rich Barger, Chief Intelligence Officer, CyberSquared

Chris Sperry, Lockheed Martin

Shane Huntley, Threat Analysis Group, Google

Andre Ludwig, Neustar

Aaron Wade, Senior Team Leader – GE Cyber Intelligence, General Electric Co.


11:50am- 1:00pm

Lunch Break

1:00pm – 2:00pm

Building and Operating a Cyber Threat Intelligence Team

Building a response capability to Advanced Persistent Threats involves integration of people, process, technology acquisition and development, organizational structure, communications, and partnerships in a way that enables even large enterprises to be agile and responsive in order to leverage intelligence for effective computer network defense. Although all of these elements may exist in a conventional incident response team, Lockheed Martin’s threat-oriented focus causes these elements to manifest differently in what is largely an intelligence operation. In this talk, participants will hear, from the Senior Manager charged with protecting the world’s largest defense contractor from computer network exploitation, how each of these elements is shaped by cyber threat intelligence and how their careful orchestration is achieved. Also discussed will be methods to evaluate the maturity of one’s own threat intelligence organization, and paths for quick evolution to counter sophisticated adversaries.

Speaker: Mike Gordon, CIRT Senior Manager, Lockheed Martin

Presenter Biography:

Mr. Gordon has over thirteen years of experience in the information security field supporting the Defense Industrial Base, and has also been a security consultant for Public, Health and Financial sectors.

Mike is currently the Senior Manager of the Computer Incident Response Team (CIRT) for Lockheed Martin Corporation. The CIRT is responsible for all protection, detection, and response capabilities used in the defense of Lockheed Martin networks enterprise-wide. The team’s scope of work includes coordination and collaboration with government and industry partners to handle a wide variety of events related to security intelligence, incident response, intrusion detection, risk mitigation, and digital forensics support. Mike joined Lockheed Martin in 1997 and has since held multiple positions within the Corporation supporting the Aeronautics Business Area and Corporate Information Security. Mike received the Lockheed Martin NOVA Award, the corporation’s highest recognition, in 2010 and 2011 for Cyber Security programs.

2:00pm – 3:00pm

Creating Threat Intelligence – tools to manage and leverage active threat intelligence

Speaker: Reid Gilman, MITRE Threat Intelligence Team


3:00pm – 3:20pm

Networking Break

3:20pm – 4:20pm

Expert CTI Solutions Panel: Delivering Actionable Cyber Threat Intelligence as a Solution – What Works, Pitfalls, Costs, and Skills

Threat and vulnerability feeds by themselves do not produce Cyber Threat Intelligence – let alone create effective, affordable or actionable security advice. Creating an enduring “CTI as a Service” capability can enable a proactive approach to security, but it takes a mixture of processes, tools, automation and architecture to assure security (and business) benefit.

In this panel we will ask leading experts from firms that have been creating and delivering CTI services to their customers for years to provide detailed lessons learned with a “What Works” perspective.  Come hear their advice and take home the knowledge to ensure success of your own CTI initiatives.

 Moderator: John Pescatore, Director of Emerging Security Trends, SANS Institute


Richard Bejtlich, Mandiant

Rocky DeStefano, Visible Risk

Adam Meyers, CrowdStrike

John Ramsey, SecureWorks


4:20pm – 5:20pm

Cyber Threat Intelligence SANS360

In one hour, 10 experts will discuss Cyber Threat Intelligence and how they use it in their organizations. If you have never been to a lightning talk, it is an eye-opening experience. Each speaker has 360 seconds (6 minutes) to deliver their message. This format allows SANS to present 10 experts within one hour, instead of the standard one speaker per hour. The compressed format gives you a clear and condensed message, eliminating the fluff. If a topic isn’t engaging, a new topic is just 6 minutes away.

360 Talks: 

  • Attribution:  The Holy Grail or Waste of Time?

Billy Leonard, Security Engineer, Google

  • Cybersecurity at the NSA

Dave Hogue, Operations Lead, National Security Agency’s NSA/CSS Threat Operations Center (NTOC)

  • Intelligence Driven Security In Action

Seth Geftic, Associate Director, Security Management & Compliance Group RSA, The Security Division of EMC

  • David Marcus, Director of Security Research and Communications, McAfee Labs
  • The Product of Intelligence

Sean Coyne, Security Solutions Director, Verizon/Terremark

  • Sean Catlett, VP of Operations, iSIGHT Partners
  • Exercising Analytic Discipline to Make Your Mission Relevant

Patton Adams, Senior Strategic & Counterintelligence Analyst, Northrop Grumman

  • Communication Between Teams for CTI

Enoch Long, Principal Security Strategist, Splunk

  • Crowdsourcing Threat Intelligence 

Adam Vincent, Founder & CEO, Cyber Squared

  • Curating Indicators: Bringing Smarts to Intel                                                 

Douglas Wilson, Manager – Threat Indicators Team, Mandiant

  • Battlefield Intelligence – Turning Your Adversary’s Thwarted Attacks into Attribution Gold                                                                              

Anup Ghosh, PhD, Founder & CEO, Invincea

  • The Detection Timeline

Julie J.C.H. Ryan, Associate Professor, George Washington University

Summary & Closing Remarks – Mike Cloppert & Rob Lee


Security Intelligence: Attacking the Cyber Kill Chain

Coming in much later than I’d hoped, this is the second installment in a series of four discussing security intelligence principles in computer network defense.  If you missed the introduction (parts 1 and 2), I highly recommend you read it before this article, as it sets the stage and vernacular for intelligence-driven response necessary to follow what will be discussed throughout the series.  Once again, and as is often the case, the knowledge conveyed herein is that of my associates and me, learned through many man-years attending the School of Hard Knocks (TM?), and the credit belongs to all of those involved in the evolution of this material.


In this segment, we will introduce the attack progression (aka “cyber kill chain”) and briefly describe its intersection with indicators.  The next segment will go into more detail about how to use the attack progression model for more effective analysis and defense, including a few contrived examples based on real attacks.

On Indicators

Just like you or me, adversaries have various computer resources at their disposal.  They have favorite computers, applications, techniques, websites, etc.  It is these fundamentally human tendencies and technical limitations that we exploit by collecting information on our adversaries.  No person acts truly at random, and no person has truly infinite resources at their disposal.  Thus, it behooves us in CND to record, track, and group information on our sophisticated adversaries to develop profiles.  With these profiles, we can draw inferences, and with those inferences, we can be more adaptive and effectively defend our data.  After all, that’s what intelligence-driven response is all about: defending data that sophisticated adversaries want.  It’s not about the computers.  It’s not about the networks.  It’s about the data.  We have it, and they want it.

Indicators can be classified a number of ways.  Over the years, my colleagues and I have wrestled with the most effective way to break them down.  Currently, I am of the mind that indicators fall into one of three types: atomic, computed, and behavioral (or TTPs).

Atomic indicators are pieces of data that are indicators of adversary activity on their own.  Examples include IP addresses, email addresses, a static string in a covert command-and-control (C2) channel, or fully-qualified domain names (FQDNs).  Atomic indicators can be problematic, as they may or may not exclusively represent activity by an adversary.  For instance, an IP address from which an attack is launched could very well be an otherwise-legitimate site.  Atomic indicators often need vetting through analysis of available historical data to determine whether they exclusively represent hostile intent.

Computed indicators are those which are, well, computed.  The most common of these are hashes of malicious files, but they can also include specific data in decoded custom C2 protocols, etc.  Your more complicated IDS signatures may fall into this category.

Behavioral indicators are those which combine other indicators – including other behaviors – to form a profile.  Here is an example: ‘Bad guy 1 likes to use IP addresses in West Hackistan to relay email through East Hackistan and target our sales folks with trojaned Word documents that discuss our upcoming benefits enrollment, which drop backdoors that communicate to A.B.C.D.’  Here we see a combination of computed indicators (geolocation of IP addresses, MS Word attachments determined by magic number, base64-encoded in email attachments), behaviors (targets sales force), and atomic indicators (A.B.C.D C2).  To borrow some parlance, these are also referred to as Tactics, Techniques, and Procedures (TTPs).  Already you can probably see where we’re going with intelligence-driven response… what if we can detect, or at least investigate, behavior that matches that which I describe above?
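As a concrete illustration, here is a minimal Python sketch of the three indicator classes and a behavioral profile built from them.  All names, values, and the matching logic are hypothetical simplifications of the ‘Bad guy 1’ example above, not a real schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class IndicatorType(Enum):
    ATOMIC = "atomic"          # e.g. IP address, email address, FQDN
    COMPUTED = "computed"      # e.g. file hash, decoded C2 data
    BEHAVIORAL = "behavioral"  # a pattern of behavior, e.g. who is targeted

@dataclass
class Indicator:
    type: IndicatorType
    value: str               # e.g. "A.B.C.D" or a file hash
    description: str = ""
    vetted: bool = False     # atomic indicators often need vetting

@dataclass
class BehavioralProfile:
    """A behavioral indicator (TTP) combining other indicators."""
    adversary: str
    components: List[Indicator] = field(default_factory=list)

    def matches(self, observed: set) -> bool:
        # Naive match: every component was observed in one attack.
        return all(i.value in observed for i in self.components)

# Hypothetical profile mirroring the "Bad guy 1" example
profile = BehavioralProfile(
    adversary="Bad guy 1",
    components=[
        Indicator(IndicatorType.ATOMIC, "A.B.C.D", "C2 address"),
        Indicator(IndicatorType.COMPUTED, "geo:West Hackistan", "relay geolocation"),
        Indicator(IndicatorType.BEHAVIORAL, "targets:sales", "targets sales force"),
    ],
)
print(profile.matches({"A.B.C.D", "geo:West Hackistan", "targets:sales"}))  # True
```

In practice a profile would rarely require every component to match; a weighted or partial match is more realistic, but the all-or-nothing version keeps the sketch short.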

One likes to think of indicators as conceptually straightforward, but the truth is that proper classification and storage have proven elusive.  I’ll save the intricacies of indicator difficulties for a later discussion.

Adversary Behavior

The behavioral aspect of indicators deserves its own section.  Indeed, most of what we discuss in this installment centers on understanding behavior.  The best way to behaviorally describe an adversary is by how he or she does his or her job – after all, this is the only discoverable part for an organization that is strictly CND (some of our friends in the USG likely have better ways of understanding adversaries).  That “job” is compromising data, and therefore we describe our attacker in terms of the anatomy of their attacks.

Ideally, if we could attach a human being to each and every observed activity on our network and hosts, we could easily identify our attackers and respond appropriately every time.  At this point in history, that sort of capability passes beyond ‘pipe dream’ into ‘ludicrous.’  However mad this goal is, it provides a target for our analysis: we need to push our detection “closer” to the adversary.  If all we know is the forged email address an adversary tends to use in delivering hostile email, assuming this is uniquely linked to malicious behavior, we have a mutable and temporal indicator upon which to detect.  Sure, we can easily discover when it’s used in the future, and we are obliged to do so as part of our due diligence.  The problem is that this can be changed at any time, on a whim.  If, however, the adversary has found an open mail relay that no one else uses, then we have found an indicator “closer” to the adversary.  It’s much more difficult (though, in the scheme of things, still somewhat easy) to find a new open mail relay to use than it is to change the forged sending address.  Thus, we have pushed our detection “closer” to the adversary.  Atomic, computed, and behavioral indicators can describe more or less mutable/temporal indicators in a hierarchy.  We as analysts seek the most static of all indicators, at the top of this list, but often must settle for indicators further from the adversary until those key elements reveal themselves.  The figure below shows some common indicators of an attack and where we’ve seen them fall in terms of proximity to the adversary and variability – and, inversely, mutability and temporality.


Fig 1: Indicator Hierarchy
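The idea of preferring less-mutable indicators can be sketched in code.  In the toy ranking below, both the indicator categories and their relative mutability scores are my own illustrative assumptions, not values taken from the figure:

```python
# Hypothetical ranking: lower score = less mutable = "closer" to the adversary.
MUTABILITY = {
    "forged_sender": 5,     # changeable on a whim
    "relay_ip": 4,          # e.g. an open mail relay
    "delivery_domain": 3,
    "weaponization": 2,     # packing/obfuscation technique
    "c2_protocol": 1,       # hardest for the adversary to change
}

def closest_to_adversary(observed):
    """Of the indicator kinds observed in an attack, pick the one we
    should prioritize for detection (the least mutable)."""
    return min(observed, key=lambda kind: MUTABILITY[kind])

print(closest_to_adversary(["forged_sender", "relay_ip"]))  # relay_ip
```

The point of the sketch is only the ordering: given several observables from one attack, anchor detection on the one the adversary finds hardest to change.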

That this analysis begins with the adversary and then dovetails into defense makes it very much a security intelligence technique as we’ve defined the term.  Following a sophisticated actor over time is analogous to watching someone’s shadow.  Many factors influence what you see, such as the time of day, angle of sun, etc.  After you account for these variables, you begin to notice nuances in how the person moves, observations that make the shadow distinct from others.  Eventually, you know so much about how the person moves that you can pick them out of a crowd of shadows.  However, you never know for sure if you’re looking at the same person.  At that point, for our purposes, it doesn’t matter.  If it looks like a duck, and sounds like a duck… it hacks like a duck.  Whether the same person (or even group) is truly at the other end of behavior every time is immaterial if the profile you build facilitates predicting future activity and detecting it.

Attack Progression, aka the “Cyber Kill Chain”

We have found that the phases of an attack can be described by six sequential stages.  Once again loosely borrowing vernacular, the phases of an operation can be described as a “cyber kill chain.”  The importance here is not that this is a linear flow – some phases may occur in parallel, and the order of earlier phases can be interchanged – but rather how far along an adversary has progressed in his or her attack, the corresponding damage, and the investigation that must be performed.


Fig. 2: The Attack Progression


The reconnaissance phase is straightforward.  However, in security intelligence, oftentimes this is manifested not in portscans, system enumeration, or the like.  It is the data equivalent: browsing websites, pulling down PDFs, learning the internal structure of the target organization.  A few years ago I never would’ve believed that people went to this level of effort to target an organization, but after witnessing it happen, I can say with confidence that it does.  The problem with activity in this phase is that it is often indistinguishable from normal activity.  There are precious few cases where one can collect information here and find associated behavior in the delivery phase matching an adversary’s behavioral profile with high confidence and a low false positive rate.  These cases are truly gems – when they can be identified, they link what are often two normal-looking events in a way that greatly enhances detection.


The weaponization phase may or may not happen after reconnaissance; it is placed here merely for convenience.  This is the one phase that the victim doesn’t see happen, but can very much detect.  Weaponization is the act of placing a malicious payload into a delivery vehicle.  It’s the difference in how a Soviet warhead is wired to the detonator versus how a US warhead is wired in.  For us, it is the technique used to obfuscate shellcode, the way an executable is packed into a trojaned document, etc.  Detection of this is not always possible, nor is it always predictable, but when it can be done it is a highly effective technique.  Only by reverse engineering delivered payloads is an understanding of an adversary’s weaponization achieved.  This is distinctly separate from, and often persistent across, the subsequent stages.


Delivery is rather straightforward.  Whether it is an HTTP request containing SQL injection code or an email with a hyperlink to a compromised website, this is the critical phase where the payload is delivered to its target.  I heard a term just the other day that I really like: “warheads on foreheads” (courtesy US Army).

Compromise / Exploit

The compromise phase will possibly have elements of a software vulnerability, a human vulnerability aka “social engineering,” or a hardware vulnerability.  While the latter are quite rare by comparison, I include hardware vulnerabilities for the sake of completeness.

The compromise of the target may itself be multi-phase, or more straightforward.  As a result, we sometimes have the tendency to pull apart this phase into separate sub-phases, or peel out “Compromise” and “Exploit” as wholly separate.  For simplicity’s sake, we’ll keep this as a single phase.  A single-phase exploit results in the compromised host behaving according to the attacker’s wishes directly as a result of the successful execution of the delivered payload – for example, an attacker coaxing a user into running an EXE email attachment which contains the desired backdoor code.  A multi-phase exploit typically will involve delivery of shellcode whose sole function is to pull down and execute more capable code upon execution.  Shellcode often needs to be portable for a variety of reasons, necessitating such an approach.  We have seen other cases where, possibly through sheer laziness, adversaries end up delivering exploits whose downloaders download other downloaders before finally installing the desired code.  As you can imagine, the more phases involved, the lower an adversary’s probability for success.

This is the pivotal phase of the attack.  If this phase completes successfully, what we as security analysts have classically called “incident response” is initiated: code is present on a machine that should not be there.  However, as will be discussed later, the notion of “incident response” is so different in intelligence-driven response (and the classic model so inapplicable) that we have started to move away from using the term altogether.  The better term for security intelligence is “compromise response,” as it removes ambiguity from the term “incident.”


The command-and-control phase of the attack represents the period during which adversaries leverage the compromised system.  A compromise does not necessarily mean C2, just as C2 doesn’t necessarily mean exfiltration.  In fact, we will discuss how this can be exploited in CND, but recognize that successful communications back to the adversary often must be made before any potential for impact to data can be realized.  This can be prevented intentionally, by identifying C2 in unsuccessful past attacks by the same adversary and putting network mitigations in place, or fortuitously, when adversaries drop malware that is somehow incompatible with your network infrastructure, to give but two examples.

In addition to the phone call going through, someone has to be present at the other end to receive it.  Your adversaries take time off, too… but not all of them.  In fact, a few groups have been observed to be so responsive that it suggests a mature organization with shifts and procedures behind the attack more refined than that of many incident response organizations.

We will also lump lateral movement with compromised credentials, file system enumeration, and additional tool dropping by adversaries broadly into this phase of the attack.  While an argument can be made that situational awareness of the compromised environment is technically “exfiltration,” the intention of the next phase is somewhat different.


The exfiltration phase is conceptually very simple: this is when the data, which has been the ultimate target all along, is taken.  Previously I mentioned that gathering information about the environment of the compromised machine doesn’t fall into the exfiltration phase.  The reason for this is that such data is being gathered to serve but one purpose, either immediately or longer-term: facilitate gathering of sensitive information.  The source code for the new O/S.  The new widget that cost billions to develop.  Access to the credit cards, or PII.
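Because the progression is ordered, the six phases lend themselves to an ordered enumeration, which makes “how far along the chain did this attack get?” a simple comparison.  A minimal sketch, with phase names paraphrased from the stages above:

```python
from enum import IntEnum

class KillChainPhase(IntEnum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    COMPROMISE = 4          # compromise / exploit
    COMMAND_AND_CONTROL = 5
    EXFILTRATION = 6

def furthest_phase(observed_phases):
    """How far along the kill chain did the adversary progress?"""
    return max(observed_phases)

# e.g. an attack that was delivered and compromised a host, but whose
# C2 was blocked, progressed only as far as the compromise phase:
attack = [KillChainPhase.DELIVERY, KillChainPhase.COMPROMISE]
print(furthest_phase(attack).name)  # COMPROMISE
```

Note the caveat from the text: the earlier phases are not strictly linear (weaponization may precede reconnaissance), so the ordering is most meaningful for measuring progression and damage, not for predicting sequence.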

Analytical Approach

As we analyze attacks, we begin to see that different indicators map to the phases above.  While an adversary may attempt to use the exploit du jour to compromise target systems, the backdoor (C2) may be the same as past attacks by the same actor.  Different proxy IP addresses may be used to relay an attack, but the weaponization may not change between them.  These immutable, or infrequently-changing properties of attacks by an adversary make up his/her/their behavioral profile as we discussed in moving detection closer to the adversary.  It’s capturing, knowing, and detecting this modus operandi that facilitates our discovery of other attacks by the same adversary, even if many other aspects of the attack change.

This need for the accumulation of indicators for detection means that analysis of unsuccessful attacks is important, to the extent that the attack is believed to be related to an APT adversary.  A detection of malware in email by perimeter anti-virus, for instance, is only the beginning when the weaponization is one commonly used by a persistent adversary.  The backdoor that would have been dropped may contain a new C2 location, or even a whole new backdoor altogether.  Learning this detail, and adjusting sensors accordingly, can permit future detection when that tool or infrastructure is reused, even if detection at the attack phase fails.  Discovery of new indicators also means historical searches may reveal past undetected attacks, possibly more successful than the latest one.
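The historical-search idea above can be sketched as a simple retroactive scan of stored events against newly-revealed indicators.  The event record format, timestamps, and values below are entirely hypothetical:

```python
def retro_search(events, new_indicators):
    """Return past events matching any newly-revealed indicator.

    events: list of dicts with "timestamp" and a set of "observables"
    new_indicators: set of indicator values just revealed by analysis
    """
    hits = []
    for event in events:
        matched = new_indicators & set(event["observables"])
        if matched:
            hits.append((event["timestamp"], sorted(matched)))
    return hits

# Hypothetical historical event log
history = [
    {"timestamp": "2012-11-02", "observables": {"A.B.C.D", "mail-relay-1"}},
    {"timestamp": "2012-12-15", "observables": {"E.F.G.H"}},
]
# A C2 address revealed by today's analysis turns up a past, undetected attack
print(retro_search(history, {"A.B.C.D"}))  # [('2012-11-02', ['A.B.C.D'])]
```

A real implementation would query a log store rather than scan a list, but the principle is the same: every new indicator is also a question to ask of your historical data.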

Analysis of attacks quickly becomes complicated, and will be further explored in future entries culminating with a new model for incident response.

The Indicator Lifecycle

As a derivative (literary, not mathematical) of the analysis of attack progression, we have the indicator lifecycle.  The indicator lifecycle is cyclical, with the discovery of known indicators begetting the revelation of new ones.  This lifecycle further emphasizes why the analysis of attacks that never progress past the compromise phase is important.


Fig. 3: The Indicator Lifecycle State Diagram

Analysis // Revelation

The revelation of indicators comes from many places – internal investigations, intelligence passed on by partners, etc.  This represents the moment that an indicator is revealed to be significant and related to a known-hostile actor.

Search & Tune // Maturation

This is the point where the correct way to leverage the indicator is identified.  Sensors are updated, signatures written, detection tools put in the correct place, development of a new tool makes observation of the indicator possible, etc.

Discovery // Utility

This is the point at which the indicator’s potential is realized: when hostile activity at some point of the cyber kill chain is detected thanks to knowledge of the indicator and correct tuning of detection devices, or data mining/trend analysis revealing a behavioral indicator, for example.  And of course, this detection and the subsequent analysis likely reveals more indicators.  Lather, rinse, repeat.
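The three stages above form a closed loop, which can be expressed as a tiny state machine.  The state names here are my shorthand for the analysis/revelation, search-and-tune/maturation, and discovery/utility stages:

```python
# Allowed transitions in the cyclical indicator lifecycle.
TRANSITIONS = {
    "revelation": "maturation",  # analysis reveals a significant indicator
    "maturation": "utility",     # sensors updated, signatures written
    "utility": "revelation",     # detection and analysis reveal new indicators
}

def advance(state: str) -> str:
    """Move an indicator to the next stage of its lifecycle."""
    return TRANSITIONS[state]

state = "revelation"
for _ in range(3):  # one full trip around the cycle
    state = advance(state)
print(state)  # "revelation" – lather, rinse, repeat
```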

In the next section, I will walk through a few examples and illustrate how following the attack progression forward and backward leads to a complete picture of the attack, as well as how attacks can be represented graphically.  Following that will be our new model of network defense which brings all of these ideas together.  You can expect amplifying entries thereafter to further enhance detection using security intelligence principles, starting with user modeling.

Michael is a senior member of Lockheed Martin’s Computer Incident Response Team.  He has lectured for various audiences including SANS, IEEE, and the annual DC3 CyberCrime Convention, and teaches an introductory class on cryptography.  His current work consists of security intelligence analysis and development of new tools and techniques for incident response. Michael holds a BS in computer engineering, has earned GCIA (#592) and GCFA (#711) gold certifications alongside various others, and is a professional member of ACM and IEEE.