Investigate and fight cyberattacks with SIFT Workstation

 

Digital forensics and incident response (DFIR) has hit a tipping point. No longer just for law enforcement solving cybercrimes, DFIR tools and practices are a necessary component of any organization’s cybersecurity. After all, attacks are increasing daily and getting more sophisticated – exposing millions of people’s personal data, hijacking systems around the world and shutting down numerous sites.

SANS has a smorgasbord of DFIR training, and we also offer a free Linux distribution for DFIR work. Our SANS Investigative Forensic Toolkit (SIFT) Workstation is a powerful collection of tools for examining forensic artifacts related to file system, registry, memory, and network investigations. It is also available bundled as a virtual machine (VM), and it includes everything one needs to conduct an in-depth forensic or incident response investigation.

The SIFT Workstation was developed by an international team of forensics experts, including entrepreneur, consultant and SANS Fellow Rob Lee, and is available to the digital forensics and incident response community as a public service. Just because it’s freely available and was originally designed for training, though, doesn’t mean it can’t stand up to field investigations. The SIFT Workstation incorporates powerful, cutting-edge open-source tools that are frequently updated, vetted by the open-source community and able to match any modern DFIR tool suite. Don’t just take our word for it: thousands of individuals download SIFT yearly, and it’s used by tens of thousands of people all over the world, including at multiple Fortune 500 companies. And recently, HackRead named the SIFT Workstation in a list of the top 7 cyber forensic tools preferred by specialists and investigators around the world.

SIFT got its start in 2007, when SANS instructors were developing virtual machines for use in the classroom. In its earliest iterations, it was available online as a download, but it was hard-coded and static, so whenever there were updates, users had to download a new version. By 2014, the SIFT Workstation could be downloaded as an application series, and it was later updated to a very robust package based on Ubuntu. It can also be installed on Windows if an Ubuntu subsystem (the Windows Subsystem for Linux) is running on the system.

In November 2017, SANS unveiled a new version of the SIFT Workstation that offers much more functionality, is much more stable, and includes tools such as a package manager. The package now supports rolling updates, and it uses Salt, a Python-based configuration management platform, rather than a bootstrap executable and configuration tool.

The new version works with more than 200 third-party tools and plug-ins, and newly added memory analysis functionality enables the SIFT Workstation to leverage data from other sources. New automation and configuration functions mean the user only has to type one command to download and configure SIFT. Because SIFT is scriptable, users can string commands together to create automated analyses, customizing the system to the needs of their investigation.

Download SIFT Workstation today, and get started on your own DFIR initiatives. And look into our FOR508: Advanced Incident Response and Threat Hunting course for hands-on learning with SIFT, covering how to detect breaches, identify compromised and affected systems, determine damage, contain incidents, and more.

Top 11 Reasons Why You Should NOT Miss the SANS DFIR Summit and Training this Year

The SANS DFIR Summit and Training 2018 is turning 11! The 2018 event marks 11 years since SANS started what is today the digital forensics and incident response event of the year, attended by forensicators time after time. Join us and enjoy the latest in-depth presentations from influential DFIR experts and the opportunity to take an array of hands-on SANS DFIR courses. You can also earn CPE credits and get the opportunity to win coveted DFIR course coins!


To commemorate the 11th annual DFIR Summit and Training 2018, here are 11 reasons why you should NOT miss the Summit this year:

1.     Save money! There are two ways to save on your DFIR Summit & Training registration (offers cannot be combined):

·       Register for a DFIR course by May 7 and get 50% off a Summit seat (discount automatically applied at registration), or

·       Pay by April 19 and save $400 on any 4-day or 6-day course, or up to $200 off the Summit. Enter code “EarlyBird18” when registering.

2.     Check out our jam-packed DFIR Summit agenda!

·       The two-day Summit will kick off with a keynote presentation by Kim Zetter, an award-winning journalist who has provided the industry with the most in-depth and important investigative reporting on information security topics. Her research on such topical issues as Stuxnet and election security has brought critical technical issues to the public in a way that clearly shows why we must continue to push the security industry forward.

·       The Summit agenda will also include a presentation about the Shadow Brokers, the group that allegedly leaked National Security Agency cyber tools, leading to some of the most significant cybersecurity incidents of 2017. Jake Williams and Matt Suiche, who were among those targeted by the Shadow Brokers, will cover the history of the group and the implications of its actions.

·       All DFIR Summit speakers are industry experts who practice digital forensics, incident response, and threat hunting in their daily jobs. The Summit Advisory Board handpicked these professionals to provide you with highly technical presentations that will give you a brand-new perspective on how the industry is evolving to fight against even the toughest of adversaries. But don’t take our word for it: have a sneak peek and check out some of the past DFIR Summit talks.


3.     Immerse yourself in six days of the best in SANS DFIR training. Here are the courses you can choose from:

·       FOR500 – Advanced Windows Forensics

·       FOR585 – Advanced Smartphone Forensics

·       FOR610 – Reverse-Engineering Malware

·       FOR508 – Digital Forensics, Incident Response & Threat Hunting

·       FOR572 – Advanced Network Forensics: Threat Hunting, Analysis, and Incident Response

·       FOR578 – Cyber Threat Intelligence

·       FOR526 – Memory Forensics In-Depth

·       FOR518 – Mac and iOS Forensic Analysis and Incident Response

·       MGT517 – Managing Security Operations: Detection, Response, and Intelligence

All courses will be taught by SANS’s best DFIR instructors. Stay tuned for more information on the courses we’re offering at the conference in a future article post.

4.     Rub elbows and network with DFIR pros at evening events, including networking gatherings and receptions. On the first night of the Summit, we’re going to gather at one of Austin’s newest entertainment venues, SPiN, a ping pong restaurant and bar featuring 14 ping pong tables, lounges, great food, and drinks. Give your overloaded brain a break after class and join us at our SANS Community Night, Monday, June 9 at Speakeasy. We will have plenty of snacks and drinks to give you the opportunity to network with fellow students.

5.     Staying to take a DFIR course after the two-day Summit? Attend SANS@Night talks guaranteed to enrich your DFIR training experience with us. Want to know about threat detection on the cheap and other topics? As for cheap (and in this case, that doesn’t mean weak), there are actions you can take now to make threat detection more effective without breaking the bank. Attend this SANS@Night talk on Sunday evening to learn some baselines you should be measuring against and how to gain visibility into high-value actionable events that occur on your systems and networks.

6.     Celebrate this year’s Ken Johnson Scholarship Recipient, who will be announced at the Summit. This scholarship was created by the SANS Institute and KPMG LLP in honor of Ken Johnson, who passed away in 2016. Early in Ken’s digital forensics career, he submitted to a Call for Presentations and was accepted to present his findings at the 2012 SANS DFIR Summit. His networking at the Summit led to his job with KPMG.

7.     Prove you’ve mastered the DFIR arts by playing in the DFIR NetWars – Coin Slayer Tournament. Created by popular demand, this tournament will give you the chance to leave Austin with a motherlode of DFIR coinage! To win the new course coins, you must answer all questions correctly from all four levels of one or more of the six DFIR Domains: Windows Forensics & Incident Response, Smartphone Analysis, Mac Forensics, Memory Forensics, Advanced Network Forensics, and Malware Analysis. Take your pick or win them all!

 

8.     Enjoy updated DFIR NetWars levels with new challenges. See them first at the Summit! But not to worry, you will have the opportunity to train before the tournament. You’ll have access to a lot of updated posters that can serve as cheat sheets to help you conquer the new challenges, as well as the famous SIFT Workstation, which will arm you with the most powerful open-source DFIR tools available. You could also choose to do an hour of crash training on how to use some of our Summit sponsors’ tools prior to the tournament. That should help give you an edge, right? That new DFIR NetWars coin is as good as won!

9.     The Forensic 4:cast Awards winners will be announced at the Summit. Help us celebrate the achievements of digital forensic investigators around the world deemed worthy of the award by their peers. There is still time to cast your vote. (You may only submit one set of votes; any additional votes will be disregarded.) Voting will close at the end of the day on May 25, 2018.

10.  Come see the latest in tools offered by DFIR solution providers. Summit sponsors and exhibitors will showcase everything from managed services covering advanced threat detection, proactive threat hunting, and accredited incident response to tools that deliver rapid threat detection at scale, and reports that provide insights for identifying potential threats before they cause damage.

11.  Last but not least, who doesn’t want to go to Austin?!? When you think Austin, you think BBQ, right? But this city isn’t just BBQ: Austin has amazing food everywhere, and there’s no place like it when it comes to having a great time. The nightlife and music include the famous 6th Street, which, by the way, is within walking distance of the Summit venue. There are many other landmarks, such as Red River, the Warehouse District, Downtown, and the Market District. You will find entertainment of all kinds no matter what you’re up for. Nothing wrong with some well-deserved play after days full of DFIR training, lectures, and networking!

As you can see, this is an event you do not want to miss! The SANS DFIR Summit and Training 2018 will be held at the Hilton Austin. The event features two days of in-depth digital forensics and incident response talks, nine SANS DFIR courses, two nights of DFIR NetWars, evening events, and SANS@Night talks.

The Summit will be held on June 7-8, and the training courses run from June 9-14.

We hope to see you there!

DFIR Summit 2017 – CALL FOR PRESENTATIONS

 

Call for Presentations Now Open!

 

Submit your proposal here: http://dfir.to/DFIR-CFP-2017
Deadline: January 16th at 5pm CT

 

The 10th Annual Digital Forensics and Incident Response Summit Call for Presentations is open through 5 pm CT on Monday, January 16, 2017. If you are interested in presenting or participating on a panel, we’d be delighted to consider your practitioner-based case studies with communicable lessons.

The DFIR Summit offers speakers the opportunity for exposure and recognition as industry leaders. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

Check out some of the DFIR Summit 2016 talks:


We are looking for Digital Forensics and Incident Response Presentations that focus on:

  • DFIR and media exploitation case studies that solve a unique problem
  • New forensic or analysis tools and techniques
  • Discussions of new artifacts related to smartphones, Windows, and Mac platforms
  • Novel approaches that improve the status quo of the DFIR industry
  • Challenges to existing assumptions and methodologies that might change the industry
  • New, fast forensic techniques that can extract and analyze data rapidly
  • Enterprise or scalable forensics that can analyze hundreds of systems at a time instead of one

 

DFIR Summit 2016 – Call for Papers Now Open


The 9th annual Digital Forensics and Incident Response Summit will once again be held in the live music capital of the world, Austin, Texas.

The Summit brings together DFIR practitioners who share their experiences, case studies and stories from the field. Summit attendees will explore real-world applications of technologies and solutions from all aspects of the fields of digital forensics and incident response.

Call for Presentations – Now Open
More information 

The 9th Annual Digital Forensics and Incident Response Summit Call for Presentations is now open. If you are interested in presenting or participating on a panel, we’d be delighted to consider your practitioner-based case studies with communicable lessons.

The DFIR Summit offers speakers the opportunity for exposure and recognition as industry leaders. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

Deadline to submit is December 18th, 2015.

Summit Dates: June 23 & 24, 2016
Post-Summit Training Course Dates: June 25-30, 2016

Submit now 

Timeline analysis with Apache Spark and Python

This blog post introduces a technique for timeline analysis that mixes a bit of data science and domain-specific knowledge (file-systems, DFIR).

Analyzing CSV-formatted timelines by loading them into Excel or any other spreadsheet application can be inefficient, even impossible at times. It all depends on the size of the timelines and how many different timelines or systems we are analyzing.

Looking at timelines that are gigabytes in size, or trying to correlate data between ten different systems’ timelines, does not scale well with traditional tools.

One way to approach this problem is to leverage some of the open-source data analysis tools available today. Apache Spark is a fast and general engine for big data processing. PySpark is its Python API, which, in combination with Matplotlib, Pandas and NumPy, will allow you to drill down and analyze large amounts of data using SQL-syntax statements. This comes in handy for things like filtering, combining timelines and visualizing some useful statistics.

These tools can be easily installed on your SANS DFIR Workstation, although if you plan on analyzing a few TBs of data, I would recommend setting up a Spark cluster separately.

The reader is assumed to have a basic understanding of Python programming.

Quick intro to Spark

This section is a quick introduction to PySpark and basic Spark concepts. For further details, please check the Spark documentation; it is well written and fairly up to date. This section of the blog post is not intended to be a Spark tutorial; I’ll barely scratch the surface of what’s possible with Spark.

From spark.apache.org: Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Spark’s documentation site covers the basics and can get you up and running with minimal effort. The best way to get started is to spin up an Amazon EMR cluster, although that’s not free. You can always spin up a few VMs in your basement lab :)

In the context of this DFIR exercise, we’ll leverage Spark SQL and Spark’s DataFrame API. The DataFrame API is similar to Python’s Pandas. Essentially, it allows you to access any kind of data in a structured, columnar format. This is easy to do when you are handling CSV files.

Folks over at Databricks have been kind enough to publish a Spark package that can convert CSV files (with headers) into Spark DataFrames with one simple line of Python code. Isn’t that nice? If you are brave enough, though, you can do this yourself with Spark and some RDD transformations.
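To make that concrete, here is a minimal sketch of loading a log2timeline CSV into a DataFrame. Assumptions on my part: the Spark 1.4-style read API, the spark-csv package on the classpath (e.g. a shell started with pyspark --packages com.databricks:spark-csv_2.10:1.2.0), and a timeline at /cases/supertimeline.csv – adjust names and versions to your environment.

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="timeline-analysis")  # the pyspark shell already provides sc
    sqlContext = SQLContext(sc)                     # ...and sqlContext

    # spark-csv turns a headered CSV into a DataFrame in one line
    df = (sqlContext.read
          .format("com.databricks.spark.csv")
          .option("header", "true")
          .load("/cases/supertimeline.csv"))

    df.printSchema()   # column names come straight from the l2t_csv header
    print(df.count())  # total number of timeline entries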

To summarize, you’ll need the following apps/libraries on your DFIR workstation:

  • Access to a Spark (1.3+) cluster
  • Spark-CSV package from Databricks
  • Python 2.6+
  • Pandas
  • NumPy

It should be noted that the Spark-CSV package needs to be built from source (Scala) and for that there are a few other requirements that are out of the scope of this blog post, as is setting up a Spark cluster.

Analysis (IPython notebook)

Now, let’s get on with the fun stuff!

The following is an example of how these tools can be used to analyze a Windows 7 system’s timeline generated with log2timeline.
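Before opening the notebook, here is a minimal sketch of the kind of drill-down it performs, continuing from the DataFrame loaded above. The column names (date, time, source, sourcetype, short) follow the standard l2t_csv header, and the filter values are only examples.

    import matplotlib.pyplot as plt

    # Expose the DataFrame to SQL
    df.registerTempTable("timeline")

    # Example drill-down: filesystem events from a single day of interest
    hits = sqlContext.sql("""
        SELECT date, time, sourcetype, short
        FROM timeline
        WHERE source = 'FILE'
          AND date = '06/15/2015'
    """)
    hits.show(20)

    # Aggregate event counts per artifact source; the result is tiny,
    # so handing it to Pandas for plotting is safe.
    counts = sqlContext.sql(
        "SELECT source, COUNT(*) AS events FROM timeline GROUP BY source")
    pdf = counts.toPandas()
    pdf.plot(kind="bar", x="source", y="events")
    plt.show()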

You can find the IPython Notebook over on one of my GitHub repos.

Digital Forensic SIFTing: SUPER Timeline Creation using log2timeline

This is one of a series of blog articles that utilize the SIFT Workstation. The free SIFT Workstation can match any modern forensic tool suite and is directly featured and taught in SANS’ Advanced Computer Forensic Analysis and Incident Response course (FOR508). SIFT demonstrates that advanced investigations and responding to intrusions can be accomplished using cutting-edge open-source tools that are freely available and frequently updated.

The SIFT Workstation is a VMware appliance, pre-configured with the necessary tools to perform detailed digital forensic examination in a variety of settings. It is compatible with Expert Witness Format (E01), Advanced Forensic Format (AFF), and raw (dd) evidence formats.

Super-Timeline Background

I first started teaching timeline analysis back in 2000, when I first started teaching for SANS.  In my first SANS@Night presentation, given in December 2000 at what was then called “Capitol SANS,” I demonstrated a tool I wrote called mac_daddy.pl, based on the TCT tool mactime.  Since that point, every certified GCFA has answered test questions on timeline analysis.

Timeline analysis has seen a resurgence thanks to Kristinn Gudjonsson and his tool log2timeline.  Kristinn’s work in the timeline analysis field will probably change the way many of you approach cases.

First of all, all of these tools are found in the SIFT Workstation, ready to go out of the box, but you can keep them up to date at Kristinn’s website, www.log2timeline.net.  Kristinn’s tool was also added to the FOR508: Advanced Computer Forensic Analysis and Incident Response course last year and has already been taught to hundreds of analysts who are now using it in the field daily.

Kristinn’s log2timeline tool will parse all of the following data structures, and more, by AUTOMATICALLY recursing through the directories for you instead of requiring you to do it manually.

This is a list of the formats log2timeline is currently able to parse.  The tool is constantly updated, so to get the current list of available input modules, you can have the tool print one out:

# log2timeline -f list

Artifacts Automatically Parsed in a SUPER Timeline:

How to automatically create a SUPER Timeline

log2timeline recursively scans through an evidence image (physical or partition) and extracts timestamp data from every artifact that log2timeline supports (see artifacts above).  This tutorial will step a user who is interested in creating their first timeline through the process from start to finish.

Step 0 – Use the SIFT Workstation Distro

Download Latest SIFT Workstation Virtual Machine Distro: http://computer-forensics.sans.org/community/downloads

It is recommended that you use VMware Player for PCs and VMware Fusion for Macs.  Alternatively, you can install the SIFT Workstation in any virtual machine, or on physical hardware, using the downloadable ISO image.

Launch the SIFT Workstation and log in to the console using the password “forensics”.

Step 1 – Identify your evidence and gain access to it in the SIFT Workstation


Link your evidence files to the SIFT Workstation through the Windows file share that is enabled by default on the /cases directory.  You can either plug in a USB drive, mount a remote drive share, or copy the evidence to your /cases directory.

Note that the SIFT Workstation is designed with a separate virtual drive for the /cases directory, allowing for a larger virtual drive; alternatively, you can connect an actual hard drive and mount it at /cases.

Open in Explorer \\siftworkstation\cases\EXAMPLE-DIR-YYYYMMDD-###

If your evidence is an E01, use this previous article on the topic to mount it correctly inside the SIFT Workstation.  If your evidence is RAW, go ahead and skip to Step 2.  Access to the raw image is required, as log2timeline cannot parse E01 files… yet.

http://computer-forensics.sans.org/blog/2011/11/28/digital-forensic-sifting-mounting-ewf-or-e01-evidence-image-files

  • $ sudo su -
  • # cd /cases/EXAMPLE-DIR-YYYYMMDD-####/
  • # mount_ewf.py nps-2008-jean.E01 /mnt/ewf
  • # cd /mnt/ewf


Step 2 – Create The Super Timeline

Manual creation of a timeline is challenging and still requires some work to get through.  We have included in the SIFT Workstation an automated method of generating a timeline via the new log2timeline tool, which can simply be pointed at a disk image (raw disk).  Again, if you are examining an E01 or AFF file, please mount it first using mount_ewf.py or affuse, respectively.

Creating a Super Timeline requires you to know whether your evidence image is a physical image or a partition image.  A physical image includes the entire disk and can be parsed by the tool mmls to list the partitions.  A partition image is the actual filesystem (e.g., NTFS) and can be parsed by the tool fsstat to list information about the partition.

Once you have figured out whether you have a physical disk image or a partition image, choose the correct implementation of the command to run, with the correct time zone.

Critical Failure Point Note: There is confusion over what the time zone option (-z) is used for. The -z option specifies the time zone of the SYSTEM being examined. It is used as a baseline to convert time data that is stored in “local” time, and the output is rendered in that same time zone by default.

Why the need?  Certain artifacts, such as setupapi.log files and index.dat files, store times in local system time instead of UTC.  Without being told what the local system time zone is, log2timeline would slurp up the data from those artifacts incorrectly.  To correct for this, log2timeline converts the data to UTC but then, by default, outputs it back in the same -z time zone.

The output of your timeline will always be in the same time zone as your -z option unless you specify a different output time zone using the “BIG” -Z option.  This allows you to convert a system time zone of EST5EDT to UTC output if you want to compare computers from two different time zones in a single timeline.

If your time zone includes areas that observe daylight savings time, it is important to use the correct daylight-savings-aware location.  For example, on the East Coast the correct time zone value is EST5EDT; for Mountain Time it is MST7MDT.  log2timeline has autocomplete enabled in the SIFT Workstation, so all you need to do is type -z [tab tab] to see all the time zone options it recognizes.  If you do not set this time zone correctly with daylight savings accounted for, any timeline data stored in local time will be interpreted incorrectly.  So, to reiterate: using EST as your time zone treats all timestamps as if they were EST; using EST5EDT (and other similarly named zones) takes daylight savings into account.

In summary: It is crucial that the -z option matches the way the system is configured to produce accurate results.
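If you want to convince yourself of the EST-versus-EST5EDT difference, a quick check with Python’s pytz package makes it visible (pytz is a third-party library used here purely for illustration; it is not part of log2timeline):

    from datetime import datetime
    import pytz

    # a July timestamp recorded in local time on an East Coast system
    local_naive = datetime(2008, 7, 1, 12, 0, 0)

    for zone in ("EST", "EST5EDT"):
        utc = pytz.timezone(zone).localize(local_naive).astimezone(pytz.utc)
        print("%-8s -> %s" % (zone, utc))

    # EST      -> 2008-07-01 17:00:00+00:00  (fixed UTC-5, daylight savings ignored)
    # EST5EDT  -> 2008-07-01 16:00:00+00:00  (UTC-4 in July, daylight savings honored)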

log2timeline LIST-Files

The list-files in log2timeline are vitally important to understand.  They can be used to specify exactly which log2timeline modules you would like to run against a given image.  Some default list-files are already built into log2timeline, found in the /usr/share/perl5/Log2t/input directory and all ending in .lst.  A list-file is always used with the -f list-file and -r (recurse directory) options, and log2timeline will then automatically parse any artifact included in the chosen list-file that it finds in the starting directory or its subdirectories.

Once you understand that the .lst files are just lists of artifacts you would like to examine, it is fairly simple to add your own for any type of situation.  For example, for an intrusion investigation against an IIS web server, you could add a list-file containing only the artifacts mft, evt, and iis (one module name per line).  This will save you a lot of time, especially if there are a bunch of IIS log files on the system.

Intermediate log2timeline LIST-Files usage

If you prefer not to make a list-file, log2timeline can take any number of modules on the command line, as long as they are separated by commas.

An example: -f winxp,-ntuser,syslog – this will load all the modules in the winxp input list-file, then add the syslog module and remove the ntuser one.

The same can be done here: -f webhist,ntuser,altiris,-chrome – this will load all the modules inside the webhist.lst file, add ntuser and altiris, and then remove the chrome module from the list.

This is the proper way to form a command using the log2timeline list-files option in the SIFT workstation.

Now that we have an understanding of the basic functionality, let’s quickly look at some cases where a targeted timeline could be used.

Case Study 1 – Intrusion Incident (IIS Web Server)

Perhaps you are examining a case in which you have to inspect a web server for possible residue of an attack.  You are not sure when the attack took place, but you would like to look at not only the system’s MFT but also the IIS log files and the system’s event logs.  This greatly reduces the amount of clutter in your timeline, as you already know an attack via the web would be found in these three places.

Mount your disk image correctly using the SIFT Workstation at /mnt/windows_mount.

Now build the commands to create your initial timeline:

Step 1 – Find Partition Starting Sector   # mmls image.dd   (calculate the offset: starting sector × 512 = #####)

Step 2 – Mount Image for Processing   # mount -o ro,noexec,show_sys_files,loop,offset=##### image.dd /mnt/windows_mount

Step 3 – Add Filesystem Data   # log2timeline -z EST5EDT -f mft,iis,evt /mnt/windows_mount -w intrusion.csv

Once you have run the three commands, your timeline is built.  It is best to sort the timeline using l2t_process, which is most effective when you bound it by two dates rather than looking at every timestamp on the system.

Step 4 – Filter Timeline   # l2t_process -b intrusion.csv > filtered-timeline.csv

Bounded by two dates, the same command looks like this:

# l2t_process -b /cases/EXAMPLE-DIR-YYYYMMDD-####/timeline.csv 01-16-2008..02-23-2008 > timeline-sorted.csv

Case Study 2 – Restore Point Examination

In this example, we use log2timeline on Windows XP restore points, looking only for “evidence of execution.”  This shows how you can use log2timeline to produce a targeted timeline of just one piece of the drive image instead of the entire system.  Here, we can see historically the last execution time of many executables on each day a restore point was created.

# log2timeline -r -f ntuser -z EST5EDT /mnt/windows_mount/System\ Volume\ Information/ -w /cases/forensicchallenge/restore.csv

# kedit keyword.txt (add userassist, runmru, LastVisitedMRU, etc.)

# l2t_process -b restore.csv -k keyword.txt > filtered-timeline.csv

What if a key part of our case was determining when Internet Explorer was last executed over time?  Using timeline analysis techniques like those shown above, it is now easy to see when IE was last executed on each specific day.  You can easily track the execution of a specific program across multiple days thanks to quick analysis of the restore point data (NTUSER.dat hives) with log2timeline.

Case Study 3 – Manual Super Timeline Creation

In some cases, you might not want to use the full super timeline.  To understand what log2timeline steps through automatically, it can be useful to produce the same output by hand.  The following are the steps that log2timeline takes care of for us in a single command instead of three.

In Summary:

Timeline analysis is hard.  Understanding how to use log2timeline will help engineer better solutions to unique investigative challenges.  The tool was built for maximum flexibility to account for the need for both targeted and overall super timeline creation. Create your own preprocessors for targeted timelines.  Use log2timeline to only collect the data you need.   Or use it to collect everything.

In the next article, we will talk about more efficient ways of analyzing the data collected with log2timeline.

CONGRATS!

You just created your first SUPER Timeline… now you get to analyze thousands of entries!  (Wha???)

In another upcoming article, I will discuss how to parse and reduce the timeline efficiently so you can analyze the data more easily.  Super timelines pull a great deal of data from your operating system, and learning how to parse that data into something usable is extremely valuable.  In my SANS360 talk, I will take this technique even further.  Of course, we go through all of these techniques in our full training courses at SANS, specifically FOR508: Advanced Computer Forensic Analysis and Incident Response.

Keep Fighting Crime!

Rob Lee has over 15 years of experience in digital forensics, vulnerability discovery, intrusion detection and incident response. Rob is the lead course author and faculty fellow for the computer forensic courses at the SANS Institute and lead author for FOR408 Windows Forensics and FOR508 Advanced Computer Forensics Analysis and Incident Response.

 

 

Digital Forensics Case Leads: Registry Forensics, Volume Shadow Copies and Windows 8

It’s the “better late than never” edition of Case Leads, and I’ve got lots of great stuff for you this week: articles and papers to read, including a very cool post by Andrew Case on recovering registry hives from a system that has been reformatted and had the OS reinstalled, as well as several how-to articles by Harlan Carvey. Rob Lee also checks in with an excellent article on timelines and volume shadow copies. Yes, all that and more, so let’s get started! Oh, by the way, if you have an item you’d like to contribute to Digital Forensics Case Leads, please send it to caseleads@sans.org.

Tools:

  • Registry Decoder (http://dfsforensics.blogspot.com/2011/09/announcnig-registry-decoder.html) – I know this was mentioned in last week’s Case Leads, but I wanted to mention it again. This is an excellent tool and will have updates and new features in the very near future.
  • Windows 8 Developer Preview Download – The pre-beta release of the next version of Windows. Time to start testing!

Good Reads:

  • Recovering and Analyzing Deleted Registry Files – A very cool article by Andrew Case on the DFS blog detailing the recovery of registry files; well worth reading.
  • Shadow Timelines And Other Volume Shadow Copy Digital Forensics Techniques with the Sleuthkit on Windows – An excellent article by our own Rob Lee.

The next three are all from Harlan Carvey’s Windows IR blog. I really enjoy reading how-tos on forensic blogs, and these don’t disappoint.

  • How to: Creating Mini Timelines – For those times when a Super Timeline is more than you really need.
  • How to: File Extension Analysis – Determining which application a file “belongs” to.
  • How to: Mount and Access VSC’s – Working with Volume Shadow Copies.

News:

  • Paraben Device Seizure 4.5 released
  • NYU-Poly CSAW High School Forensics Competition

Levity:

  • Dilbert – Bad connection?

Coming Events:

Digital Forensics Case Leads is a (mostly) weekly publication of the week’s news and events relating to digital forensics. If you have an item you’d like to share, please send it to caseleads@sans.org.

Digital Forensics Case Leads for 17 September 2011 was compiled by Ken Pryor, GCFA. Ken is a police officer and does computer forensic investigations for his and several other police departments in his area. He is also an adjunct instructor for Lincoln Trail College in Robinson, IL, teaching computer forensic and related courses.

Digital Forensics Case Leads: Registry and Malware Analysis Tools, Preparing to Testify, and Virtual Machine Technology on Mobile Devices

This week’s edition of Case Leads features a number of new tools and updates for a few of the old standbys. We have a collection of tools designed for studying malware found on Windows or Android platforms and a couple of new applications for registry analysis.

Virtual machine technology is heading for Android-based devices as a couple of vendors team up to make it happen.  We also feature articles about testifying at trial, “breaking in” to the field of digital forensics, and the plethora of personal information associated with mobile applications.

As always, if you have an item you’d like to contribute to Digital Forensics Case Leads, please send it to caseleads@sans.org.

Tools:

Good Reads:

News:

Levity:

Coming Events:

Call For Papers:

Digital Forensics Case Leads is a (mostly) weekly publication of the week’s news and events relating to digital forensics. If you have an item you’d like to share, please send it to caseleads@sans.org.

 

Digital Forensics Case Leads for 20110908 was compiled by Ray Strubinger. Ray regularly leads digital forensics and incident response efforts, and when the incidents permit, he is involved in aspects of information security ranging from data loss prevention to risk analysis.

Ultimate Windows Timelining

Recently, I was considering material for an internal knowledge transfer session on timelining, when it occurred to me that the subject matter was likely of broader interest, and so, without further ado…

First, a note about the way I personally use timelines: I find them a great way to identify dated tidbits that one might not otherwise realize are associated with activity of interest, once the investigation has been focused down to a restricted timeframe. When I do this, I typically extract all timeline data and then filter it for the times of interest to avoid information overload.

The best general-purpose timelining utility of which I’m aware is Kristinn Gudjonsson’s log2timeline tool. This great program handles a huge selection of input filetypes and can output in several standardized formats, most notably Bodyfile. I consider the output from this utility (when run against a complete image) to be the foundation of an ultimate timeline for the target system. There are a few gaps in the coverage of log2timeline, however, and at least one little-known tweak (which I’ve requested Kristinn add, or at least document, in the tool) can enhance its utility for those examiners who, for one reason or another, need to do most or all of their analysis under Windows. The tweak to which I refer is a way to get log2timeline to compile and run under the Windows cygwin environment. There’s one prerequisite for log2timeline which won’t compile under cygwin no matter what I’ve tried: the Net::Pcap Perl module. (Technically, there are two, but the other one is Gtk2, and that’s only required to build the GUI, which I don’t use. It may actually be possible to build that, but I haven’t tried very hard.) In any case, the Net::Pcap module is only used to add timestamps for packets in network captures, which I’ve found to be infrequently necessary in my usage. Fortunately, with one simple change, it’s possible to disable this functionality and allow the remainder of the utility to build successfully under cygwin. All you have to do is remove the file ‘lib/Log2t/input/pcap.pm’ from your log2timeline source folder after you untar it, but before you begin the build process.

The few sources of timestamp data that I really miss from a log2timeline scan are as follows (a sketch for merging the various Bodyfile outputs into one timeline follows this list):

  • Filesystem metadata such as NTFS timestamps. However, there are a number of other tools capable of extracting this information and outputting it in a compatible format, including fls and ils from the Sleuthkit.
  • The redundant NTFS timestamps from the long and short Filename attributes are a related but separate category from the above, and can be more difficult to access. I could have sworn you could dump them with the Sleuthkit, but a cursory look back at the documentation only indicates the data is available via the istat command, which doesn’t output a compatible format. However, I just did a quick Google search, and it appears that David Kovar has recently added Bodyfile output format support to his analyzeMFT Python script. This script also extracts the timestamps for allocated files, so it’s a one-stop-shop for MFT dates. Alternatively, EnCase users with current support can get at this data using the MFT Date and Time Comparator (V2) EnPack.
  • NTFS timestamps stored in the INDX structures in the parent folder beneath which a file resides. I’m not currently aware of any forensic tool that can parse through all the NTFS timestamps stored in INDX structures and output the data in a standardized format. The data itself is available to EnCase users via either the ‘index buffer reader’ EnScript that ships in the Examples folder, or else in the ‘42 LLC INDX Extractor Enpack‘ from 42LLC.
  • While selected Registry key last-written timestamps are extracted by log2timeline, I sometimes find it useful to have the complete list of timestamp data from the registry. The regtime.pl Perl script by Harlan Carvey, which is part of the SIFT Kit, can extract all this data and dump it in Bodyfile format.
  • All of the above in unallocated space or physical memory dumps. This is where it gets kind of hairy. As with the INDX data mentioned above, there are tools which can identify and extract the data, but they don’t deliver it in nice neat packages. I dealt with this issue somewhat in my previous posting, Dates from Unallocated Space, but a few points are worth revisiting and/or expanding upon.
    • (New!) EnScript from Lance Mueller to find classic (.evt) event log entries in unallocated space and output them as a new .evt file. Surprisingly, it doesn’t work well on physical memory images, but I posted on Extracting Event Logs or Other Memory Mapped Files from Memory Dumps a few months ago.
    • EnScript from Lance Mueller that can find old registry key names/dates in unallocated space (or anywhere else, for that matter), restricted by date.
    • EnScript from Lance Mueller that can find a date-restricted Windows timestamp anywhere. In this case, however, you’re on your own figuring out exactly what the date refers to. Careful examination of the surrounding data can indicate whether it’s part of an MFT entry, an INDX entry, a registry key, a log entry, a file fragment, or just random bits.
    • The INDX EnScripts mentioned above work fine on unallocated space as well.
    • Any file which is successfully carved from unallocated space may contain useful dates, so try parsing all such carved files again with log2timeline or possibly exiftool to extract any useful application metadata.
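Since several of the sources above can be exported in Bodyfile format, combining them is mostly a matter of parsing and sorting. The following is a hypothetical helper (names and all are mine), assuming the TSK 3.x Bodyfile layout of MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime with epoch-second timestamps:

    import sys

    def entries(path):
        # Yield (timestamp, which-timestamp, filename) tuples from one Bodyfile.
        with open(path) as f:
            for line in f:
                fields = line.rstrip("\n").split("|")
                if len(fields) < 11:
                    continue                  # skip malformed rows
                name = fields[1]
                for label, ts in zip(("atime", "mtime", "ctime", "crtime"),
                                     fields[7:11]):
                    try:
                        ts = int(ts)
                    except ValueError:
                        continue
                    if ts > 0:                # 0 means "no timestamp recorded"
                        yield ts, label, name

    def merge(paths):
        rows = []
        for p in paths:
            rows.extend(entries(p))
        rows.sort()                           # one unified, time-ordered view
        return rows

    if __name__ == "__main__":
        # usage: python merge_bodyfiles.py fls.body regtime.body l2t.body
        for ts, label, name in merge(sys.argv[1:]):
            print("%d  %-6s %s" % (ts, label, name))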

As always, you’re welcome to leave commentary if you liked this article or want to call me on the carpet for some inaccuracy. And please do so if you know of something useful that I’ve missed.

Computer Forensic Artifacts: Windows 7 Shellbags

As Windows Registry artifacts go, the “Shellbag” keys tend to be some of the more complicated artifacts we have to decipher.  But they are worth the effort, as they give an excellent means to prove the existence of files and folders along with user knowledge.  Shellbags can be used to answer the difficult questions of data enumeration in intrusion cases, identify the contents of long-gone removable devices, and show the contents of previously mounted encrypted volumes.  Information persists for deleted folders, providing an invaluable reference for items no longer part of the file system.

A Brief Overview

Windows uses the Shellbag keys to store user preferences for GUI folder display within Windows Explorer.  Everything from visible columns to display mode (icons, details, list, etc.) to sort order is tracked.  If you have ever made changes to a folder and returned to that folder to find your new preferences intact, then you have seen Shellbags in action.  In the paper Using shellbag information to reconstruct user activities, the authors write that “Shellbag information is available only for folders that have been opened and closed in Windows Explorer at least once” [1].  In other words, the simple existence of a Shellbag sub-key for a given directory indicates that the specific user account once visited that folder.  Thanks to the wonders of Windows Registry last-write timestamps, we can also identify when that folder was first visited or last updated (and correlate with the embedded folder MAC times also stored by the key).  In some cases, historical file listings are available.  Given that much of this information can only be found within Shellbag keys, it is little wonder they have become a fan favorite.

The Shift from Windows XP

The architecture of the Shellbag keys within Windows XP is well understood and has been broadly covered [1,2].  However, this is not the case with the Windows 7 format.  I have recently had good luck using Shellbags in computer intrusion cases to show evidence of file system enumeration by attackers using compromised accounts.  Those systems have largely been Windows XP or Server 2003, and when I first sat down to review a Windows 7 system, I was severely disappointed.  Following the trend of many of our favorite Registry keys being updated in Windows 7, the Shellbag keys underwent a major transformation.  Gone are the familiar Shell, ShellNoRoam, and StreamMRU categories that respectively denoted network, local, and removable device folders.  Data from all of these locations still appears to be collected, but all three artifact categories are now stored within the Shell subkey.  The keys themselves are stored in a slightly different binary format, making manual deciphering even more painful.  I was beating my head against the wall trying to reverse engineer the new format when Rob Lee suggested I do something really smart: leverage an existing tool and work backwards.  He introduced me to the TZWorks Shellbag Parser, and I was hooked.  Besides being the only true Windows 7 Shellbags parser I am aware of, it does a remarkable job of parsing Shellbag structures.

What you need to know

Along with updating the Registry keys, Windows 7 also gave us a completely new user-specific Registry hive named USRCLASS.dat.  This hive supports the new User Access Control (UAC) and the mandatory access control integrity levels now baked into the operating system.  In oversimplified terms, it is used to record configuration information from user processes that do not have access to write to the standard registry hives.  In order to get all Shellbags information, we now need to parse both NTUSER.dat and USRCLASS.dat for each user account.  These Registry hives are located in the %user profile% and %user profile%\AppData\Local\Microsoft\Windows folders, respectively.  The specific Shellbag keys are (a short scripted check follows this list):

 

USRCLASS.DAT\Local Settings\Software\Microsoft\Windows\Shell\BagMRU

USRCLASS.DAT\Local Settings\Software\Microsoft\Windows\Shell\Bags

NTUSER.DAT\Software\Microsoft\Windows\Shell\BagMRU

NTUSER.DAT\Software\Microsoft\Windows\Shell\Bags
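If you want to poke at these keys programmatically before reaching for a dedicated parser, Willi Ballenthin’s python-registry library can walk BagMRU and pull last-write times.  This is an outside tool and an assumption on my part (pip install python-registry), shown here only as a sketch against an exported hive:

    from Registry import Registry

    hive = Registry.Registry("USRCLASS.DAT")   # path to an exported hive
    bagmru = hive.open(
        "Local Settings\\Software\\Microsoft\\Windows\\Shell\\BagMRU")

    def walk(key, depth=0):
        # Each BagMRU subkey is one visited folder; the key's last-write time
        # marks the first visit or the last preference change.
        print("%s%s  last_write=%s" % ("  " * depth, key.name(), key.timestamp()))
        for sub in key.subkeys():
            walk(sub, depth + 1)

    walk(bagmru)

Note that this only enumerates keys and timestamps; decoding the binary shell-item values underneath (the actual folder names and embedded MAC times) is the hard part, and that is exactly what a dedicated parser like the TZWorks tool does for you.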

 

Microsoft documents additional Shellbag keys that may be present on Win7 systems, but after a review of several Win7, Vista, and 2008R2 systems I have been unable to find any evidence of them being used [4].  For reference purposes, the additional keys are (Wow6432Node keys are located on x64 systems):

 

NTUSER.DAT\Software\Microsoft\Windows\ShellNoRoam\BagMRU

NTUSER.DAT\Software\Microsoft\Windows\ShellNoRoam\Bags

USRCLASS.DAT\Wow6432Node\Local Settings\Software\Microsoft\Windows\Shell\Bags

USRCLASS.DAT\Wow6432Node\Local Settings\Software

 

Parsing Shellbag Keys Using TZworks Sbag.exe

Using the TZWorks tool to parse Shellbags is trivial.  It is a command-line tool with just a few parameters.  If you are working on a live system, you can use the “listhives” parameter to have it identify the available user Registry hives.

Listhives Option in Sbag.exe

Once you know the paths of the hives you wish to parse, execute Sbag.exe and redirect the results to a .csv file (results are “|” delimited).

Parsing Shellbag Artifacts with Sbag.exe

 

Finally, import the files into your favorite spreadsheet and start your analysis.
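If you would rather script the analysis than use a spreadsheet, the pipe-delimited output also loads cleanly into Pandas.  A sketch – the exact column names depend on your Sbag version, so inspect them first rather than trusting the one commented here:

    import pandas as pd

    df = pd.read_csv("shellbags.csv", sep="|")
    print(df.columns)   # see what this Sbag version actually emitted
    print(df.head())

    # then, e.g., sort folders by the key's last-write time:
    # df.sort_values("Last Write Time")   # substitute the real column name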

Shellbag Results

What information should you expect to gather from the Shellbags keys?  Each folder will have:

  • Bag Number [Identifies the “Bags” subkey containing user preferences – aka Nodeslot]
  • Registry key last write time [First access of folder or last preference change]
  • Folder name
  • Full path
  • Embedded creation date / time [Stored at the time the BagMRU key was created]
  • Embedded modify date / time [Stored at the time the BagMRU key was created]
  • Embedded access date / time [Stored at the time the BagMRU key was created]

My initial testing of TZWorks Sbag.exe has found it to be highly reliable.  In comparison to my previous go-to tool, Windows Registry Analyzer (which only accurately parses XP Shellbags), it does a more complete job, particularly with regard to timestamps.  It works with both XP and Windows 7 artifacts, can parse both live and exported Registry hives, and its output is extremely easy to work with.  Versions for Windows, Linux, and Mac OS X are available.  If you haven’t incorporated Shellbag review into your examinations, now is the time!  Also, keep in mind that XP and Windows 7 Shellbag analysis is now included in the SANS 508 Advanced Forensics class.

[1] Zhu, Gladyshev, James (2009). Using shellbag information to reconstruct user activities. Digital Investigation, 6, S69-S77.

[2] Shellbags Registry Forensics, John McCash

[3] TZWorks Sbag.exe

[4] Microsoft Support Article: Shellbag Keys

Chad Tilbury, GCFA, has spent over twelve years conducting computer crime investigations ranging from hacking to espionage to multi-million dollar fraud cases. He teaches FOR408 Windows Forensics and FOR508 Advanced Computer Forensic Analysis and Incident Response for the SANS Institute. Find him on Twitter @chadtilbury or at http://ForensicMethods.com.