What makes an expert?

I have recently been involved in a case where the argument came down to the question of who is an expert. This is not an uncommon attack when the issues at hand are not really in dispute and the opposing team wants to focus the case on other things. It may seem strange that a person with multiple postgraduate degrees, SANS/GIAC certifications (and others) up the wazoo and years of experience can be challenged on these grounds, but it is not unusual in this industry.

My CV does not state that I am forensics focused. There is too much on it for that, and even for the courts it is necessary to limit what one lists. That said, I did list all the SANS certifications and several Master's degrees.

So, how could it be possible to attack one’s standing as an expert when you have a GSE, GSM and multiple IT Masters degrees in security?

Simple: none of these is a degree in “forensics”. This is the argument I was faced with. It is not a good argument, but it is something that we can expect to see more and more of in coming years. In my circumstance, I teach/lecture at an Australian university in a Master's degree specialising in digital forensics. However, I do not have a Master's degree in digital forensics. There is a reason for this: such degrees did not exist when I started in IT security and forensics, which is why, a number of years back, I put a proposal to the university to create one (which I now teach).

But is that an issue?

This is the issue that is really at the heart of the matter. Many people coming into forensics think that being an expert requires having a digital forensic qualification. There are times when this could be necessary. In the acquisition of data, having a provable skill is essential, but that skill is not necessarily a degree in forensics.

In the case I am on, I am acting as an expert on software security. I will attest to this on the basis of postgraduate qualifications in software design and coding, as well as numerous peer-reviewed papers on the topic.

This is the issue we need to consider and address. An expert is an expert in a particular field. In many circumstances, this is simply expertise in finding and analysing data, but other matters will involve analysing software, code and intrusions. This is in part why I tell people that they cannot stop learning in this field. There is so much new coming up each year that you can never learn too much.

So, can you be an expert in court without having forensic qualification?

This is something many do not realise; you do not need a forensic qualification to provide forensic evidence to a court. In fact, most expert witnesses do not have forensic training. An expert witness is an expert in a particular field. If you are talking to the court on software integrity or security issues, you need to be an expert in software development and coding, not a forensic expert.

Having both helps, but it is not essential.

We need to start thinking about what an expert really is, and not just focus on the issue at hand. When analysing and recovering data, we need one set of skills, but this does not provide expertise in everything.

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Erasing drives should be quick and easy

In past years, I have seen many false and misleading statements about what is needed to securely erase or wipe a hard drive. The FUD surrounding this topic is incredible, with many still purporting to have a means of recovering overwritten data using SEM or AFM (electron microscopy, in other words).

The problem is that it hurts us all.
This year alone (and we are not even through the first month) I have read supposedly reputable security professionals stating that X-ray machines and scanners will erase a drive. I have read claims that you need to use a forklift to drive over them.

With the help of a few colleagues, I tested the theory (as that was all it ever was) that an SEM or AFM could be used to recover overwritten data. There is a reason that NO organization has ever done this: it is not possible. Science is based on empirical testing; before that point it is not science, just a hypothesis. Data recovery from a single wipe is not possible, and it is up to those who sell the snake oil to prove otherwise. This is science, people.
As for a couple of the other versions of FUD in drive wiping I noted…

  • An airport body scanner will do nothing to any hard drive. I travel several times a year and I am yet to lose any data from an airport scanner.
  • Driving over a drive with a roller (let alone a forklift) will not damage all the platters and will leave a good level of recoverability in most instances. The drive is not always crushed: using a 2.5 tonne roller, I was able to recover the platters from 45% of drives without trying too hard.

And the Government cannot read your wiped drives either…
“Although somebody like the NSA might be able to use some sort of system to read the magnetic markings on the rest of the platter.”
No, they cannot. The NSA does not do this, and I hate these silly conspiracy theories. FUD hurts us all! Modern drives use a glass platter with a foil coating; they shatter with the right impact. They do not need to be broken, though. They just need a secure process to wipe the information…

A secure process to wipe hard drives exists!
The simplest method is to use the wipe function built into the drive itself. ATA, SATA, PATA etc. drives include the firmware Secure Erase command, and it is also supported in all good SCSI drives. Not all SCSI and Fibre Channel disk drives support a “Fast Secure Erase” capability, but all good modern versions have an erase function.

Secure Erase (SE) is a positive, simple data destruction process. It is in effect “Electronic data shredding.” SE completely erases all possible data areas on a supported drive (and it is difficult to find platter based drives that do not support this command set any more) by overwriting.

A full erase using SE can take 30 minutes to over an hour to complete. The thing is that the drive will restart the wipe if it is power cycled. So just restarting the host will not stop the process.

To ensure that a user cannot take the platters and move them to another drive case (and new firmware), the Fast SE completion phase changes an internal key and effectively makes the data unrecoverable in our lifetime.

Basically it is quick. It is non-recoverable. It saves all the BS. It removes the need for the FUD that still surrounds us.

The process is simple:
1. The user wanting to wipe the drive issues the SE security command
a. Set User Password, Security = Maximum (Master Password = Blank)
2. The drive completes a Fast SE process and changes an encryption key locking the drive
3. The SE process is run to do an in-depth wipe (taking 30 minutes to over an hour)
4. The drive is wiped and ready to use.

Once the SE security wipe starts, it cannot be stopped.
The BS (and that is what it is) around small bits of information being recovered using microscopy from shattered drives is also FUD. Think for a moment: what is there on a randomly selected, isolated 512-byte sector of a drive that you can actually use?

So, how do I wipe my drive?
The utility hdparm [1] will allow this (replace /dev/sda with the drive you seek to wipe). A combined sketch that strings these steps together follows the list.
1 Make sure you are logged into the system as root. You can use a boot disk.
2 Issue the hdparm command as root and check that the drive is not security frozen
a. hdparm -I /dev/sda
b. The result should contain the words “not frozen”
3 Issue the command to set a user password and enable security
a. hdparm --user-master u --security-set-pass Eins /dev/sda
4 Confirm the process
a. hdparm -I /dev/sda
b. look for the word “enabled” in the output
5 Issue the ATA SE command
a. hdparm --user-master u --security-erase Eins /dev/sda
6 When the drive is erased, the output verification will return “not enabled”; check using the command:
a. hdparm -I /dev/sda
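The same sequence can be scripted. The following is a minimal sketch only, assuming a Linux host with hdparm installed and a drive at /dev/sda that supports the ATA security feature set; the drive name and password are examples, and a production wipe script would need far more safety checking.

#!/bin/bash
# Minimal Secure Erase sketch - this DESTROYS ALL DATA on the target drive.
DRIVE=/dev/sda                          # example target - change before use

# SE cannot be issued to a drive that reports "frozen"
hdparm -I $DRIVE | grep -q "not.*frozen" || { echo "$DRIVE is frozen"; exit 1; }

# Set a temporary user password to enable the security feature set
hdparm --user-master u --security-set-pass Eins $DRIVE

# Issue the erase (30 minutes to over an hour on most drives)
hdparm --user-master u --security-erase Eins $DRIVE

# Verify - the security section of the output should now show "not enabled"
hdparm -I $DRIVE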

A DOS/Windows version of this command also exists [2].
References

[1] http://sourceforge.net/projects/hdparm/
[2] http://cmrr.ucsd.edu/people/Hughes/SecureErase.shtml

Craig Wright is a Director with Information Defense in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Linux Programming Tools

Digital forensics practitioners, incident responders and *nix system administrators should be aware of programming tools that can aid attackers. It is simple for an attacker to load code when compilers or other tools are installed on a system. In this event, the attacker can simply add any tools that are desired by compiling them on the host. Source code can be uploaded over ASCII connections such as telnet, so even a console can be used to load one’s favorite tools when compilers are installed.

In many cases, compilers and other similar tools have been restricted or (ideally) not installed on production systems. Where this is the case, it is still common to discover many related tools (including disassemblers) on a host. Some of these tools are covered in this section. These may allow an attacker to create and load code on a system, so when analysing a compromised host, you need to think beyond gcc and the common compilers.

In many instances, a system will not have tools readily at one's disposal that can easily be used to test privilege escalation. In this instance, an attacker may “roll their own” exploits. Stack and heap overflows are all too common in software. Even where patches are available, it is all too common to find them missing. This can be a result of legacy systems not functioning when the patch is applied, or a simple failure, for any reason, to have applied the patch.

In these instances, an attacker could exploit a flaw in the software to gain additional privileges on the system (maybe even root). The following tools are a small sample of common Linux tools that may be found and used on systems that have restricted compiler access.

GDB / DBX

“gdb” is a software debugger in Linux and “dbx” is essentially the same in UNIX. These commands are commonly found on systems where compilers have been removed as many system administrators are uncertain of their use.

There are many useful tutorials on the web for both gdb and dbx. Some of these include:

1. http://www.ece.unm.edu/faculty/jimp/310/nasm/gdb.pdf

2. http://dirac.org/linux/gdb/

These are highly advanced tools, so I have left them to the end of this paper. For an attacker or tester, the boon of finding them on a system cannot be beaten. These tools are primarily used when looking for exploitable flaws on a system.

If you can copy an executable from the system, this can be run and verified on another *NIX system. Any exploitable flaws can then be discovered and used in the testing and validation process.
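As a minimal illustration (the binary name and the breakpoint address are hypothetical), a copied binary can be loaded into gdb and examined without any source code:

gdb -q ./suspect_binary
(gdb) info functions                 # list the functions in the binary
(gdb) disassemble main               # dump the assembly for main
(gdb) break *0x08048450              # hypothetical address of a suspect call
(gdb) run AAAAAAAAAAAAAAAA           # run with a test argument and observe behaviour
(gdb) info registers                 # inspect the registers after a fault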

objdump

The “objdump” command is a disassembler, similar in purpose to gdb, but it is not a debugger. This difference means that you can disassemble an executable binary without actually having to execute it. This can come in handy when you are looking for poorly constructed binaries (e.g. those with stack overflows) but are not ready to execute them.

This also gets around the issue where a binary has read privileges for a user account used by the tester but not execute rights.
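For example (the binary name is hypothetical), GNU objdump can disassemble a binary and list its imports without ever running it:

objdump -d ./suspect_binary | less      # disassemble the executable sections
objdump -d -M intel ./suspect_binary    # the same listing in Intel syntax
objdump -T ./suspect_binary             # dynamic symbols (library calls the binary imports)
objdump -s -j .rodata ./suspect_binary  # dump read-only data such as embedded strings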

readelf

The “readelf” command is similar to “objdump” with more detailed information being provided on ELF headers (Executable and Linking Format). It is used in the analysis of executable binary files to view the GOT (Global Offset Table) and the PLT (Procedural Linkage Table).
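A few invocations illustrate this (the binary name is again hypothetical):

readelf -h ./suspect_binary     # ELF header: type, architecture, entry point
readelf -S ./suspect_binary     # section headers, including .got and .plt
readelf -r ./suspect_binary     # relocation entries that populate the GOT
readelf -d ./suspect_binary     # dynamic section: the shared libraries required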

ltrace / strace

The “ltrace” tool is used to intercept and record library calls; it is similar to “strace”. The “ltrace” command executes a program, recording all of the library calls made and any signals received. “strace” does the equivalent for system calls, recording each system call made and each signal received.
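A short example of each (the binary and output file names are hypothetical):

ltrace -f -o ltrace.out ./suspect_binary        # follow children, log library calls to a file
strace -f -o strace.out ./suspect_binary        # the same approach for system calls
strace -e trace=open,connect ./suspect_binary   # show only file opens and network connects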

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

NDIFF for incident detection

A good way to see changes to the network is with a tool called ndiff.

Ndiff is a tool that utilizes nmap output to identify the differences, or changes, that have occurred in your environment. Ndiff can be downloaded from http://www.vinecorp.com/ndiff/. The application requires that Perl is installed in addition to nmap. The fundamental use of ndiff involves comparing a baseline file against a more recent scan: the “-b” option selects the baseline file, the “-o” option selects the file to be tested against it, and the “-fmt” option selects the reporting format.

Ndiff can query the system’s port states or even test for types of hosts and Operating Systems using the “-output-ports” or “-output-hosts” options.

The options offered in ndiff include:

ndiff [-b|-baseline <file-or-:tag>] [-o|-observed <file-or-:tag>]
      [-op|-output-ports <ocufx>] [-of|-output-hosts <nmc>]
      [-fmt|-format <terse | minimal | verbose | machine | html | htmle>]

Ndiff output may be redirected to a web page:

ndiff -b base-line.txt -o tested.txt -fmt machine | ndiff2html > differences.html

The output file, “differences.html”, may be displayed in a web browser. This will separate hosts into three main categories:

  • New Hosts,
  • Missing Hosts, and
  • Changed Hosts.

The baseline file (base-line.txt) should be created as soon as a preliminary network security exercise has locked down the systems and mapped what is in existence. It would then be updated through the change control process: any authorized changes are added to the “map”, while any unauthorized changes or control failures in the change process will stand out as exceptions.
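As a sketch (the network range, file names and scan options are examples only, and nmap output flags vary a little between versions), the baseline and a nightly comparison might look like:

nmap -sS -O -oG base-line.txt 192.168.1.0/24     # create the baseline after lock-down
nmap -sS -O -oG tested.txt 192.168.1.0/24        # re-scan (e.g. nightly from cron)
ndiff -b base-line.txt -o tested.txt -fmt machine | ndiff2html > differences.html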

If a new host has appeared on the network map that has not been included in the change process and authorization, it will stand out as an exception. This reduces the volume of testing that needs to be completed.

Further, if a host appears in the “Changed Hosts” section of the report, you know what services have been added. This is again going to come back to a control failure in the change process or an unauthorized change. This unauthorized change could be due to anything from an internal user installing software without thinking or an attacker placing a trojan on the system. This still needs to be investigated, but checking an incident before the damage gets out of hand is always the better option.

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

An anti-forensics dd primer

dd is the Swiss Army knife of file tools – with /dev/tcp it can also be a network tool (but nc is simpler).

First we need the basics of dd. For this we have the man page (simple to obtain with “man dd”) and some definitions; the options relevant here are paraphrased from the man file below.

For the purpose of a task such as reversing files and swapping them, we need to concentrate on the following options:

  • bs – This is the block size. Setting “bs=1” means that we can use dd as a byte-level (instead of a block-level) tool. Although it slows the process down compared with a block copy, we are not looking at how fast we can copy here.
  • skip – this tells dd to skip “n” blocks before copying. In our case, with “bs=1”, the blocks are single bytes.

What we are going to do is start with “n” set to the offset of the last byte in the file. We will loop the dd command to copy byte “n”, then byte “n − 1”, then “n − 2”, … down to byte 1. This means byte n is copied to position 1, byte “n − 1” to position 2, …, and byte 1 to position n.

In other words, we need to copy byte “n − i” of the source file to position “i + 1” of the destination file.
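Before looking at the scripts, a quick worked illustration of this mapping (the file names are arbitrary): a four-byte file containing “abcd” reverses to “dcba”.

printf 'abcd' > demo.txt
for i in 3 2 1 0; do
    dd conv=noerror bs=1 count=1 skip=$i if=demo.txt >> demo.rev 2>/dev/null
done
cat demo.rev     # prints “dcba”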

How to reverse a file with dd

Reversing a file is actually fairly simple; a small shell script run with the length of the file (in bytes) is all that is required. You can either use a default block size (where whole blocks will be moved into reverse order), or set the block size to 1 in order to completely reverse the file. The flag “bs=1” is added in order to copy the entire file in reverse, byte by byte.

If the size of the file and its name are known beforehand, the script is particularly simple (note that this script uses the ‘count’ command, which is not found on all systems; you may try seq instead):

j=[file_size]
F=[file_to_copy]
for i in `count 0 $j`; do
    # skip counts back from the end of the file so the bytes come out in reverse order
    dd conv=noerror bs=1 count=1 skip=`expr $j - $i` if=$F >> $F.out
done

 

If you do not know the size of the file in advance (or you want to incorporate this into a script that changes multiple files at once), the following script can be used; it works the file size out for itself. The script is a little messy (I have not made any effort to tidy it up), but it does the trick.

#! /bin/bash
# This is a small utility script that will reverse the 
# file that a user inputs
# It is not coded securely and presumes the directory for a 
# number of command - change
# this to run it in a real environment. The main thing is
# a proof of concept anti-forensic tool.
# This script by reversing files will make the file
# undetectable as a type of file by commercial
# file checkers. Run it in reverse to get the original back.
#
# Author: Craig S Wright

#Set the file to reverse
echo "Enter the name (and path if necessary) of the file you want to reverse:"; read FILE

# Work out the file size
SIZE_OF_FILE=`/bin/ls -l $FILE | awk '{print $5}'`
i=0

#The script - not pretty - but the idea was all I was
# aiming at

K=`expr $SIZE_OF_FILE - $i`

/bin/dd conv=noerror bs=1 skip=$K if=$FILE count=1 > $FILE.out
i=`expr $i + 1`

J_Plus=`expr $SIZE_OF_FILE + 1`

while [ "$i" != "$J_Plus" ] do
    K=`expr $SIZE_OF_FILE - $i`
    /bin/dd conv=noerror bs=1 skip=$K if=$FILE count=1 >> $FILE.out
    i=`expr $i + 1`
done

 

To go a little further and add some options, I have included the following example. I have NOT added input checking or other NECESSARY security controls. This is a quick and nasty example only. Please fix the paths and add input checking if you want to run it.

The following script is called reverse.sh:

#! /bin/bash
#
# reverse.sh
#
# Set the file to reverse - I DO NOT check if the file
# actually exists - you should!
echo "Enter the name (and path if necessary) of the file you want to reverse:"; read FILE

# Default File output = FILE.out
FILE_OUT=$FILE.out

# Set the file where the reversed file is to be saved - I DO
# NOT check if the file actually exists - you should!

echo "Enter the name (and path if necessary) of the file you want the output saved as (must be different to the input):"; read $FILE_OUT

#Set the Block Size. This will default to BS=1 (one byte) for dd
BS_SIZE=1
echo "Enter the Block Size (the default = 1 byte):"; read BS_SIZE
BS_SIZE=${BS_SIZE:-1}    # fall back to the default if nothing was entered

# Work out the file size
SIZE_OF_FILE=`/bin/ls -l $FILE | awk '{print $5}'`

i=0

#The script - not pretty - but the idea was all I was
# aiming at

K=`expr $SIZE_OF_FILE - $i`

/bin/dd conv=noerror bs=$BS_SIZE skip=$K if=$FILE count=1 > $FILE_OUT

i=`expr $i + 1`

J_Plus=`expr $SIZE_OF_FILE + 1`

while [ "$i" != "$J_Plus" ] do
    K=`expr $SIZE_OF_FILE - $i`
    /bin/dd conv=noerror bs=$BS_SIZE skip=$K if=$FILE count=1 >> $FILE_OUT
    i=`expr $i + 1`
done

# The end...

 

To use the previous script enter:

$ ./reverse.sh

Enter the name of the file you want to reverse and the block size (best left at 1 byte). This will return the byte-wise reversed file. If you want to verify it, run it twice and use “diff” to validate that the same file is returned: reversing the reverse gets the original back.

This works on text and binary files and, with a little tweaking, you can reverse headers but leave the body the same, reverse the body after skipping the file header, and many more options.
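As a rough sketch of the header-skipping variation (the file names are hypothetical, and the 54-byte header size assumes a standard uncompressed BMP), the header is copied through untouched and only the body is reversed:

HDR=54
SIZE=`/bin/ls -l image.bmp | awk '{print $5}'`
/bin/dd bs=1 count=$HDR if=image.bmp > image_rev.bmp      # keep the header intact
i=$HDR
while [ $i -lt $SIZE ]; do
    K=`expr $SIZE + $HDR - 1 - $i`
    /bin/dd conv=noerror bs=1 count=1 skip=$K if=image.bmp >> image_rev.bmp
    i=`expr $i + 1`
done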

I am yet to find a forensic tool that will find reversed text if you are not looking for it. Also, this is a simple way of passing tools when an IDS/IPS is in use. The reversed files are not found in default scans. This has been tested with several of the leading IDS products; in all cases, it was possible to send tools without setting off an alert.

With time and practice, you can create a loader script that will take the reversed file and execute it directly into memory. This leaves no copy of the original file to be uncovered with a Host based IDS.

The script example above has the file output written without checking if a file exists. The following is an example of how you can add a small amount of script to verify that you are not overwriting an existing file:

if [ -f $FILE ]; then
    echo "The file [$FILE] that you are seeking to write to already exists"
    echo "Do you want to overwrite the existing file? ( y/n ) : \c"
    read RESPONSE
    if [ "$RESPONSE" = "n" ] || [ "$RESPONSE" = "N" ]; then
        echo "The file will not be overwritten and the process will abort!"
        exit
    fi
fi

 

It is also a good idea to use the full path in a script. Users can change the path variables they are exposed to, and unless you set these (either explicitly or by having the script load a known profile), an attacker could use a system script to run their own binary.
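A simple way to do this is to set a known path explicitly at the top of every script, for instance:

#!/bin/bash
# Do not trust the caller's environment - use a fixed, known-good path
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH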

The key to successfully testing a system and validating the security state of that system is to think outside the box. For instance, there are several reasons why you would want to reverse a file for testing:

  1. Attackers could do this to bypass filters, controls and other protections
  2. Anti-forensics, finding the needle in a haystack is difficult – esp. when the tools do not help
  3. Pen Testing – just as in point 1 for attackers, the tester can use this to load tools without being detected by filters or through malware detection engines

Once a file has bypassed the perimeter controls, getting it to work inside an organization is simple. Hence a means to bypass controls is of interest to those on the attack side of the equation (both validly and less so).

Next, it is a concern to the forensic professional. Hiding files through reversing them makes the process of discovery a proverbial search for the needle in a haystack.

An interesting effect to try is to maintain the header on a bitmap file (i.e. skip the first portion of the file and reverse the later parts). What ends up occurring is that the image can be recreated upside down. All types of interesting effects can be found.

As always, the cards are stacked in favor of the attacker. In a contest that pits those bound by rules against those with none, the rules lose more often than not. This does not mean that we give up, only that we have to understand the odds that are stacked against us, and remember that people naturally err. This is when we (the “good” guys) win.

For security professionals to be successful, we need to think outside the box.

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Understanding *NIX File Linking (ln)

The “ln” command is an important tool in any Unix admin's arsenal, and attackers use it too, so it is essential that forensic analysts understand it. It is used to:

  1. Create a link to a target file with a selected name.
  2. Create a link to a target file in the current directory.
  3. Create links to each target in a directory.

The “ln” command will by default produce hard links. Symbolic links are created with the “--symbolic” option set (or “-s”). In order to create a hard link, the target file has to exist. The primary formats of the command are:

  • ln [OPTION]… [-T] <TARGET> <LINK_NAME>
  • ln [OPTION]… <TARGET>
  • ln [OPTION]… -t <DIRECTORY> <TARGET>

Some malicious uses of ln are in hiding files, though perhaps not very well, and creating subterfuge by wrapping legitimate programs. The “ln” command need not alter any files on the host. Yet, it can be used to redirect a user or process to another file. It can also be used to replace a file or to create one that is linked to a rogue binary in another location.

It is not only binaries. Configuration files and even devices (/dev/*) can be linked. As an example, many users do not check their path. It is all too common to see users and even administrators run a binary without specifying the path to the file. For instance, running “sh” instead of entering “/bin/sh”. Worse, path environment variables and many directories are often poorly configured.

In previous security assessments, I have noted this issue many times. The following example is from a real financial system.

  • The path for root was “/usr/local/bin:$PATH”
  • The directory permissions on the “/usr/local/bin” directory were 775 allowing common group members on the system to create files.
  • The shell “/bin/sh” existed, but there was no “/usr/local/bin/sh”.

An attacker could create a malicious script on a virtual file system and add a link to it at “/usr/local/bin/sh”. For instance,

“ln -s /dev/.. /ramcache/evil_script /usr/local/bin/sh”

Where the file “/dev/.. /ramcache/evil_script” can be as simple as a script that does a set of commands and returns to the shell such as:

#!/bin/sh
/dev/.. /ramcache/do.evil.stuff.sh
/bin/rm -rf /usr/local/bin/sh
rm -rf /dev/.. /ramcache
/bin/sh

 

The question arises, “why not just copy a file directly to the link location“. The answer is anti-forensics. It is possible to load files into memory space and into memory file systems (e.g. /dev/.. /ramcache). It is far more difficult to recover memory file systems than to recover deleted files on disks.

The symbolic link file will be recoverable, but possibly not its contents.

In the example above, as the wrapper shell is first in the path, the linked file in “/dev/.. /ramcache” will run instead of the desired “/bin/sh” shell command. This could occur through poor scripting (where the full path is not included in the script header) or when a “lazy” user or administrator types “sh” rather than “/bin/sh”.

Another example is to link the “touch” command into a memory location. The “touch” command can be used to change the last modified and accessed times on a file. This can hide the fact that a command was executed or that a file was read. The problem is that the touch command will display a modified access time. This is an indication of its being run (logs may also exist, but these are checked less often).

A hard link cannot cross file systems, so in practice a full copy of the command is placed in the *NIX memory file system (or a link is created within it). The touch command can then be run from memory and leave only volatile data as evidence of its use.
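As an illustration (the paths are hypothetical and assume a memory-backed mount such as /dev/shm), the sequence might look like:

cp /bin/touch /dev/shm/.t                  # place a copy of touch in a memory-backed file system
/dev/shm/.t -r /etc/hosts /tmp/dropped     # give the dropped file the timestamps of /etc/hosts
rm /dev/shm/.t                             # remove the copy; only volatile traces remain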

An attacker may make use of this technique and it is important to understand the implications.

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Unix System Accounting and Process Accounting

Accounting reports created by the system accounting service present the *NIX administrator with the information to assess current resource assignments, set resource limits and quotas, and predict future resource requirements. This information is also valuable to the forensic analyst and allows for the monitoring of system resourcing. This data can be a means of finding what processes and resources have been used and by which user.

When system accounting has been enabled on a *NIX system, the collection of statistical data will begin when the system starts, or at least from the moment that the accounting service is initiated. The standard data collected by system accounting will include the following categories:

  • Connect session statistics
  • Disk space utilization
  • Printer use
  • Process use

The accounting system process starts with the collection of statistical data from which summary reports can be created. These reports can assist in system performance analysis and offer the criteria necessary to establish an impartial customer charge-back billing system, along with many other functions related to the monitoring of the system. A number of the individual categories of statistics collected are listed in the sections that follow.

Connect Session Statistics

Connect session statistics allow an organization to bill, track or charge access based on the tangible connect time. Connect-session accounting data, associated with user login and logout, is recorded by the init and login commands. When a user logs into the *NIX system, the login program makes an entry in the “wtmp” file. This file will contain the following user information:

  • Date of login/logout
  • Time of login/logout
  • Terminal port
  • User name

This data can be utilized in the production of reports containing information valuable to the computer forensic investigator, system security tester and system administrator (a short example follows the list below). Some of the information that can be extracted includes:

  • Connect time seconds used,
  • Date and starting time of connect session,
  • Device address of connect session,
  • Login name,
  • Number of prime connect time seconds used,
  • Number of nonprime connect time seconds used,
  • Number of seconds elapsed from Jan 01st 1970 to the connect-session start time,
  • Process Usage,
  • User ID (UID) associated with the connect-session.
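On systems with the accounting package installed, a few commands pull this information out of “wtmp” directly (file locations and package names vary between systems):

last               # every recorded login and logout, most recent first
last reboot        # system boot history from the same file
ac -p              # total connect time, summarised per user
ac -d              # total connect time, summarised per day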

It is also possible to gather statistics about individual processes using system accounting. Some areas that may be collected include:

  • Elapsed time and processor time consumed by the process,
  • First eight characters of the name of the command,
  • I/O (Input/output) statistics,
  • Memory usage,
  • Number of characters transferred,
  • Number of disk blocks read or written by the process ,
  • User and group numbers under which the process runs.

Many *NIX systems maintain statistical information in a “pacct” or process account database or accounting file. This database is commonly found in the “/var/adm/pacct” file but, like many *NIX log files, its location will vary from system to system. It is the accounting file used by many of the system and process accounting commands. When a process terminates, the kernel writes information specific to that process into the “pacct” file. This file consists of the following information:

  • Command used to start the process
  • Process execution time
  • Process owner’s user ID

When system accounting is installed and running on a *NIX system, commands to display, report, and summarize process information will be available. Commands such as “ckpacct” can be used by the administrator or system security tester to ensure that the process accounting file (“pacct”) remains under a set size and thus is stopped from either growing too large or possibly impacting system performance in other ways.
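Once process accounting is running, the recorded data can be queried. The following is a short example using the tools shipped with the Linux psacct/acct package (file locations differ between systems):

accton /var/log/account/pacct    # turn process accounting on (the path varies by system)
lastcomm                         # list recently executed commands, most recent first
lastcomm root                    # entries matching “root” (as user, command name or tty)
sa                               # summary of commands, call counts and CPU time
sa -m                            # the same information summarised per user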

Disk Space Utilization

System accounting provides the ability for the system security tester to receive information concerning the disk utilization of users. As it is possible to restrict users to a specified disk usage limit, the system security tester may need to validate usage through a disk quota system. This may be monitored and tested to ensure users are adhering to limits; if it is not, an unwary client may be charged fees that are properly associated with another account. Disk usage commands perform three basic functions:

  • Collect disk usage by filesystem,
  • Gather disk statistics and maintain them in a format that may be used by other system accounting commands for further reporting,
  • Report disk usage by user.

Note: it is necessary to be aware that users can avoid charges and quota restrictions for disk usage by changing the ownership of their files to that of another user. The “chown” command provides a simple method for users to change ownership of files. Coupled with the ability to set access permissions (such as through the use of the “chmod” command), a user could create a file owned by another party that they could still access.

Printer Usage

Printer usage data is stored in the “qacct” file (this is commonly located in “/var/adm/qacct” on many systems though this varies). The “qacct” file is created using an ASCII format. The qdaemon writes ASCII data to the “qacct” file following the completion of a print job. This file records printer queue data from each print session and should at a minimum contain the following fields:

  • User Name
  • User number (UID)
  • Number of pages printed

Automatic Accounting Commands

To accumulate accounting data, the *NIX system needs to have a number of command entries installed in the “crontab” file (e.g. the “/var/spool/cron/crontabs/adm” file on many *NIX’es, but this will change from system to system). The adm user owns the accounting files and processes, and its cron entries run the accounting commands. These commands have been designed to be run using cron in batch mode, but it is still possible to execute them manually from a command line or script.

  • ckpacct   Controls the size of the /var/adm/pacct file. When the /var/adm/pacct file grows larger than a specified number of blocks (default = 1000 blocks), it turns off accounting and moves the file off to a location equal to /var/adm/pacctx (x is the number of the file). Then ckpacct creates a new /var/adm/pacct for statistic storage. When the amount of free space on the filesystem falls below a designated threshold (default = 500 blocks), ckpacct automatically turns off process accounting. Once the free space exceeds the threshold, ckpacct restarts process accounting.
  • dodisk    Dodisk produces disk usage accounting records by using the diskusg, acctdusg, and acctdisk commands. By default, dodisk creates disk accounting records on the special files. These special filenames are usually maintained in “/etc/fstab” or “/etc/filesystems”.
  • monacct   Uses the daily reports created by the commands above to produce monthly summary reports.
  • runacct   Maintains the daily accounting procedures. This command works with the acctmerg command to produce the daily summary report files sorted by user name.
  • sa1  System accounting data is collected and maintained in binary format in the file /var/adm/sa/sa{dd}, where {dd} is the day of the month.
  • sa2  The sa2 command removes reports from the “../sa/sa{dd}” file that have been there over a week. It is also used to write a daily summary report of system activity to the “../sa/sa{dd}” file.

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Finer Points of Find

The *NIX “find” command is probably one of the system security tester’s best friends on any *NIX system. This command allows the system security tester or digital forensic analyst to process a set of files and/or directories in a file subtree. In particular, the command has the capability to search based on the following parameters:

  • where to search (which pathname and the subtree)
  • what category of file to search for (use “-type” to select directories, data files, links)
  • how to process the files (use “-exec” to run a process against a selected file)
  • the name of the file(s) (the “-name” parameter)
  • perform logical operations on selections (the “-o” and “-a” parameters)

One of the key problems associated with the “find” command is that it can be difficult to use. Many professionals with years of hands-on experience on *NIX systems still find this command to be tricky. Adding to this confusion are the differences between *NIX operating systems. The find command provides a complex subtree traversal capability. This includes the ability to traverse excluded directory tree branches and also to select files and directories with regular expressions. As such, the specific types of file system searched with this command may be selected.

The find utility is designed for the purpose of searching files using directory information. This is in effect also the purpose of the “ls” command, but find goes far further. This is where the difficulty comes into play. Find is not a typical *NIX command with a large number of parameters; it is rather a miniature language in its own right.

The first option in find consists of setting the starting point, or subtrees, under which the find process will search. Unlike many commands, find allows multiple points to be set and reads each initial option before the first “-” character. That is, one command may be used to search multiple directories in a single search. The paper “Advanced techniques for using the *NIX find command” by B. Zimmerly provides an ideal introduction to the more advanced features of this command, and it is highly recommended that any system security tester become familiar with it. This section of the chapter is based on much of his work.

The complete language of find is extremely detailed, consisting of numerous separate predicates and options. GNU find is a superset of the POSIX version and actually contains an even more detailed language structure. This difference will only matter within complex scripts, as it is highly unlikely that this level of complexity would be used effectively in interactive work. The main predicates and options include:

  • -name     True if the pattern matches the current file name. Simple shell wildcard patterns (shell globs) may be used. A backslash (\) is used as an escape character within the pattern. The pattern should be escaped or quoted. If you need to include parts of the path in the pattern, GNU find provides the “-wholename” predicate.
  • -atime / -ctime / -mtime      Search on the file's last access time, status-change time or modification time, measured in days (GNU find also offers -amin, -cmin and -mmin for minutes). The time interval is given as the parameter to -atime, -ctime or -mtime, and the values are either positive or negative integers.
  • -fstype type True if the filesystem to which the file belongs is of type “type”. For example, on Solaris mounted local filesystems have type ufs (Solaris 10 added zfs). For AIX the local filesystem is jfs or jfs2 (journaled file system). If you want to traverse NFS filesystems you can use nfs (network file system). If you want to avoid traversing network and special filesystems you should use the “-local” predicate and, in certain circumstances, “-mount”.
  • “-local”     This option is true where the file system type is not a remote file system type.
  • “-mount”  This option restricts the search to the file system containing the directory specified. The option does not list mount points to other file systems.
  • “-newer/-anewer/-cnewer baseline”         The time of modification, access or status change is compared with the same timestamp of the file used as a baseline.
  • “-perm permissions”    Locates files with certain permission settings. This is an important command to use when searching for world-writable files or SUID files.
  • “-regex regex”     The GNU version of find allows for file name matches using regular expressions. This is a match on the whole pathname not a filename. The “-iregex” option provides the means to ignore case.
  • “-user”  This option locates files that have a specified owner. The option “-nouser” locates files without a known owner: where there is no matching user in “/etc/passwd”, this search option will match on the file's numeric user ID (UID). Files are often left in this state when extracted from a tar archive.
  • “-group” This option locates files that are owned by a specified group. The option “-nogroup” is used for searches where no known group matches the file's numeric group ID (GID).
  • “-xattr”     This is a logical function that returns true if the file has extended attributes.
  • “-xdev”  Same as the “-mount” primary. This option prevents the find command from traversing a file system different from the one specified by the Path parameter.
  • “-size”   This parameter is used to search for files with a specified size. The “-size” attribute allows the creation of a search that can specify how large (or small) the files should be to match. You can specify your size in kilobytes and optionally also use + or – to specify size greater than or less than specified argument. For instance:
    • find /usr/home -name “*.txt” -size 4096k
    • find /export/home -name “*.html” -size +100k
    • find /usr/home -name “*.gif” -size -100k
  • “-ls”     list the current file in “ls -dlis” format on standard output.
  • “-type”   Locates a certain type of file. The most typical options for -type are:
    • d    A Directory
    • f    A File
    • l    A Link

Logical Operations

Searches using “find“ may be created using multiple logical conditions connected using the logical operations (“AND”, “OR” etc). By default options are concatenated using AND. In order to have multiple search options connected using a logical “OR” the code is generally contained in brackets to ensure proper order of evaluation.

For instance \(-perm -2000 -o -perm -4000 \)

The symbol “!” is used to negate a condition (it means logical NOT). “NOT” should be specified with a backslash before the exclamation point ( \! ).

For instance find . \! -name “*.tgz” -exec gzip {} \;

The “\( expression \)” format is used in cases where there is a complex condition.

For instance find / -type f \( -perm -2000 -o -perm -4000 \) -exec /mnt/cdrom/bin/ls -al {} \;

Output Options

The find command can also perform a number of actions on the files or directories that are returned. Some possibilities are detailed below:

  • “-print”   The “print” option displays the names of the files on standard output. The output can also be piped to a script for post-processing. This is the default action.
  • “-exec”    The “exec” option executes the specified command. This option is most appropriate for executing moderately simple commands.

Find can execute one or more commands for each file it returns, using the “-exec” parameter. Unfortunately, one cannot simply enter the command: the braces “{}” stand in for the current file name, and the command must be terminated with an escaped or quoted semicolon (\; or ‘;’).

For instance:

  • find . -type d -exec ls -lad {} \;
  • find . -type f -exec chmod 750 {} ‘;’
  • find . -name “*rc.conf” -exec chmod o+r ‘{}’ \;
  • find . -name core -ctime +7 -exec /bin/rm -f {} \;
  • find /tmp -exec grep “search_string” ‘{}’ /dev/null \; -print

An alternative to the “-exec” parameter is to pipe the output into the “xargs” command. This section has only just touched on find and it is recommended that the system security tester investigate this command further.
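For instance (a GNU find/xargs sketch; the paths and patterns are examples only):

find / -type f -perm -4000 -print0 | xargs -0 ls -al          # list every SUID file on the system
find /etc -name "*.conf" -print0 | xargs -0 grep -l password  # config files that mention "password"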

A commonly overlooked aspect of the “find” command is in locating files that have been modified recently. The command:

find / -mtime -7 -print

displays, recursively from the ‘/’ directory, the files that have been modified within the last seven days. The command:

find / -atime -7 -print

does the same for last access time. When access is gained to a system, file times change with every file that is read or run. Each change to a file updates the modified time, and each time a file is executed or read, the last-accessed time is updated.

These (the last modified and accessed times) can be updated using the touch command.

A Summary of the find command

Effective use of the find command can make any security assessment much simpler. Some key points to consider when searching for files are detailed below:

  • Consider where to search and what subtrees will be used in the command, remembering that multiple paths may be selected
    • find /tmp /usr /bin /sbin /opt -name sar
  • The find command allows for the ability to match a variety of criteria
  • -name     search using the name of the file(s). This can be a simple regex.
  • -type     what type of file to search for ( d — directories, f — files, l — links)
  • -fstype typ   allows for the capability to search a specific filesystem type
  • -mtime x File was modified “x” days ago
  • -atime x File was accessed “x” days ago
  • -ctime x File's status (its inode information) changed “x” days ago
  • -size x       File is “x” 512-byte blocks big
  • -user user    The file’s owner is “user”
  • -group group The file’s group owner is “group”
  • -perm p   The file’s access mode is “p” (as either an integer/symbolic expression)
  • Think about what you will actually use the command for, and consider the options available to either display the output or send it to other commands for further processing
    • -print    display pathname (default)
    • -exec     allows for the capability to process listed files ( {} expands to current found file )
  • Combine matching criteria (predicates) into complex expressions using the logical operations -o and -a (the default binding).

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Finding out about other users on a Linux system

These commands are used to find out about other users on a *NIX host. When testing the security of a system covertly (such as when engaged in a penetration test) it is best to stop running commands when the system administrator is watching. These commands may also be useful for digital forensics investigators and incident response personnel.

w

The ‘w’ command displays any user logged into the host and their activity. This is used to determine if a user is ‘idle’ or if they are actively monitoring the system.

who

The ‘who’ command is used to find both which users are logged into the host as well as to display their source address and how they are accessing the host. The command will display if a user is logged into a local tty (more on this later) or is connecting over a remote network connection.

finger <user_name>

The ‘finger’ command is rarely used these days (but does come up from time to time on legacy and poorly configured systems). The command provides copious amounts of data about the user who is being “fingered”. This information includes the last time that user read their mail and any log in details.

last -1 <user_name>

The ‘last’ command can be used to display the “last” user to have logged on and off the host and their location (remote or local tty). The command will display a log of all recorded logins and log-offs if no options are used.

When the <user_name> option is provided, this will display all of the user’s log-ins to the system. This is used when profiling a system administrator to discover the usual times that person will be logged into and monitoring a system.
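Putting a few of these together (the user name is an example only), a quick check before doing anything noisy might be:

w                  # who is logged in right now and what they are doing
who -u             # logged-in users with their idle times
last -5 root       # the last five logins for the account being profiled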

whoami

This command displays the username that is currently logged into the shell or terminal session.

passwd <user_name>

The ‘passwd’ command is used to change your password (when run with no options) or that of another user (if you have permission to do this).

kill PID

This command “kills” any processes with the PID (process ID) given as an option. The ‘ps’ command (detailed later in the paper) is used to find the PID of a process.

This command can be used to stop a monitoring or other security process when testing a system. The ‘root’ user can stop any process, but other users on a host can only stop their own (or their group's) processes by default.

du <filename>

The ‘du’ command displays the disk usage (that is the space used on the drive) associated with the files and directories listed in the <filename> command option.

df

The ‘df’ command is used to display the amount of free and used disk space on the system. This command displays this information for each mounted volume of the host.

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Unix Network and System profiling

It is essential to identify network services running on a UNIX host as a part of any review. To do this, the reviewer needs to understand the relationship between active network services, local services running on the host and be able to identify network behavior that occurs as a result of this interaction. There are a number of tools available for any UNIX system that the reviewer needs to be familiar with.

Netstat

Netstat lists all active connections as well as the ports where processes are listening for connections. The command “netstat -p -a --inet” (or the equivalent on other UNIX’es) will print a listing of this information. Not all UNIX versions support the “netstat -p” option; in that case other tools may be used.

Lsof

The command, “lsof” allows the reviewer to list all open files where “An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, or a stream or network file”.

LSOF is available from  ftp://vic.cc.purdue.edu/pub/tools/unix/lsof/lsof.tar.gz
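A few invocations are particularly useful when profiling a host (the PID and port are examples only):

lsof -i            # all open network “files” - listening sockets and connections
lsof -i :80        # the processes with TCP/UDP port 80 open
lsof -p 1234       # files opened by a particular process ID
lsof +L1           # open files that have been unlinked from disk - a classic attacker trick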

Ps

The command “ps” reports a snapshot of the current processes running on a UNIX host. Some examples from the “ps” man page of one UNIX system are listed below.

To see every process on the system using standard syntax:

  • ps -e
  • ps -ef
  • ps -eF
  • ps -ely

To see every process on the system using BSD syntax:

  • ps ax
  • ps axu

To print a process tree:

  • ps -ejH
  • ps axjf

To get info about threads:

  • ps -eLf
  • ps axms

To get security info:

  • ps -eo euser,ruser,suser,fuser,f,comm,label
  • ps axZ
  • ps -eM

To see every process running as root (real & effective ID) in user format:

  • ps -U root -u root u

To see every process with a user-defined format:

  • ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
  • ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
  • ps -eopid,tt,user,fname,tmout,f,wchan

Print only the process IDs of syslogd:

  • ps -C syslogd -o pid=

Print only the name of PID 42:

  • ps -p 42 -o comm=

Craig Wright is a Director with Information Defense in Australia. He holds the GSE-Malware and GSE-Compliance certifications from GIAC and has completed the GSE as well. He is a perpetual student with numerous postgraduate degrees, including an LLM specializing in international commercial law and ecommerce law and a Master's degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Master's degree (Masters in System Development) at Charles Sturt University, where he lectures in a Master's degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.