Detecting Audit Prep versus Good Processes in Place – Part 2

Filed under Compliance, How To, Unix Auditing

This is the second part of two postings that touch upon the notion of an organization’s IT personnel preparing for an audit versus having good practices in place. While the previous posting focused on the Windows environment (see Detecting Audit Prep versus Good Processes in Place – Part 1), this one will focus more on the Linux/UNIX side of things.

To somewhat mirror the previous blog entry, we’ll look at when passwords were last changed, where some of the password controls may be set, and when patching was last performed. One thing with UNIX/Linux, though: a particular technical control may not be implemented exactly the same way from flavor to flavor- that is, CentOS and openSUSE may do things one way, whereas Ubuntu does something a little differently, as will Solaris, as will Debian, as will OS X, etc… And things can arguably be more complex in the UNIX world, as in some cases you can enforce a given control in more than one way. So when auditing UNIX/Linux systems, your scripts may require that you execute different commands (sometimes with slightly different switches), look in more places, and examine more files to obtain the desired output for your analysis.

For the purpose of this posting though, we’ll focus on just one or two approaches and Linux variants to try and determine whether audit preparation may have taken place for a given technical control. Again, do keep in mind that in some cases there may be other ways to enforce a particular control, and just because you don’t find evidence in one place doesn’t mean it’s not being done in a different way somewhere else. But this is what prompts dialogue with the client, which is an important part of the audit service you perform- facilitating a transparent analysis of the data while providing as much meaningful and actionable feedback to the client as possible.

Regarding Simple Password Controls and User Accounts

Continuing along with the previous posting, we’ll audit systems for a large organization with decentralized IT.  First, let’s look at one of the locations where “simple” password controls may be set for some RPM-based Linux OSes (e.g. CentOS, openSUSE, SUSE Linux Enterprise Server, Fedora, etc.). The file /etc/login.defs can contain password aging controls and the minimum password length for local user accounts. Obtaining the contents of login.defs is as simple as using the cat command to dump the file’s contents to a text file for off-line analysis. An example of how to do this is below:

cat /etc/login.defs > ./loginDefs.txt

Keep in mind that the above is a simplistic example. Ideally, collecting information from a Linux/UNIX-based system will be done via a script that collects dozens if not hundreds of individual system artifacts for offline analysis, so as to get a more accurate picture of the system- and possibly so the results can be compared with information gathered from other systems, to try and determine whether an issue may be systemic.
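For illustration, a bare-bones collection script might look something like the sketch below (the script name, artifact list, and output file are hypothetical and would be expanded considerably for real fieldwork):

#!/bin/sh
# collect_artifacts.sh - minimal evidence-collection sketch (hypothetical)
OUT="./$(hostname)_artifacts_$(date +%Y%m%d).txt"
{
  echo "### Collected from $(hostname) on $(date)"
  echo "### /etc/login.defs"
  cat /etc/login.defs
  echo "### Recent logins"
  last -200
  echo "### Installed packages, newest first (rpm-based systems only)"
  rpm -qa --last 2>/dev/null
} > "$OUT"

Run with root privileges and copied off the host, this produces one text file per system that can then be analyzed off-line and compared against the output from other systems.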

The relevant information gathered from the target system’s login.defs file is below (note that some Linux/UNIX OSes may have more relevant controls in this file or in other locations):

...
PASS_MAX_DAYS 180
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
...

So OK… We have some password-related parameters enforced by the OS for local accounts. The first question to ask is “Is this consistent with organizational policy?” If not, that may be a finding already. And we notice rather quickly that the minimum password length is quite low (PASS_MIN_LEN is set to 5). Note that other password parameters/requirements may live in places like the PAM configuration files…
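For instance, on many Linux distributions the password-quality rules are enforced by modules such as pam_cracklib or pam_pwquality rather than login.defs. A quick, read-only check is sketched below; the exact file paths differ between Red Hat-style and Debian-style systems, so treat them as examples:

grep -E 'pam_(cracklib|pwquality|pwhistory|unix)' /etc/pam.d/system-auth /etc/pam.d/common-password 2>/dev/null

Parameters such as minlen, remember, and retry on any matching lines would supplement (or contradict) what you found in login.defs.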

So now, let’s look at the timestamps of when the passwords were last changed. This is stored in the /etc/shadow file. First though, a word of caution: this is a very sensitive file, and should be treated with utmost care and prudence, as it contains the password hashes for the local user accounts. You do not want to be the auditor that loses control of this information… That being said, it does allow for password testing, which can be an important part of the audit. Do keep in mind your audit objectives and scope- this is not a pen test. It’s an audit. Your objective is (most likely) not to try and crack all of the passwords, but rather to test for poor and default passwords if this kind of testing is performed at all.

For those not familiar with it, the shadow file generally has a line for each user account with the following construction:

username:passwd:last:min:max:warn:expire:disable:reserved

Where:

  • username is the username
  • passwd is the password hash
  • last is the number of days since the epoch (January 1, 1970) on which the password was last changed
  • min is the minimum number of days that must pass before a password can be changed
  • max is the maximum number of days that a password can be used (maximum age)
  • warn is the number of days that a user is warned prior to the password expiring
  • expire is the number of days after a password expires that an account is disabled
  • disable is the number of days since the epoch that the account has been disabled
  • reserved is a reserved field…

Let’s say we pulled the shadow file, and we have the two user account entries below:

user2:$1$xqbWUyd/$RmZ5k7.r9BAFQgiLeQ1vJ0:13789:0:99999:7:::
user3:$1$Cz8s9PHr$qYtIn1Pp38/QKsc3HNk920:14507:0:99999:7:::

Note that the maximum password age is set to 99999 for each of them. This indicates that the passwords never expire… So a few questions should come to mind: “Is this prudent for these user accounts?” and “Is this consistent with the organization’s policy?”  Also note that the maximum age is not the 180 days that was set in the login.defs file (the login.defs values generally apply only when an account is created; they do not retroactively update existing /etc/shadow entries)…  But now, let’s focus on the date the password was last set (the third field).

Now let’s translate/convert the password last changed dates:

  • 13789 converts to  Wednesday, October 03, 2007
  • 14507 converts to Sunday, September 20, 2009

So both are well outside the 180 day maximum password age noted in /etc/login.defs – and by quite a long time…

Note that a formula that works for the epoch value noted above is <epoch value>+25569.  The resulting value can be placed in an Excel spreadsheet and then formatted as a date.  The number 25569 is “January 1, 1970” when formatted as a date in Excel…  Also bear in mind that some Linux/UNIX variants may store the timestamp as a much longer number, in which case a slightly different formula is used to convert it to a human-readable date.
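If you would rather do the conversion on the command line than in Excel, GNU date can perform the same day-count math directly. A sketch, assuming a GNU userland and an off-line copy of the shadow file named shadow.txt (a hypothetical filename):

# convert a single day-count to a calendar date
date -u -d "1970-01-01 +13789 days" '+%A, %B %d, %Y'
# list each account with its password-last-changed date
awk -F: '$3 ~ /^[0-9]+$/ {cmd = "date -u -d \"1970-01-01 +" $3 " days\" +%Y-%m-%d"; cmd | getline d; close(cmd); print $1, d}' shadow.txt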

To help determine whether or not the accounts with passwords that have not been changed in a significant amount of time may be in use, you can use the last command to see what accounts have logged in recently:

#last -200

which will display the last 200 logins:

...
user1  sshd   10.10.10.3  Mon Apr 15 10:54 - 10:57 (00:02)
user2  sshd   10.10.10.13 Mon Apr 15 10:53 - 10:54 (00:00)
user3  sshd   10.10.21.23 Mon Apr 15 10:47 - 10:53 (00:06)
oracle pts/22 10.20.10.11 Mon Apr 15 10:37 still logged in
...

So we can see that user2 and user3 have both been logging in recently; thus, they are indeed actively used.  This is an example of some of the due diligence you need to do to come to a well-supported conclusion (and, if possible, one with little that can be refuted).
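On many Linux distributions the lastlog command is a useful complement to last here, since it reports every local account’s most recent login in a single listing (a sketch; option support varies a bit between versions):

lastlog          # most recent login for every local account
lastlog -b 180   # only accounts whose last login is more than 180 days old

Accounts showing “Never logged in” alongside years-old password change dates are good candidates for the “is this account still needed?” conversation.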

Another thing we could look at is the timestamp of when the login.defs file was last changed.  As noted in “Part 1“, let’s say the information was gathered on April 15, 2011. To get the date the file was last modified, we simply use the ls command with a few switches:

ls -latr /etc/login.defs

which has the following output:

-r-------- 1 root root 1301 Apr 11 11:31 /etc/login.defs

Notice that the timestamp is April 11 at 11:31 am. Given how close the file’s modification time is to when the information was gathered, this may raise an eyebrow about the timing relative to when the audit started (remember your professional skepticism).  Never underestimate what timestamps can tell you…  When aggregated with all of the information you analyze, they can help to “tell the story” the system holds.
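If you want that kind of timestamp evidence for more than one file, the stat command can pull the modification times for a handful of security-relevant files in one pass (GNU stat syntax assumed; the file list is only an example):

stat -c '%y %n' /etc/login.defs /etc/pam.d/* /etc/ssh/sshd_config /etc/sudoers 2>/dev/null | sort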

So from the above information (which yes, is a rather simple example), it appears that:

  • The password controls were recently changed- just prior to the audit;
  • Accounts have passwords with no expiration;
  • There are active user accounts with passwords that have not been changed in years; and
  • If you attempt to crack the MD5 password hashes, you’d find out that they are using rather poor passwords. 😉

Regarding Patching

OK…  so now you have an idea of how to tell when a password was last changed, and where the maximum password age is set for some Linux variants…  This complements one of the two items noted in  “Part 1“.  The second item is that of patching.  Unfortunately, depending on the Linux/UNIX variant, this may not be as straightforward as it is on Windows.  Some variants will only keep track of the installation date of the most current version of a particular package- that is, no real history is tracked per se, just a timestamp of when the latest version was installed.  A powerful command for Linux variants that use rpm (the RedHat Package Manager) is:

rpm -qa --queryformat "%{installtime} (%{installtime:date}) %{name}-%{version}.%{RELEASE}\n"

This command will produce output like this:

...
1331924432 (Fri 16 Mar 2012 02:00:32 PM CDT) yum-fastestmirror-1.1.16.21.el5.centos
1331924436 (Fri 16 Mar 2012 02:00:36 PM CDT) sos-1.7.9.62.el5
1304132632 (Fri 29 Apr 2011 10:03:52 PM CDT) xorg-x11-util-macros-1.0.2.4.fc6
1304132663 (Fri 29 Apr 2011 10:04:23 PM CDT) ncurses-5.5.24.20060715
1304132667 (Fri 29 Apr 2011 10:04:27 PM CDT) diffutils-2.8.1.15.2.3.el5
...

Notice you get a full epoch timestamp in seconds (which would use a different conversion formula than the day-count noted above) for when the package was installed/upgraded, its human-readable equivalent, and the package name with version.  Again though, the information pulled from the rpm database reflects only the most recent version of each installed package, so no real history is retained.  That is, if a box is updated every month, and a particular package is updated every month, you will only be able to see its latest install date, with no information retained about previous installations of that package.  In the case of trying to detect audit preparation though, you are looking to see whether a large number of the packages were installed right before the audit began.
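Two quick ways to look at that same rpm data from the “was there a burst of installs right before the audit?” angle are sketched below (standard rpm options; the awk field handling assumes the default %{installtime:date} format shown above):

# packages sorted by install time, newest first
rpm -qa --last | head -25
# rough count of packages installed per calendar day
rpm -qa --queryformat '%{installtime:date}\n' | awk '{print $2, $3, $4}' | sort | uniq -c | sort -rn | head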

For Solaris, determining when patches were installed can be fairly simple.  Just do a directory listing of /var/sadm/patch:

#ls -latr /var/sadm/patch

which contains entries like this:

...
drwxr-xr-- 2 root root 512 Aug  3  2009 118683-03
drwxr-xr-- 2 root root 512 Jul 27  2009 118777-14
drwxr-xr-- 2 root root 512 Jul 27  2009 118959-04
drwxr-xr-- 2 root root 512 Jul 27  2009 119059-47
drwxr-xr-- 2 root root 512 Jul 27  2009 119254-66
drwxr-xr-- 2 root root 512 Nov  5 14:48 119254-76
...

This is a listing of the patch/updated-package directories along with their creation dates, which correspond to the time the patch/updated package was installed (do note that someone can manually remove or move these directories)- notice that these were installed some time ago…  There is also generally a readme file in each of the subdirectories.  Its timestamp will give you an idea of when the patch was created by the vendor (Sun/Oracle).  From there, you can determine the time between creation and implementation.  The nice thing about the method Solaris uses is that you have a more complete history of patching than some other Linux/UNIX variants, as previous patch version information is retained.  This may help give you an idea of how regularly a particular Solaris-based system is patched.

And to give one more Linux example, Ubuntu has a relatively easy way to get the timestamps of its installed packages when trying to determine whether the system is updated on a regular basis- look at the contents of the dpkg.log file:

#cat /var/log/dpkg.log

which contains entries like:

...
2011-03-17 19:12:15 status unpacked empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 status half-configured empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 status installed empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 configure libevdocument2 2.30.3-0ubuntu1.3 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status unpacked libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status half-configured libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status installed libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 configure libevview2 2.30.3-0ubuntu1.3 2.30.3-0ubuntu1.3
...
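To turn that log into a quick patching-frequency picture, you can count the install and upgrade actions per day, as in the sketch below (rotated logs such as dpkg.log.1 would need to be included for a longer history):

awk '$3 == "install" || $3 == "upgrade" {print $1}' /var/log/dpkg.log | sort | uniq -c

Many dates with modest counts suggest routine maintenance; a single date with hundreds of entries just as the audit kicked off suggests something else.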

Wrapup…

So there you go.  Two small examples of items that, along with other information gleaned from the system information you’ve gathered, can help determine whether audit preparation is at play versus an organization’s system custodians having good processes in place.  Keep in mind, the information gathering process goes much more quickly if you have a script built that pulls all of the information you need into a single output file.  Then, when you’ve performed your analysis and aggregated the relevant information, have a non-confrontational discussion with the relevant parties (following whatever rules of engagement have been defined) regarding your potential findings.  This and the previous posting are not meant to be the de facto indicators of audit preparation, but are meant to foster meaningful and thoughtful analysis of the data you’ve gathered in order to provide greater value to the client- to validate whether or not the controls in place are indeed effective in their environment, and not just thrown together at the last moment to “prepare for the audit”…

Learning Powershell: How to Extract User Objects from Active Directory using Powershell!

Filed under Continuous Monitoring, How To, Scripting, Security, Windows Auditing

Welcome to the show notes for our next episode!  Last time we were talking about Powershell, demonstrating some different ways that we could use it to begin to automate some of our audit and administrative tasks.  For example, pulling some information out of our Active Directory.

In this week’s AuditCast we’re going to continue on and try to modularize some of the code that we wrote last week.  At the same time, we’ll try to simplify it, clean it up, and finally generalize it just a bit, to create something that we can use in many different tasks that we’ll be examining over the next couple of weeks.  Before starting the AuditCast, I actually did one or two things ahead of time.

The first thing is that I took some of the code that we were working with last week, the code that actually got the handle for doing a domain search, and I moved that into what’s called a “function.” This week we’ll see how we can leverage these sorts of things. You should be able to see that this code is essentially exactly the same code we wrote last week; the only difference is that it’s in a function.

function Get-DirectoryServiceSearcher
{
#————————————————————————————-#
# The following block of code can be used repeatably within a script to obtain a
# DirectorySearcher object for the current domain.
#————————————————————————————-#
# Set up an AD Directory Searcher within the current domain
$domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$domainSearcher = New-Object System.DirectoryServices.DirectorySearcher

# Configure the searcher
$domainSearcher.SearchScope = "subtree"
# Scope choices are:
# Base – Search only the specified scope
# OneLevel – Search only the children one level beneath the scope
# Subtree – Starting from the specified scope, search it and all descendants

# Set the starting point for the scope
$domainSearcher.SearchRoot = "LDAP://$Domain"

#————————————————————————————-#
# Finally, return the searcher object
$domainSearcher
}

The very last thing that the function does may look a little odd.  To return a value from a function in Powershell, you can either use the “return” keyword, or just put the value as the last item in the function!  If you’d like details on what’s happening in this function, I’d suggest that you go back and review last week’s AuditCast where this code was first developed.

While we could just paste this function into the code that we’re going to write this week, our real goal is to see how to start creating little reusable blocks of code that can then be assembled for almost any need.  This means that I’m going to need a way to actually access this code from another script.  That’s where our second piece of pre-written code comes in.

function Get-ScriptDirectory
{
$invocation = (Get-Variable MyInvocation -Scope 1).Value
Split-Path $Invocation.MyCommand.Path
}

To use this code, all we need to do is paste this function into each script that will try to leverage our little blocks of code.  What this code does is figure out the correct path for the script that is running.  You may recall that Powershell requires that you use a fully qualified path or specify a relative path when you want to reference or run a piece of code.  This function allows us to solve this problem.  With this function copied into a script and Get-DirectoryServiceSearcher.ps1 sitting in the same directory, all we need to do to import that code is this:

$import = Join-Path (Get-ScriptDirectory) "Get-DirectoryServiceSearcher.ps1"
. $import

This defines a variable named $import and uses Join-Path, which is a built-in command. This allows me to create a path; essentially I’m telling it to take the current script’s directory and join it to the name of the file I’d like to grab (in this case, Get-DirectoryServiceSearcher.ps1).

What can we do with this?  Well, I’d recommend that you have a look at the related AuditCast where we use these simple pieces to develop an extremely handy function that allows us to extract arbitrary objects out of Active Directory and turn them into custom-built objects right inside of Powershell!

If you’re looking for the source code for the function that we wrote, look no further!

function Get-ADObjects($LDAPFilter="(&(objectCategory=Person)(objectClass=User))")
{
$domainSearcher = Get-DirectoryServiceSearcher
$domainSearcher.Filter = $LDAPFilter
$domainSearcher.SizeLimit=100
$results = $domainSearcher.FindAll()
$aDObjects = @()

foreach($result in $results)
{
$thisObject = New-Object System.Object
foreach ($property in $result.Properties.PropertyNames)
{
$thisObject | Add-Member -type NoteProperty -Name $property -Value $result.Properties.Item($property)
}
$aDObjects += $thisObject
}
$aDObjects
}

David Hoelzer teaches several full week courses ranging from basic security through to advanced exploitation and penetration testing. For a thorough treatment of this specific issue and a discussion of controls to mitigate this and similar issues, consider attending the full week course on Advanced System & Network Auditing. More information can be found here: AUD 507 course. AUD 507 gives both auditors and security professionals powerful tools to automate and manage security over time.

Identifying Inactive and Unnecessary User Accounts in Active Directory (with Powershell!!!)

Filed under Continuous Monitoring, Scripting, Tools, Windows Auditing

A common question in an audit of information resources is whether or not accounts for users are being properly managed. One aspect of that is determining whether or not the accounts created are needed while another is looking for evidence that accounts for terminated users are being disabled or deleted in a timely fashion. An easy way to answer both of these questions is through the use of Active Directory queries! This screencast demonstrates exactly how to do just that.

While it’s true that the information that we’re looking for can be obtained directly from the Active Directory using tools like DSQuery and DSGet, in the long term I think it’s far wiser to learn a little bit of basic scripting that will allow you to perform just about any kind of query you’d ever want to in Active Directory, even if your admins have customized the Active Directory Schema!

Learning to write Powershell scripts, though, can seem daunting. Not only will we have to face the differences between different versions of Powershell and the .NET requirements that sometimes lead to software conflicts when we’re still using some legacy code, but some Powershell scripts just look downright confusing! Not to worry.

Rather than trying to learn everything that there is to know about Powershell and directory queries, there’s a great deal of value in learning some basic “recipes” that can be used to extract useful data using a script. Once we’ve got a good handle on the recipe, it’s much easier to just adjust the “ingredients”, if you will, to get at what we’re looking for.

In the various classes that I teach for Auditors, whenever there’s an opportunity to do so, I strongly recommend that auditors take some time to learn some basic scripting. This screencast is a perfect example. Once you’ve got a few of the basics in the script, you can easily modify the script to look for just about anything you’d want to. Not only that, you can make those modifications without ever really getting a deep understanding of exactly what an Active Directory Search object is and how it works!

To get you started, I’ve included the source code written in the screencast that goes with this posting.  This code will determine your current domain and then extract all objects that are of class “User” (which includes computers, by the way) and give you CSV format output of the fully distinguished name, the SAM account name and the date of the last logon.  To use this code, just create a new Powershell script file as described in the screencast and have at it!

$Domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
$ADSearch = New-Object System.DirectoryServices.DirectorySearcher
$ADSearch.PageSize = 100
$ADSearch.SearchScope = "subtree"
$ADSearch.SearchRoot = "LDAP://$Domain"
$ADSearch.Filter = "(objectClass=user)"
[void]$ADSearch.PropertiesToLoad.Add("distinguishedName")
[void]$ADSearch.PropertiesToLoad.Add("sAMAccountName")
[void]$ADSearch.PropertiesToLoad.Add("lastLogonTimeStamp")
$userObjects = $ADSearch.FindAll()
foreach ($user in $userObjects)
{
 $dn = $user.Properties.Item("distinguishedName")
 $sam = $user.Properties.Item("sAMAccountName")
 $logon = $user.Properties.Item("lastLogonTimeStamp")
 if($logon.Count -eq 0)
 {
 $lastLogon = "Never"
 }
 else
 {
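 # lastLogonTimeStamp is stored as 100-nanosecond ticks since January 1, 1601 (a FILETIME-style value);
 # casting the ticks to a DateTime (whose epoch is year 1) and adding 1600 years yields the real logon date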
 $lastLogon = [DateTime]$logon[0]
 $lastLogon = $lastLogon.AddYears(1600)
 }

 """$dn"",$sam,$lastLogon"
 }

David Hoelzer teaches several full week courses ranging from basic security through to advanced exploitation and penetration testing. For a thorough treatment of this specific issue and a discussion of controls to mitigate this and similar issues, consider attending the full week course on Advanced System & Network Auditing. More information can be found here: AUD 507 course. AUD 507 gives both auditors and security professionals powerful tools to automate and manage security over time.

Detecting Audit Prep versus Good Processes in Place – Part 1

Filed under Compliance, How To, Windows Auditing

This is the first of two postings that will begin to address some ways to test for something that hopefully you will only encounter rarely in your auditing career, but that I have seen in a number of organizations in the earlier stages of the IT maturity model- audit preparation versus good practices in place.  In some cases, you may be tipped off to this by how personnel are behaving before your technical test-work ever gets started…  Too often I hear, “we’re ready for the audit- we’ve spent the last month getting everything ready.”  In some of these cases, it can cause one to wonder whether or not they “get it”.  If you have prudent practices and processes in place, then there’s little to “prepare” for.

In this blog entry, I’ll focus on just two of the many possible scenarios where technical testwork may indicate signs of audit prep rather than good processes in place.  You’ll have to look at all of your testwork collectively in order to try and determine whether or not that is indeed the case, or what exactly the root cause of a particular issue was.

So let’s set this up…  Let’s say you work in the internal audit department for a large organization, and you are charged with performing an IT review of one of its units.  They have never had a technical review, and you hear that they have been preparing for the audit for the last few weeks.

As the review starts, you notice that the help desk is handling a lot of support tickets regarding personnel having difficulty when setting or changing their passwords…  This could be a tip-off that a new password policy (which you have already reviewed) along with its corresponding technical control has only recently been put in place prior to the onset of the audit/review.  At this point, asking a few employees a few questions to help confirm this notion may be prudent.  And always keep in mind that in many cases, you are concerned with the entire audit/review period, not just the day you went in to  gather your information and perform your testwork.

As an auditor, part of what you are charged with is to validate (confirm) that the administrative controls that are in place (e.g. policies) are effective and that, wherever possible, technical controls are in place that correspond to and enforce those policies, business objectives, and/or regulatory obligation(s).  To achieve these two objectives, first you may need to review the policies and procedures that the organization has in place. You should compare them against applicable laws, regulations, and other prudent business/leading practices.  In some cases, putting together a policy and procedure gap analysis (sometimes referred to as a policy and procedure crosswalk or diagnostic) may be an effective way to accomplish the first objective.

As the review progresses, you may interview personnel to get a better understanding of their operations, what kind of systems and/or data the business depends on, and you will inevitably request a listing of servers and information resources.  This listing of systems may include attributes such as:

  • a description of the system;
  • the installed applications and their version;
  • what operating system is installed;
  • whether or not the system is mission critical;
  • whether or not it contains or touches a protected data type;
  • etc…

From this list, you will select a meaningful sample to then go and gather system and configuration information from for offline analysis (and possibly information from some of the applications themselves).  The analysis you perform will involve testing the effectiveness of the organization’s technical implementation of their administrative controls. That is, have they implemented technical controls to ensure that the organization is doing what their policies state they will do?  Thus, the analysis of the system and configuration information should at the very least compare the system settings and configuration against the organization’s policies and any other criteria they have provided (e.g. regulatory obligations, laws, etc.).

For example, if an organization has a policy in place that states that employees change their passwords every 90 days, you need to be able to validate that (1) the technical enforcement mechanisms are in place, and (2) if possible, that they have been in place for the review/audit period (as some things lend themselves to being able to discern this more than others…). Let’s assume, for this example, that the organization is a Windows shop.  One quick way to determine what password controls are enforced is to run the secedit command on one of the domain controllers. For example:

secedit /export /cfg seceditOutput.txt

This command has the secedit utility export the configuration to a text file named “seceditOutput.txt” (which will be written to the directory the command is run from, and will also create a log file on the server it is run from). Three relevant lines from the output are shown below:


MaximumPasswordAge = 90
MinimumPasswordLength = 8
PasswordComplexity = 0

Note the effective password policy is basically a 90 day maximum password age, a minimum length of at least eight (8) characters, and complexity not enforced…

Keep in mind that some organizations may have multiple sets of password criteria, and/or fine-grained password controls in place, and/or use third-party tools to enforce password policies, and you may need to do further due diligence. However, as in the previous blog post (see IT Auditing with Minimal Intrusiveness), the information provided by the [scripted] output may not necessarily present the whole picture in and of itself, but it’s a starting point- although in many cases the output from secedit will in fact be exactly what is in place, and the only password policy in place…

If the organization’s policies state that passwords should be changed every 90 days, that the minimum password length is 8 characters, and that complexity should be enforced, then it looks like all criteria with the exception of complexity are in place…   So already, one observation should be noted.   However, we should not stop there- your professional skepticism should kick in and dictate that you go a step further and determine when exactly users did last change their passwords.  Of course, this can be accomplished in a variety of ways…  Since I am a big advocate of scripting commands to generate useful output in order to validate controls, I will focus on getting the information via the command line and PowerShell (although VBS scripts could be used as well).   In this case, you could use either a standard command prompt and built-in command line tools on the Active Directory controller, or use the Active Directory modules for PowerShell (standard on 2008r2 domain controllers, but they can be added on 2003r2 and 2008 ADCs) to generate the output you could use for your analysis.

For this example though, we’ll focus on the output from the csvde command, as it can provide a wealth of information and has the ability to export a comma-separated listing of the domain objects in Active Directory that can be sifted, filtered, and explored…   For this example, we’ll just focus on three data elements from the output to determine when users actually last changed their passwords:

  • userAccountControl – filter to show values like 512 (account enabled) and 66048 (account enabled, password doesn’t expire)
  • sAMAccountName – the login/userID
  • pwdLastSet – date the password was last set

Let’s look at a very small sampling of the output below:

userAccountControl  pwdLastSet          Password Last Set (derived via formula)  sAMAccountName
512                 129473442650201000  Friday, April 15, 2011                   jdoe
512                 129473442650245000  Friday, April 15, 2011                   dbrown
512                 129468332650201000  Friday, April 9, 2011                    astone
66048               127573582597498000  Thursday, April 07, 2005                 msmith
66048               127075148908458000  Monday, September 08, 2003               fancyexec
512                 129451837342799000  Monday, March 21, 2011                   joeworker
66048               129260123886088000  Wednesday, August 11, 2010               newguy
66048               128793619729609000  Tuesday, February 17, 2009               administrator

Let’s say the information was extracted on April 15, 2011, and that the above eight entries are indicative of the entire data set…  Based upon the derived “password last set” dates in the table above, do you see anything that looks out of sync with the organization’s policy that states that passwords should be changed every 90 days, and with the technical controls that you observed (MaximumPasswordAge = 90)?  I suspect that you do.  At this point, these are technically not yet observations, but talking points with your point of contact as to why the discrepancy exists…  Based upon the help desk issues that were noted when you began (the high number of password reset issues), and some of the dates that popped up, it may appear that the technical controls were only recently put in place for the audit as opposed to being in place for at least the review period (note that the csvde output will have other attributes to help you lean more towards one way of thinking versus another- but that’s another blog post).  You may also want to inquire as to why certain users are allowed to circumvent policy by having non-expiring passwords (those with the userAccountControl value of 66048)- keeping in mind that they may have a documented, valid business case, but they should have other compensating controls in place for these accounts as well…

Now, the above example may be considered a bit simplistic or contrived, but it’s for illustrative purposes, and you probably get the general idea…  One would expect, though, that a domain controller that’s been in place for a while would have “password last set” dates drifting apart from one another at each password expiration (since users are generally given a week or so of warning before their password expires), even if they were all originally set at the same time.  That being said, I have walked into places where a new ADC was just deployed, and everyone had just reset their passwords the week prior (not a lot of history to go on there).  But I digress…  :)

For the next test, you want to vet the patch management process for several things- (1) to see if it’s occurring at all, (2) if it occurs on a timely basis (note that this may be based upon their own definition of “timeliness” in their policy), and (3) if it’s consistent.  During your policy review, you noted that their policy states that “Windows patches will be applied within 90 days of release, and Windows security patches will be applied within 30 days of release.”  One would expect some consistency with respect to when patches are applied; even if the custodians of the Windows systems waited 90 days to apply a particular patch, they would still be applying patches almost every month (yes, you need to do some due diligence to determine when patches are released and when there are months when patches are not released).  A simple command to extract this information from a Windows operating system is:

wmic /output:"patchlist.csv" qfe list /format:csv

The command will produce output that can easily be pulled into Excel, and sorted by the “InstalledOn” attribute.  This can help you determine if patches are being installed in a consistent and timely manner.  If you see a huge number of patches all installed just as the audit/review was kicking off, this may indicate audit prep rather than good processes in place.  Keep in mind though there could be many reasons for what you see, and that things have to be put in context.  Once you’ve seen a couple hundred of these, you start to get a feel for things…  :)  You also come to know the idiosyncrasies of various Windows operating systems and how they report the “InstalledOn” date (as two in particular provide dates in 64-bit hex format using the wmic command, but not when using PowerShell…).

Anyway, these are just two simple ways to start recognizing the possible issue of “audit prep versus good processes in place” at a client.  If indeed that ends up being the case after you’ve done your due diligence, then meaningful recommendations would need to be made to address this…

That’s it for now.  Next time I’ll cover a little bit more on “audit prep versus good processes in place” for UNIX/Linux, and then I’ll start on handy commands and scripting…

 

IT Auditing with Minimal Intrusiveness

Filed under Compliance, Scripting, Unix Auditing, Windows Auditing

The information gathering process in an IT audit can be a painstaking one. In the past (and in some cases, this is still true), an auditor may sit with an IT admin and have them take screenshot after screenshot, and/or take notes while asking the custodian to show them the settings they are testing. This is a slow and tedious process, and it inevitably limits the number of systems and settings that can be validated- it also leaves a lot on the table and can give a false sense of security, as there are a lot of things that cannot be easily validated and tested using this method.

Today, there are a number of solutions available to assist with system configuration analysis and database testwork, but they can require purchasing expensive applications, the installation of third-party software on the systems to be tested, and in some cases, cost quite a bit of money. One less expensive alternative, and what I’ve been doing for a number of years now, is to use scripts that leverage built-in commands, utilities, existing text-based configuration files, queries, and stored procedures to extract system-generated information for off-line analysis. Using scripts for the information gathering process has a number of advantages, such as:

  1. It’s an easily repeatable process;
  2. It’s consistent- the same commands are executed each time, and in the same order;
  3. Reduces human error, as there are no typos (provided you’ve done your own due diligence);
  4. It’s time efficient- you have the ability to gather information from many systems in a short amount of time;
  5. The process requires little effort on the client’s part versus other methods; and
  6. Using scripts that leverage what’s already there does not introduce any outside applications or programs.

Depending on the script you’ve developed or put together, you can potentially get a lot of relevant and targeted system or application-generated information to analyze. Keep in mind that the script output may not get absolutely everything, as that depends on the limitations of the operating system’s built-in commands (for example, you may not be able to get password hashes for some OSes without introducing third-party software). In many cases though, the output will be more than enough to come to a conclusion or form an opinion as to whether or not the entity’s controls are effective. You can also use the output for your workpapers, which is an important part of preserving the evidence which you base your conclusions on. It’s also worth noting that some output may lend itself to showing trends over time as well, which helps you get a better feel for how a process has worked over time as opposed to a client merely performing actions for the sake of audit preparedness…

In order to ensure that the scripts do exactly what is intended, research and testing are key. The last thing you want to happen as an auditor is to cause harm to a system or database with something that you have not adequately tested. Testing on as many different operating systems, at various service pack levels, and with various databases is key. Taking advantage of virtual machines will help you in this endeavor (as will having access to some physical ones where virtualizing an OS may not be entirely possible, such as Solaris on SPARC, OpenVMS, or AS/400). For me, I have dozens of virtual machines, some with Active Directory services, some without, some joined to a domain, others stand-alone, some with databases, etc. An MSDN Professional subscription can be a great help in order to test on all the various Microsoft products as well (it’s worth noting that an educational institution may be able to get hefty discounts on MSDN subscriptions for those that run audit shops in Higher Ed).

Your testing and validation of the scripts, as well as your process for gathering and transporting the information, may also help with getting buy-in from clients. For example, a lot of my information gathering is done while onsite, so I have an IronKey to collect all of the information with.  For one large client, I provide the ISO with a copy of all my scripts for validation to help ensure that they will not cause harm to any of the operating systems or databases that I may select for testing.  With that ISO’s sign-off, I can then use the scripts to gather information more readily from the rest of the organization’s decentralized units.

In the end, the idea is to have a set of tested and validated scripts in order to quickly and easily gather a lot of relevant information for off-line analysis in order to provide as much benefit to the client as possible.

In a future posting, I’ll go into more detail as to some of the script components, and validation resources for your workpapers.

Continuous Monitoring with PowerShell

Filed under Continuous Monitoring, Scripting, Windows Auditing

Welcome back from the holidays! I imagine many of you are just returning and are ready to get started on those new year’s resolutions. If one of them was to implement continuous monitoring or to learn more about scripting, do I have a treat for you! Now that I’m back from some holiday travel myself, I think it’s time to continue our series on automating continuous monitoring and the 20 Critical Controls.

I don’t want these blog posts to be an introduction to PowerShell. There are plenty of fine references on that available to you. In talking with Jason Fossen (our resident Windows guru), I have to agree with him that one of the best starter books on the topic is Windows PowerShell in Action, by Bruce Payette. So if you’re looking to get started learning PowerShell, start there, or maybe try some of the Microsoft resources available at the Microsoft Scripting Center.

But let’s say you’ve already made a bit of an investment in coding and you already know what tasks you’d like to perform. For example, maybe you wonder who is a member of your Domain Admins group, so you use Quest’s ActiveRoles AD Management snap-in to run the following command:

(Get-QADGroup 'CN=Domain Admins,CN=Users,DC=sans,DC=org').members

Or on the other hand, maybe you are concerned about generating a list of user accounts in Active Directory that have their password set to never expire; you’d likely have code such as:

Get-QADUser -passwordneverexpires

Or maybe you even want to run an external binary, like nmap, to scan your machines; you might have a command such as:

nmap -sS -sV -O -p1-65535 10.1.1.0/24

In any case, the first step is to come up with the code you want to automate. That’s step one.

Next, you don’t just want to run the code, you want the results to be emailed to you on a regular basis, say once a day or once a week. The next step is to use a mailer to email you the results of your script. Now you have a few choices here. One option is to use a third party tool like blat to generate your email. But since we’re using PowerShell, let’s stick with that. Version 2.0 of PowerShell has some built-in mailing capabilities in this regard.

The easiest way to get started is to save the output of the commands you want run to a temporary text file, mail the text file as the body of an email message, and then delete the temporary file. An easy way to do this would be to use the following commands:

$filename = "sometextfilewithoutputresultsinit.txt"
$smtp = new-object Net.Mail.SmtpClient("mymailserver.sans.org")
$subject = "SANS Automated Report - $((Get-Date).ToShortDateString())"
$from = "automation@sans.org"
$msg = New-Object system.net.mail.mailmessage
$msg.From = $from
$msg.To.add("automation@sans.org")
$msg.Subject = $subject
$msg.Body = [string]::join("`r`n", (Get-Content $filename))
$smtp.Send($msg)
remove-item $filename

Save your data as an appropriate PS1 file, automate the command to run once in a while using Task Scheduler, and you’re off to the races!

We certainly have more to discuss, but hopefully this inspires some thinking on the matter. I’ll post again soon with some other steps to consider before we move on to Bash. There’s a lot we can talk about here. Until next time…

Scripting Automation for Continuous Auditing

Filed under Tools, Unix Auditing, Windows Auditing

One of the topics that we have been discussing with organizations a great deal lately is the idea of automation in regards to continuous auditing. Said a different way, the standard audit model involves auditors making a list of audit scopes that they want to cover in the course of a year. Then, one at a time, the auditor interviews people, examines the output from technical tools, and in general manually collects audit evidence.

What if the act of evidence collection could be automated? Then it would seem that auditors could spend less time collecting data and spend more time analyzing evidence for risk. Maybe even an auditor could consider additional scopes in the course of a year if less time was being spent manually collecting such evidence.

In the next few blog posts I’d like to consider tools that could be used to assist us in automating the collection of audit evidence. Ideally we would use tools suited to this purpose, but realistically I know many of us will be left on our own to develop tools that can accomplish this purpose. Therefore we will focus on scripts and built in operating system tools to achieve this end.

To begin this discussion, let’s identify some tools that we can use for these purposes. Next time we can start to delve into a couple of these at a time. But in the meanwhile, check these out as a way to automate data collection. Consider each of these a piece of a bigger recipe for continuous auditing.

Windows Automation Tools:

PowerShell (scripting language)

Task Scheduler (scheduling tasks)

Advanced Task Scheduler (scheduling tasks)

Blat (command line mailer)

Bmail (command line mailer)

Unix Automation Tools:

Bash Scripting (scripting language)

Cron (scheduling tasks)

Anacron (scheduling tasks)

Mail (command line mailer)

Mailsend (command line mailer)

As you can see this is just a start. We will need to use other tools to actually collect our data sets, but these tools will form the building blocks of each of our scripts. If you have other tools you would like to suggest, don’t be shy. We’d all love to hear what tools have helped you to be a more successful auditor.
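To give a small preview of how these pieces snap together on the Unix side, a crontab entry along the following lines (the script name and e-mail address are hypothetical) would run a collection script every Monday morning and mail the results to the auditor:

# m h dom mon dow   command
0 6 * * 1  /usr/local/bin/collect_audit_evidence.sh | mail -s "Weekly audit evidence from $(hostname)" auditor@example.org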

IT Security Audit: What About WPAD?

Filed under Compliance, How To, Security

How hard is it for someone to insert a proxy between you and the rest of the Internet without you knowing? Will running a Mac or Linux protect you?

In this article, and the accompanying screencast, we combine the concepts from Episode 20 with the WPAD-style attack that was discussed back in Episode 17, creating a quick and easy how-to for a man-in-the-middle attack that will work against any system that has Automatic Proxy Discovery enabled.

This feature is sometimes thought to be a Windows specific issue, but as we demonstrate here by transparently creating a man in the middle proxy for a Mac, it really does apply everywhere. There are just a few simple pieces that you need to accomplish this attack and there are some quick and easy things that you can do to defend yourself or that you can look for during an audit.

Ingredients:

  • Web server with a “wpad.dat” file in the web root.  (If you’re looking for a sample, check here)
  • Ability to poison an upstream DNS server for WPAD requests.  This ingredient is hard to find.  I strongly recommend you use the alternative DNS Spoofer…
  • DNS Spoofer – By far the easiest way to accomplish this attack.  If you’re on a shared medium (wireless network) or upstream from the network in a place where you can see DNS queries from either clients or upstream servers, this is a piece of cake.
    • Jack in
    • Configure DNS Spoof to use the address of the proxy as the block_web_addr
    • Configure DNS Spoof to watch for “wpad” in blocked_strings
    • Start it up!
  • A proxy to intercept whichever protocols you’re interested in.

That’s it!  The video actually walks you through this quickly.

Defenses/Audit Questions

So what do we do to defend?  The answers are really the same as what we came up with in the Windows WPAD entry.  For systems that travel, seriously consider adding an entry for WPAD into your hosts file, as shown in the video.  This is by far the simplest defense.  As an auditor in an environment that uses WPAD internally, I would strongly recommend this entry exists on all laptops.

Of course, if the organization is not using proxy auto-discovery then nothing should be configured to send WPAD requests.  How can I see if the organization is compliant with this?  While this is more difficult, our DNS Spoofer turns out to be very handy here too.  Remember that in addition to spoofing responses, it creates a log entry for every single DNS lookup that it witnesses.  Why not configure the spoofer to block absolutely nothing, which essentially turns it into a DNS logger?  With this done, you can let it run side by side with the actual DNS server for a few days and then check to verify that there have been no WPAD lookups.  If there have been, then you also know which systems are misconfigured!

The source code for the DNS Spoofer can be obtained here.  It requires that you register for a free SANS portal account.

David Hoelzer teaches several full week courses ranging from basic security through to advanced exploitation and penetration testing. For a thorough treatment of this specific issue and a discussion of controls to mitigate this and similar issues, consider attending the full week course on Advanced System & Network Auditing. More information can be found here: AUD 507 course. AUD 507 gives both auditors and security professionals powerful tools to automate and manage security over time.

DNS Sinkhole for Malware Defense and Policy Enforcement

Filed under Compliance, How To, Security

BIND is usually the go-to DNS solution if you’re looking to set up a DNS sinkhole to contain and identify malware. While I love BIND as much as the next guy, I find that it’s a real pain in the neck to get everything set up just right and the maintenance involved in adding a new authoritative zone is just more than I’m willing to do.
As a solution to this, I’ve revived a tool that I wrote more than a decade ago for Internet usage policy enforcement. As it turns out, it already was a DNS sinkhole, I just never called it one!

(The accompanying webcast that demonstrates the tool and technique discussed in this article can be found here.  Source code for the DNS Sinkhole can be downloaded here.)

DNS Sinkholes

The idea of a DNS Sinkhole is that if you know about some bad domain names that malware might use or that users might try to visit, you can create authoritative “Zone” files on your DNS server.  This allows you to return any result you want to for any domain name or host you want to.  The unfortunate downside is that you cannot create “wildcard” DNS zones to do substring matches.  Also, the creation of a new zone, while not difficult, is definitely non-trivial.

About two years ago I taught the SANS 503 Advanced Intrusion Detection Immersion class for the first time in about 8 years.  As I was reading through the section on DNS sinkholes on the last day, I just kept thinking to myself, “Why are we creating new, difficult solutions to problems that we solved years ago?”  So why do I say this is a solved problem?


Controlling Internet Access

Back in the days before SurfControl, BlueCoat, SurfWatch and all of these other Internet filters were really mainstream and sorta worked, we had to come up with our own solutions to these types of problems.  We weren’t really concerned about C&C (Command and Control) servers at the time, we were just concerned about Internet abuse.  While SurfControl and the like were busy creating lists of URLs that were “bad,” I sat back and thought about the problem and came up with a different solution.

The idea of SEO isn’t a new one.  Even in those days, you could be pretty sure that something about the domain name would tip you off as to what was involved.  I’m sure that this will be no surprise to you, but if you’re trying to prevent employees from going to pornographic websites the word “girl” and the letters “xxx” come up pretty often in the domain name.  Why not create a simple tool to find substrings in domain names and then just spoof the response?

The result is the DNS Block tool (which you can download here).  It operates by watching the network for DNS queries.  Any time it finds one, it compares the queried name against a list of known bad sites and then checks each part of the name against a list of substrings.  If there’s a match, the tool immediately spoofs an authoritative response.  The reason that this works is that it’s much faster for me to figure out it’s bad and spoof a response than it is for your real DNS server to recurse out to the Internet to find the real site.  This means that since the spoofed response arrives first, it’s the response that gets accepted.  When the real response arrives, it’s discarded since the port is no longer listening for a response.

There are other ways to use this tool for evil, of course.  I’ll discuss a few of those in some upcoming AuditCasts episodes.

David Hoelzer teaches several full week courses ranging from basic security through to advanced exploitation and penetration testing. For a thorough treatment of this specific issue and a discussion of controls to mitigate this and similar issues, consider attending the full week course on Advanced System & Network Auditing. More information can be found here: AUD 507 course. AUD 507 gives both auditors and security professionals powerful tools to automate and manage security over time.

Parsing Lynis Audit Reports

Filed under How To, Unix Auditing

Last week we passed along some information on a Unix audit tool called Lynis, maintained by Michael Boelen (http://www.rootkit.nl/projects/lynis.html). The value of this tool is that it is an open script that auditors can give to system administrators to run on their Unix servers in order to assess specific technical security controls on the system.

If an auditor chose to use this tool to gather data, likely the process would look something like this:

  1. The auditor gives a copy of the script to the data custodian (system administrator).
  2. The system administrator runs the script on the target machine (note, it does not work across the network).
  3. Lynis produces an output file: /var/log/lynis-report.dat (location can be customized).
  4. The system administrator gives the auditor a copy of this output file.
  5. The auditor parses the file for relevant information to include in their findings report.

Unfortunately the output report isn’t quite as pretty as you might hope. The auditor will need to plan on spending some time parsing the outputs of the report file in order to gather useful information that can be included in their audit report. The data can most certainly be copied and pasted and manually edited, but why not use a little command line scripting to make our lives easier?

At a minimum, an auditor will want to parse out a list of all the warnings and suggestions automatically made by the tool. Let’s use built-in commands, like the Linux cat, grep, and sed commands. A simple command line that will give an auditor the ability to parse out only the warnings made in the report file is the following:

cat /var/log/lynis-report.dat | grep warning | sed -e 's/warning\[\]\=//g'

The same can be done with the suggestions as well using the following command line syntax:

cat /var/log/lynis-report.dat | grep suggestion | sed -e 's/suggestion\[\]\=//g'

A couple other commands that you might want to play with to parse this file are the following commands as well. The following gathers a list of all installed software packages on the system:

cat /var/log/lynis-report.dat | grep installed_package | sed -e 's/installed_package\[\]\=//g'

The following command gathers a list of all the installed shells on the system from the report:

cat /var/log/lynis-report.dat | grep available_shell | sed -e 's/available_shell\[\]\=//g'

You get the idea. The system administrator can provide the auditor one output file, and the auditor can easily write a parsing script based on this syntax to break the file down into a manageable output that is a little friendlier to read.
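As a starting point, a parsing script along these lines would do the job; the list of keys and the output directory are just examples and can be extended to whatever report fields you care about:

#!/bin/sh
# parse_lynis.sh - split a lynis-report.dat into one file per item type (sketch)
REPORT=${1:-/var/log/lynis-report.dat}
OUTDIR=./lynis-parsed
mkdir -p "$OUTDIR"
for key in warning suggestion installed_package available_shell; do
  grep "^${key}\[\]=" "$REPORT" | sed -e "s/${key}\[\]=//" > "$OUTDIR/${key}s.txt"
done

Run against the report file the administrator hands over, this yields one tidy text file per category that can be dropped straight into the workpapers.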

We hope this inspires you to continue writing your own scripts to automate the audit process. The more we can automate, the faster our audit analysis. The faster our audit analysis, the more technical audits we can perform and the more likely our systems will be properly secured. Until next time…