Digital forensics and incident response (DFIR) has hit a tipping point. No longer just for law enforcement solving cybercrimes, DFIR tools and practices are a necessary component of any organization’s cybersecurity. After all, attacks are increasing daily and getting more sophisticated – exposing millions of people’s personal data, hijacking systems around the world and shutting down numerous sites.
SANS has a smorgasbord of DFIR training, and we also offer a free Linux distribution for DFIR work. Our SANS Investigative Forensic Toolkit (SIFT) Workstation is a powerful collection of tools for examining forensic artifacts related to file system, registry, memory, and network investigations. It is also available bundled as a virtual machine (VM), and includes everything one needs to conduct an in-depth forensic or incident response investigation.
The SIFT Workstation was developed by an international team of forensics experts, including entrepreneur, consultant and SANS Fellow Rob Lee, and is available to the digital forensics and incident response community as a public service. Just because it’s freely available and originally designed for training, though, doesn’t mean it can’t stand up to field investigations. The SIFT Workstation incorporates powerful, cutting-edge open-source tools that are frequently updated, vetted by the open source community and able to match any modern DFIR tool suite. Don’t just take our word for it. Thousands of individuals download SIFT yearly, and it’s used by tens of thousands of people all over the world, including those at multiple Fortune 500 companies. And recently, HackRead named SIFT Workstation in a list of the top 7 cyber forensic tools preferred by specialists and investigators around the world.
SIFT got its start in 2007, during the time SANS instructors were developing virtual machines (VMs) for use in the classroom. In its earliest iterations, it was available online as a download, but was hard-coded and static, so whenever there were updates, users had to download a new version. By 2014, the SIFT Workstation could be downloaded as a standalone application and was later updated to a robust package based on Ubuntu. It can also be installed on Windows, if an Ubuntu subsystem is running on the system.
In November 2017, SANS unveiled a new version of the SIFT Workstation that allows for much more functionality, is much more stable, and includes specific tools such as the Package Manager. This time the package supports rolling updates, and uses Salt, a Python-based configuration management platform, rather than a bootstrap executable and configuration tool.
The new version can work with more than 200 tools and plug-ins from third-parties, and newly added memory analysis functionality enables the SIFT Workstation to leverage data from other sources. New automation and configuration functions mean the user only has to type one command to download and configure SIFT. Because SIFT is scriptable, users can string together commands and create automated analysis, customizing the system to the needs of their investigation.
Download SIFT Workstation today, and get started on your own DFIR initiatives. And look into our FOR508: Advanced Incident Response and Threat Hunting course for hands-on learning with SIFT, covering how to detect breaches, identify compromised and affected systems, determine damage, contain incidents, and more.
Learning doesn’t stop when you leave the SANS classroom. Instructors Domenica “Lee” Crognale, Heather Mahalik and Terrance Maguire answer some of the most common questions from FOR585 Smartphone Forensics course students in these short videos:
1) Using Hashcat to Crack an Encrypted iTunes Backup: Acquiring a locked iOS device can be difficult, so an iTunes backup may be the best evidence to examine. iTunes backup files may be encrypted, so this mini webcast outlines how to use Hashcat to crack encrypted iTunes backup files.
2) An Overview of Third Party App Examination: There are millions of applications (Apps) that can be used on a smartphone. This mini webcast outlines an approach to examining these applications.
3) Why Every Examiner Needs a Test Device: In a perfect world, we would always be examining rooted Androids and jailbroken iOS devices, but unfortunately, full access to the file system is becoming a thing of the past. This mini webcast highlights the importance of populating test devices with user data so you can better speak to the artifacts that you ARE able to access on your next examination.
4) What if Nothing Supports Android Pie (v9)?: The latest versions of Android are not commonly supported for acquisition by our tools. What can you do? Use ADB and interact with the live device. This mini webcast will teach you how to use ADB to extract information from Android devices and will discuss the traces some tools leave behind and why that trace is required if you want to obtain data.
5) iOS Malware – Where to Begin: It’s notably more difficult to pinpoint malware on your non-jailbroken iOS device without access to the application packages. This mini webcast outlines some of the best practices in analyzing the files you can access to provide indications of suspicious activity and the applications and services that are likely responsible.
6) Two Major Plists: This mini webcast will discuss how to determine if iMessage is disabled on an iPhone and how to determine if an iCloud restore occurred. Simple questions like this can make a difference in your investigation. We will discuss the file locations, which acquisition methods provide access to the files of interest and most importantly, how to parse the data.
This in-depth smartphone forensic course provides examiners and investigators with advanced skills to detect, decode, decrypt, and correctly interpret evidence recovered from mobile devices. The course features 27 hands-on labs, a forensic challenge, and a bonus take-home case that allow students to analyze different datasets from smart devices and leverage the best forensic tools, methods, and custom scripts to learn how smartphone data hides and can easily be misinterpreted by forensic tools. Each lab is designed to teach you a lesson that can be applied to other smartphones. You will gain experience with the different data formats on multiple platforms and learn how the data are stored and encoded on each type of smart device. The labs will open your eyes to what you are missing by relying 100% on your forensic tools. More information: http://sans.org/FOR585 | Next course runs: sans.org/u/Mht
FOR585 Mobile Forensics Poster: Use this poster as a cheat-sheet to help you remember how to handle smartphones, where to obtain actionable intelligence, and how to recover and analyze data on the latest smartphones and tablets. This poster was created by FOR585 Advanced Smartphone Forensics course authors & Certified Instructors Heather Mahalik, Cindy Murphy and Domenica “Lee” Crognale with support from the SANS DFIR Faculty. Download it here
You are being exposed to malicious scripts in one form or another every day, whether it be in email, malicious documents, or malicious websites. Many malicious scripts at first glance appear to be impossible to understand. However, with a few tips and some simple utility scripts, you can deobfuscate them in just a few minutes.
SANS Instructor Evan Dygert conducted a webcast on October 3rd, 2018. This webcast teaches you how to cut through the obfuscation techniques the script authors use and not spend a lot of time doing it. Evan also demonstrates how to quickly deobfuscate a variety of malicious scripts.
The script samples he provided during the webcast can be downloaded here: https://dfir.to/MaliciousScripts. Please note the password for the samples.zip file is: “infected”
Tips for Reverse-Engineering Malicious Code – This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. Download Here
REMnux Usage Tips for Malware Analysis on Linux – This cheat sheet outlines the tools and commands for analyzing malicious software on the REMnux Linux distribution. Download Here
Cheat Sheet for Analyzing Malicious Documents – This cheat sheet outlines tips and tools for analyzing malicious documents, such as Microsoft Office, RTF and Adobe Acrobat (PDF) files. Download Here
Malware Analysis and Reverse-Engineering Cheat Sheet – This cheat sheet presents tips for analyzing and reverse-engineering malware. It outlines the steps for performing behavioral and code-level analysis of malicious software. Download Here
This is going to be a series of blog posts due to the limited amount of free time I have to allocate to the proper research and writing of an all-inclusive blog post on iOS 11. More work is needed to make sure nothing drastic is missing or different and to dive deeper into the artifacts that others have reported to me as currently being unsupported by tools.
From what I have seen thus far, I am relieved that iOS 11 artifacts look very similar to iOS 10. This is good news for forensicators who see iOS devices and have adapted to the challenges that iOS 10 brought. Prior to writing this, I was referred to a blog post on iOS 11 that was an interesting read (thanks, Mike). I suggest you also check it out, as it pinpoints what is new in iOS 11 in regards to features: https://arstechnica.com/gadgets/2017/09/ios-11-thoroughly-reviewed/5/
Understanding what the OS is capable of doing helps us determine what we need to look for from a forensic standpoint. From what I have seen so far, the major artifact paths have not changed for iOS 11. Key artifacts for normal phone usage appear to be in the same locations:
– Contacts- /private/var/mobile/Library/AddressBook/AddressBook.sqlitedb
– SMS – /private/var/mobile/Library/sms.db
– Maps – /private/var/mobile/Applications/com.apple.Maps/Library/Maps/History.mapsdata – Still missing? Refer to my blog post from Dec.
When I test an update to a smartphone OS, I normally start with basic user activity (create a new contact, place some calls, send messages, ask for directions, etc.) and then I dump my phone and see what the tools can do. For this test, I created both encrypted and unencrypted iTunes backups, used PA Methods 1 and 2 and did a logical extraction with Oxygen Detective. What I found is that not all tools parsed the data in the same manner, which is to be expected. (I also plan to test more methods and tools as time allows and for my FOR585 course updates.)
To get this post done in a timely manner, I focused on one item that has always been parsed well and that now jumped out as “missing” or not completely supported.
iMessages and SMS in iOS 11 were the first items that jumped out as “something is off…” and I was right. I sent test messages and could not locate them in the tools as easily as I have in the past. I normally sort by date, because I know when I send something. Up until this release of iOS, we could rely on our tools to parse the sms.db and parse it well. The tools consistently parsed the message, to/from, timestamps, attachments and even deleted messages from this database. Things have changed with iOS 11, and it doesn’t seem that our tools have caught up yet, at least not to the same level they were parsing older iOS versions.
One of the most frustrating things I find is that the tools need access to different dumps in order to parse the data (as correctly as they can for this version). For example, Oxygen didn’t provide access to the sms.db for manual parsing, nor did it parse it for examination when the tool was provided an iTunes backup. This had nothing to do with encryption, because the passcode was known and was provided. UFED isn’t the same as PA Methods 1 and 2 (you have heard this from me before), but it’s confusing because most don’t know the difference. This is what it looked like when I imported the iOS 11 backup into Oxygen. Believe me, there are more than 3 SMS/iMessages on my iPhone.
However, when I dumped my iPhone logically using Oxygen Detective, it parsed the SMS and provided access to the sms.db. When I say it “parsed” the sms.db, I am not referring to timestamp issues at all; those will be addressed in a bit. Here is what my device looked like when I dumped it and parsed it in Oxygen.
Spot the differences in the messages? Yep, you now see 48,853 more! Crazy… all because the data was extracted a different way. I also tested adding in the PA Method 1 image, and those message numbers were different, but the sms.db was available and parsed. You really have to dump these devices in different ways to get the data!
Bottom line – add the sms.db to the list of artifacts you manually examine for iOS 11 to ensure your tool is grabbing everything and parsing it. The rest of this blog is going to focus on just that – parsing the sms.db in regards to changes found in iOS 11.
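If you want to take that advice literally, Python’s built-in sqlite3 module is enough to open a copy of the sms.db and see which tables are actually in it, so you can compare against what your tool reports (always work on a copy of the database, never the original evidence):

```python
import sqlite3

def list_tables(db_path):
    """Enumerate the tables in an sms.db so you can compare what is
    actually present against what your forensic tool parsed."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [r[0] for r in rows]
    finally:
        con.close()
```

Run against an iOS 11 sms.db, this should show the familiar message, chat, chat_message_join and attachment tables, plus anything new your tool may be ignoring.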
Let’s take a look at what is the same (comparing iOS 11 to iOS 10):
• SMS and iMessages are still stored in the sms.db
• Multiple tables in this database are required for parsing/joining the messages correctly
What is different (comparing iOS 11 to iOS 10):
• Additional tables appear to be used?
• The timestamp is different for iOS 11 – SOMETIMES!
Here is what I found (so far). The tools are hit or miss. Some tools are parsing the data, but storing the messages in a different location, others are parsing the message content, but not the timestamp… you catch my drift… What I recommend? Go straight to the database and take a look to make sure the tool(s) you rely on are not missing or misinterpreting the messages (wait… didn’t I just say that – YES, I did.)
The timestamp fields for the sms.db are all over the place now. What I am seeing is that the length of the Mac Absolute value varies between two formats and both of these formats can be stored in the same column. This is why the tools are struggling to parse these dates. Additionally, the tables in the sms.db differ in how they are storing the timestamp. So, if your tool is parsing it correctly, excellent – but still take a look at the tables.
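To make the two formats concrete, here is a minimal Python sketch of the conversion logic described above. The digit-length cutoff is my own heuristic for separating the nanosecond values from the second values, and zero is treated as “not set”:

```python
from datetime import datetime, timezone

MAC_EPOCH_OFFSET = 978307200  # seconds between 1970-01-01 and 2001-01-01 UTC

def convert_mac_absolute(value):
    """Convert a Mac Absolute timestamp from the sms.db to a UTC datetime.

    iOS 11 stores some columns in nanoseconds since 2001 (18-19 digits)
    and others in plain seconds since 2001 (9-10 digits), sometimes in
    the same column. Zero means the timestamp was never set.
    """
    if not value:
        return None
    if len(str(abs(int(value)))) >= 15:  # heuristic: nanosecond precision
        value = value / 1_000_000_000
    return datetime.fromtimestamp(value + MAC_EPOCH_OFFSET, tz=timezone.utc)
```

The same length check is what the SQL query later in this post does with LENGTH() inside a CASE expression.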
Here are some examples of what this mess looks like. The column below is from the chat table in the sms.db. Notice how some rows have the traditional Mac Absolute value (number of seconds since 01/01/2001), while others are 18-digit Mac Absolute values and some are 0 (sent messages).
Additionally, I saw some 19-digit values that were not padded with 00s at the end. The “conv start date” in the left column is from the message table in the sms.db, and this timestamp has not changed. As expected, your tools handle this one nicely. The right column is from the chat_message_join table, and this caused a little havoc as well due to the variety of timestamps in the column. Converting this wasn’t fun! Thanks Lee for your help here. You, my friend, ROCK!
When I first ran my SQL query, I noticed this one pesky date that wasn’t converting. This is because it was the timestamp highlighted above and I needed to beef up my query to handle this. If you see a date that looks like the one below, something is up and you aren’t asking for the data to be rendered correctly. The query below will handle this for you.
Still don’t believe me that this causes issues? Take a look at how it looked in one tool.
The dates and times are not parsed correctly. I found that the dates and times appear to be consistent when the tools are parsing the 9-digit Mac Absolute timestamps from specific tables. Otherwise, expect to have to do the conversion yourself. Here is where it was correct, but this wasn’t the case for all of my messages sent using iOS 11.
If you need a sanity check, I always like to use the Epoch Converter that I got for free from BlackBag to make sure I am not losing my mind when dealing with these timestamps. Below, you can see it was parsing it correctly (Cocoa/Webkit Date). Also, I love that it gives you both localtime and UTC.
This leads me to the good news: below is the query that will handle this for you. This query is a beast and “should” parse all SMS and iMessages from the sms.db REGARDLESS of the iOS version, but only the columns that I deemed interesting. (Note that I say should, because it has only been run across a few databases; report any issues back to me so they can be fixed.) Take this query and copy and paste it into your tool of choice. Here, I used DB Browser for SQLite because it’s free. I limited the output to the columns I care about the most, so make sure this query isn’t missing any columns that may be relevant to your investigation.
SELECT message.rowid, chat_message_join.chat_id, message.handle_id, message.text,
       message.service, message.account, chat.account_login,
       chat.chat_identifier AS "Other Party",
       datetime(message.date/1000000000 + 978307200,'unixepoch','localtime') AS "conv start date",
       CASE
         WHEN LENGTH(chat_message_join.message_date)=18
           THEN datetime(chat_message_join.message_date/1000000000 + 978307200,'unixepoch','localtime')
         WHEN LENGTH(chat_message_join.message_date)=9
           THEN datetime(chat_message_join.message_date + 978307200,'unixepoch','localtime')
         ELSE 'N/A'
       END AS "conversation start date",
       datetime(message.date_read + 978307200,'unixepoch','localtime') AS "date read",
       message.is_read AS "1=Incoming, 0=Outgoing",
       CASE
         WHEN LENGTH(chat.last_read_message_timestamp)=18
           THEN datetime(chat.last_read_message_timestamp/1000000000 + 978307200,'unixepoch','localtime')
         WHEN LENGTH(chat.last_read_message_timestamp)=9
           THEN datetime(chat.last_read_message_timestamp + 978307200,'unixepoch','localtime')
         ELSE 'N/A'
       END AS "last date read",
       attachment.filename, attachment.created_date, attachment.mime_type, attachment.total_bytes
FROM message
LEFT JOIN chat_message_join ON chat_message_join.message_id = message.ROWID
LEFT JOIN chat ON chat.ROWID = chat_message_join.chat_id
LEFT JOIN attachment ON attachment.ROWID = chat_message_join.chat_id
ORDER BY message.date_read DESC
Here is a snippet of what this beauty looks like. (Note: this screenshot was taken prior to me joining attachments – aka MMS).
I always stress that you cannot rely on the tools to be perfect. They are great and they get us to a certain point, but then you have to be ready to roll up your sleeves and dive in.
What’s next – applications, the image/video files that apparently aren’t parsing correctly, interesting databases and plists new to iOS 11 and the pesky maps. That one is still driving me crazy! Stay tuned for more iOS 11 blogs and an upcoming one on Android 7 and 8.
Thanks to Lee, Tony, Mike and Sarah for keeping me sane, sending reference material, testing stuff and helping me sort these timestamps out. Like parenting, sometimes forensicating “takes a village” too.
Many years ago, I started this series of blog posts documenting the internals of the EXT4 file system. One item I never got around to was documenting how directories were structured in EXT. Some recent research has caused me to dive back into this topic, and given me an excuse to add additional detail to this EXT4 series.
If you go back and read earlier posts in this series, you will note that the EXT inode does not store file names. Directories are the only place in traditional Unix file systems where file name information is kept. In EXT, and the classic Unix file systems it evolved from, directories are simply special files that associate file names with inode numbers.
Furthermore, in the simplest case, EXT directories are just sequential lists of file entries. The entries aren’t even sorted. For the most part, directory entries in EXT are simply added to the directory file in the order files are created in the directory.
Let’s create a small directory as an example:
$ cd /tmp
$ mkdir testing
$ cd testing
$ touch this is a simple directory
$ ls
a directory is simple this
When I list the directory with “ls”, the file names are displayed in alphabetical order. But when I look at things in my trusty hex editor, you can see the directory entries in their actual order:
To the right, I have highlighted the ASCII file names. As you can see, they appear in the directory file in the order I created the files with the “touch” command.
I have also used highlighting to show the fields in several directory entries. Every directory must start with the “.” and “..” entries. These links point to the directory itself (“.”) and the parent directory (“..”).
Each directory entry contains five fields:
Byte 0-3: Inode number
4-5: Total entry length
6 : File name length
7 : File type
8- : File name
Directory entries are variable length, and the second field tracks the total length of the entry. Entries must be aligned on a four-byte boundary. So in the case of the “.” entry, the total entry length is 12 bytes, even though the entry could fit into 9 bytes. Field three is the length of the file name, one byte in this case. The extra three bytes in the directory entry just contain nulls.
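Using the five fields above, walking a directory block takes only a few lines of Python (all fields are little-endian on disk; this is an illustrative sketch, not a full forensic parser):

```python
import struct

def parse_dirents(block):
    """Walk the classic linear directory entries in one EXT block.

    Each entry: inode (4 bytes), total entry length (2), name length (1),
    file type (1), then the name itself.
    """
    entries, offset = [], 0
    while offset + 8 <= len(block):
        inode, rec_len, name_len, ftype = struct.unpack_from("<IHBB", block, offset)
        if rec_len < 8:  # malformed entry; stop walking
            break
        if inode:        # inode 0 marks an unused entry
            name = block[offset + 8 : offset + 8 + name_len].decode("latin-1")
            entries.append((inode, ftype, name))
        offset += rec_len
    return entries
```

Because the walk follows rec_len rather than the name length, it naturally skips over the null padding and any slack inside each entry.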
In classic Unix file systems, the file name length field was two bytes. But since Linux doesn’t allow file names longer than 255 characters, the EXT developers decided to use only one byte for the file name length and to use the second byte to hold the file type. While the file type is also stored in the inode, having this information in the directory entry is more efficient for operations like “ls -F” where the file type is displayed along with the file name.
File type is a numeric field defined as follows:
1: Regular file
2: Directory
3: Character special device
4: Block special device
5: FIFO (named pipe)
6: Socket
7: Symbolic link
In the case of the “.” and “..” links, the file type is “2”, which is “directory”. This is the only case where Unix file systems allow hard links to directories.
Finally, note the entry length field of the final file entry in the directory. The entry size is 0x0FB4, or 4020 bytes. This consumes all remaining bytes to the end of the block. The last directory entry is always aligned to the end of the block. Directory entries may not cross block boundaries.
Now let’s observe what happens when I delete the file “simple” from our example directory:
The entry length for the file “a”, which had previously been 12 bytes, is now 28 bytes (0x1C), effectively consuming the entry for the deleted file “simple”. The file name length for the “a” file entry is still only one byte. The remainder of the entry is just “slack” space, but holds the old entry for the deleted file.
This is the standard behavior when files are deleted in classic Unix file systems. The entry before the deleted file simply grows to consume the “deleted” entry. But the entry for the deleted file is otherwise unchanged and can be carved for.
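That behavior suggests a simple carving approach: for each live entry, compute how much space its own name actually needs, and treat anything left before rec_len runs out as slack that may hold an intact deleted entry. A simplified sketch of the idea, not a hardened carver:

```python
import struct

MIN_ENTRY = 8  # inode + rec_len + name_len + file_type

def carve_slack(block):
    """Scan inside each live entry's slack for overwritten (deleted) entries.

    A deleted entry survives intact when its predecessor's rec_len was
    simply extended over it, so we look past each entry's real footprint.
    """
    deleted, offset = [], 0
    while offset + MIN_ENTRY <= len(block):
        inode, rec_len, name_len, ftype = struct.unpack_from("<IHBB", block, offset)
        if rec_len < MIN_ENTRY:
            break
        # space this entry really uses, rounded up to a 4-byte boundary
        used = (MIN_ENTRY + name_len + 3) & ~3
        inner = offset + used
        while inner + MIN_ENTRY <= offset + rec_len:
            d_inode, d_len, d_nlen, d_type = struct.unpack_from("<IHBB", block, inner)
            if d_inode and d_nlen and inner + MIN_ENTRY + d_nlen <= len(block):
                name = block[inner + 8 : inner + 8 + d_nlen].decode("latin-1")
                deleted.append((d_inode, name))
                inner += max((MIN_ENTRY + d_nlen + 3) & ~3, MIN_ENTRY)
            else:
                break
        offset += rec_len
    return deleted
```

As the next section shows, slack can also hold leftovers from brand-new entries that were never null-filled, so treat every hit as a lead to verify, not a conclusion.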
However, this unused “slack” space from deleted directory entries can be reused when new files are added to the directory. For example, here’s what happens when I add a file named “new” to our example directory:
The entry length for the “a” file is once again 12 bytes. Now there’s a new file entry for our “new” file which is 16 bytes long. You can see the last three bytes of the original “simple” file name after “new”. EXT apparently doesn’t bother with null filling the extra space in new directory entries.
One of the consistent criticisms of classic Unix file systems has been that as directories get large, performance suffers. If directories are nothing more than unsorted lists of files, searching for the entry you want means sequentially parsing a large number of directory entries. As the directory grows, the average search time increases linearly.
More modern file systems solve this issue by organizing directory entries in some sort of searchable data structure. For example, NTFS uses B-trees. Starting in EXT3, developers created a hashed tree system, dubbed “htree”, for organizing directory entries. This is now standard in EXT4 and can be seen when the directory grows larger than a single block.
Here’s the first block of my /usr/share/doc directory:
First you’ll notice that the “.” and “..” entries still appear at the beginning of the block. This is for backwards compatibility. Notice that the length of the “..” entry is 0x0FF4 or 4084 bytes, consuming the rest of the block. This is also a backwards compatibility feature, so that the htree data that follows is hidden from any old code that is trying to parse the directory as a simple sequence of file entries.
Things start getting really interesting after the first 24 bytes used by the “.” and “..” entries. The rest of the block is used by the dx_root structure that defines the root of the htree. Technically, the “.” and “..” entries are part of dx_root as well and the data structure consumes the entire first block, but the interesting fields start 24 bytes into the block:
Byte 0-23 : "." and ".." entries
24-27: Reserved (zero)
28 : Hashing algorithm used
29 : Size of dx_entry records (normally 8)
30 : Depth of tree
31 : Flags (unused, normally 0)
32-33: Max dx_entry records possible
34-35: Actual number of dx_entry records to follow
36-39: Relative block number for "zero hash"
rest : dx_entry records
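Here is a minimal Python decoder for the dx_root fields laid out above, assuming a little-endian 4 KB block. The “zero hash” record is folded into the entry list as a (0, block) pair, matching how the on-disk counts treat it:

```python
import struct

def parse_dx_root(block):
    """Decode the interesting dx_root fields, which start 24 bytes in
    (after the '.' and '..' entries)."""
    hash_version, info_len, depth, flags = struct.unpack_from("<4B", block, 28)
    limit, count, zero_block = struct.unpack_from("<HHI", block, 32)
    entries = [(0, zero_block)]  # the "zero hash" record counts as one dx_entry
    offset = 40
    for _ in range(count - 1):
        h, blk = struct.unpack_from("<II", block, offset)
        entries.append((h, blk))
        offset += 8
    return {"hash_version": hash_version, "depth": depth,
            "limit": limit, "entries": entries}
```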
After four null bytes at offset 24-27, a single byte specifies the hash algorithm used by the htree. The byte codes are documented here, but 0x01 seems to be the standard, which is a hashing algorithm based on MD4.
Next comes the size of the dx_entry records, which are used to index the various blocks in the htree. These records are always 8 bytes long and will be described further below.
Bytes 32-33 document the maximum number of dx_entry records that can be stuffed into this block after the initial fields of the dx_root structure. The dx_entry records start 40 bytes into the block, so you might assume that this max value is the block size of 4096 bytes, minus the 40 bytes of dx_root fields, divided by the 8 byte size of dx_entry records. That would get you a max value of 507. The actual value is 0x01FC or 508 because the “zero hash” entry in bytes 36-39 counts as an extra dx_entry record.
Given that a single dx_root block can index over 500 htree blocks, and that those blocks can contain hundreds of file name entries, it is rare for an htree to ever need more than a single level. So in practice, the “depth of tree” byte at offset 30 is always 0x00, indicating a flat tree. The specification does allow for a nested tree, but I’ve never seen one in a real world application.
Bytes 34-35 are the actual number of dx_entry records to follow, again counting the “zero hash” record in bytes 36-39 as one of the dx_entry records. Each dx_entry record is a four byte hash value followed by a four byte relative block offset from the beginning of the directory file. For clarity, let’s take a look at the first several dx_entry records from this example in tabular form:
The dx_entry records are a lookup table sorted by hash value. The initial “zero hash” entry means that all files whose names hash to values less than 0x0F3FFEA2 can be found in block number 1 of the directory file. File names with a hash value greater than or equal to 0x0F3FFEA2 but less than 0x1D8171F2 can be found in block 16 of the file, and so on. The dx_entry records are sorted by hash value so that the EXT file system code can do binary search to find the appropriate block offset more quickly.
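Since the table is sorted by hash, the lookup is an ordinary binary search; in Python, the bisect module does the work. The test values reuse the two hash boundaries from the example above, with a hypothetical block for the last range:

```python
from bisect import bisect_right

def block_for_hash(dx_entries, name_hash):
    """Find the directory block whose hash range covers name_hash.

    dx_entries is the sorted (hash, block) table, beginning with the
    (0, block) "zero hash" record.
    """
    hashes = [h for h, _ in dx_entries]
    idx = bisect_right(hashes, name_hash) - 1
    return dx_entries[idx][1]
```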
After the last dx_entry record, the rest of the block is slack space. Typically, this slack space contains directory entries from when the directory was small enough to fit into a single block. For example, right after the end of the last dx_entry record, you can see most of an original entry for a subdirectory (file type 0x02) named “alsa-utils”. Then there’s an entry for another subdirectory named “anacron” at inode 0x000C14D7 (791767). Typically, you will also find live entries for these directories in whatever htree block they got hashed into. But it’s possible that these directories were later deleted, and that these entries in the directory slack may be the only record of their existence.
The rest of the directory file is “leaf blocks” of the htree. These blocks are full of normal directory entries and are simply read sequentially. The htree format was specifically designed for backwards compatibility, so that older code could still do a normal sequential search through the directory entries.
If you’re thinking about carving for deleted directory files, be aware that directory files larger than one block are usually fragmented. The EXT block allocation algorithm prefers to put files into the same block group with their parent directory. By the time a directory grows larger than a single block, the nearby blocks have all been consumed.
Hal Pomeranz is an independent Digital Forensic Analyst and Expert Witness. He thinks that any day spent looking at a hex editor is a good day.
The preparation of a malware analysis environment can often be a lengthy and repetitive process. I am not referring to setting up a virtual machine which contains all of your tools, but rather recognising that each sample you analyse may have very specific environmental requirements before it is willing to execute fully. For example, it may require a certain number of files to be present in the My Documents directory, or may check for a specific registry key.
If we consider how many analysts around the world were looking at the same major threats in 2016, it seems rather silly for each of them to have to manually set up their environments as required by the sample. One solution may be sharing VMs, but this comes with issues such as operating system licensing and the sheer size of the package to be exchanged. This got me thinking: there has to be an easier way.
Accordingly, I have been developing an open source tool, ‘rapid_env’ (as in ‘rapid environment’), which allows for instant, template-based provisioning of a Windows environment. This can include elements such as files, registry keys, processes and mutexes, all of which can alter the way that many current threats behave.
There is a huge range of features now controlled / enabled by current generation automotive infotainment and telematics systems (Figure 1 – Source), including but not limited to:
Satellite (GPS) navigation
Bluetooth connectivity (the vehicle has its own phone number that SMS messages can be sent to and some systems will even read your SMS text messages to you)
Audio player – on CD, MP3, USB or Bluetooth
Internet access (Hotspot) – enables web browsing for multiple passengers via an in-built Wi-Fi connection, and can also provide real-time traffic updates for GPS navigation systems
Satellite TV tuners – for passengers or for everyone as long as the car is parked
Cameras – an array of cameras literally showing a bird’s-eye view of the car, making maneuvering in tight spaces even easier than ever before
Screen mirroring – wirelessly connect mobile devices to the automobile and mirror its user interface on the car’s larger touchscreen
As automotive infotainment and telematics systems evolve and become more powerful, the value of the historical data they contain from an evidence perspective grows as well.
Automotive Infotainment and Telematics Systems Are Not Crash Data Recorders
It is important to understand that automotive infotainment and telematics systems are not the same as crash data recorders (CDR) or event data recorders (EDR). In a CDR, safety sensor data such as brake position, speed, steering wheel position and airbag deployment is recorded at high frequency, but only for a matter of seconds leading up to a crash. In an automotive infotainment and telematics system, data is collected primarily from non-safety-related components (e.g. speed and coordinates from GPS, at a lower frequency but for a substantially longer time period). Hence, while CDR systems can determine a point of impact, an automotive infotainment and telematics system can perhaps show the longer-term driving habits of the vehicle’s driver.
Abundant Information but Difficult to Get To
While there is an abundance of available information, vendors of automotive infotainment and telematics systems have not made it easy to acquire. The forensic product vendor Berla (https://berla.co/) uses various methods to extract the data. To get to the data, one must use Berla’s iVe kit, which is composed of iVe software and hardware components for accessing numerous systems from various automakers (e.g. Ford, GM, FCA, BMW, Toyota, and Volkswagen, to name a few). For some systems it is as simple as plugging a USB or on-board diagnostics (OBD-II) cable from the iVe kit into a system running the iVe desktop application and walking through the on-screen steps for performing an acquisition. For some other supported systems, an iVe device interface board (DIB) from the kit is attached to the infotainment/telematics module’s PCB as outlined in the in-app instructions. The DIB is then connected to a computer running the iVe application, as well as the kit’s power supply (for certain modules). Depending on the particular type of system being acquired, iVe will offer the option for a physical image, a logical image, or both. For certain modules, one must also remove the protective solder mask from certain pads on the module’s PCB prior to connecting the DIB, though a scratch pen is included in the iVe kit, and instructions with photos showing the specific pads to scratch are included in the application.
It’s the Wild Wild West All Over Again
It is also important to note that a CDR has a definitive government requirement (CFR-2011-title49-vol6-part563) that defines not only what data is to be stored but also the format in which that data is stored. In contrast, infotainment and telematics system vendors are all over the map regarding what data is stored and how and where it is stored. Furthermore, specifically what data is stored can vary from one vehicle model to another, even when the same system appears present in two different vehicles. This requires the forensic tool developer to have a deep understanding of the data structure for each vendor’s product as well as for each car model in order to be effective. It reminds the author of the early days of mobile device forensics.
The following is a broad example of available data types for iVe-supported systems. Any given manufacturer’s system will have a select subset based on the features present for that particular system. The data stored may also vary based on the vehicle’s use, actions of the occupant(s), which features were used, etc. The types of data stored can also change when a given manufacturer updates the firmware of a system.
To see if a particular vehicle is supported, and what information may be available on the system, use the iVe supported vehicle lookup on Berla’s website. The lookup is also included in the iVe application itself.
Vehicle / System Information
Original VIN
Installed Application Data
Wireless Access Points
Tracklogs and Trackpoints
Active and Inactive Routes
Access Point Information
GPS Time Syncs
Oh My! Guess What I Found on eBay?
An eBay seller was parting out a wrecked 2015 Silverado pickup truck (Figure 2) including its infotainment system, an NG 2.0 HMI module (Figure 3, 4, 5).
SMSC OS81092AM MOST Bus Controller – 50 Mbps, Automotive
Texas Instruments DS90UR905QSQ Serializer – FPD-Link II, 24-Bit Color, Up to 65MHz, Automotive
Let’s Acquire Some Data
Preparation for acquisition (Figure 6) involves scratching insulating material away from specific PCB pads, as specifically outlined in iVe’s instructions, to permit connectivity with the PC board traces. The fiberglass scratch pen has strands that tend to come apart during the removal process, so gloves and safety glasses are highly recommended. The iVe DIB is then connected to the PCB. Proper alignment of the DIB pins on the PCB is critical.
The PCB is powered with the variable power supply (Figure 7) that is included in the iVe kit. It is important to ensure the voltage is adjusted to 12V prior to connecting the leads to the PCB power connector.
The iVe application includes an acquisition wizard to walk the user through each step for setting up the acquisition.
The iVe DIB is connected to the computer running iVe, and power is applied. After successfully testing the hardware connections by clicking the ‘Detect’ and ‘Test’ buttons (Figure 8) in the software, the acquisition can be started. For the HMI module, iVe allows for a logical image to be acquired.
Once extraction has completed, analysis can be performed, and reports can be generated. iVe’s data export functionality supports .csv, tab-delimited, and .kml for GPS data, and reports can be exported in HTML or PDF format.
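Converting between those export formats is straightforward to script when a different shape of the GPS data is needed. The sketch below turns a trackpoint CSV into minimal KML using only the Python standard library; the column names are assumptions for illustration, since the exact layout of an iVe export may differ.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical trackpoint export; real iVe CSV column names may differ.
CSV_DATA = """timestamp,latitude,longitude
2015-06-01T12:00:00Z,39.10,-84.51
2015-06-01T12:00:05Z,39.11,-84.52
"""

def trackpoints_to_kml(csv_text):
    ns = "http://www.opengis.net/kml/2.2"
    kml = ET.Element("{%s}kml" % ns)
    doc = ET.SubElement(kml, "{%s}Document" % ns)
    for row in csv.DictReader(io.StringIO(csv_text)):
        pm = ET.SubElement(doc, "{%s}Placemark" % ns)
        ET.SubElement(pm, "{%s}name" % ns).text = row["timestamp"]
        point = ET.SubElement(pm, "{%s}Point" % ns)
        # KML coordinates are written in longitude,latitude order
        ET.SubElement(point, "{%s}coordinates" % ns).text = (
            "%s,%s" % (row["longitude"], row["latitude"]))
    return ET.tostring(kml, encoding="unicode")

print(trackpoints_to_kml(CSV_DATA))
```

The resulting file can be dropped into Google Earth or any KML viewer to plot the vehicle's recorded movements.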
Below is some of the data collected by iVe for the HMI device in this test.
Attached Devices (Figure 9)
SMS Messages (Figure 10)
Call Logs (Figure 11)
Contacts (Figure 12)
Device Events (Figure 13)
Voice Recordings (Figure 14)
Carved Files (Figure 15)
Music (Figure 16)
Summary of HMI Device
No crash data but good data to establish habits and patterns of the driver
Examples of available historical data included
Some GPS Information
Media (i.e. Music)
More Can Possibly Be Parsed from Recovered DB Files
Another Visit to eBay
Having already imaged an NG HMI, this time I was looking for an OnStar Gen 9 device to analyze (Figure 17).
SMSC OS81092AM MOST Bus Controller – 50 Mbps, Automotive
Texas Instruments DS90UR905QSQ Serializer – FPD-Link II, 24-Bit Color, Up to 65MHz, Automotive
Let’s Acquire Some Data
As with the previous acquisition, the iVe DIB is attached to the PCB and the computer running iVe. The variable power supply is tested to ensure it is set at 12V before connecting it to the PCB power connector. The step-by-step acquisition wizard in the iVe software is followed to begin the data extraction (Figure 18). iVe allows for a physical extraction on the OnStar Gen 9.
Below is some of the data collected by iVe for the OnStar Gen 9 device.
In this webinar, Eric covered several tools that can be used to show evidence of execution as well as document creation and opening. He also provided an overview of bstrings and Timeline Explorer and provided demonstrations of how those tools can be used to add value to investigations. Here is a webcast summary:
Timeline Explorer allowed us to load one or more CSV or Excel files into a common interface and apply advanced sorting, filtering, and conditional formatting rules to our data.
Several useful shortcuts include:
CTRL-t: Tag or untag selected rows
CTRL-d: Bring up the Details window for super timelines
CTRL-c: Copy the selected cells (with headers) to the clipboard. Hold SHIFT to exclude the column header
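Timeline Explorer's column filtering can also be approximated in a few lines of Python when scripted triage is needed. The timeline CSV below is a made-up example, not output from any particular tool.

```python
import csv
import io

# Invented mini-timeline for illustration.
TIMELINE = """Timestamp,Source,Message
2016-10-01 09:12:03,Prefetch,CMD.EXE executed
2016-10-01 09:12:07,Registry,Run key modified
2016-10-02 14:30:00,Prefetch,NOTEPAD.EXE executed
"""

def filter_rows(csv_text, column, needle):
    """Case-insensitive substring filter, like typing in a column filter box."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if needle.lower() in r[column].lower()]

hits = filter_rows(TIMELINE, "Source", "prefetch")
print(len(hits))  # 2
```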
Evidence of execution programs
AppCompatCacheParser, AmcacheParser, and PECmd parse forensic artifacts related to evidence of execution.
AppCompatCacheParser extracts shimcache data from each ControlSet found in the SYSTEM Registry hive and exports them to CSV format.
AmcacheParser extracts file and program information from the Amcache.hve hive to CSV format.
PECmd processes Windows prefetch files and extracts information such as the total number of times a program was run and up to the last 8 times a program was executed. Prefetch files also track the files and directories a program referenced when it was run.
Lnk file internals
We started exploring lnk files by looking at the header and unpacking what each piece of the header meant and how to process it.
From here we looked at each of the structures present based on the data flags section of the header, including the Target Id lists. The raw target Id lists looked like this:
And once we processed and decoded each one, we ended up with this:
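As a rough illustration of the parsing involved, the sketch below builds a synthetic shell link header plus a target Id list and walks the item sizes. Real lnk files carry far more structure than this (per Microsoft's [MS-SHLLINK] format), and the two "shell items" here are fake placeholders, not real shell item data.

```python
import struct

# Build a minimal synthetic ShellLinkHeader (76 bytes) followed by a
# LinkTargetIDList containing two fake shell items.
LNK_CLSID = bytes.fromhex("0114020000000000c000000000000046")
HAS_LINK_TARGET_ID_LIST = 0x00000001

header = struct.pack("<I16sIIQQQIIIH10s",
                     0x4C, LNK_CLSID, HAS_LINK_TARGET_ID_LIST,
                     0,                # file attributes
                     0, 0, 0,         # creation/access/write FILETIMEs
                     0, 0, 1, 0,      # file size, icon index, show cmd, hotkey
                     b"\x00" * 10)    # reserved

item1 = b"\x05\x00abc"                # 2-byte size (includes itself) + data
item2 = b"\x06\x00defg"
idlist = item1 + item2 + b"\x00\x00"  # terminal null entry
data = header + struct.pack("<H", len(idlist)) + idlist

def parse_lnk(buf):
    size, clsid, flags = struct.unpack_from("<I16sI", buf, 0)
    assert size == 0x4C and clsid == LNK_CLSID, "not a shell link"
    items = []
    if flags & HAS_LINK_TARGET_ID_LIST:
        pos = 0x4C + 2                        # skip the IDListSize field
        while True:
            (item_size,) = struct.unpack_from("<H", buf, pos)
            if item_size == 0:                # terminator reached
                break
            items.append(buf[pos + 2:pos + item_size])
            pos += item_size
    return flags, items

flags, items = parse_lnk(data)
print(flags, items)  # 1 [b'abc', b'defg']
```

Decoding what each item *means* (drive letters, folder GUIDs, file entries) is where tools like LECmd do the heavy lifting; the size-prefixed walk above is just the outer layer.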
Document creation and opening programs
Now that we had a decent understanding of the internals of lnk files, we took a look at several tools to extract data from these valuable forensic artifacts.
LECmd and JLECmd process lnk files and jump lists and display information related to the opened document, such as the target document’s created, modified, and last accessed timestamps, the volume serial number and type of drive, target Id lists, and more.
LECmd fully supports decoding all available structures, including embedded shell items. It also adds functionality such as calculating the absolute path of the target file based on the shell items in the target Id list. Finally, LECmd resolves MAC addresses to their vendors based on an internal lookup table included with LECmd.
JLECmd provides the same data extraction capabilities as LECmd, but in the context of lnk files being wrapped in another data structure.
In the case of custom destinations jump lists, this wrapping structure was merely a file that contained one or more concatenated lnk files. Automatic destinations jump lists used an OLE CF container to track embedded lnk files.
JLECmd allows for dumping of all embedded lnk files which in turn allows for those lnk files to be analyzed with any lnk parsing tool.
Finally, we took a look at bstrings and saw many examples of how to extract email addresses, URLs, UNC paths, and more from a given file using built in regular expressions. We also discussed how to extract strings from any code page and how to limit the amount of data returned by bstrings.
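A stripped-down version of that idea, sweeping a binary blob with type-specific regular expressions, looks like the sketch below. The patterns are rough approximations written for this example, not bstrings' actual built-in expressions, which are considerably more thorough.

```python
import re

# Rough approximations of the kinds of patterns bstrings ships with.
PATTERNS = {
    "email": re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
    "url": re.compile(rb"https?://[^\s\"']+"),
    "unc": re.compile(rb"\\\\\w+\\[\w$\\]+"),
}

def extract(blob, kind):
    """Return decoded matches of the named pattern found in raw bytes."""
    return [m.decode(errors="replace") for m in PATTERNS[kind].findall(blob)]

blob = (b"\x00\x01visit http://example.com/x \xffmail bob@corp.local\x00"
        b"\\\\srv\\share")
print(extract(blob, "url"))    # ['http://example.com/x']
print(extract(blob, "email"))  # ['bob@corp.local']
```

Running typed extractions like this directly against raw bytes is what makes the approach useful on unallocated space, memory dumps, and page files, where no file system context exists.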
I hope you enjoyed the webinar and get much use out of the tools in your investigations.
Thank you again for attending! Feel free to reach out via twitter for feedback or questions
Article originally posted in forensicfocus.com
Author: Alissa Torres
It’s October, haunting season. However, in the forensics world, the hunting of evil never ends. And with Windows 10 expected to be the new normal, digital forensics and incident response (DFIR) professionals who lack the necessary (memory) hunting skills will pay the price.
Investigators who do not look at volatile memory are leaving evidence at the crime scene. RAM content holds evidence of user actions, as well as evil processes and furtive behaviors implemented by malicious code. It is this evidence that often proves to be the smoking gun that unravels the story of what happened on a system.
Although Microsoft is not expected to reach its Windows 10 rollout goal of one billion devices in the next two years, its glossiest OS to date currently makes up 22% of desktop systems, according to netmarketshare.com. By now, as a forensic examiner, you have either encountered a Windows 10 system as the subject of an investigation or will in the near future. Significant changes introduced with Windows 10 (and actually with each subsequent update) have required some “re-education” to learn what the “new normal” is.
Let’s jump in and check out the differences that Windows 10 has brought to the world of forensics by examining some key changes in the process list. In performing memory analysis, an investigator must understand the normal parent-child hierarchical relationships of native Windows processes. This is the essence of “know normal, find evil” and allows for effective and efficient analysis. Most of you have used the Edge browser, which was released with Windows 10 in the summer of 2015. Whereas Internet Explorer is typically launched by explorer.exe (run by default as the user’s initial process), Edge is spawned by the Runtime Broker process, which has a parent process of svchost (a system process). Edge runs as a Universal Windows Platform (UWP) application, one of the many Windows apps built to run on multiple types of devices. Runtime Broker manages permissions for Windows apps. This hierarchical process relationship deviates from one of the traditional analysis techniques we have relied on in past versions of Windows: system processes will have a parent/grandparent of the SYSTEM process, and normal user processes, like browsers, will have parent lineage to explorer.exe. The screenshot below shows the hierarchical structures of a Win10 RTM system (Build 10240) using the Process Hacker tool.
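This "know normal, find evil" lineage check lends itself to automation. The toy process listing below is invented for illustration; in a real case the (pid, ppid, name) tuples would come from a memory image's process list output.

```python
# Toy process listing: (pid, ppid, name). Invented for illustration.
PROCS = [
    (4, 0, "System"),
    (400, 4, "smss.exe"),
    (700, 400, "csrss.exe"),
    (1200, 700, "explorer.exe"),
    (2100, 1200, "iexplore.exe"),
    (900, 4, "svchost.exe"),
    (1500, 900, "RuntimeBroker.exe"),
    (2300, 1500, "MicrosoftEdge.exe"),
    (6666, 2100, "cmd.exe"),   # a shell spawned by a browser: suspicious
]

table = {pid: (ppid, name) for pid, ppid, name in PROCS}

def ancestors(pid, table):
    """Walk the parent chain, yielding process names root-ward."""
    while pid in table:
        ppid, name = table[pid]
        yield name
        pid = ppid

# Internet Explorer shows the classic explorer.exe lineage; Edge breaks
# that rule by design, descending from RuntimeBroker/svchost instead.
for pid, ppid, name in PROCS:
    line = list(ancestors(pid, table))
    if name == "cmd.exe" and "iexplore.exe" in line[1:]:
        print("suspicious: cmd.exe descends from a browser (pid %d)" % pid)
```

The same walk flags Edge's "abnormal" lineage as expected behavior once you know the new normal, which is exactly the point of re-baselining on Windows 10.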
Figure 1. Typical Hierarchy of Internet Explorer Process
Figure 2. Hierarchical Structure of Microsoft Edge and SearchUI Processes
Other new additions to the Windows process list are SearchUI.exe, the Search and Cortana application, and ShellExperienceHost.exe, the Start menu and Desktop UI handler. As Windows apps, they are both spawned from the same Runtime Broker process as Edge. In the screenshot above, the SearchUI and ShellExperienceHost processes are in gray, indicative of suspended processes. Only one Windows app is in the foreground at a time; those that are out of focus are suspended and swapped out, with process data being compressed and written to swapfile.sys in the file system.
Prepare for Internet connections to automatically be spawned by some of these new Win10 processes. OneDrive (formerly known as SkyDrive) has a connection to port 80 outbound and SearchUI (Cortana) creates outbound network connections as well when the user accesses the Start Menu. An example of network activity from the SearchUI process is shown below.
Figure 3. SearchUI.exe Network Connections
The memory data compression behavior first seen in Windows apps on Windows 8 has been implemented on a wider scale in Windows 10. Now when the memory manager detects “memory pressure”, meaning there is limited availability for data to be written to physical memory, data is compressed and written to the page file. Why is this relevant to the forensic examiner? Analysis of page file data can yield fruit, uncovering trace artifacts that indicate malware at one point resided on the system. Remember that the contents of the page file were once in physical memory. This data, though highly fragmented, is great for string searches and yara signature scans. With the implementation of Windows 10 memory compression, a new obstacle exists for such analysis.
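As a trivial illustration of why fragmented page data is still worth sweeping, even a naive substring search surfaces indicators; in practice you would run strings, bstrings, or yara over pagefile.sys rather than roll your own. The indicators and blob below are invented for the example.

```python
# Invented indicators-of-compromise for the example.
IOCS = [b"evil.example.com", b"mimikatz"]

def sweep(blob, signatures):
    """Return (offset, signature) hits for each occurrence in the blob."""
    hits = []
    for sig in signatures:
        start = 0
        while (idx := blob.find(sig, start)) != -1:
            hits.append((idx, sig.decode()))
            start = idx + 1
    return sorted(hits)

# A fake pagefile fragment: padding, a tool name, more padding, a URL.
page = b"\x00" * 64 + b"mimikatz" + b"\x90" * 32 + b"GET http://evil.example.com/a"
print(sweep(page, IOCS))  # [(64, 'mimikatz'), (115, 'evil.example.com')]
```

Once Windows 10 compresses page data before writing it out, a plain sweep like this can miss the artifact entirely, which is the new obstacle the paragraph describes.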
If you have done investigations involving nefarious command-line activity, it is useful to know that the cmd.exe process now spawns its own conhost.exe process as of Windows 8. This is notable because in previous Windows versions, conhost was spawned by the csrss.exe process. I am always leery of a command shell running on an endpoint, particularly one to which a web browser has a handle.
It is often difficult to discern what version of Windows 10 your target system was running at the time memory was acquired. Two significant updates have been pushed since Windows 10’s initial release: Threshold 2 in November 2015 and the Anniversary Update in the summer of 2016. Shown below is imageinfo plugin output from the Rekall Memory Forensic Framework (1.5.3 Furka) detailing the build version. With so many different features added between Windows versions, as well as significant changes rolled out in updates, having a tool that uses the publicly available Windows symbols, like Rekall, is key. When profiles have to be created in analysis tools to support new versions of Windows, there is lag time. Rekall automatically detects the Windows version and uses the hosted profile from its repository by default.
Xbox runs on Windows 10 now, and you may be among those celebrating that you can now stream console games to your computer. But how does this affect our forensic findings? Expect to see Xbox gaming services present even if they are not being used. Since malware commonly instantiates new services or hijacks existing ones as a method of persistence, again, it is good to know what normal looks like.
Hopefully a recap on how things have changed in recent versions of Windows will speed your analysis as you work to unravel the story of what evil happened on a system. Happy hunting!
The SANS FOR526: Memory Forensics In-Depth course provides you with the advanced skills you need to understand the newest Windows OS changes and find the evidence that might otherwise be left at the crime scene. Learn these critical skills and master the advanced investigative methods to find evidence in volatile memory with course author Alissa Torres at SANS Security East 2017.
The technical basics of the case are that the FBI is trying to compel Apple Inc. to help create a new capability, installed on the suspect’s iPhone, that would provide the following degraded security mechanisms:
Allow the FBI to submit passcode “electronically via the physical device port”
Will not wipe underlying data after 10 incorrect passcode attempts
Will not cause any delays after incorrect passcode attempts
Here is what we know about the device in question so far. From court documentation we know it is an Apple iPhone 5C, model A1532, running iOS 9, with serial number FFMNQ3MTG2DJ and IMEI 358820052301412. Using the serial number and IMEI, we can garner some additional information that’s crucial to answering the technical questions by using the free website imei.info and other resources. This particular model of iPhone contains an A6 chip. The provider is Verizon and the data capacity could be either 16 or 32 GB. The device is locked with a user-generated numeric passcode.
Whether or not we’re able to break the passcode on an iPhone depends a great deal on what version of iOS is installed on the device, as well as what version of hardware is involved. The combination of these two factors can mean the difference between being able to break the passcode without Apple’s assistance or having to rely on Apple for help.
For many years, mobile forensic analysts had it easy. With iOS devices using the A4 chip (iPhone 4, iPad) and older (running iOS 7 or older) we were able to make physical images handily. A physical image is the closest thing we get to a bit by bit forensic image of the entire device. These are sometimes referred to as “the good ole days” by those of us doing mobile forensics. A physical image was considered ideal, because it gave us access to all the data stored on the phone. As time has moved forward, it has gotten more and more difficult to pull complete physical images from mobile devices of all varieties. More often, we’re able to obtain forensic backups or logical extractions of portions of the data from mobile devices.
Starting with the Apple A5 chip (iPhone 4S, iPad 2) and any iOS version, we can still acquire a physical and/or logical image, but we will need the passcode as well as jailbreak software, or a device that’s been previously jailbroken by the user.
The San Bernardino iPhone central to this discussion contains the A6 chip (found in the iPhone 5, iPhone 5C) and based on court documentation from the case, some version of iOS 9 is installed on the device. For this particular device, we would still need the passcode and jailbreak software to get a physical dump or just the passcode to get a logical extraction or forensic backup of the file system.
These “it depends” scenarios get complicated, and sometimes a great reference document is needed to keep track of it all. Devon Ackerman, Special Agent/Forensic Examiner provided the great spreadsheet shown in the below images to us, and has given permission to share it.
Why can’t the FBI dump the iPhone like a hard drive?
iOS devices offer increased levels of encryption with each new release. The Apple A5 (and newer) chips offer the greatest amount of encryption and are the most difficult Apple devices so far to access when locked. These iOS devices are encrypted with 256-bit AES encryption at the hardware level. The encryption key is stored between the flash memory and system area on the iOS device; this area is referred to as “effaceable storage.” Hardware-level encryption is important because resulting extractions will only allow viewing of the file system and metadata, while leaving file contents unreadable due to encryption. Using the passcode to unlock the device prior to acquisition allows decryption of the data and renders it usable.
The iPhone 4S, which debuted the A5 chip, featured beefed-up security that decreased the number of commands available in the boot loader, and consequently made it more difficult to physically acquire the device. The A6, A7, and A8 chips have followed suit and have become increasingly secure with each new chip release. The iPhone 4 has a bootrom exploit (24kpwn) which is used by commercial mobile forensic tools and non-commercial methods to physically access the device and bypass a lock. In the A5, A6, and A7 chips this exploit has been patched. This patch occurred well before the release of the iPhone 6 and the A8 chip.
Additionally, most logical files stored on the user partition of the disk are protected by a per-file encryption called Data Protection. This means that if we were able to get a dump of the data from the phone via chip-off or JTAG techniques (either of which could be destructive to the device), the contents of the files would still be encrypted. The following screenshots show a physical image from a device with Data Protection. The file system is readable; however, the file contents are encrypted.
iPhone Physical Image – Note the file system structure and associated metadata:
As far as we are aware, it is not possible at this time to extract the iPhone’s hardware keys and brute-force the passcode in order to decrypt the file contents.
What about those auto-magical password cracking tools I see on CSI Cyber?
Just because you see actor-investigators, actor-analysts, and actor-technicians quickly and effortlessly solve all the nearly impossible tech problems presented to them in the span of an hour-long episode does not necessarily mean things ever work that way in real life. Remember, these shows are largely fiction with a few juicy tidbits of truth thrown in here and there for good measure. Brute-force password cracking on TV rarely fails, and is usually successful just barely in the nick of suspense-filled time.
Three real and popular commercially available password-cracking solutions for the iPhone are described below:
1. One of the auto-password-guessing tools is called the IP-BOX. The IP-BOX is a black box originating from the phone unlocking, hacking, and repair market that can be used to defeat simple 4-digit passcodes on iOS devices running versions of iOS through 8.1.2. Following testing and validation, this box was used by law enforcement (with valid legal authority) quite effectively to unlock iPhones for evidentiary purposes, until the exploits the device leveraged were patched by Apple. Again, from the court documentation available, the San Bernardino iPhone is likely running some version of iOS 9, thus the IP-BOX would not be effective.
2. Cellebrite’s Advanced Investigative Services (CAIS) offers an iPhone unlocking service using tools and processes that are proprietary and publicly unknown. We are aware of a number of great success stories related to the use of Cellebrite’s services to unlock iPhones in criminal cases. Use of this service requires an agency to provide the physical device to Cellebrite in hopes of getting the device and passcode in return. This service is advertised to work only on iPhone 4S (or newer) devices running iOS 8.0 to 8.4.1. The service also states that the “device wipe” functionality is bypassed. Cellebrite also has a user lock recovery tool for use on iOS and Android devices that relies on brute-force attacks. This tool is only effective with devices running versions of iOS 7.
3. Secure View’s svStrike is similar to the IP-BOX, but with added capabilities including the ability to brute force up to a 6-digit passcode (versus 4-digit). The documentation does not state whether the “device wipe” functionality can be bypassed.
You have the suspect’s thumb – could the FBI just use Touch ID on the Apple iPhone?
If only. Unfortunately, the iPhone we’re talking about here was an iPhone 5c, and Touch ID was introduced with the more fully featured iPhone 5s.
Ironic, isn’t it? One less security feature to attempt to exploit actually means this particular iPhone is more secure.
Assuming an alternate universe, where the suspect had a Touch ID capable model of iPhone and Touch ID was enabled by the suspect, there would have been a very limited time window during which his hands could have been used to attempt to unlock the phone. Let’s face it though – there was a whole lot going on in those first few hours of terror. Mass casualties and injured people, a large crime scene, stretched resources and buckets of adrenaline. Search warrants were being written and executed. The existence of this particular iPhone likely was not yet even known, let alone who it belonged to. To quarterback after the fact and say that the cops should have matched the dead terrorist’s thumb to this particular iPhone in the midst of the chaos is asking people who are already heroes to have been comic book superheroes.
Here are the technical limitations they would have faced: According to Apple’s security documentation to use Touch ID, users must set up their device to require entry of a passcode to unlock it. When a recognized enrolled fingerprint is scanned via Touch ID, the device unlocks without asking for the device passcode. The passcode can be used instead of a fingerprint, and the passcode is still required when the device is restarted, when the device has not been unlocked for more than 48 hours, when the device has received a remote lock command, after five unsuccessful attempts to match a fingerprint, or when setting up or enrolling new fingerprints with Touch ID.
In this case, because of the iPhone 5C hardware, Touch ID wasn’t even an option. Therefore we face the following passcode options:
1. A four-digit simple passcode. This is the most common passcode for pre-iOS 9 devices.
2. A six-digit PIN, introduced with iOS 9. More digits mean more difficulty and more time for a brute-force attack.
3. A complex passcode. Complex alphanumeric passcodes can also be used on iOS devices, making brute-force attempts even more difficult.
Most forensic tools can crack simple passcodes if the iOS version is supported. At best, complex passcodes can be bypassed if the iOS device can enter DFU mode (is not damaged) and the device is supported for physical acquisition, which is not the case with this iPhone.
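Some rough arithmetic shows why the digit count matters. The sketch below assumes roughly 80 ms per passcode guess, the per-attempt cost Apple has published for the passcode-derived key computation, and it ignores the escalating lockout delays, so real attempts would be strictly slower.

```python
# Back-of-the-envelope keyspace math. The ~80 ms per-guess figure is an
# assumption based on Apple's published key-derivation timing; lockout
# delays after failed attempts make these numbers lower bounds.
PER_GUESS_S = 0.08

def worst_case_hours(digits):
    """Hours to exhaust an all-numeric passcode space of the given length."""
    return (10 ** digits) * PER_GUESS_S / 3600

print(round(worst_case_hours(4), 2))  # 0.22  (about 13 minutes)
print(round(worst_case_hours(6), 2))  # 22.22 (nearly a full day)
```

A complex alphanumeric passcode blows the keyspace up further still, which is why option 3 above is the hardest case for any brute-force approach.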
Oh, so it’s just an Apple iPhone PIN code – why can’t the FBI just brute-force it now?
Brute force can be attempted and may work using the tools discussed above.
However, should the user have enabled the iPhone to wipe after 10 failed attempts, the data on the device could be wiped and rendered useless within minutes of running the brute-force attack. This is the risk we aim to avoid at all costs.
While iOS devices used to warn the user of an impending wipe of the device following too many failed passcode entry attempts, this is not the case anymore. Based upon data found in iCloud backups and information provided by the employer-owner of the phone, investigators in the San Bernardino case have reason to believe that the Erase Data function was turned on. A blind brute-forcing attempt to break the passcode in this case would not have been wise.
Could the Apple iPhone iCloud backups be of help to the FBI?
Yes. If the device has iCloud backups enabled, then the backup files can be recovered. In this case, the last iCloud backup file that was recovered and examined by the FBI was dated October 19, 2015, which is over a month before the attacks occurred. iCloud backup files contain user data as specified through the iPhone’s settings, including email, contacts, calendars, photos, and Apple Keychain data. iCloud backups are only performed when the device is connected to a Wi-Fi network and plugged in. As long as the settings are turned on and maintained, iCloud backups occur regularly when the device is connected to a Wi-Fi network. Any changes to the configuration of the iCloud backup settings require the user’s iCloud password to be correctly entered into the iPhone. If the password to the user’s iCloud account changes, the device user would need to enter the new password into the device before their data could be successfully backed up to iCloud.
The iCloud password was remotely reset by the owner of the device, San Bernardino County, hours after the attack, eliminating the possibility of an automatic backup if the device connected to a known Wi-Fi network. Also, a Mobile Device Management (MDM) solution should be considered for any government- or corporate-owned phones used by employees. MDM solutions help prevent the organization from being locked out of the phones it owns and could secure the data for eventual examination.
I heard Apple has helped out the FBI before, why is this different?
Previously, when presented with search warrants compelling its assistance, Apple provided law enforcement agencies with the extracted contents of older iPhone devices running iOS 7 or older. It did this by running a specialized RAM disk on the devices to bypass the passcode and obtain the data. As of iOS 8, Apple has stated that this process is no longer an option due to how the encryption process has changed. From Apple’s “Legal Process Guidelines”:
“For all devices running iOS 8.0 and later versions, Apple will not perform iOS data extractions as data extraction tools are no longer effective. The files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.“
I hear Apple holds the key, can you explain?
One ring key to rule them all! When an iOS device boots, it verifies the Boot ROM code against the Apple Root CA public key; this is the first step in the secure boot chain. This ensures that iOS devices will only boot with Apple-provided iOS software. If another version of the iOS software were created (either by Apple or other third parties), the device would still be required to verify it against the Apple Root CA public key to successfully load. Only Apple has a copy of its private key, and this is a critical reason why the government cannot simply write its own version of iOS.
This verification process is one of the critical security mechanisms for distributable software. If a private key is lost or stolen, hackers could write backdoors, espionage software, or other malicious capabilities into software that the device would automatically trust and load.
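The chain-of-trust logic can be illustrated with a toy model. Real secure boot verifies RSA signatures of each boot stage against the Apple Root CA public key; the HMAC below is only a stand-in for "only the key holder can produce a valid signature", and every name and value here is invented for the example.

```python
import hashlib
import hmac

# Grossly simplified stand-in for Apple's signing key; real iOS uses
# asymmetric RSA signatures, not a shared-secret HMAC.
APPLE_PRIVATE_KEY = b"held-only-by-apple"   # hypothetical

def sign(image):
    return hmac.new(APPLE_PRIVATE_KEY, image, hashlib.sha256).digest()

def boot(stages, signatures):
    """Refuse to run any boot stage whose signature does not verify."""
    for image, sig in zip(stages, signatures):
        if not hmac.compare_digest(sign(image), sig):
            return "boot halted: unsigned image"
    return "booted"

ios = [b"iBoot", b"kernelcache"]
good = [sign(s) for s in ios]
print(boot(ios, good))        # booted

# A modified OS image (say, one with the wipe logic removed) fails
# verification because no one outside Apple can produce its signature.
gov_os = [b"iBoot", b"kernelcache-no-wipe"]
print(boot(gov_os, good))     # boot halted: unsigned image
```

This is exactly why a third party cannot ship its own degraded iOS build to the device: without Apple's private key, no substitute image will ever verify.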
Ok, I get it. This is hard. Can they get the data from anywhere else? I heard the FBI has other cellphone evidence from the Apple iPhone. Surely this can this be used? (…and don’t call me Shirley!)
Some data related to use of the iPhone 5c likely exists in other locations, including the suspect’s iCloud account, the hard drive of any computers the iPhone was backed up to, the carrier, and potentially other cloud based accounts related to apps in use on the phone. Also, conversations have at least two sides, and anyone the suspect was communicating with could have relevant information stored within their phones as well.
According to the court filing, two other cell phones were examined in this case. Both of those phones were found “destroyed” and “discarded” in a dumpster behind the residence. These phones may contain information useful to the investigation, or which could potentially lead to the passcode for the iPhone 5c. Many times “destroyed” phones can be repaired to get to relevant evidence, passcodes are often more easily recovered from other mobile devices, and passcodes are often reused between devices. Time will tell.
If the FBI can get to the Apple iPhone data, what else can they find?
If the FBI can get to the data, a plethora of information may be accessible. The iPhone most likely contains recent conversations, call logs, Internet searches, chats, application usage, locational data and much more. Our smartphones are the most personal devices we own and contain information that provide deep insight into our lifestyle.
Merely getting to the data is sometimes just the first challenge. If we are able to successfully dump the phone after obtaining or bypassing a passcode, the data stored by the apps themselves is often encrypted. This scenario is increasingly common, as app developers work to make user data more secure and are implementing data-at-rest security features like encrypted databases, or encrypted content within databases. For example, Threema is a commonly used secure communications app used worldwide. The data associated with the application is all encrypted when data is dumped from the iPhone. If the user relied upon third-party applications like Threema to chat, it’s likely the FBI would be faced with an additional hurdle: encryption that Apple cannot help with.
Can Apple even do what the FBI is asking for?
From what we know about iPhones, the A6 chip, and iOS 9, the answer is: Probably. Apple may be the only ones who currently can. Which is not to say someone else outside Apple and the FBI won’t come up with a creative solution to this problem. Our bets aren’t on McAfee’s social engineering attack though. Dan Guido from Trail of Bits describes this in a bit more technical detail, and Jonathan Zdziarski also agrees that Apple would be technically able to comply with the order given the hardware and iOS version involved.
This is interesting, may I have more information?
We have found the articles below well written and useful for learning more about this topic.
Heather Mahalik is leading the forensic effort as a Principal Forensic Scientist and Team Lead for Oceans Edge, Inc. and has extensive experience in digital forensics which began in 2003. She is currently a senior instructor for the SANS Institute and is the course lead for FOR585: Advanced Smartphone Forensics.
Sarah Edwards is a senior digital forensic analyst who has worked with various federal law enforcement agencies. Her research and analytical interests include Mac forensics, mobile device forensics, digital profiling, and malware reverse engineering. Sarah is the lead author for FOR518: Mac Forensics.
** Above Joe Friday “Just the FAQs” – Photo by Brian Moran @brianjmoran