SANS Threat Hunting and Incident Response Summit 2018 Call for Speakers – Deadline 3/5

 


Summit Dates: September 6 & 7, 2018
Call for Presentations Closes on Monday, March 5, 2018 at 5 p.m. CST
Submit your presentation here

The Threat Hunting & Incident Response Summit will focus on specific hunting and incident response techniques and capabilities that can be used to identify, contain, and eliminate adversaries targeting your networks. SANS and our advisory partner Carbon Black are pleased to invite you to the Summit where you will have the opportunity to directly learn from and collaborate with incident response and detection experts who are uncovering and stopping the most recent, sophisticated, and dangerous attacks against organizations.

Chances are very high that hidden threats already exist inside your organization’s networks. Organizations can’t afford to assume that their security measures are impenetrable, no matter how thorough their security precautions might be. Prevention systems alone are insufficient to counter focused human adversaries who know how to get around most security and monitoring tools.

The key is to constantly look for attacks that get past security systems, and to catch intrusions in progress rather than after attackers have achieved their objectives and done more serious damage to the organization. For the incident responder, this process is known as “threat hunting.” Threat hunting uses known adversary behaviors to proactively examine the network and endpoints in order to identify new data breaches.

Call for Presentations Closes on Monday, March 5, 2018
Submit your presentation here

If you are interested in presenting, SANS and our advisory partner Carbon Black would be delighted to consider your threat hunting-focused proposal with use cases and communicable lessons. We are looking for presentations from both the intelligence producer and the intelligence consumer fields, with actionable takeaways for the audience.

The THIR Summit offers speakers opportunities for exposure and recognition as industry leaders. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

CFP submissions should detail how the proposed case study, tool, or technique can be applied by attendees to improve their security posture. All THIR Summit presentations will be 30 minutes, with 5 minutes for questions.

We are looking for presentations that focus on:
– Endpoint threat hunting
– Network threat hunting
– Hunt teams and how to organize them to be extremely effective
– Using known attacker methodologies to hunt/track adversaries
– Innovative threat hunting tools, tactics, and techniques
– Case studies on the application of threat hunting to security operations


11th Annual Digital Forensics and Incident Response Summit Call for Presentations – Deadline January 15, 2018

Call for Presentations – Now Open

The 11th Annual Digital Forensics and Incident Response Summit Call for Presentations is now open through 5 pm EST on Monday, January 15, 2018. If you are interested in presenting, we’d be delighted to consider your practitioner-based case studies with communicable lessons.

The DFIR Summit offers speakers the opportunity to present their latest tools, findings, and methodologies to their DFIR industry peers. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

We are looking for Digital Forensics and Incident Response Presentations that focus on:
1. DFIR and Media Exploitation case studies that solve a unique problem
2. New forensic or analysis tools and techniques
3. Discussions of new artifacts related to smartphones, Windows, and Mac platforms
4. Novel approaches that improve the status quo of the DFIR industry
5. Challenges to existing assumptions and methodologies that might change the industry
6. New, fast forensic techniques that can extract and analyze data rapidly
7. Enterprise-scale forensics that can analyze hundreds of systems at a time instead of one
8. Other topics offering actionable takeaways to your peers in the DFIR community

Benefits of Speaking

Promotion of your speaking session and company recognition via the DFIR Summit website and all printed materials
Visibility via the DFIR post-Summit on the SANS DFIR Website
Full conference badge to attend all Summit sessions
Speakers can bring one additional attendee to the Summit for free
Private speakers-only networking lunches
Speakers-only networking reception on the evening before the Summit
Continued presence and thought leadership in the community via the SANS DFIR YouTube channel

Speaking sessions will be professionally recorded and posted to the SANS DFIR YouTube Site.

All talks will be 35 minutes of content + 5 minutes of Q&A.

Deadline: Monday, 15 January, 5 p.m. EST

Click here to submit your proposed presentation.

Please direct questions to summit@sans.org.

Meltdown and Spectre – Enterprise Action Plan

Meltdown and Spectre – Enterprise Action Plan by SANS Senior Instructor Jake Williams
Blog originally posted January 4, 2018 by RenditionSec

Watch the webcast now

Meltdown and Spectre Vulnerabilities
Unless you’ve been living under a rock for the last 24 hours, you’ve heard about the Meltdown and Spectre vulnerabilities. I did a webcast with SANS about these vulnerabilities, how they work, and some thoughts on mitigation. I highly recommend that you watch the webcast and/or download the slides to understand more of the technical details. In this blog post, I would like to leave the technology behind and talk about action plans. Our goal is to keep this hyperbole-free and just talk about actionable steps to take moving forward. Action talks, hyperbole walks.

To that end, I introduce you to the “six step action plan for dealing with Meltdown and Spectre.”

  1. Step up your monitoring plan
  2. Reconsider cohabitation of data with different protection requirements
  3. Review your change management procedures
  4. Examine procurement and refresh intervals
  5. Evaluate the security of your hosted applications
  6. Have an executive communications plan

Step 1: Step up your monitoring plan

Meltdown allows attackers to elevate privileges on unpatched systems. This means that attackers who have a toehold in your network can elevate to a privileged user account on a system. From there, they could install backdoors and rootkits or take other anti-forensic measures. But at the end of the day, the vulnerability doesn’t itself exfiltrate data from your network, encrypt your data, delete your backups, or extort a ransom. Vulnerabilities like Meltdown will enable attackers, but they are only a means to an end. Solid monitoring practices will catch attackers whether they use an 0-day or a misconfiguration to compromise your systems. As a wise man once told me, “if you can’t find an attacker who exploits you with an 0-day, you need to be worried about more than 0-days.”

Simply put, monitor like your network is already compromised. Keeping attackers out is so 1990. Today, we assume compromise and architect our monitoring systems to detect badness. The #1 goal of any monitoring program in 2018 must be to minimize attacker dwell time in the network.

Step 2. Reconsider cohabitation of data with different protection requirements

Don’t assume OS controls are sufficient to separate data with different protection requirements. The Spectre paper introduces a number of other possible avenues for exploitation. The smart money says that at least some of those will be exploited eventually. Even if other exploitable CPU vulnerabilities are not discovered (unlikely since this was already an area of active research before these vulnerabilities), future OS privilege escalation vulnerabilities are a near certainty.

Reconsider your architecture and examine how effective your security is if an attacker (or insider) with unprivileged access can elevate to a privileged account. In particular, I worry about those systems that give a large number of semi-trusted insiders shell access to a system. Research hospitals are notorious for this. Other organizations with Linux mail servers have configured the servers so that everyone with an email address has a shell account. Obviously this is far from ideal, but when combined with a privilege escalation vulnerability the results can be catastrophic.

Take some time to determine if your security models collapse when a privilege escalation vulnerability becomes public. At Rendition, we’re still running into systems that we can exploit with DirtyCOW – Meltdown isn’t going away any time soon. While you’re thinking about your security model, ask how you would detect illicit use of a privileged account (see step #1).

Step 3. Review your change management procedures

Every time there’s a “big one” people worry about getting patches out. But this month Microsoft is patching several critical vulnerabilities. Some of these might be easier to exploit than Meltdown. When thinking about patches, measure your response time. We advise clients to keep three metrics/goals in mind:

– Normal Patch Tuesday
– Patch Tuesday with an “active exploit in the wild”
– “Out of cycle” patch

How you handle regular patches probably says more about your organization than how you handle out-of-cycle patches. But considering all three events (and having different response targets for each) is wise.

Because of performance impacts, Meltdown definitely should be patched in a test environment first. Antivirus software has reportedly caused problems (BSODs) with the Windows patches for Spectre and Meltdown as well. This is a clear example of a case where “throw caution to the wind and patch now” is not advisable. Think about your test cycles for patches and figure out how long is “long enough” to test (for both performance and stability) in your test environment before pushing patches to production.

Step 4. Examine procurement and refresh intervals

There is little doubt that future processors (yet to be released) will handle some functions more securely than today’s models. If you’re currently on a 5-year IT refresh cycle, should you compress that cycle? It’s probably too early to tell for hardware, but there are a number of older operating systems that will never receive patches. You definitely need to re-evaluate whether leaving those unpatchable systems in place is wise. Just because you performed a risk assessment in the past, you don’t get a pass on this. You have new information today that likely wasn’t available when you completed your last risk assessment. Make sure your choices still make sense in light of Meltdown and Spectre.

When budgeting for any IT refresh, don’t forget about the costs to secure that newly deployed hardware. Every time a server is rebuilt, an application is installed, etc. there is some error/misconfiguration rate (hopefully small, often very large). Some of these misconfigurations are critical in nature and can be remotely exploited. Ensure that you budget for security review and penetration testing of newly deployed/redeployed assets. Once the assets are in production, ensure that they are monitored to detect anything that configuration review may have missed (see step #1).

Step 5. Evaluate the security of your hosted applications

You can delegate control, but you can’t delegate responsibility. Particularly when it comes to hosted applications, cloud servers, and Platform as a Service (PaaS), ask some hard questions of your infrastructure providers. Sure your patching plan is awesome and went off without a hitch (see step #3). What about your PaaS provider? How about your IaaS (Infrastructure as a Service) provider? Ask your infrastructure provider:

1. Did they know about the embargoed vulnerability?
2. If so, what did they do to address the issue ahead of patches being available?
3. Have they patched now?
4. If not, when will they be fully patched?
5. What steps are they taking to look for active exploitation of Meltdown?*

*Question #5 is sort of a trick question, but see what they tell you…

Putting an application, server, or database in the cloud doesn’t make it “someone else’s problem.” It’s still your problem, it’s just off-prem. At Rendition, we’ve put out quite a few calls today to MSPs we work with. Some of them have been awesome – they’ve got an action plan to finish patching quickly and monitoring for all assets. One called us last night for advice, another called us this morning. Others responded with “Melt-what?” leading us to wonder what was going on there. Not all hosting providers are created equal. Even if you evaluated your hosting provider for security before you trusted them with your data, now is a great time to reassess your happiness with how they are handling your security. It is your security after all…

Step 6. Have an executive communications plan

When any new vulnerability of this magnitude is disclosed, you will inevitably field questions from management about the scope and the impact. That’s just a fact of life. You need to be ready to communicate with management. To that end, in the early hours of any of these events there’s a lot of misinformation out there. Other sources of information aren’t wrong, they’re just not on point. Diving into the register-specific implementation of a given attack won’t help explain the business impact to an executive.

Spend some time today and select some sources you’ll turn to for information the next time this happens (this won’t be the last time). I’m obviously biased towards SANS, but I wouldn’t be there if they didn’t do great work cutting through the FUD (fear uncertainty and doubt) when it matters. The webcast today was written by me, but reviewed by other (some less technical) experts to make sure that it was useful to a broad audience. My #1 goal was to deliver actionable information you could use to educate a wide range of audiences. I think I hit that mark. I’ve seen other information sources today that missed that mark completely. Some were overly technical, others were completely lacking in actionable information.

Once you evaluate your data sources for the next “big one,” walk through a couple of exercises with smaller vulnerabilities/issues to draft communications to executives and senior leadership. Don’t learn how to “communicate effectively” under the pressure of a “big one.” Your experience will likely be anything but “effective.”

Closing thoughts

Take some time today to consider your security requirements. It’s a new year and we certainly have new challenges to go with it. Even if you feel like you dodged this bullet, spend some time today thinking about how your organization will handle the next “big one.” I think we all know it’s not a matter of “if” but a matter of “when and how bad.”

Of course, if I don’t tell you to consider Rendition for your cyber security needs, my marketing guy is going to slap me silly tomorrow. So Dave, rest easy my friend. I’ve got it covered.

Leaving the Backdoor Open: Risk of Remotely Hosted Web Scripts

 

Many websites leverage externally hosted scripts to add a broad range of functionality, from user interaction tracking to reactive design. However, what you may not know is that by using them you are effectively handing over full control of your content to the other party, and could be putting your users at risk of having their interactions misdirected or intercepted.

How so? Well, when you reference a script file, either locally or remotely, you are effectively adding that code into your page when it’s loaded by the user. JavaScript, as an example, is a very powerful language that allows you to dynamically add or modify content on the fly. If the location and content of the script is controlled by someone else, the other party is able to modify the code as they see fit, whenever they wish. Let’s hope they never get compromised or go rogue!

[Image: mock warning dialog for an externally hosted script]

How many of you would click Yes if this warning came with a script? Not many I suspect!
 

Demonstration of concept

I’ve put together a small demo page on my website to show the issue (adamkramer.uk/js-test.html). It’s pretty straightforward: I’ve referenced a script hosted on an external site – in this case, my attempt to include some old-style blink tags on my page.

[Screenshot: the demo page’s external script reference]

However, rather than the intended script, the JavaScript on the other end was replaced with the following code, which redirects the user away from the intended site to, potentially, something malicious such as a phishing page.

[Screenshot: the replacement JavaScript that redirects the visitor]

It’s that easy. This concept is pretty basic, but what if the script was set up to conduct keylogging, or to manipulate input data before it’s submitted? Uh-oh!
 

Well that doesn’t sound good, any suggestions?

Firstly, use the debugging functionality of your favourite browser to identify whether your website leverages any externally hosted scripts. I’ve used Google Chrome against my website and can see that Google Analytics fits into that category.

[Screenshot: Chrome developer tools listing the externally hosted Google Analytics script]

Now consider whether you can bring the scripts in-house and store them on your server. If you can’t and must use externally hosted scripts, it may be worth keeping an eye on what is being served up. Perhaps even schedule a web content malware scan against the script files on a regular basis. Additionally, make sure the site hosting the script is still live and the domain hasn’t expired, making it available for sale!
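One lightweight way to “keep an eye on what is being served up” is to pin a hash of the known-good script and alert when the served content changes. Here is a minimal sketch in Python; the URL, hash-pinning approach, and function names are illustrative assumptions, not something from the original post:

```python
import hashlib
import urllib.request

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of raw script content."""
    return hashlib.sha256(data).hexdigest()

def script_changed(content: bytes, pinned_hash: str) -> bool:
    """True if the served script no longer matches the hash we pinned."""
    return sha256_hex(content) != pinned_hash

def check_remote_script(url: str, pinned_hash: str) -> bool:
    """Fetch the externally hosted script and compare it against the pin."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return script_changed(resp.read(), pinned_hash)

# Hypothetical usage, run on a schedule:
# if check_remote_script("https://example.com/blink.js", PINNED_HASH):
#     print("externally hosted script has changed - investigate!")
```

Note that legitimate providers do update their scripts, so a change is a prompt for review rather than proof of compromise.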

Happy holidays!

-Adam (Twitter: @CyberKramer)

Your Cyber Threat Intelligence Questions Answered

 

As we prepare for the sixth year of the SANS Cyber Threat Intelligence (CTI) Summit, advisory board members Rebekah Brown, Rick Holland, and Scott Roberts discuss some of the most frequently asked questions about threat intelligence. This blog will give you a bit of a preview of what you can expect during the CTI Summit on January 29th and 30th.

How Can New Analysts Get Started in CTI?

Scott Roberts: There are many paths to get into CTI that draw on a lot of different interests and backgrounds. The two major paths start with either computer network defense or intelligence. As a network defender, you start with a solid background in developing defenses and detections, determining what adversary attacks look like, and using basic tools. On the other side, starting from an intelligence analysis background, you start with an analytical framework and method for effectively analyzing data, along with an understanding of bias and analysis traps. In either case, you want to build your understanding of the other side.

This happened to me when I came into CTI. I had a background as a Security Operations Center analyst and in incident response, but I had minimal understanding of analytical methods, bias, or strategic thinking. What helped me was meeting my friend Danny Pickens. Danny came from the opposite background as a U.S. Marine Intelligence analyst. The result was that we traded our experiences:  he taught me about intelligence and I taught him about network defense. We ultimately both ended up more complete analysts as a result.

What Is the Best Size for a Threat Intelligence Team?

Rebekah Brown:  The best size for a threat intelligence team depends greatly on what exactly it is that the team will be doing. So before you ask for (or start filling) headcount, make sure you know the roles and responsibilities of the analysts. Rather than starting with a number – for example, saying “I need three people to do this work” – start by looking at the focus areas the responsibilities require. Do you plan on supporting high-level dissemination, such as briefing leadership, and on providing tactical support to incident responders? You may need two different people for those roles. Do strategic-level briefs occur once a week but require a lot of preparation? That may be a job for one person. Is incident response support ongoing, and is your incident response team going to be working 60 hours a week on several engagements? You may need more than one person for that role. Understanding the responsibilities and requirements will help you build the right size team with the right skills.

Why Does My CTI Team Need Developers, and How Can They Be Used?

Scott Roberts: In many ways, the start of CTI is a data-wrangling problem. When you look at the original U.S. intelligence cycle, the second and third steps (collection and processing) are data-centric steps that can be highly automated. The best CTI teams use automation to handle the grunt work so analysts can focus on the analytical work that’s much more difficult to automate. No team has enough people, and developers can act as force multipliers by making data collection and processing programmatic, automatic, and continuous, ultimately letting computers do things computers are good at and letting human analysts focus on the things humans are good at. Learning some Python or JavaScript lets a single analyst accomplish far more than he or she could do by hand.
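The force-multiplier point above can be made concrete with a few lines of Python. This sketch normalizes and de-duplicates a raw indicator feed, which is a minor chore by hand but trivial and continuous once automated; the defanging conventions and function names here are illustrative, not from any particular team’s tooling:

```python
def refang(indicator: str) -> str:
    """Undo common defanging so indicators can be matched against logs."""
    return (indicator.strip()
            .replace("hxxp", "http")
            .replace("[.]", ".")
            .replace("[:]", ":"))

def normalize_feed(raw_lines):
    """Lowercase, refang, and de-duplicate a raw indicator feed,
    skipping blank lines and comments."""
    seen, out = set(), []
    for line in raw_lines:
        ioc = refang(line.strip().lower())
        if ioc and not ioc.startswith("#") and ioc not in seen:
            seen.add(ioc)
            out.append(ioc)
    return out
```

Wired to a scheduler and a feed URL, a snippet like this runs every hour forever, which is exactly the kind of grunt work best taken off an analyst’s plate.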

How and Where Do I Get the Internal Data I Need to Do Analysis?

Rebekah Brown: Internal data for analysis comes in all shapes and sizes. Many people automatically think of things like firewall logs and packet captures, and those are definitely critical pieces of information. However, that isn’t all there is to analyze. If we are trying to understand the threats facing our organizations, we should look to past incidents (i.e., log data), but we should also look forward. What is the business planning? Are we entering new markets? Are we making any announcements or affiliations that could change the way we look at adversaries? What are the critical systems or data that would cause significant operational impact if they were targeted? All of this information should be included in the analysis of the threats facing you.  As far as how you obtain that information, well, you have to ask, although this often means figuring out the right people to ask and establishing relationships with them, and THEN asking. It takes time, but the investment in those relationships within your organization will ensure that you have the right information when you need it. Information-sharing isn’t just something we need to work on with external partners, it is something we need to foster internally as well.

What Is the Best Way to Communicate the Value of Threat Intelligence Up the Chain of Command?

Rick Holland: We struggle to implement effective operational metrics, so it isn’t surprising that I often get asked how to communicate the value of threat intelligence to leadership. This is extremely important if your team has been the beneficiary of a budget increase, as you will have to show the benefits of that investment and the trust afforded your team. You should start off by understanding how your organization makes money (or how it accomplishes its mission). I know that sounds like a Captain Obvious statement, but so many defenders don’t truly understand the people, processes, assets, infrastructure, and, most importantly, the business metrics that their company cares about. Are you in retail or financial services? How can you tie threat intelligence back to fraud? Are you in e-commerce? How can you tie threat intelligence back to website availability? You’ve probably heard me suggest you check out your company’s past annual reports and Form 10-Ks. They will provide helpful context for better understanding what matters to your business.

Join us at the Cyber Threat Intelligence Summit!

Rick Holland: This is the sixth year of the CTI Summit, and those of us on the advisory board are singularly focused on curating content that attendees can take back to their jobs right after the event and immediately implement into their programs. The content will be great, but if you have attended in the past, you know that the relationships developed during breaks, meals, and in the evenings will be the gifts that keep on giving. We all have challenges in our jobs, and establishing a network of peers who we can call on to collaborate is essential. We will have events and activities set up to help you build out those networks.

Automated Hunting of Software Update Supply Chain Attacks

 

Software that automatically updates itself presents an attack surface that can be leveraged en masse through the compromise of the vendor’s infrastructure. This was seen multiple times during 2017, with high-profile examples including NotPetya and CCleaner.

Most large organisations have built robust perimeter defences for incoming and outgoing traffic, but this threat vector is slightly different and far more difficult to detect. Update packages are often deployed in compressed, encrypted or proprietary formats and would not easily be subject to an antivirus scan or sandbox analysis during transit. This leaves us with a large number of trusted processes within our infrastructure that could turn on us at any time and download something evil, which could potentially be undetectable by endpoint antivirus software.

It would be almost impossible to detect all potential malicious code changes, as they could be as simple as changing a single assembly instruction from JNZ to JZ to allow unauthorised access or privilege escalation. However, this doesn’t preclude some additional, proportionate due diligence on the update packages being pulled down and installed.

 

Methodology for discovering inbound evil updates

1. Discover all the software across your estate that could be auto-updating
Let us consider for a moment how software that automatically updates looks on the endpoint. First, it needs to open an internet connection to a server so it can determine whether the version currently running is up to date or whether a newer version is available. This may occur when the process is first executed or at intervals (set dates or times, or even randomly). If the version is current, it will try again later, most likely connecting to the same server and repeating the version check. The traffic volume on each occasion is likely to be very small; after all, all it needs to ask is “what is the current version number?” and the reply need only be “1.0.32” or similar.

If we put these components together, we can build a search for a process, other than a browser (to reduce noise), that makes repetitive connections to the same server on multiple occasions. We can refine further by looking for low-volume transmissions. This data may contain false positives, such as licence-validation checks, but that doesn’t matter, as we will refine the results further in the next stages.
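As a rough sketch of this discovery search, the following Python groups connection records by (process, server) and keeps pairs showing many small, repeated connections. The event format, thresholds, and browser exclusion list are illustrative assumptions:

```python
from collections import defaultdict
from statistics import median

# Browser processes we exclude up front to reduce noise (illustrative list)
BROWSERS = ("chrome", "firefox", "iexplore", "safari")

def find_polling_candidates(events, min_connections=10, max_median_bytes=4096):
    """events: iterable of (process_name, dest_server, bytes_transferred).

    Returns (process, server) pairs showing many small, repeated
    connections to the same server - the shape of an update (or
    licence-validation) check.
    """
    by_pair = defaultdict(list)
    for process, dest, size in events:
        if process.lower().startswith(BROWSERS):
            continue
        by_pair[(process, dest)].append(size)
    return [pair for pair, sizes in by_pair.items()
            if len(sizes) >= min_connections and median(sizes) <= max_median_bytes]
```

In practice the events would come from proxy logs, NetFlow, or endpoint telemetry rather than an in-memory list.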
2. Monitor traffic ratios for evidence of updates being downloaded
Once we have a list of processes, devices and servers that displayed the behaviour from our discovery phase, we can monitor the traffic volume ratios for abnormal behaviour. Consider a process that had been polling the same server with 5KB uploads followed by 50KB downloads for the past month, when suddenly the download volume changes to 300MB. That is a clear outlier based on volume and upload/download ratio.
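This volume check can be sketched as a simple baseline comparison; the spike factor is an illustrative threshold, not a value from the post:

```python
from statistics import median

def looks_like_update(history, new_download, spike_factor=20):
    """history: past download sizes (bytes) for one (process, server) pair.

    Flags a download that dwarfs the established polling baseline, e.g. a
    month of ~50KB replies suddenly followed by a 300MB pull.
    """
    if not history:
        return False  # no baseline yet; nothing to compare against
    return new_download >= spike_factor * median(history)
```

A per-pair baseline like this tolerates normal variation between applications while still surfacing the dramatic shift an update download produces.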
3. Perform automated sandbox analysis on detected binaries post update
Now we’re looking for a suspicious update, not just any update, so we can send the installer from the endpoint to an internal sandbox for analysis. Alternatively, we could trigger the same update within a sandbox running our gold image. The sandbox performs a behavioural analysis, which we use to highlight any suspicious behaviour during or after the update process.
4. Alert CERT for further examination should there be evidence of anything suspicious
This can then feed the alerting system used by our SOC/CERT for manual review and prompt containment activity should the updates contain malware – at this point the number of machines that have pulled the update should be small, and we can prevent further infections through proxy blocks, etc.

 

Demonstration of concept

Here we will demonstrate the concept with a deep dive into the behaviour of the Notepad++ update process. We begin by identifying the polling between the process and the update server. We can see from Process Monitor below that the traffic volumes are low and the upload/download ratio is fairly steady at approximately 1:11 when there is no update to be downloaded.

[Screenshot: Process Monitor showing low-volume polling with no update available]

Now let’s have a look at what happens when there is an update available.

[Screenshot: Process Monitor output when an update is available]

We can see the difference when there is an update to be pulled: in this case the download volume is significantly higher than the upload, as we would expect – in fact, it’s over 673 times higher at 1:7756.

[Screenshot: traffic volumes showing the 1:7756 upload/download ratio]

In addition, following the network activity we can see that a process launch has been identified. This is the start of the update installation and provides us with details on the location of the update binary. At this point we can take the desired action to review the file; this may include a local scan or, better yet, uploading the binary to an internal sandbox for behavioural analysis – this would give us the added benefit of detecting a threat before antivirus signatures are available.

[Screenshot: process launch of the downloaded update binary]

 

Practicalities and further ideas

In a large network there is going to be a lot of noise, so you may want to select the 50 most widely used auto-updating processes across the estate and focus on them. This way you can be more selective about the alerts that go to your SOC.

Some processes may update on the fly using hot patching. In that case, better results would be obtained by creating a custom sandbox image with all of these processes installed: there would be no need to send or pull binaries, and you could monitor the entire update process, including all files dropped to disk and executed during the update.

Feedback and ideas are always welcome!

Happy hunting

-Adam (Twitter: @CyberKramer)

Updated Memory Forensics Cheat Sheet

Just in time for the holidays, we have a new update to the Memory Forensics Cheat Sheet! Plugins for the Volatility memory analysis project are organized into relevant analysis steps, helping the analyst walk through a typical memory investigation. We added new plugins like hollowfind and dumpregistry, updated plugin syntax, and now include help for those using the excellent winpmem and DumpIt acquisition tools. The cheat sheet includes nearly everything you need to spend a relaxing evening at home analyzing memory dumps. Enjoy! Memory Forensics Cheat Sheet

Acquiring a Memory Dump from Fleeting Malware

Introduction

The acquisition of process memory during behavioural analysis of malware can provide quick and detailed insight. Examples of where it can be really useful include packed malware, which may be in a more accessible state while running, and malware that receives live configuration updates from the internet and stores them in memory. Unfortunately, the execution of some samples can be transient, and the processes will be long gone before the analyst has a chance to fire up ProcDump. A while back, HBGary released a nifty tool called Flypaper, which prevented a process from closing down, allowing more time for its memory to be grabbed, but unfortunately the tool is now difficult to find and awkward to use. I’ve spent some time considering a suitable alternative that would work on the latest versions of Windows.

A little known feature…

During my research I found an article detailing a little-known Windows feature entitled ‘Monitoring Silent Process Exit’.

TL;DR – You can configure Windows to automatically generate a memory dump when a process with a specified name exits.

What this means for us is that even though the malware finishes running very quickly, we can obtain a full memory dump and extract what we need from it at our leisure.

This feature is designed as part of the Windows debugging portfolio, but we can use it as a tool in our belt. The easiest way to configure it is with a Microsoft tool named gflags.exe, which is easy to download and use. The screenshot below shows the configuration that I’ve had success with. You provide the name of the executable you’re interested in keeping an eye on (it doesn’t matter where the process is run from). In addition, you can choose what kind of memory dump you want generated; Custom Dump Type 2 represents MiniDumpWithFullMemory, which I found gives the most comprehensive output. There are plenty of other options documented on MSDN. Then you just need to run the process and wait for it to finish.

gflags_screenshot
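Under the hood, the gflags configuration above just writes a handful of registry values, which are described in Microsoft’s ‘Monitoring Silent Process Exit’ documentation. As a sketch of what that looks like (the image name sample.exe and the dump folder C:\Dumps are placeholders to swap for your own):

```
Windows Registry Editor Version 5.00

; Enable silent process exit monitoring (FLG_MONITOR_SILENT_PROCESS_EXIT = 0x200)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sample.exe]
"GlobalFlag"=dword:00000200

; Write a local dump on exit; DumpType 2 corresponds to MiniDumpWithFullMemory
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SilentProcessExit\sample.exe]
"ReportingMode"=dword:00000002
"LocalDumpFolder"="C:\\Dumps"
"DumpType"=dword:00000002
```

Setting these directly is handy if you want to script the setup in a sandbox rather than click through the gflags GUI.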

Testing the concept

To test the concept I wrote a tiny program, shown below, designed to load a string into memory and have the process exit very quickly – certainly before we would have a chance to pull the string from live memory.

int main()
{
    char secretString[] = "This is a secret string!";
    return 0;
}

I compiled and executed it, and the minidump appeared in the appropriate folder. A quick check with BinText showed the secret string that had been stored in memory.

bintext_screenshot

This is all instigated through a small number of registry entries, the details of which are listed in the Microsoft article on the subject, and it could easily be incorporated into a sandbox or endpoint security setup to gather clues about what has occurred. I’ve found this to be a neat alternative to Flypaper, without the trouble of writing a hook for the ExitProcess function.

Happy analysing!

-Adam (Twitter: @CyberKramer)

TIME IS NOT ON OUR SIDE WHEN IT COMES TO MESSAGES IN IOS 11

BLOG ORIGINALLY POSTED SEPTEMBER 30, 2017 HEATHER MAHALIK


This is going to be a series of blog posts due to the limited amount of free time I have to allocate to the proper research and writing of an all-inclusive blog post on iOS 11. More work is needed to make sure nothing drastic is missing or different and to dive deeper into the artifacts that others have reported to me as currently being unsupported by tools.

From what I have seen thus far, I am relieved that iOS 11 artifacts look very similar to iOS 10. This is good news for forensicators who see iOS devices and have adapted to the challenges that iOS 10 brought. Prior to writing this, I was referred to a blog post on iOS 11 that was an interesting read (thanks, Mike). I suggest you also check it out, as it pinpoints what is new in iOS 11 in regards to features: https://arstechnica.com/gadgets/2017/09/ios-11-thoroughly-reviewed/5/

Understanding what the OS is capable of doing helps us determine what we need to look for from a forensic standpoint. From what I have seen so far, the major artifact paths have not changed for iOS 11. Key artifacts for normal phone usage appear to be in the same locations:
– Contacts – /private/var/mobile/Library/AddressBook/AddressBook.sqlitedb
– Calls – /private/var/mobile/Library/CallHistoryDB/CallHistory.storedata
– SMS – /private/var/mobile/Library/sms.db
– Maps – /private/var/mobile/Applications/com.apple.Maps/Library/Maps/History.mapsdata – Still missing? Refer to my blog post from Dec.

When I test an update to a smartphone OS, I normally start with basic user activity (create a new contact, place some calls, send messages, ask for directions, etc.) and then I dump my phone and see what the tools can do. For this test, I created both encrypted and unencrypted iTunes backups, used PA Methods 1 and 2 and did a logical extraction with Oxygen Detective. What I found is that not all tools parsed the data in the same manner, which is to be expected. (I also plan to test more methods and tools as time allows and for my FOR585 course updates.)
To get this post done in a timely manner, I focused on one item that has always been parsed well in the past but now jumped out as “missing” or not completely supported.


iMessages and SMS in iOS 11 were the first items that jumped out as “something is off…” – and I was right. I sent test messages and could not locate them in the tools as easily as I have done in the past. I normally sort by date, because I know when I send something. Up until this release of iOS, we could rely on our tools to parse the sms.db and parse it well. The tools consistently parsed the message, to/from, timestamps, attachments and even deleted messages from this database. Things have changed with iOS 11, and it doesn’t seem that our tools have caught up yet, at least not to the same level at which they were parsing older iOS versions.

One of the most frustrating things I find is that the tools need access to different dumps in order to parse the data (as correctly as they can for this version). For example, Oxygen didn’t provide access to the sms.db for manual parsing, nor did it parse it for examination when the tool was provided an iTunes backup. This had nothing to do with encryption, because the passcode was known and was provided. UFED isn’t the same as PA Methods 1 and 2 (you have heard this from me before), but it’s confusing because most don’t know the difference. This is what it looked like when I imported the iOS 11 backup into Oxygen. Believe me, there are more than 3 SMS/iMessages on my iPhone.

Oxygen_iTunes-vs-O2-extraction2

However, when I dumped my iPhone logically using Oxygen Detective, it parsed the SMS and provided access to the sms.db. When I say it “parsed” the sms.db, I am not referring to timestamp issues at all; those will be addressed in a bit. Here is what my device looked like when I dumped it and parsed it in Oxygen.

Oxygen_iTunes-vs-O2-extraction

Spot the differences in the messages? Yep, you now see 48,853 more! Crazy… all because the data was extracted a different way. I also tested adding in the PA Method 1 image, and those message numbers were different, but the sms.db was available and parsed. You really have to dump these devices in different ways to get the data!

Bottom line – add the sms.db to something you need to manually examine for iOS 11 to ensure your tool is grabbing everything and parsing it. The rest of this blog is going to focus on just that – parsing the sms.db in regards to changes found in iOS 11.

Let’s take a look at what is the same (comparing iOS 11 to iOS 10):
• SMS and iMessages are still stored in the sms.db
• Multiple tables in this database are required for parsing/joining the messages correctly
What is different (comparing iOS 11 to iOS 10):
• Additional tables appear to be used?
• The timestamp is different for iOS 11 – SOMETIMES!

Here is what I found (so far). The tools are hit or miss. Some tools are parsing the data but storing the messages in a different location; others are parsing the message content but not the timestamp… you catch my drift. What do I recommend? Go straight to the database and take a look to make sure the tool(s) you rely on are not missing or misinterpreting the messages (wait… didn’t I just say that? YES, I did.)

The timestamp fields in the sms.db are all over the place now. What I am seeing is that the length of the Mac Absolute value varies between two formats, and both of these formats can be stored in the same column. This is why the tools are struggling to parse these dates. Additionally, the tables in the sms.db differ in how they store the timestamp. So, if your tool is parsing it correctly, excellent – but still take a look at the tables.

Here are some examples of what this mess looks like. The column below is from the chat table in the sms.db. Notice how it mixes the traditional Mac Absolute value (number of seconds since 01/01/2001) with 18-digit Mac Absolute values, while some entries are 0 (sent messages).

strange_datetime3

Additionally, I was seeing some 19-digit values that were not appended with 00s at the end. The “conv start date” in the left column is from the messages table in the sms.db, and this timestamp has not changed; as expected, your tools handle this one nicely. The column on the right is from the chat_message_join table, and this caused a little havoc as well due to the variety of timestamps in the column. Converting this wasn’t fun! Thanks, Lee, for your help here. You, my friend, ROCK!

strange_datetime
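Both timestamp flavours described above can be normalised with a short helper. This is my own minimal sketch (not taken from any tool), which converts to UTC for determinism, whereas the SQL query later in this post renders localtime:

```python
from datetime import datetime, timezone

# Seconds between the Unix epoch (1970) and the Apple epoch (01/01/2001)
APPLE_EPOCH_OFFSET = 978307200

def convert_mac_absolute(value):
    """Normalise a Mac Absolute timestamp stored either as seconds
    (the classic 9-digit values) or nanoseconds (the 18/19-digit
    values seen in iOS 11). Returns None for unset (0) values."""
    if value == 0:
        return None
    if value > 0xFFFFFFFF:  # far too large to be seconds: treat as nanoseconds
        value //= 1_000_000_000
    return datetime.fromtimestamp(value + APPLE_EPOCH_OFFSET, tz=timezone.utc)

# The same instant stored both ways converts identically
print(convert_mac_absolute(529000000))           # seconds since 2001
print(convert_mac_absolute(529000000000000000))  # nanoseconds since 2001
```

A digit-length check (as the SQL query uses) works too; the magnitude test above is just a convenient way to express it in code.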

When I first ran my SQL query, I noticed one pesky date that wasn’t converting. This is because it was in the timestamp format highlighted above, and I needed to beef up my query to handle it. If you see a date that looks like the one below, something is up and you aren’t asking for the data to be rendered correctly. The query below will handle this for you.

strange_datetime2

If you don’t believe me yet that this causes issues, take a look at how it looked in one tool.

Oxygen_SMSiOS11-1

The dates and times are not parsed correctly. I found that the dates and times appear to be consistent when the tools are parsing the 9-digit Mac Absolute timestamps from specific tables. Otherwise, expect to have to do this yourself. Here is a case where it was correct, but this wasn’t true for all of my messages sent using iOS 11.

PA_iMessage

If you need a sanity check, I always like to use the Epoch Converter that I got for free from BlackBag to make sure I am not losing my mind when dealing with these timestamps. Below, you can see it was parsing it correctly (Cocoa/Webkit Date). Also, I love that it gives you both localtime and UTC.

unixepochconverter

This leads me to the good news – below is the query that will handle this for you. This query is a beast and “should” parse all SMS and iMessages from the sms.db REGARDLESS of the iOS version, but only the columns that I deemed interesting. (Note that I say “should” because it has only been run across a few databases; please report any issues back to me so they can be fixed.) Take this query and copy and paste it into your tool of choice. Here, I used DB Browser for SQLite because it’s free. I limited the output to the columns I care about the most, so make sure this query isn’t missing any columns that may be relevant to your investigation.

SELECT
  message.rowid,
  chat_message_join.chat_id,
  message.handle_id,
  message.text,
  message.service,
  message.account,
  chat.account_login,
  chat.chat_identifier AS "Other Party",
  datetime(message.date/1000000000 + 978307200,'unixepoch','localtime') AS "conv start date",
  case when LENGTH(chat_message_join.message_date)=18 then
    datetime(chat_message_join.message_date/1000000000 + 978307200,'unixepoch','localtime')
  when LENGTH(chat_message_join.message_date)=9 then
    datetime(chat_message_join.message_date + 978307200,'unixepoch','localtime')
  else 'N/A'
  END AS "conversation start date",
  datetime(message.date_read + 978307200,'unixepoch','localtime') AS "date read",
  message.is_read AS "1=Incoming, 0=Outgoing",
  case when LENGTH(chat.last_read_message_timestamp)=18 then
    datetime(chat.last_read_message_timestamp/1000000000 + 978307200,'unixepoch','localtime')
  when LENGTH(chat.last_read_message_timestamp)=9 then
    datetime(chat.last_read_message_timestamp + 978307200,'unixepoch','localtime')
  else 'N/A'
  END AS "last date read",
  attachment.filename,
  attachment.created_date,
  attachment.mime_type,
  attachment.total_bytes
FROM
  message
  left join chat_message_join on chat_message_join.message_id=message.ROWID
  left join chat on chat.ROWID=chat_message_join.chat_id
  left join attachment on attachment.ROWID=chat_message_join.chat_id
order by message.date_read desc

Here is a snippet of what this beauty looks like. (Note: this screenshot was taken prior to me joining attachments – aka MMS).

parsed_db

I always stress that you cannot rely on the tools to be perfect. They are great and they get us to a certain point, but then you have to be ready to roll up your sleeves and dive in.
What’s next? Applications, the image/video files that apparently aren’t parsing correctly, interesting databases and plists new to iOS 11, and the pesky Maps artifact – that one is still driving me crazy! Stay tuned for more iOS 11 blogs and an upcoming one on Android 7 and 8.
Thanks to Lee, Tony, Mike and Sarah for keeping me sane, sending reference material, testing stuff and helping me sort these timestamps out. Like parenting, sometimes forensicating “takes a village” too.

Uncovering Targeted Web-Based Malware Through Shapeshifting

Targeted Web-Based Malware?

Malware authors are frequently observed leveraging server-side scripting on their infrastructure to evade detection and better target their attacks. This includes both exploit kits and servers hosting second-stage payloads, all of which can easily be set up to alter their responses based on the footprint of the visitor: the geolocation of the visiting IP address if the attacker is targeting users from a particular country or region, or perhaps the user-agent if they are only focused on certain browsers or operating systems. Without access to the source code running on the server, it is difficult to tell whether it would alter its behaviour if you were visiting from a different device or location, so malware analysts may find themselves declaring a link benign, or a payload server down, when in fact it is only presenting that way to the analyst in question.

Got an example?

An example of this can be seen in the Malwarebytes blog on Magnitude exploit kit which details that  “…users are inspected at a ‘gate’ that decides whether or not they should be allowed to proceed to Magnitude EK. This gate, which has been nicknamed ‘Magnigate’ by Proofpoint, performs additional checks on the visitor’s IP address and user-agent to determine their geolocation, Internet Service Provider, Operating System and browser information…”

Hmm, so what can I do?

There is of course a balance here. Even if you have access to VPN software that allows you to select the country you want to appear from, it would be extremely time-consuming and cumbersome to iterate through all of the available countries, each time using different browsers and painstakingly looking for any variation in the responses.

To aid in this task, I have written a new tool which automates this whole process and may be useful during malware analysis if you suspect the server is hiding something from you.

It works as follows:

  1. Loads a list of countries and user-agents that you want to appear from
  2. Leverages a proxy listing website’s API to obtain various country proxies
  3. Verifies that the proxies are working, and that the geolocation is per requirements
  4. Connects to the server using the proxy and iterates through all of the user-agents requesting the site multiple times
  5. Identifies any results which are different from the control value and highlights to the analyst
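The core of steps 4 and 5 can be sketched in Python. This is a minimal sketch under assumptions (hypothetical proxy addresses, plain urllib, no proxy-listing API or geolocation verification), not the actual tool:

```python
import urllib.error
import urllib.request

def http_fetch(url, proxy=None, user_agent=None, timeout=10):
    """Fetch `url` through an optional HTTP proxy with an optional
    user-agent, returning (status, body). HTTP errors (e.g. a 404
    served to 'wrong' visitors) are captured rather than raised."""
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    opener = urllib.request.build_opener(*handlers)
    req = urllib.request.Request(url, headers={"User-Agent": user_agent or "control-agent"})
    try:
        with opener.open(req, timeout=timeout) as resp:
            return resp.status, resp.read()
    except urllib.error.HTTPError as err:
        return err.code, err.read()

def find_variations(url, proxies, user_agents, fetch=http_fetch):
    """Request `url` once as a control, then through every proxy /
    user-agent combination, and return the combinations whose
    response differed from the control."""
    control = fetch(url)
    variations = []
    for proxy in proxies:
        for agent in user_agents:
            result = fetch(url, proxy=proxy, user_agent=agent)
            if result != control:
                variations.append((proxy, agent, result))
    return variations
```

In the real tool the proxy list comes from a proxy-listing website’s API and each proxy’s geolocation is verified before use; that plumbing is omitted here.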

Demonstration

I’ve uploaded a number of test documents to my webserver, which you are welcome to use for your testing:

adamkramer.uk/browser-test.php & adamkramer.uk/browser-test-404.php

Both of these do the same thing – they wait until they observe someone connecting from a Chinese IP address with an iPhone user-agent before presenting the main content. The first URL will return a result in all other circumstances stating “Go away”, and the second will return a 404 error unless the conditions are met.

404_script_screenshot
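The decision logic behind the 404 variant of those test pages boils down to something like this (a hypothetical reconstruction in Python, with country code and user-agent as inputs; the live pages are PHP):

```python
def gate_response(country_code, user_agent):
    """Sketch of the gate used by the 404 test page: serve the real
    content only to visitors who look like an iPhone on a Chinese IP
    address, and a deceptive 404 to everyone else."""
    if country_code == "CN" and "iPhone" in user_agent:
        return 200, "<real content>"
    return 404, "Not Found"
```

Trivial as it is, logic like this is enough to make a payload server look dead to any analyst who connects from the wrong place with the wrong browser.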

The screenshot below shows the script iterating through various user-agents whilst connected to a Chinese proxy. In each case we can see the result was a 404 until the iPhone user-agent was sent, after which the script presented a ‘diff’ style output on what was different about this case.

China-iPhone

Great! Where can I get it?

The script was written in Python (v3.x) and is available from GitHub here.

Please feel free to use / fork / enhance / provide feedback.

 

Happy analysing!

-Adam (Twitter: @CyberKramer)