Shortcuts for Understanding Malicious Scripts

You are exposed to malicious scripts in one form or another every day, whether in email, malicious documents, or malicious websites. Many malicious scripts appear impossible to understand at first glance. However, with a few tips and some simple utility scripts, you can deobfuscate them in just a few minutes.
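
As a small taste of the utility-script approach, here is a minimal Python sketch (the obfuscated sample is invented for illustration, not one of the webcast samples) that undoes a `String.fromCharCode` layer, one of the most common obfuscation tricks in malicious JavaScript:

```python
import re

# A made-up obfuscated JavaScript fragment of the kind seen in the wild
sample = 'eval(String.fromCharCode(97,108,101,114,116,40,49,41))'

def decode_charcodes(js):
    """Replace each String.fromCharCode(...) call with the string it builds."""
    def repl(match):
        codes = (int(c) for c in match.group(1).split(','))
        return '"' + ''.join(chr(c) for c in codes) + '"'
    return re.sub(r'String\.fromCharCode\(([\d,\s]+)\)', repl, js)

print(decode_charcodes(sample))  # eval("alert(1)")
```

Running the script reveals the payload in plain text, which is often all you need to decide whether a sample deserves deeper analysis.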

SANS Instructor Evan Dygert conducted a webcast on October 3rd, 2018. The webcast teaches you how to cut through the obfuscation techniques script authors use without spending a lot of time doing it. Evan also demonstrates how to quickly deobfuscate a variety of malicious scripts.

The script samples he provided during the webcast can be downloaded here: https://dfir.to/MaliciousScripts. Please note that the password for the samples.zip archive is: “infected”

We hope that the techniques presented in this webcast help you begin deobfuscating potentially malicious JavaScript. This topic is explored in depth in the SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques course. The class offers an excellent opportunity to understand the unique and insightful perspective that malware analysis can bring to your investigations.

New cheat sheets you might be interested in:

Tips for Reverse-Engineering Malicious Code – This cheat sheet outlines tips for reversing malicious Windows executables via static and dynamic code analysis with the help of a debugger and a disassembler. Download Here

REMnux Usage Tips for Malware Analysis on Linux – This cheat sheet outlines the tools and commands for analyzing malicious software on the REMnux Linux distribution. Download Here

Cheat Sheet for Analyzing Malicious Documents – This cheat sheet outlines tips and tools for analyzing malicious documents, such as Microsoft Office, RTF, and Adobe Acrobat (PDF) files. Download Here

Malware Analysis and Reverse-Engineering Cheat Sheet – This cheat sheet presents tips for analyzing and reverse-engineering malware. It outlines the steps for performing behavioral and code-level analysis of malicious software. Download Here

—————————————————————————————————————————–

For opportunities to take the FOR610 course, consider upcoming runs and modalities: 

US & International Live Training: Live events offered throughout the US, EMEA, and APAC regions.

DFIR Summits: Two days of industry expert talks plus DFIR training events.

Simulcast: Attend live events from anywhere in the world.

OnDemand: Learn at your own pace, anytime, anywhere.

DFIR Resources: Digital Forensic Blog | Twitter | Facebook | Google+ | Community Listserv | DFIR Newsletter

How to build an Android application testing toolbox


Mobile devices hold a trove of data that can be crucial to criminal cases, and they can also play a key role in accident reconstructions, IP theft investigations, and more. Investigators are not the only ones interested in examining mobile devices; so are application researchers and enterprises that rely on smartphones and tablets to perform work tasks, engage with customers, and deliver new services.

Effectively accessing and testing smartphones requires an optimal application toolbox, and the chops to use it. Listen to this webinar, which details how to build your Android application testing toolbox so you're set up to successfully access and examine the information you need from Android mobile phones.

SANS instructor Domenica Crognale, co-author and instructor of SANS FOR585: Advanced Smart Phone Forensics, details why testing mobile phone applications is critical, especially given that Android apps change weekly and even daily. It is becoming more common for application developers to restrict access to very important user artifacts on these Android devices. This most often includes the SQLite databases, which likely contain the information that examiners are after. It's not just commercially available applications you have to consider: often, custom-built apps aren't parsed by commercial tools, so you'll need to know how to access and parse any data stored on the device.

During the webinar, Domenica talks about the importance of rooting Android devices as well as ways to access and parse the data. She explains how to do this using utilities that exist on the SIFT workstation or that can be downloaded for free from the SANS website.

This webcast explores topics such as:

  • Choosing the best test device
    During a forensic acquisition, many tools will apply a soft root to the phone that is then removed once the data is obtained. But a full physical acquisition is not always necessary for application testing. Ideally, we want a test phone that stays rooted even if the device loses power, because the root unlocks access to the core of the device's operating system so you can access, add, remove, or tweak anything inside the phone.
  • Rooting your Android
    During the webinar, Domenica walks through a demo of a root, shows how to locate the root, and shares information on free and publicly available root tools.
  • Utilizing File Browsers for quick file/folder access
    Sometimes a file browser is all you really need to get to the data you’re after. Domenica shares her favorite third-party applications for accessing the file system.
  • Examining application directories of interest
    Once you have access to the files you need, utilize tools available on the SIFT workstation to view the contents of SQLite databases.
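
The last step, inspecting SQLite contents, can also be scripted. The sketch below (the database name, path, and schema are invented for illustration, not taken from the webinar) uses Python's built-in sqlite3 module to do the kind of quick triage you might run against a database pulled from an application's directory:

```python
import os
import sqlite3
import tempfile

def triage_db(path):
    """List every table in an application's SQLite database with row counts."""
    con = sqlite3.connect(path)
    cur = con.cursor()
    cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
    for (table,) in cur.fetchall():
        count = cur.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]
        print(f"{table}: {count} row(s)")
    con.close()

# Build a tiny stand-in database; a real one would be pulled from the
# device, e.g. from /data/data/<package>/databases/.
path = os.path.join(tempfile.mkdtemp(), "messages.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE messages (sender TEXT, body TEXT, ts INTEGER)")
con.execute("INSERT INTO messages VALUES ('alice', 'hello', 1540000000)")
con.commit()
con.close()

triage_db(path)  # prints: messages: 1 row(s)
```

A listing like this tells you in seconds which tables are worth opening in a full SQLite viewer.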

Listen to the recording of the “Building your Android application testing toolbox” webcast now. And check out FOR585: Advanced Smartphone Forensics, a week-long course that teaches you how to find key evidence on a smartphone, how to recover deleted mobile device data that forensic tools miss, advanced acquisition terminology and free techniques to gain access to data on smartphones, how to handle locked or encrypted devices, applications, and containers, and much more.

Domenica will be teaching FOR585: Advanced Smartphone Forensics at SANS Cyber Defense Initiative, Dec 11-18. Register to attend live here: sans.org/u/JGl, or to try it from home via Simulcast, register here: sans.org/u/JGq

For additional course runs log in here


Inhibiting Malicious Macros by Blocking Risky API Calls


Microsoft Office Macros have been the bane of security analysts’ lives since the late 1990s. Their flexibility and functionality make them ideal for malware authors to use as a primary stage payload delivery mechanism, and to date the challenge they pose remains unsolved. Many organisations refrain from blocking them completely due to the impact it would cause to their users, and instead rely on a combination of detection and mitigation technology to compensate for the risk they pose.

I have long thought that it would be ideal if Microsoft were able to implement granular controls via group policy over which activities macros were permitted to perform, for example allowing the blocking of process execution or network activity. In the interim, I have been experimenting with alternative methods to limit which functions a macro can call, either by redefining or patching the high-risk API calls to prevent their intended outcome.

Below we can see a very basic example of what a malicious macro might look like. The fact that the subroutine is named “AutoOpen” means that it will be run as soon as the document is opened (and if required, macros are enabled).  The command “Shell” at the beginning of the line instructs Word to execute the process as specified in the parameter, so in this case the macro will launch the Windows calculator.

[Screenshot: a one-line AutoOpen macro using Shell to launch the Windows calculator]

Option 1 – Using global templates to override built-in functions

Shell is the function built into VBA which is used to launch new processes, and as such, can often be found in malicious macros. Malware authors leveraging macros as a primary stage payload typically want to either download or drop a file, and then execute it. Alternatively, they could decide to execute an existing process on the system, such as PowerShell, with parameters which will take actions against their intended target. Common to both of these methods is the requirement for something to be run, and this is where Shell comes in. If we were able to somehow disable the Shell function, then we might be able to prevent the malicious macro from succeeding in its goal.

The first experiment involved redefining the Shell function, with the hope that the newly defined function would override the built-in one. We can achieve this by placing the code in the global template, which is named normal.dotm, or within an add-in (which is effectively a template that is loaded every time Word is opened); either way, our code will become part of the malicious macro's execution.

Below we can see the redefined function used as part of this experiment. This was saved within normal.dotm (the global template), with the intention that when Shell was called, rather than calling the system function to launch the process, our redefined function would be called, resulting in a popup warning the user that the activity had been blocked.

[Screenshot: the redefined Shell function saved in normal.dotm]

Then, using the one-line test macro that attempts to execute ‘cmd.exe /c calc.exe’, we can see that the redefined function worked well and blocked the execution of the process. Unfortunately, it only seemed to work when the malicious code was placed within the document and not when it was within a separate module. The reason for this is unknown and warrants further research; if solved, this would be the easiest way to achieve our goal.

[Screenshot: popup warning that execution of ‘cmd.exe /c calc.exe’ was blocked]

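The shadowing idea generalises beyond VBA: if untrusted code resolves a dangerous function by name, loading a redefinition first lets you intercept the call. As a rough Python analogy (illustrative only, not the author's code), the same trick can neutralise process launches:

```python
import subprocess

# Keep a reference to the real constructor, then shadow it with a guard.
# Because subprocess.run() resolves Popen through the module's global
# namespace, rebinding subprocess.Popen intercepts those calls as well.
_real_popen = subprocess.Popen

def _blocked_popen(*args, **kwargs):
    raise PermissionError(f"Process execution blocked: {args[0] if args else kwargs.get('args')}")

subprocess.Popen = _blocked_popen

try:
    subprocess.run(["calc.exe"])
except PermissionError as exc:
    print(exc)  # Process execution blocked: ['calc.exe']
```

As with the VBA experiment, the guard only works if it is loaded before the untrusted code runs, and code that holds its own reference to the original function slips past it.
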
Option 2 – Patching associated API in VBE7.dll to alter behaviour

For a more surgical approach, we can look at API hooking the DLL that is leveraged when macro code is executed. Looking through the DLLs loaded into Word at runtime, we can observe that VBE7.dll includes a large number of exports that appear related to the execution of macro code. Below we can see a snapshot from the list of exports with one in particular highlighted, rtcShell, that warranted further investigation.

[Screenshot: VBE7.dll export list with rtcShell highlighted]

Digging into this function, we can see that it's nothing more than a small wrapper around the CreateProcess Windows API call. The particular code block that executes the process is listed below. Looking at the parameters being passed to the API, we can see that the EBX register is being used to hold a pointer to the name of the process to be executed.

[Screenshot: disassembly of rtcShell before patching, showing the CreateProcess call]

At this point, if we want to alter the way the process is launched we can patch the instructions to alter the process creation flags, which is one of the parameters passed to the CreateProcess API. Alternatively, we could completely patch out the API call to prevent the execution from occurring.

In this case we are going to leverage the CREATE_SUSPENDED process creation flag to allow the process to launch, but immediately be frozen into a suspended state, preventing the code from running.

[Screenshot: MSDN documentation for the CREATE_SUSPENDED process creation flag]

The patch sets the ECX register to 4, which is the numeric value of CREATE_SUSPENDED, and then this is pushed as the creation flags parameter to the CreateProcess API call, resulting in the process being instructed to launch in a suspended state.

[Screenshot: patched disassembly, with ECX set to CREATE_SUSPENDED and pushed as the creation flags parameter]

Following this patch, a test macro designed to launch PowerShell was executed, and as we can see from the Process Explorer window, it was placed into a suspended state, thus mitigating any impact and allowing an opportunity for endpoint protection sensors to scan and identify any threatening behaviour.

[Screenshot: Process Explorer showing the PowerShell process in a suspended state]

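For comparison, the suspended launch can also be reproduced from a script rather than by patching the DLL. The Python ctypes sketch below (a rough illustration, not the author's tooling) passes the same CREATE_SUSPENDED flag to CreateProcessW; on non-Windows platforms it simply raises:

```python
import ctypes
import sys
from ctypes import wintypes

CREATE_SUSPENDED = 0x00000004  # the value the patch loads into ECX

class STARTUPINFOW(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
        ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
        ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
        ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
        ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
        ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
        ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
        ("lpReserved2", ctypes.c_void_p),
        ("hStdInput", wintypes.HANDLE), ("hStdOutput", wintypes.HANDLE),
        ("hStdError", wintypes.HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
        ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD),
    ]

def spawn_suspended(cmdline):
    """Launch a process with CREATE_SUSPENDED so it starts frozen (Windows only)."""
    if sys.platform != "win32":
        raise OSError("CreateProcessW is a Windows API")
    si = STARTUPINFOW()
    si.cb = ctypes.sizeof(si)
    pi = PROCESS_INFORMATION()
    buf = ctypes.create_unicode_buffer(cmdline)  # lpCommandLine must be writable
    if not ctypes.windll.kernel32.CreateProcessW(
            None, buf, None, None, False,
            CREATE_SUSPENDED,  # dwCreationFlags: freeze the primary thread
            None, None, ctypes.byref(si), ctypes.byref(pi)):
        raise ctypes.WinError()
    return pi.dwProcessId
```

The process appears in Process Explorer exactly as in the patched-DLL test: present, but frozen until something resumes its primary thread.
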
These are just proofs of concept and would require more research and work before being used in a production environment. I still believe that the ideal state would be for Microsoft to implement more granular security controls over macros, allowing organisations to continue using them while empowering them with the ability to limit their capabilities depending on individual risk appetites.


Any comments or feedback would be very welcome!


-Adam (Twitter: @CyberKramer)

Top 11 Reasons Why You Should NOT Miss the SANS DFIR Summit and Training this Year

The SANS DFIR Summit and Training 2018 is turning 11! The 2018 event marks 11 years since SANS started what is today the digital forensics and incident response event of the year, attended by forensicators time after time. Join us and enjoy the latest in-depth presentations from influential DFIR experts and the opportunity to take an array of hands-on SANS DFIR courses. You can also earn CPE credits and get the opportunity to win coveted DFIR course coins!


To commemorate the 11th annual DFIR Summit and Training 2018, here are 11 reasons why you should NOT miss the Summit this year:

1.     Save money! There are two ways to save on your DFIR Summit & Training registration (offers cannot be combined):

·       Register for a DFIR course by May 7 and get 50% off a Summit seat (discount automatically applied at registration), or

·       Pay by April 19 and save $400 on any 4-day or 6-day course, or up to $200 off of the Summit. Enter code “EarlyBird18” when registering.

2.     Check out our jam-packed DFIR Summit agenda!

·       The two-day Summit will kick off with a keynote presentation by Kim Zetter, an award-winning journalist who has provided the industry with the most in-depth and important investigative reporting on information security topics. Her research on such topical issues as Stuxnet and election security has brought critical technical issues to the public in a way that clearly shows why we must continue to push the security industry forward.

·       The Summit agenda will also include a presentation about the Shadow Brokers, the group that allegedly leaked National Security Agency cyber tools, leading to some of the most significant cybersecurity incidents of 2017. Jake Williams and Matt Suiche, who were among those targeted by the Shadow Brokers, will cover the history of the group and the implications of its actions.

·       All DFIR Summit speakers are industry experts who practice digital forensics, incident response, and threat hunting in their daily jobs. The Summit Advisory Board handpicked these professionals to provide you with highly technical presentations that will give you a brand-new perspective on how the industry is evolving to fight even the toughest of adversaries. But don't take our word for it: have a sneak peek at some of the past DFIR Summit talks.


Immerse yourself in six days of the best in SANS DFIR training. Here are the courses you can choose from:

·       FOR500 – Advanced Windows Forensics

·       FOR585 – Advanced Smartphone Forensics

·       FOR610 – Reverse-Engineering Malware

·       FOR508 – Digital Forensics, Incident Response & Threat Hunting

·       FOR572 – Advanced Network Forensics: Threat Hunting, Analysis, and Incident Response

·       FOR578 – Cyber Threat Intelligence

·       FOR526 – Memory Forensics In-Depth

·       FOR518 – Mac and iOS Forensic Analysis and Incident Response

·       MGT517 – Managing Security Operations: Detection, Response, and Intelligence

3.    All courses will be taught by SANS's best DFIR instructors. Stay tuned for more information on the courses we're offering at the conference in a future post.

4.     Rub elbows and network with DFIR pros at evening events, including networking gatherings and receptions. On the first night of the Summit, we’re going to gather at one of Austin’s newest entertainment venues, SPiN, a ping pong restaurant and bar featuring 14 ping pong tables, lounges, great food, and drinks. Give your overloaded brain a break after class and join us at our SANS Community Night, Monday, June 9 at Speakeasy. We will have plenty of snacks and drinks to give you the opportunity to network with fellow students.

5.     Staying to take a DFIR course after the two-day Summit? Attend SANS@Night talks guaranteed to enrich your DFIR training experience with us. Want to know about threat detection on the cheap and other topics? As for cheap (and in this case, that doesn’t mean weak), there are actions you can take now to make threat detection more effective without breaking the bank. Attend this SANS@Night talk on Sunday evening to learn some baselines you should be measuring against and how to gain visibility into high-value actionable events that occur on your systems and networks.

6.     Celebrate this year’s Ken Johnson Scholarship Recipient, who will be announced at the Summit. This scholarship was created by the SANS Institute and KPMG LLP in honor of Ken Johnson, who passed away in 2016. Early in Ken’s digital forensics career, he submitted to a Call for Presentations and was accepted to present his findings at the 2012 SANS DFIR Summit. His networking at the Summit led to his job with KPMG.

7.     Prove you’ve mastered the DFIR arts by playing in the DFIR NetWars – Coin Slayer Tournament. Created by popular demand, this tournament will give you the chance to leave Austin with a motherlode of DFIR coinage! To win the new course coins, you must answer all questions correctly from all four levels of one or more of the six DFIR Domains: Windows Forensics & Incident Response, Smartphone Analysis, Mac Forensics, Memory Forensics, Advanced Network Forensics, and Malware Analysis. Take your pick or win them all!


8.     Enjoy updated DFIR NetWars levels with new challenges. See them first at the Summit! But not to worry, you will have the opportunity to train before the tournament. You’ll have access to a lot of updated posters that can serve as cheat sheets to help you conquer the new challenges, as well as the famous SIFT WorkStation that will arm you with the most powerful DFIR open-source tools available. You could also choose to do an hour of crash training on how to use some of our Summit sponsors’ tools prior to the tournament. That should help give you an edge, right? That new DFIR NetWars coin is as good as won!

9. The Forensic 4:cast Awards winners will be announced at the Summit. Help us celebrate the achievements of digital forensic investigators around the world deemed worthy of the award by their peers. There is still time to cast your vote. (You may only submit one set of votes; any additional votes will be disregarded.) Voting will close at the end of the day on May 25, 2018.

10.  Come see the latest in tools offered by DFIR solution providers. Summit sponsors and exhibitors will showcase everything from managed services covering advanced threat detection, proactive threat hunting, and accredited incident response to tools that deliver rapid threat detection at scale, and reports that provide insights for identifying potential threats before they cause damage.

11.  Last but not least, who doesn't want to go to Austin?!? When you think Austin, you think BBQ, right? But this city isn't just BBQ: Austin has amazing food everywhere, and there's no place like it when it comes to having a great time. The nightlife and music include the famous 6th Street, which, by the way, is within walking distance of the Summit venue. There are many other landmarks such as Red River, the Warehouse District, Downtown, and the Market District. You will find entertainment of all kinds no matter what you're up for. Nothing wrong with some well-deserved play after days full of DFIR training, lectures, and networking!

As you can see, this is an event you do not want to miss! The SANS DFIR Summit and Training 2018 will be held at the Hilton Austin. The event features two days of in-depth digital forensics and incident response talks, nine SANS DFIR courses, two nights of DFIR NetWars, evening events, and SANS@Night talks.

The Summit will be held on June 7-8, and the training courses run from June 9-14.

We hope to see you there!

SANS Threat Hunting and Incident Response Summit 2018 Call for Speakers – Deadline 3/5



Summit Dates: September 6 & 7, 2018
Call for Presentations Closes on Monday, March 5, 2018 at 5 p.m. CST
Submit your presentation here

The Threat Hunting & Incident Response Summit will focus on specific hunting and incident response techniques and capabilities that can be used to identify, contain, and eliminate adversaries targeting your networks. SANS and our advisory partner Carbon Black are pleased to invite you to the Summit where you will have the opportunity to directly learn from and collaborate with incident response and detection experts who are uncovering and stopping the most recent, sophisticated, and dangerous attacks against organizations.

Chances are very high that hidden threats already exist inside your organization’s networks. Organizations can’t afford to assume that their security measures are impenetrable, no matter how thorough their security precautions might be. Prevention systems alone are insufficient to counter focused human adversaries who know how to get around most security and monitoring tools.

The key is to constantly look for attacks that get past security systems, and to catch intrusions in progress rather than after attackers have attained their objectives and done worse damage to the organization. For the incident responder, this process is known as “threat hunting.” Threat hunting uses known adversary behaviors to proactively examine the network and endpoints and identify new data breaches.

Call for Presentations Closes on Monday, March 5, 2018
Submit your presentation here

If you are interested in presenting, SANS and our advisory partner Carbon Black would be delighted to consider your threat hunting-focused proposal with use cases and communicable lessons. We are looking for presentations from both the intelligence producer and the intelligence consumer fields, with actionable takeaways for the audience.

The THIR Summit offers speakers opportunities for exposure and recognition as industry leaders. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

CFP submissions should detail how the proposed case study, tool, or technique can be utilized by attendees to increase their security posture. All THIR Summit presentations will be 30 minutes with 5 minutes for questions.

We are looking for presentations that focus on:
– Endpoint threat hunting
– Network threat hunting
– Hunt teams and how to organize them to be extremely effective
– Using known attacker methodologies to hunt/track adversaries
– Innovative threat hunting tools, tactics, and techniques
– Case studies on the application of threat hunting to security operations

Call for Presentations Closes on Monday, March 5, 2018
Submit your presentation here 

11th Annual Digital Forensics and Incident Response Summit Call for Presentations deadline Jan 15th 2018

Call for Presentations- Now Open

The 11th Annual Digital Forensics and Incident Response Summit Call for Presentations is now open through 5 pm EST on Monday, January 15, 2018. If you are interested in presenting, we’d be delighted to consider your practitioner-based case studies with communicable lessons.

The DFIR Summit offers speakers the opportunity to present their latest tools, findings, and methodologies to their DFIR industry peers. If you have something substantive, challenging, and original to offer, you are encouraged to submit a proposal.

We are looking for Digital Forensics and Incident Response Presentations that focus on:
1. DFIR and media exploitation case studies that solve a unique problem
2. New forensic or analysis tools and techniques
3. Discussions of new artifacts related to smartphones, Windows, and Mac platforms
4. Novel approaches that improve the status quo of the DFIR industry
5. Challenges to existing assumptions and methodologies that might change the industry
6. New, fast forensic techniques that can extract and analyze data rapidly
7. Enterprise or scalable forensics that can analyze hundreds of systems at a time instead of one
8. Other topics offering actionable takeaways to your peers in the DFIR community

Benefits of Speaking

Promotion of your speaking session and company recognition via the DFIR Summit website and all printed materials
Visibility via the DFIR post-Summit on the SANS DFIR Website
Full conference badge to attend all Summit sessions
Speakers can bring one additional attendee to the Summit for free
Private speakers-only networking lunches
Speakers-only networking reception on the evening before the Summit
Continued presence and thought leadership in the community via the SANS DFIR YouTube channel

Speaking sessions will be professionally recorded and posted to the SANS DFIR YouTube Site.

All talks will be 35 minutes of content + 5 minutes of Q&A.

Deadline: Monday, 15 January, 5 p.m. EST

Click here to submit your proposed presentation.

Please direct questions to summit@sans.org.

Meltdown and Spectre – Enterprise Action Plan

Meltdown and Spectre – Enterprise Action Plan by SANS Senior Instructor Jake Williams
Blog originally posted January 4, 2018 by RenditionSec

Watch the webcast now

MELTDOWN SPECTRE VULNERABILITIES
Unless you’ve been living under a rock for the last 24 hours, you’ve heard about the Meltdown and Spectre vulnerabilities. I did a webcast with SANS about these vulnerabilities, how they work, and some thoughts on mitigation. I highly recommend that you watch the webcast and/or download the slides to understand more of the technical details. In this blog post, I would like to leave the technology behind and talk about action plans. Our goal is to keep this hyperbole free and just talk about actionable steps to take moving forward. Action talks, hyperbole walks.

To that end, I introduce you to the “six step action plan for dealing with Meltdown and Spectre.”

  1. Step up your monitoring plan
  2. Reconsider cohabitation of data with different protection requirements
  3. Review your change management procedures
  4. Examine procurement and refresh intervals
  5. Evaluate the security of your hosted applications
  6. Have an executive communications plan

Step 1: Step up your monitoring plan

Meltdown allows attackers to elevate privileges on unpatched systems. This means that attackers who have a toehold in your network can elevate to a privileged user account on a system. From there, they could install backdoors and rootkits or take other anti-forensic measures. But at the end of the day, the vulnerability doesn't exfiltrate data from your network, encrypt your data, delete your backups, or extort a ransom. Vulnerabilities like Meltdown will enable attackers, but they are only a means to an end. Solid monitoring practices will catch attackers whether they use an 0-day or a misconfiguration to compromise your systems. As a wise man once told me, “if you can't find an attacker who exploits you with an 0-day, you need to be worried about more than 0-days.”

Simply put, monitor like your network is already compromised. Keeping attackers out is so 1990. Today, we assume compromise and architect our monitoring systems to detect badness. The #1 goal of any monitoring program in 2018 must be to minimize attacker dwell time in the network.

Step 2. Reconsider cohabitation of data with different protection requirements

Don’t assume OS controls are sufficient to separate data with different protection requirements. The Spectre paper introduces a number of other possible avenues for exploitation. The smart money says that at least some of those will be exploited eventually. Even if other exploitable CPU vulnerabilities are not discovered (unlikely since this was already an area of active research before these vulnerabilities), future OS privilege escalation vulnerabilities are a near certainty.

Reconsider your architecture and examine how effective your security is if an attacker (or insider) with unprivileged access can elevate to a privileged account. In particular, I worry about those systems that give a large number of semi-trusted insiders shell access to a system. Research hospitals are notorious for this. Other organizations with Linux mail servers have configured the servers so that everyone with an email address has a shell account. Obviously this is far from ideal, but when combined with a privilege escalation vulnerability the results can be catastrophic.

Take some time to determine if your security models collapse when a privilege escalation vulnerability becomes public. At Rendition, we’re still running into systems that we can exploit with DirtyCOW – Meltdown isn’t going away any time soon. While you’re thinking about your security model, ask how you would detect illicit use of a privileged account (see step #1).

Step 3. Review your change management procedures

Every time there’s a “big one” people worry about getting patches out. But this month Microsoft is patching several critical vulnerabilities. Some of these might be easier to exploit than Meltdown. When thinking about patches, measure your response time. We advise clients to keep three metrics/goals in mind:

Normal patch Tuesday
Patch Tuesday with “active exploit in the wild”
“Out of cycle” patch

How you handle regular patches probably says more about your organization than how you handle out-of-cycle patches. But considering all three events (and having different targets for response) is wise.

Because of performance impacts, Meltdown definitely should be patched in a test environment first. Antivirus software has reportedly caused problems (BSOD) with the Windows patches for Spectre and Meltdown as well. This is a clear example of a case where “throw caution to the wind and patch now” is not advisable. Think about your test cycles for patches and figure out how long is “long enough” to test (both for performance and stability) in your test environment before pushing patches to production.

Step 4. Examine procurement and refresh intervals

There is little doubt that future processors (yet to be released) will handle some functions more securely than today's models. If you're currently on a five-year IT refresh cycle, should you compress that cycle? It's probably too early to tell for hardware, but there are a number of older operating systems that will never receive patches. You definitely need to re-evaluate whether leaving those unpatchable systems in place is wise. Just because you performed a risk assessment in the past, you don't get a pass on this. You have new information today that likely wasn't available when you completed your last risk assessment. Make sure your choices still make sense in light of Meltdown and Spectre.

When budgeting for any IT refresh, don’t forget about the costs to secure that newly deployed hardware. Every time a server is rebuilt, an application is installed, etc. there is some error/misconfiguration rate (hopefully small, often very large). Some of these misconfigurations are critical in nature and can be remotely exploited. Ensure that you budget for security review and penetration testing of newly deployed/redeployed assets. Once the assets are in production, ensure that they are monitored to detect anything that configuration review may have missed (see step #1).

Step 5. Evaluate the security of your hosted applications

You can delegate control, but you can’t delegate responsibility. Particularly when it comes to hosted applications, cloud servers, and Platform as a Service (PaaS), ask some hard questions of your infrastructure providers. Sure your patching plan is awesome and went off without a hitch (see step #3). What about your PaaS provider? How about your IaaS (Infrastructure as a Service) provider? Ask your infrastructure provider:

1. Did they know about the embargoed vulnerability?
2. If so, what did they do to address the issue ahead of patches being available?
3. Have they patched now?
4. If not, when will they be fully patched?
5. What steps are they taking to look for active exploitation of Meltdown? *
* #5 is sort of a trick question, but see what they tell you…

Putting an application, server, or database in the cloud doesn’t make it “someone else’s problem.” It’s still your problem, it’s just off-prem. At Rendition, we’ve put out quite a few calls today to MSPs we work with. Some of them have been awesome – they’ve got an action plan to finish patching quickly and monitoring for all assets. One called us last night for advice, another called us this morning. Others responded with “Melt-what?” leading us to wonder what was going on there. Not all hosting providers are created equal. Even if you evaluated your hosting provider for security before you trusted them with your data, now is a great time to reassess your happiness with how they are handling your security. It is your security after all…

Step 6. Have an executive communications plan

When any new vulnerability of this magnitude is disclosed, you will inevitably field questions from management about the scope and the impact. That’s just a fact of life. You need to be ready to communicate with management. To that end, in the early hours of any of these events there’s a lot of misinformation out there. Other sources of information aren’t necessarily wrong; they’re just not on point. Diving into the register-specific implementation details of a given attack won’t help explain the business impact to an executive.

Spend some time today and select some sources you’ll turn to for information the next time this happens (this won’t be the last time). I’m obviously biased towards SANS, but I wouldn’t be there if they didn’t do great work cutting through the FUD (fear, uncertainty, and doubt) when it matters. The webcast today was written by me, but reviewed by other (some less technical) experts to make sure that it was useful to a broad audience. My #1 goal was to deliver actionable information you could use to educate a wide range of audiences. I think I hit that mark. I’ve seen other information sources today that missed that mark completely. Some were overly technical, others were completely lacking in actionable information.

Once you evaluate your data sources for the next “big one,” walk through a couple of exercises with smaller vulnerabilities/issues to draft communications to executives and senior leadership. Don’t learn how to “communicate effectively” under the pressure of a “big one.” Your experience will likely be anything but “effective.”

Closing thoughts

Take some time today to consider your security requirements. It’s a new year and we certainly have new challenges to go with it. Even if you feel like you dodged this bullet, spend some time today thinking about how your organization will handle the next “big one.” I think we all know it’s not a matter of “if” but a matter of “when and how bad.”

Of course, if I don’t tell you to consider Rendition for your cyber security needs, my marketing guy is going to slap me silly tomorrow. So Dave, rest easy my friend. I’ve got it covered.

Leaving the Backdoor Open: Risk of Remotely Hosted Web Scripts

 

Many websites leverage externally hosted scripts to add a broad range of functionality, from user interaction tracking to reactive design. However, what you may not know is that by using them you are effectively handing over full control of your content to the other party, and could be putting your users at risk of having their interactions misdirected or intercepted.

How so? Well, when you reference a script file, either locally or remotely, you are effectively adding that code into your page when it is loaded by the user. JavaScript, as an example, is a very powerful language that allows you to dynamically add or modify content on the fly. If the location and content of the script are controlled by someone else, that party is able to modify the code as they see fit, whenever they wish. Let’s hope they never get compromised or go rogue!

[Screenshot: a mock browser warning asking the user whether to allow a remotely hosted script to run]

How many of you would click Yes if this warning came with a script? Not many I suspect!
 

Demonstration of concept

I’ve put together a small demo page on my website to show the issue (adamkramer.uk/js-test.html). It’s pretty straightforward: I’ve referenced a script hosted on an external site – in this case, my attempt to include some old-style blink tags on my page.

[Screenshot: the demo page’s HTML, referencing a script hosted on an external site]

However, rather than the intended script, the JavaScript on the other end was replaced with the following code, which redirects the user away from the intended site to something potentially malicious, such as a phishing page.

[Screenshot: the replacement JavaScript, which redirects the visitor to another site]

It’s that easy. This concept is pretty basic, but what if the script were set up to conduct keylogging, or to manipulate input data before it’s submitted? Uh-oh!
 

Well that doesn’t sound good, any suggestions?

Firstly, use the debugging functionality of your favourite browser to identify whether your website leverages any externally hosted scripts. I’ve used Google Chrome against my website and can see that Google Analytics fits into that category.

[Screenshot: Chrome developer tools showing the externally hosted Google Analytics script loaded by the page]

Now consider whether you can bring the scripts in-house and store them on your own server. If you can’t and must use externally hosted scripts, it may be worth keeping an eye on what is being served up – perhaps even schedule a regular web content malware scan against the script files. Additionally, make sure the site hosting the script is still live and the domain hasn’t expired, making it available for sale!
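One way to keep an eye on what is being served up is to baseline a hash of each externally hosted script and alert when it changes. Here is a minimal Python sketch of that idea; the URL, script content, and function names are illustrative, not taken from the demo:

```python
import hashlib
import urllib.request

def script_fingerprint(content: bytes) -> str:
    """Return a SHA-256 hex digest of the script body."""
    return hashlib.sha256(content).hexdigest()

def has_changed(baseline_hash: str, content: bytes) -> bool:
    """True if the fetched script no longer matches the approved baseline."""
    return script_fingerprint(content) != baseline_hash

def check_remote_script(url: str, baseline_hash: str) -> bool:
    """Fetch the remote script and report whether it differs from the baseline."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return has_changed(baseline_hash, resp.read())

# Establish a baseline once, then alert on any change.
approved = script_fingerprint(b"document.title = 'hello';")
print(has_changed(approved, b"document.title = 'hello';"))               # False
print(has_changed(approved, b"window.location = 'https://evil.example';"))  # True
```

Note that any change – including a legitimate vendor update – will trip the alert, which is rather the point: you get to review the new content before continuing to trust it.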

Happy holidays!

-Adam (Twitter: @CyberKramer)

Your Cyber Threat Intelligence Questions Answered

 

As we prepare for the sixth year of the SANS Cyber Threat Intelligence (CTI) Summit, advisory board members Rebekah Brown, Rick Holland, and Scott Roberts discuss some of the most frequently asked questions about threat intelligence. This blog will give you a bit of a preview of what you can expect during the CTI Summit on January 29th and 30th.

How Can New Analysts Get Started in CTI?

Scott Roberts: There are many paths to get into CTI that draw on a lot of different interests and backgrounds. The two major paths start with either computer network defense or intelligence. As a network defender, you start with a solid background in developing defenses and detections, determining what adversary attacks look like, and using basic tools. On the other side, starting from an intelligence analysis background, you start with an analytical framework and method for effectively analyzing data, along with an understanding of bias and analysis traps. In either case, you want to build your understanding of the other side.

This happened to me when I came into CTI. I had a background as a Security Operations Center analyst and in incident response, but I had minimal understanding of analytical methods, bias, or strategic thinking. What helped me was meeting my friend Danny Pickens. Danny came from the opposite background as a U.S. Marine Intelligence analyst. The result was that we traded our experiences:  he taught me about intelligence and I taught him about network defense. We ultimately both ended up more complete analysts as a result.

What Is the Best Size for a Threat Intelligence Team?

Rebekah Brown:  The best size for a threat intelligence team depends greatly on what exactly it is that the team will be doing. So before you ask for (or start filling) headcount, make sure you know the roles and responsibilities of the analysts. Rather than starting with a number – for example, saying “I need three people to do this work” – start by looking at the focus areas the responsibilities require. Do you plan on supporting high-level dissemination, such as briefing leadership, and on providing tactical support to incident responders? You may need two different people for those roles. Do strategic-level briefs occur once a week but require a lot of preparation? That may be a job for one person. Is incident response support ongoing, and is your incident response team going to be working 60 hours a week on several engagements? You may need more than one person for that role. Understanding the responsibilities and requirements will help you build the right size team with the right skills.

Why Does My CTI Team Need Developers, and How Can They Be Used?

Scott Roberts: In many ways, the start of CTI is a data-wrangling problem. When you look at the original U.S. intelligence cycle, the second and third steps (collection and processing) are data-centric steps that can be highly automated. The best CTI teams use automation to handle the grunt work so analysts can focus on the analytical work that’s much more difficult to automate. No team has enough people, and developers can act as force multipliers by making data collection and processing programmatic, automatic, and continuous, ultimately letting computers do things computers are good at and letting human analysts focus on the things humans are good at. Learning some Python or JavaScript lets a single analyst accomplish far more than he or she could do by hand.
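To make Scott’s point concrete, here is the flavour of grunt work that is easy to hand to a computer: normalising and de-duplicating a raw indicator feed. A small Python example – the feed contents below are made up for illustration:

```python
def normalize_indicators(raw_lines):
    """Collapse a messy indicator feed into a sorted, de-duplicated list."""
    seen = set()
    for line in raw_lines:
        # Trim whitespace, lowercase, and drop trailing dots on domains.
        indicator = line.strip().lower().rstrip(".")
        # Skip blank lines and comments.
        if indicator and not indicator.startswith("#"):
            seen.add(indicator)
    return sorted(seen)

feed = [
    "EVIL.example.",
    "# comment line",
    "evil.example",
    "  bad-host.example  ",
]
print(normalize_indicators(feed))  # ['bad-host.example', 'evil.example']
```

Trivial on its own, but run continuously against every feed you ingest, it is exactly the collection/processing step that frees analysts for actual analysis.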

How and Where Do I Get the Internal Data I Need to Do Analysis?

Rebekah Brown: Internal data for analysis comes in all shapes and sizes. Many people automatically think of things like firewall logs and packet captures, and those are definitely critical pieces of information. However, that isn’t all there is to analyze. If we are trying to understand the threats facing our organizations, we should look to past incidents (i.e., log data), but we should also look forward. What is the business planning? Are we entering new markets? Are we making any announcements or affiliations that could change the way we look at adversaries? What are the critical systems or data that would cause significant operational impact if they were targeted? All of this information should be included in the analysis of the threats facing you.  As far as how you obtain that information, well, you have to ask, although this often means figuring out the right people to ask and establishing relationships with them, and THEN asking. It takes time, but the investment in those relationships within your organization will ensure that you have the right information when you need it. Information-sharing isn’t just something we need to work on with external partners, it is something we need to foster internally as well.

What Is the Best Way to Communicate the Value of Threat Intelligence Up the Chain of Command?

Rick Holland: We struggle to implement effective operational metrics, so it isn’t surprising that I often get asked how to communicate the value of threat intelligence to leadership. This is extremely important if your team has been the beneficiary of a budget increase, as you will have to show the benefits of that investment and the trust afforded your team. You should start off by understanding how your organization makes money (or how it accomplishes its mission). I know that sounds like a Captain Obvious statement, but so many defenders don’t truly understand the people, processes, assets, infrastructure, and, most importantly, the business metrics that their company cares about. Are you in retail or financial services? How can you tie threat intelligence back to fraud? Are you in e-commerce? How can you tie threat intelligence back to website availability? You’ve probably heard me suggest you check out your company’s past annual reports and Form 10-Ks. They will provide helpful context for better understanding what matters to your business.

Join us at the Cyber Threat Intelligence Summit!

Rick Holland: This is the sixth year of the CTI Summit, and those of us on the advisory board are singularly focused on curating content that attendees can take back to their jobs right after the event and immediately implement into their programs. The content will be great, but if you have attended in the past, you know that the relationships developed during breaks, meals, and in the evenings will be the gifts that keep on giving. We all have challenges in our jobs, and establishing a network of peers who we can call on to collaborate is essential. We will have events and activities set up to help you build out those networks.

Automated Hunting of Software Update Supply Chain Attacks

 

Software that automatically updates itself presents an attack surface that can be leveraged en masse through the compromise of the vendor’s infrastructure. This was seen multiple times during 2017, with high-profile examples including NotPetya and CCleaner.

Most large organisations have built robust perimeter defences for incoming and outgoing traffic, but this threat vector is slightly different and far more difficult to detect. Update packages are often deployed in compressed, encrypted, or proprietary formats and would not easily be subject to an antivirus scan or sandbox analysis in transit. This leaves us with a large number of trusted processes within our infrastructure that could turn on us at any time and download something evil, which could potentially be undetectable by endpoint antivirus software.

It would be almost impossible to detect all potential malicious code changes, as they could be as simple as changing a single assembly instruction from JNZ to JZ to allow unauthorised access or privilege escalation. However, that doesn’t rule out some additional, proportionate due diligence on the update packages being pulled down and installed.

 

Methodology for discovering inbound evil updates

1. Discover all the software across your estate that could be auto-updating
Let us consider for a moment software that automatically updates, and how it looks on the endpoint. Firstly, it needs to open an internet connection to a server that will allow it to identify whether the version currently running is up to date, or whether there is a newer version available. This may occur when the process is first executed or at intervals (set dates or times, or even randomly). If the version is current, then it will try again later, most likely connecting to the same server and repeating the process of checking version numbers. The traffic volume on each occasion is likely to be very small; after all, all it needs to ask is “what is the current version number?” and the reply need only be “1.0.32” or similar.

If we put these components together, we can build a search for a process, other than a browser (to reduce the noise), which makes repetitive connections to the same server on multiple occasions. We can refine further by looking for low-volume traffic in the transmissions. This data may contain false positives, such as licence validation checks; however, this doesn’t matter, as we will refine it further in the next stages.
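The search described above can be sketched in a few lines of Python. The log format, process names, and thresholds below are hypothetical stand-ins for whatever your proxy or endpoint telemetry actually provides:

```python
from collections import defaultdict

BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}
SMALL_TRANSFER = 10 * 1024  # treat anything under ~10 KB as "low volume"

def find_update_pollers(records, min_occurrences=3):
    """records: iterable of (process, server, bytes_transferred) tuples.
    Returns (process, server) pairs that repeatedly make small transfers,
    excluding browsers to reduce noise."""
    counts = defaultdict(int)
    for process, server, size in records:
        if process.lower() not in BROWSERS and size < SMALL_TRANSFER:
            counts[(process, server)] += 1
    return [pair for pair, n in counts.items() if n >= min_occurrences]

log = [
    ("notepad++.exe", "update.example", 512),
    ("notepad++.exe", "update.example", 498),
    ("notepad++.exe", "update.example", 530),
    ("chrome.exe", "news.example", 700),           # browser: ignored
    ("backup.exe", "backup.example", 50_000_000),  # large transfer: ignored
]
print(find_update_pollers(log))  # [('notepad++.exe', 'update.example')]
```

The surviving (process, server) pairs become the watch list for the next stage.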
2. Monitor traffic ratios for evidence of updates being downloaded
Once we have a list of processes, devices, and servers that have displayed the behaviour from our discovery phase, we can monitor the traffic volume ratios for abnormal behaviour. Consider a process that has been polling the same server with 5KB uploads followed by 50KB downloads for the past month, when suddenly the download volume changes to 300MB. That would be a clear outlier based on both volume and upload/download ratio.
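That ratio check is simple enough to prototype directly. The following Python sketch (the threshold and figures are illustrative, not tuned recommendations) flags an exchange whose download-to-upload ratio is far above the historical norm:

```python
def ratio(up_bytes, down_bytes):
    """Download-to-upload ratio for one polling exchange."""
    return down_bytes / up_bytes

def is_update_like(history, new_up, new_down, threshold=10.0):
    """Flag an exchange whose ratio is far above the historical norm.
    history: list of (up_bytes, down_bytes) from previous polls."""
    baseline = sum(ratio(u, d) for u, d in history) / len(history)
    return ratio(new_up, new_down) > threshold * baseline

# A month of 5 KB up / 50 KB down polls, then a sudden 300 MB download.
history = [(5_000, 50_000)] * 30                     # ratio ~1:10 each poll
print(is_update_like(history, 5_000, 50_000))        # False: normal poll
print(is_update_like(history, 5_000, 300_000_000))   # True: update pulled
```

Anything flagged here is not necessarily malicious – it simply means an update was probably downloaded, which is the trigger for the sandbox analysis in the next step.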
3. Perform automated sandbox analysis on detected binaries post-update
Now we’re looking for a suspicious update, not just any update, so we can send the installer from the endpoint to an internal sandbox for analysis. Alternatively, we could trigger the same update within a sandbox running our gold image. The sandbox would perform a behavioural analysis, which we would use to highlight any suspicious behaviour during or after the update process.
4. Alert CERT for further examination should there be evidence of anything suspicious
This can then feed the alerting system used by our SOC/CERT for manual review and prompt containment activity should the updates contain malware – at this point the number of machines that have pulled the update should be small, and we can prevent further infections through proxy blocks, etc.

 

Demonstration of concept

Here we will demonstrate the concept by conducting a deep dive into the behaviour of the Notepad++ update process. We begin by identifying the polling between the process and the update server. We can see from Process Monitor below that the traffic volumes are low, and the upload/download ratio is relatively stable at approximately 1:11 when there is no update to be downloaded.

[Screenshot: Process Monitor showing low-volume polling traffic, with an upload/download ratio of roughly 1:11]

Now let’s have a look at what happens when there is an update available.

[Screenshot: Process Monitor output when an update is available]

We can see the difference when there is an update to be pulled: in this case the download volume is significantly higher than the upload, as we would expect – in fact, it’s over 673 times higher, at 1:7756.

[Screenshot: Process Monitor showing the update download, with an upload/download ratio of 1:7756]

In addition, following the network activity, we can see that a process launch has been identified. This is the start of the update installation and provides us with details of the location of the update binary. At this point we can take the desired action to review the file; this may include a local scan or, better yet, uploading the binary to an internal sandbox for behavioural analysis – this would give us the added benefit of detecting a threat before antivirus signatures are available.

[Screenshot: Process Monitor showing the process launch that starts the update installation]

 

Practicalities and further ideas

In a large network there is going to be a lot of noise, and you may want to select the top 50 auto-updating processes that are most widely used across the estate and focus on them. This way you can be more selective about the alerts that go to your SOC.

Some processes may update on the fly using hot patching, in which case better results would be obtained by creating a custom sandbox image with all of these processes installed: there would then be no need to send or pull binaries, and you could monitor the entire update process taking place, including all files dropped to disk and executed during the update.

Feedback and ideas are always welcome!

Happy hunting

-Adam (Twitter: @CyberKramer)