TLS/SSL Failures and Some Thoughts on Cert Pinning (Part 1)

By Chris Crowley

It’s going to happen sooner or later…sooner, probably. You’re going to be asked about your company’s mobile app or a mobile app your company wants to install across all mobile devices. They’ll put the request in the “yet another duty as assigned” (YADAA) bucket. You look at the network traffic; it’s using TLS, so you can’t see the content. Cool, right? Maybe, maybe not. Can an attacker get a man-in-the-middle position by tricking users, or by attacking the very root of trust (well, one of the 100 or so root CAs) used to sign a TLS server cert? It’s a complicated question, and answering it requires a thorough understanding of several systems and protocols. I want you to be smart on the subject, so when that YADAA comes your way, you can answer the question knowledgeably and authoritatively. So here you go…read on.

The first installment of this two-part blog post will provide some background on TLS (Transport Layer Security) certificates so you can understand why your mobile apps should implement certificate pinning. In the second part, I’ll discuss a recently released tool called TrustKit, which makes it easy for iOS developers to implement certificate pinning with no change to the iOS app (this presumes the URLs to be protected are already requested over TLS). No analogous project exists for Android yet.

Background

TLS provides transport encryption to protect data in transit over a network, providing confidentiality and integrity protection against untrusted and unknown network nodes. This is a mainstay of our protection of information in transit. In addition to recent attacks against TLS/SSL implementations (iOS/OS X goto fail;, Heartbleed, POODLE), there are fundamental trust issues in how TLS is deployed: any trusted Certification Authority (CA) can issue a certificate for any resource.

The TLS handshake involves the client verifying the certificate the server presents, followed by an optional server validation of client certificates. In practice, servers rarely validate client certificates, but the spec allows for it.

[Diagram: Abbreviated TLS 1.2 handshake]

Diagram Source:

https://upload.wikimedia.org/wikipedia/commons/1/14/Abbreviated_TLS_1.2_Handshake.svg

There are five basic items that the client validates during its assessment of the server-presented certificate (a short sketch after this list shows how an iOS client can delegate most of these checks to the operating system):

  1. Is this certificate current and still valid based on the expiration date in the cert compared to the current system date?
  2. Is this certificate intended for the purpose I’m attempting to use it for? This capability is defined in the certificate. In the case of a client connecting over HTTPS to a server, the certificate must include the specification for identifying a server.
  3. Is this certificate on a revocation list? In practice, we rarely see OCSP (Online Certificate Status Protocol, RFC 6960), a real-time query mechanism for certificate status, used. The client may have downloaded and stored CRLs, and the certificate offered may include a URL to query.
  4. Is this certificate issued to the resource I’m requesting? If the browser requested https://www.willhackforsushi.com, but the certificate was issued to a server named www.montance.com, the browser (or mobile app) shouldn’t allow the connection to proceed.
  5. Is this certificate issued by a certification authority I trust? The certification authorities (CAs) included with the mobile device or web browser are stored in a structure called a certificate store. The list of trustworthy authorities usually numbers in the tens of CAs, up to 100 or more. This means that any of these CAs (companies, organizations, or government entities) could issue a certificate for any server name. In practice, the CA is supposed to vet the requester of a certificate. In practice, the CA is only supposed to issue certificates to trustworthy people who actually represent the organization the certificate is assigned to. In practice, the CA has your personal privacy as its primary concern. In reality, we’ve seen a small number of violations of these practical considerations.
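
To make this concrete, here is a minimal iOS-flavored sketch (my own illustration, not from the original post) of how an app can delegate most of the checks above to the Security framework. The function name and parameters are hypothetical.

import Foundation
import Security

// Illustrative only: evaluate a server certificate (DER bytes) against the
// checks described above for a given hostname.
func serverCertLooksTrustworthy(derBytes: Data, expectedHostname: String) -> Bool {
    guard let cert = SecCertificateCreateWithData(nil, derBytes as CFData) else {
        return false
    }
    // An SSL server policy bound to the expected hostname covers items 2 and 4.
    let policy = SecPolicyCreateSSL(true, expectedHostname as CFString)
    var trust: SecTrust?
    guard SecTrustCreateWithCertificates(cert, policy, &trust) == errSecSuccess,
          let serverTrust = trust else {
        return false
    }
    // Evaluation checks the validity dates (item 1) and chains the certificate
    // to a CA in the device trust store (item 5); revocation (item 3) is
    // best-effort unless explicitly required.
    var error: CFError?
    return SecTrustEvaluateWithError(serverTrust, &error)
}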

To put item 5 in perspective, iOS 9 trusts 218 CAs:

$ curl -s https://support.apple.com/en-us/HT205205 | grep '</tr><tr><td>' | wc -l
218

Attacks

We’ve seen some attacks in the past related to certificate manipulation. We’ve seen complete CA compromises that were used to issue unauthorized certs. We’ve seen CAs tricked into handing out certs. We’ve seen users fail to protect CA-issued private keys, which resulted in unauthorized certificates being issued. Below I cite two reported instances (India CCA and TURKTRUST Inc.) where government organizations issued certificates for servers in an unauthorized fashion. These two were not really a big deal, of course; they were merely certificates for google.com hosts. Also, many organizations using HTTP proxies will configure CA certs on the client endpoints, allowing for the transparent interception of HTTPS communication.

We’ve seen theoretical and in-the-wild attacks against TLS and its related certificate authority infrastructure and organizations. Why, there’s even an RFC, “Summarizing Known Attacks on Transport Layer Security (TLS) and Datagram TLS (DTLS)” (RFC 7457).

Some of my favorite failures are below. I like studying failure; it helps us to improve.

2001 : VeriSign issues Microsoft certificates to unauthorized party: https://support.microsoft.com/en-us/kb/293818

2008 : MD5 collision construction allows rogue CA certs http://www.zdnet.com/article/ssl-broken-hackers-create-rogue-ca-certificate-using-md5-collisions/

2009 : TLS Renegotiation attack http://blog.g-sec.lu/2009/11/tls-sslv3-renegotiation-vulnerability.html

2011: DigiNotar http://arstechnica.com/security/2011/09/comodo-hacker-i-hacked-diginotar-too-other-cas-breached/

2011: Duqu – Code Signing used in malware http://www.symantec.com/connect/blogs/duqu-protect-your-private-keys

2013 : TURKTRUST – Unauthorized certs https://technet.microsoft.com/en-us/library/security/2798897.aspx

2014 : India CCA – Unauthorized Google certs http://www.entrust.com/google-fraudulent-certificates/

2014: Heartbleed http://heartbleed.com/

2014 : goto fail; https://support.apple.com/en-us/HT202934

2015 : Microsoft revokes certs associated with live.fi https://technet.microsoft.com/en-us/library/security/3046310

In many of the cases listed above, some fundamental component of the protocol or the implementation undermined the integrity of the security. Several of the attacks involved entirely fraudulent certificates. You can’t have endpoint protection when there is a broken implementation on the endpoint (as in Apple’s goto fail;) or on the server (as in Heartbleed). But we can address the frequent (2001, 2009, 2011, 2013, 2014, 2015) scenario of fraudulent certificates used on networks by requiring our mobile applications to trust only the specific certificates we intend for use.

Certificate Pinning

Certificate pinning is an application-specific requirement that a specific certificate or CA be used for a TLS connection, rather than accommodating any of the CAs the phone trusts. It can also be used to remove the user from the trust decision: certificate pinning is most effective when the user is unable to override it.
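
As a rough illustration of the idea (a hand-rolled sketch, not TrustKit; the pinned hash value below is a made-up placeholder), an iOS app can compare the SHA-256 hash of the leaf certificate the server presents against a value shipped inside the app:

import Foundation
import Security
import CryptoKit

// Minimal certificate pinning sketch: reject any connection whose leaf
// certificate does not match a hash baked into the app.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // Hypothetical value: the Base64-encoded SHA-256 of your server's certificate.
    private let pinnedCertSHA256 = "q83vEhkVKZPdPHusSM+PL3PW8d1O2MO4nBmsV74y1hA="

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              SecTrustEvaluateWithError(trust, nil),                // normal chain and hostname checks first
              let leaf = SecTrustGetCertificateAtIndex(trust, 0) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let der = SecCertificateCopyData(leaf) as Data
        let hash = Data(SHA256.hash(data: der)).base64EncodedString()
        if hash == pinnedCertSHA256 {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)  // pin mismatch: refuse the connection
        }
    }
}

A session built with URLSession(configuration: .default, delegate: PinnedSessionDelegate(), delegateQueue: nil) will then refuse any connection presenting a certificate that doesn’t match the pin, no matter which CA signed it.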

For protecting your data, this is a useful thing. Is it foolproof? Certainly not. For example, there’s a tweak available for jailbroken iPhones called SSLKillSwitch (https://github.com/iSECPartners/ios-ssl-kill-switch) which is a “Blackbox tool to disable SSL certificate validation – including certificate pinning – within iOS Apps.” Instead of falling victim to the perfect solution fallacy, I recommend developers use certificate pinning to minimize the likelihood of data interception, while acknowledging the reality that a jailbroken or rooted endpoint is not going to provide the same data protection as a non-rooted device.

Cert pinning provides your application with a layer of certificate control that allows you to specify which certificates you’ll accept, substantially limiting the likelihood of unauthorized information disclosure or alteration during network transmission. One scenario where pinning may be undesirable is if your organization uses a privately issued CA to introduce a man-in-the-middle (MITM) TLS proxy for data loss prevention. Google implements one of the largest certificate pinning deployments in the world via its Chrome browser, and it addresses this nuance by allowing CAs in the private trust store to override pinning while prohibiting CAs in the public trust store from doing so. Read more here: https://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-

Here’s a simple example of how certificate pinning could help you protect your organization’s data. A user, alibo, reported on Google Groups a certificate error raised by Google Chrome. As it turns out, Google’s certificate pinning in the Chrome browser identified an unexpected certificate being used to sign certificates for servers in the google.com domain. Read more here: https://www.eff.org/deeplinks/2011/08/iranian-man-middle-attack-against-google . That is a public example of detecting the DigiNotar compromise.

Stay Tuned…

In my next blog installment, I’ll discuss the specific implementation of certificate pinning in iOS applications using TrustKit. Let me know on Twitter (@CCrowMontance) if you found this useful.

-Chris Crowley

P.S. If you like this kind of thing, you really should consider joining me for the SANS Security 575 course on mobile device penetration testing and security in New Orleans in January.  We call the event SANS Security East, and it’s gonna be an awesome event full of great strategies, tactics, techniques, and practical advice on securing your organization’s mobile devices!  I hope you can make it!

Using the SSH “Konami Code” (SSH Control Sequences)

By Jeff McJunkin

Are you familiar with the Konami code? The one popularized by the Contra video game?

[Image: Contra. Pictured above: tangentially related to SSH]

If not, let me fill you in. This code is a sequence of control actions for some video games that’ll let you jump ahead in the game (some call it a “cheat,” but I’d rather not judge). The code itself is a series of button presses, as follows (from Wikipedia):

[Image: the Konami Code (from Wikipedia): Up, Up, Down, Down, Left, Right, Left, Right, B, A]

For me, learning about SSH control sequences felt like finding SSH’s Konami code. First I learned how to kill an SSH client that wasn’t responsive, which was convenient. Then, finding out about changing SSH’s options *after I had established the connection* felt like cheating. Adding SOCKS proxies or local and remote port forwards after I’ve already connected to an SSH server is very useful, and far less annoying than typing my SSH key passphrase again.

So, how do you start a control sequence? First, make sure “Enter” was the last key you pressed, as the SSH client won’t notice the control sequence otherwise. Next, press the tilde character (shift + backtick) followed by another character.

What are the supported escape sequences, you ask? Well, press “?” as your second character, and your SSH client will tell you:

Supported escape sequences:
~. – terminate connection (and any multiplexed sessions)
~B – send a BREAK to the remote system
~C – open a command line
~R – request rekey
~V/v – decrease/increase verbosity (LogLevel)
~^Z – suspend ssh
~# – list forwarded connections
~& – background ssh (when waiting for connections to terminate)
~? – this message
~~ – send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)

Of these, I use “~.” to kill stubborn SSH clients, “~C” to use additional SSH options (like “-D 8080” to start up a new SOCKS proxy), and rarely “~#” to see what forwards I’ve created.

Here’s an example of me connecting to an SSH server (I set up the alias in my ~/.ssh/config file) and using an SSH control sequence to add a SOCKS proxy on port 9001 retroactively:

[Screenshot: An example of using an SSH escape sequence]
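
Since the screenshot isn’t reproduced here, the session looks roughly like this (the host alias is made up and the output is approximate):

laptop:~ user$ ssh web01
jeff@web01:~$                  <- press Enter, then type ~C (the tilde is not echoed)
ssh> -D 9001
Forwarding port.
whoami
jeff
jeff@web01:~$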

Note the line beginning with “whoami”. We were interacting with the SSH client itself at the line beginning with “ssh>”, but when we finished that by pressing Enter, we didn’t get a new prompt from the remote server. The input was still accepted, though, which is why the “whoami” command I typed returned “jeff” in the next line, followed by another newline and the SSH server’s prompt.
Gosh, this is useful stuff.

Thanks for reading along! I hope you find as much use for the SSH Konami Code as I have.

– Jeff McJunkin

 

Upcoming SANS Special Event – 2018 Holiday Hack Challenge


SANS Holiday Hack Challenge – KringleCon 2018

  • Free SANS Online Capture-the-Flag Challenge
  • Our annual gift to the entire Information Security Industry
  • Designed for novice to advanced InfoSec professionals
  • Fun for the whole family!!
  • Build and hone your skills in a fun and festive role-playing video game, by the makers of SANS NetWars
  • Learn more: www.kringlecon.com
  • Play previous versions for free, 24/7/365: www.holidayhackchallenge.com

Player Feedback!

  • “On to level 4 of the #holidayhackchallenge. Thanks again @edskoudis / @SANSPenTest team.” – @mikehodges
  • “#SANSHolidayHack Confession – I have never used python or scapy before. I got started with both today because of this game! Yay!” – @tww2b
  • “Happiness is watching my 12 yo meet @edskoudis at the end of #SANSHolidayHack quest. Now the gnomes #ProudHackerPapa” – @dnlongen

What’s the Deal with Mobile Device Passcodes and Biometrics? (Part 2 of 2)

By Lee Neely

In the first installment of this 2-parter, I discussed the use of mobile device fingerprint scanners to unlock the device.  As a follow-up, I’d like to discuss how a developer can integrate the scanner into their applications.  This discussion may provide some insights into how to secure mobile apps, or even inspire some hacking ideas (this is a pen test related blog, after all).  At the end of the article below, I’ll discuss some ideas for compromising this type of environment.

In part one, I introduced the secure environment used to manage fingerprints. This environment goes by a couple of names; most commonly it’s called the Trusted Execution Environment (TEE) or the Secure Enclave. In both cases, the term describes a separate environment, consisting of hardware and software, which performs specific tasks related to managing and validating trusted information, isolated from the primary device applications and operating system.

Background

GlobalPlatform, an international standards organization, has developed a set of standards for the TEE’s APIs and security services. These were published in 2010, and the corresponding APIs between the Trusted Application and the Trusted OS were completed in 2011. The image below, from www.arm.com, describes the TrustZone standard.

[Figure: ARM TrustZone architecture]

With this published, we saw the introduction of implementations from Apple and Samsung using a secure micro-kernel on their application coprocessor. A TrustZone enabled processor allows for hardware isolation of secure operations. Rooting or Jailbreaking the device Operating System does not impact the secure micro-kernel.

Both platforms are using the same paradigm. There is now a Normal World (NW) and Secure World (SW) with APIs for NW applications to access information in the SW. In both cases the Fingerprint Reader/Scanner is connected to the SW, so the fingerprint scan itself cannot be accessed by NW applications. Both platforms have API calls that mask the details of the implementation from an application developer, making it simple to incorporate this technology into applications.

Apple Implementation Details

Apple introduced their Secure Enclave when they released the iPhone 5S which included Touch ID and their A7 processor. Apple creates a Secure Enclave using encrypted memory and includes a hardware random number generator, and a micro-kernel based on the L4 family with modifications by Apple. The Secure Enclave is created during the manufacturing process with its own Unique ID (UID) that reportedly not even Apple knows. At device startup, an ephemeral key is created, entangled with its UID, and used to encrypt the Secure Enclave’s portion of the device’s memory. Secure Enclave data written to the file system is encrypted with a key that is entangled with the UID and an anti-replay counter. During device startup, the Secure Enclave coprocessor also uses a secure boot process to ensure the software is verified and signed by Apple. In the event the Secure Enclave integrity cannot be verified, the device enters device firmware upgrade (DFU) mode and you have to restore the device to factory default settings.

Using Touch ID with applications:

Third-party apps can use system-provided APIs to ask the user to authenticate using Touch ID or a passcode. The app is only notified as to whether the authentication was successful; it cannot access Touch ID or the data associated with the enrolled fingerprint.

Keychain items can also be protected with Touch ID, to be released by the Secure Enclave only after a successful fingerprint match or entry of the device passcode. App developers also have APIs to verify that a passcode has been set by the user, and therefore that the user is able to authenticate or unlock keychain items using Touch ID.
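
As a hedged sketch (not Apple’s sample code; the function name and the particular accessibility and flag choices below are just one reasonable combination), an app can ask the keychain to gate an item behind Touch ID or the passcode like this:

import Foundation
import Security

// Store a secret so the keychain will only release it after the user
// authenticates with Touch ID or the device passcode.
func storeTouchIDProtectedSecret(_ secret: Data, account: String) -> OSStatus {
    guard let accessControl = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly, // requires a passcode to be set
        .userPresence,                                   // Touch ID or passcode at read time
        nil) else {
        return errSecParam
    }
    let attributes: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecAttrAccessControl as String: accessControl,
        kSecValueData as String: secret
    ]
    return SecItemAdd(attributes as CFDictionary, nil)
}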

Apple APIs

After all that, it may seem like it’s hard to make a call to Touch ID for authentication. Apple is now publishing sample code and API documentation through the Apple iOS Developer Library. Note: Downloading the iOS SDK requires an Apple Developer Program membership.

This Local Authentication framework code sample from Apple’s iOS Developer Library makes it pretty simple to add a Touch ID authentication prompt to an application:

 

[Code sample screenshot: Local Authentication framework Touch ID prompt]
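
Since the sample appears only as a screenshot in the original post, here is a rough Swift approximation of the same idea (my own sketch, not Apple’s exact code):

import LocalAuthentication

// Prompt for Touch ID; the app only learns success or failure and never
// sees the fingerprint data itself.
func authenticateWithTouchID(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        completion(false)   // no sensor, no enrolled fingerprints, etc.
        return
    }
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Authenticate to continue") { success, _ in
        completion(success)
    }
}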

Samsung Implementation Details

Samsung introduced their Trusted Execution Environment in the Galaxy S III. Samsung uses a micro-kernel named Mobicore. Developed by Giesecke & Devrient GmbH (G&D), it uses the TrustZone security extension of ARM processors to create a secure program execution and data storage environment that sits next to the “rich” operating system of the device. This isolation creates the TEE. Secure applications that run inside Mobicore are called trustlets. Applications communicate with trustlets through the Mobicore library, service, and device drivers.

While third-party application developers can create their own trustlets, they need to be incorporated into Mobicore by G&D.

The following figure published by G&D demonstrates Mobicore’s architecture:

[Figure: Mobicore architecture, as published by G&D]

Using Samsung Fingerprint Scanner with applications:

Samsung’s fingerprint scanner can be used with Web sign-in and verification of a Samsung account. Additionally, it will be incorporated into the authorization process for the new Samsung Pay application.

The application workflow for fingerprint authorization is as follows:

  1. User opens app (that uses fingerprint)
  2. User goes to authorize something (such as payment)
  3. App launches fingerprint Trustlet
  4. Secure screen takes over and provides UI for reading fingerprint
  5. Fingerprint scanner accepts input
  6. Trustlet verifies fingerprint by comparing it to stored scan data from registration and provides authorization back to App
  7. Authorization is completed

Samsung API

To access fingerprints on Samsung devices, you need to use the Pass SDK. It works with both the swipe (S5, Note 4) and touch (S6) sensors. The Pass SDK not only lets you check a fingerprint, but also has mechanisms to add and check fingerprints, and even to throw a handled exception in the event your application is run on a device that doesn’t support a fingerprint scanner.

Sample code from the Pass SDK to authenticate with a fingerprint:

[Code sample screenshot: Pass SDK fingerprint authentication]

Compromising

So now that you know how the architecture works and how to access it, the question becomes: how can this be compromised? What are the risks?

The first option is to directly attack the fingerprint database. The FireEye team found that HTC was storing the fingerprint database as a world-readable, world-writable file, which means that any application could read or modify its contents. Patches have reportedly been issued to address this.

The second option is to interfere with the communication between the fingerprint reader and the SW. If you could attach a process to that device, you could build your own collection of the fingerprints used on the device, or, even more insidiously, insert your own fingerprint into the legitimate fingerprint database. Again, the FireEye team found cases where this was possible, and the vendors issued patches.

The third option is to modify the device driver that sits between the NW and SW portions of the device, potentially altering the behavior of the fingerprint scan check so that it always returns true. Other data that is uniquely stored in the SW cannot be faked so easily, so there are limits to how successful this would be. As luck would have it, another presentation at Black Hat 2015 found an exploit.

Di Shen’s presentation from Black Hat 2015 shows how an insecure implementation of the TEE OS and driver in Huawei devices with the HiSilicon chipset can be used to alter trusted memory, inject shell code, access fingerprint data, and otherwise compromise the TEE. These compromises do require kernel (rooted) access to be successful.

A common thread in addressing these weaknesses is patches and updates. Android users have struggled to get these updates consistently and in a timely fashion. On August 5th, Google announced monthly over-the-air (OTA) security updates for their Nexus devices, and Samsung announced a fast-track OTA security update process. Samsung is negotiating with mobile operators worldwide to ensure the process will be successful. With luck, other device manufacturers will follow suit. An important thing to note is the backward compatibility of updates: typically, Android devices only receive updates for eighteen months, if at all. By contrast, Apple iOS 8.4.1 is still supported on the four-year-old iPhone 4s. Once a platform with updates is chosen, it is still necessary to check for and apply the updates that are distributed.

To stay informed about security updates, subscribe to data feeds that carry announcements of updates and fixes. Google recently created an Android Security Updates Google Group. Apple announces their updates through their security page and security-announce mailing list.

Summary

When considering the use of biometrics for either device or application authentication, a risk-based decision has to be made, as with any external authentication service, about whether to trust it. Fingerprint readers are still subject to the bypass techniques described in part one. The simplest mitigation is to rely on device and application integrity checks, which must pass prior to allowing the external authentication to either be enabled or succeed. Additionally, being aware of the physical security of the mobile device is paramount. Protect it like you would protect your wallet.

If the risk is acceptable, users again gain an authentication mechanism that cannot be circumvented merely by observing it. The authentication information itself is securely stored, the application verifies it via calls through a secure API designed to detect tampering, and the application is no longer managing passwords.

One advantage an application developer has is the option of layered authentication. If the target is a corporate application, working with the mobile device administration team to require a device-level passcode as well provides two levels of authentication prior to allowing access to your application.

One more option an application developer has is to require added authentication within the application prior to allowing access, perhaps a PIN plus a fingerprint. While this is appealing from a security perspective, from an end-user perspective it may be a tough option to accept.
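
A minimal sketch of that layered approach (entirely illustrative; how the PIN hash is stored and the function signature are assumptions):

import Foundation
import LocalAuthentication
import CryptoKit

// Layer 1: an application-level PIN, compared as a SHA-256 hash.
// Layer 2: a biometric check handled entirely by the operating system.
func unlockApp(enteredPin: String, storedPinHash: Data, completion: @escaping (Bool) -> Void) {
    let enteredHash = Data(SHA256.hash(data: Data(enteredPin.utf8)))
    guard enteredHash == storedPinHash else {
        completion(false)
        return
    }
    let context = LAContext()
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm your fingerprint to open the app") { ok, _ in
        completion(ok)
    }
}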

Conclusion

In short, adding support for biometric authentication has a nominal impact and, as such, making a pitch for using it when developing applications should be an easy option for management to accept. I suggest seriously considering it for your next mobile application development project.

-Lee Neely

Want to learn more on this topic? You really should check out SEC575: Mobile Device Security and Ethical Hacking, an amazing course covering mobile device security, attacks, and much more!

References:

Apple iOS Security Guide: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
Brute Force Android PIN: http://www.brotherton.com/main/androidpinbruteforce
Brute-forcing Android PINs (Pen Test Partners): https://www.pentestpartners.com/blog/brute-forcing-android-pins-or-why-you-should-never-enable-adb-part-2/
Samsung Fingerprint 4.4: http://www.samsung.com/us/support/howtoguide/N0000011/16610/225807
Samsung S5 Fingerprint Hacked: http://www.tomsguide.com/us/samsung-galaxy-s5-fingerprint-hack,news-18655.html
Samsung/Apple face-off: http://www.cio.com/article/2454883/consumer-technology/fingerprint-faceoff-apple-touch-id-vs-samsung-finger-scanner.html
Improvements to iPhone 6 Fingerprint scanner: http://9to5mac.com/2014/09/24/hack-test-shows-apple-improved-security-and-reliability-of-still-not-perfect-touch-id-sensor-in-iphone-6/
CCC how-to make a fake fingerprint: http://dasalte.ccc.de/biometrie/fingeradbdruck_kopieren.en
FAR/FRR Graph: http://www.securitysales.com/article/knowing-how-biometrics-can-be-beaten-helps-you-win
Entrust Blog: http://www.entrust.com/bypassing-fingerprint-biometrics-nothing-new/
iOS Authenticate with Touch ID: https://developer.apple.com/library/ios/documentation/LocalAuthentication/Reference/LocalAuthentication_Framework/
Samsung Pass SDK: http://developer.samsung.com/galaxy#pass
Samsung KNOX Security Overview: https://www.samsungknox.com/en/system/files/whitepaper/files/An%20Overview%20of%20the%20Samsung%20KNOX%20Platform_V1.11.pdf
MobiCore description: https://www.sensepost.com/blog/2013/a-software-level-analysis-of-trustzone-os-and-trustlets-in-samsung-galaxy-phone/
BlackHat Presentation (shortened link): https://t.co/GblBcEN9a4
BlackHat Presentation (Fingerprints on Mobile Devices whitepaper): https://www.blackhat.com/docs/us-15/materials/us-15-Zhang-Fingerprints-On-Mobile-Devices-Abusing-And-Leaking-wp.pdf
HTC fingerprint storage report (The Register): http://www.theregister.co.uk/2015/08/10/htc_caught_storing_fingerprints_as_worldreadable_cleartext/?mt=1439215037342
Attack TrustZone: https://www.blackhat.com/docs/us-15/materials/us-15-Shen-Attacking-Your-Trusted-Core-Exploiting-Trustzone-On-Android-wp.pdf