HTML5: Risky Business or Hidden Security Tool Chest?

I was lucky enough to present on using HTML5 to improve security at the recent OWASP AppSec USA Conference in New York City. OWASP has now made a video of the talk available on YouTube for anybody interested.

http://www.youtube.com/watch?v=fzjpUqMwnoI

 

Apple’s iCloud: Thoughts on Security and the Storage APIs

This is a guest post from security researcher Nitesh Dhanjani which follows his previous iOS articles.

At the 2011 Worldwide Developers Conference in San Francisco, Steve Jobs revealed his vision for Apple's iCloud: to demote the desktop as the central media hub and to seamlessly integrate the user's experience across devices.

Apple's iCloud service comprises two distinct features. The first gives the user the ability to back up and restore the device over the air without having to sync with an OS X or Windows computer. This mechanism is completely controlled by Apple and also provides free email and photo syncing capabilities. The second feature of iCloud allows 3rd party developers to leverage data storage capabilities within their own apps.

In this article, I will provide my initial thoughts on iCloud from a security perspective. The emphasis of this article is to discuss the iCloud storage APIs from a secure coding and implementation angle, but I will start by addressing some thoughts on the backup and restore components.

Business Implications of Device Backup and Restore Functionality

Starting with iOS 5, iPhone and iPad users do not have to sync their devices with a computer. Using a wireless connection, they can activate their devices as well as back up and restore their data by setting up an iCloud account.

Following are some thoughts on risks and opportunities that may arise for businesses as their employees begin to use iOS devices that are iCloud enabled.

High potential for mass data compromise using automated tools.

An iOS device that is iCloud enabled continuously syncs data to Apple’s data-centers (and to cloud services Apple has in turn leased from Amazon (EC2) and Microsoft (Azure)). The device also performs a backup about once a day when the device is plugged into a power outlet and when WiFi is available (this can also be manually initiated by the user).

It is easy to intercept the traffic between an iOS device and the iCloud infrastructure using an HTTP proxy tool such as Burp. Interestingly, the backupd process  also backs up data to the Amazon infrastructure:

PUT /[snip]?x-client-request-id=[snip]&Expires=1322186125&AWSAccessKeyId=[snip]&Signature=[snip] HTTP/1.1
Host: us-nca-00001.s3-external-1.amazonaws.com:443
User-Agent: backupd (unknown version) CFNetwork/548.0.4 Darwin/11.0.0
x-amz-date: Fri, 25 Nov 2011 01:00:25 GMT
Content-Type: application/octet-stream
x-apple-request-uuid: [snip]
Connection: close
Proxy-Connection: close
Content-Length: 3574527

In this case, the device had authenticated to Apple domains (*.icloud.com and *.apple.com). Most likely, those servers initiated a back-end session with Amazon tied to the user’s session based on the filename provided to the PUT request above.

The biggest point here from a security perspective is that all of this information is protected only by the user's iCloud credentials, which are present in the Authorization: X-MobileMe-AuthToken header using basic access authentication (Base64 encoded).

iCloud backs up emails, calendars, SMS and iMessages, browser history, notes, application data, phone logs, etc. This information can be a gold-mine for adversaries. It is my hypothesis that in the near future we are going to see automated tools that will do the following:

  1. Continuously scrape compromised usernames and passwords of other services from sources such as http://twitter.com/PastebinLeaks.
  2. Attempt the credentials of each compromised account (whose username is in the form of an email address, as required by iCloud) against iCloud.
  3. For every successful credential, download the restore files, thereby completely compromising the user's information.

The risk to organizations and government institutions is enormous. A malicious entity can automatically download the majority of the data associated with an individual's iPhone or iPad simply by gaining access to that user's iCloud password (which could have been compromised due to password reuse at another service).

Information security teams must incorporate iCloud into their security baselines and policies.

It is possible to turn off iCloud backups (allowCloudBackup) and document synchronization (allowCloudDocumentSync) by deploying specific configuration profiles. More information is available here: http://developer.apple.com/library/ios/#featuredarticles/iPhoneConfigurationProfileRef/Introduction/Introduction.html
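
For reference, here is a minimal, hedged sketch of what the relevant fragment of a Restrictions payload might look like with both settings disabled. The keys match those named above, but the surrounding profile structure is omitted and should be taken from the reference linked above.

[sourcecode language="xml"]
<!-- Hedged sketch: Restrictions payload fragment disabling iCloud backup and
     document sync. Only the relevant keys are shown; the rest of the profile
     (PayloadUUID, PayloadIdentifier, etc.) is omitted. -->
<key>PayloadType</key>
<string>com.apple.applicationaccess</string>
<key>allowCloudBackup</key>
<false/>
<key>allowCloudDocumentSync</key>
<false/>
[/sourcecode]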

Also, Mobile Device Management (MDM) vendors are likely to integrate iCloud related policy settings and this should be leveraged.

3rd party apps as well as iOS apps developed in-house should be assessed for security vulnerabilities and reviewed against the iCloud API related principles listed in the next section.

iCloud Storage APIs

A significant aspect of the iCloud platform is the availability of the iCloud storage APIs [http://developer.apple.com/icloud/index.php] to developers. These APIs allow developers to write applications that leverage the iCloud to wirelessly push and synchronize data across user devices.

iCloud requires iOS 5 and Mac OS X Lion. These operating systems were recently released, and developers are busy modifying their applications to integrate the iCloud APIs. In the coming months, we are bound to see an impressive increase in the number of apps that leverage iCloud.

In this section, I will discuss my initial thoughts on how to securely enable iOS apps using the iCloud Storage APIs. I will step through how to write a simple iOS app that leverages iCloud Storage APIs. This app will create a simple document in the user’s iCloud container and auto update the document when it changes. During this walk-through, I will point out secure development tips and potential abuse cases to watch out for.

[I have relied on Ray Wenderlich's fantastic Beginning iCloud in iOS 5 tutorial(s) for the walk-through below. The tutorial can be found here: http://www.raywenderlich.com/6015/beginning-icloud-in-ios-5-tutorial-part-1].

Creating and Configuring an App ID and Provisioning Profile for iCloud Services

This is the first step required to allow your test app to be able to use the iCloud services. The App ID is really a common name description of the app to use during the development process.


Figure 1: Creating an App ID using the Developer provisioning portal.

The provisioning portal also requires you to pick a “Bundle Identifier” in reverse-domain style. This has to be a unique string. For example, an attempt to create an App ID with the Bundle Identifier of com.facebook.facebook is promptly rejected because it is most likely in use by the official Facebook app.

The next step is to enable your App ID for iCloud services. Click on "Configure" in your App ID list under the "Action" column. Next, check "Enable for iCloud".


Figure 2: Enabling your App ID for iCloud

Select the “Provisioning” tab and click on “New Profile”. Pick the App ID you created earlier and select the devices you want to test the app on. Note that the simulator cannot access the iCloud API so you will need to deploy the app onto an actual device.

Once you have the App ID configured, you have to create a provisioning profile. A provisioning profile is a property-list (.plist) file signed by Apple that contains your developer certificate. Code that is compiled with this developer certificate is allowed to execute on the devices selected in the profile.


Figure 3: Provisioning profile loaded in XCode

Download the profile and open it (double-click and XCode should pick it up as shown in Figure 3).

Writing a Simple iCloud App in XCode

 In Xcode, create a new project. Choose “Single View Application” as the template. Enter “dox” for the product name and the company identifier you used when creating the App ID. The Device family should be “Universal”. The “Use Automatic Reference Counting” option should be checked and the other options should be unchecked.


Figure 4: Creating a sample iCloud project in XCode

Next, select your project in the “Project Navigator” and select the “dox” target. Click on “Summary” and go to the “Entitlements” section.


Figure 5: Project entitlements (iCloud)

The defaults should look like the screen-shot in Figure 5 and you don’t have to change anything.

Open up AppDelegate.m and add the following code at the bottom of application:didFinishLaunchingWithOptions: (before the return YES;):

 

[sourcecode language="objc"]
NSURL *ubiq = [[NSFileManager defaultManager]
  URLForUbiquityContainerIdentifier:nil];
if (ubiq) {
    NSLog(@"iCloud access at %@", ubiq);
    // TODO: Load document...
} else {
    NSLog(@"No iCloud access");
}
[/sourcecode]

 


Figure 6: “Documents & Data” in iCloud settings turned off

Now assume that the test device has the “Documents & Data” preference in iCloud set to “off”. In this case, if you run the project now, you should see the log notice shown in Figure 7.

Figure 7: App unable to get an iCloud container instance

If the "Documents & Data" setting were turned "On", you should see a log notice similar to Figure 8.


Figure 8: iCloud directory on the local device

Notice that the URL returned is a 'local' (i.e. file://) container. This is because the iCloud daemon running on the iOS device (and on OS X) automatically synchronizes information users put into this directory across all of the user's iCloud devices. If the user also has OS X Lion, the iCloud files created on iOS will appear in their ~/Library/Mobile Documents/ directory.
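
To make the mapping concrete, the following is a minimal, hedged sketch of pushing a file into the container so the daemon syncs it. It assumes ubiq is the container URL obtained above and localURL points to an existing file in the app's sandbox; Apple recommends performing this call off the main thread because it can block.

[sourcecode language="objc"]
// Hedged sketch: move an existing local file into the iCloud container's
// Documents subdirectory so the iCloud daemon syncs it across devices.
// "ubiq" and "localURL" are assumed to already exist (see the text above).
NSURL *destURL = [[ubiq URLByAppendingPathComponent:@"Documents"]
                        URLByAppendingPathComponent:@"note.txt"];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSError *error = nil;
    BOOL ok = [[NSFileManager defaultManager] setUbiquitous:YES
                                                  itemAtURL:localURL
                                             destinationURL:destURL
                                                      error:&error];
    if (!ok) {
        NSLog(@"Failed to move file into iCloud: %@", error);
    }
});
[/sourcecode]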

At this point, I’ll turn you over to Ray Wenderlich’s iCloud tutorials I referenced earlier. Go through Part 1 and Part 2 (you can stop at the end of “Setting Up the User Interface”). The tutorial is available here: http://www.raywenderlich.com/6015/beginning-icloud-in-ios-5-tutorial-part-1.

Video: Demonstration of sample iCloud App

Once you are done, you can deploy your app onto two separate iOS devices and watch the text sync using iCloud. The embedded video above demonstrates the app in action.

Security Considerations

The following is a list of security considerations that may be useful in managing risk pertaining to the iCloud storage APIs.

Guard the credentials to your Apple developer accounts. It is important for you to safeguard your Apple developer account credentials and make sure the credentials are complex enough to prevent potential brute forcing. Someone with access to your developer account could release an app with the same Bundle Seed ID (discussed below) that accesses the users’ iCloud containers and ferries the information to the attacker.

The Bundle Seed ID is used to constrain the local iCloud directory. As you can see in Figure 8, the local directory is in the form of [Bundle Seed/Team ID].[iCloud container specified in entitlements]. The app can have multiple containers (i.e. multiple directories) if specified in the entitlements, but only in the form of [Bundle Seed ID].* as constrained in the provisioning profile:

[sourcecode language="xml"]

<key>Entitlements</key>
    <dict>
        <key>application-identifier</key>
        <string>46Q6HN4L88.com.icloudtest.dox</string>
        <key>com.apple.developer.ubiquity-container-identifiers</key>
        <array>
            <string>46Q6HN4L88.*</string>
        </array>
        <key>com.apple.developer.ubiquity-kvstore-identifier</key>
        <string>46Q6HN4L88.*</string>
    </dict>


    <key>TeamIdentifier</key>
    <array>
        <string>46Q6HN4L88</string>
    </array>

[/sourcecode]


Figure 9: Entitlements settings visible in XCode

If you try to change the values of com.apple.developer.ubiquity-container-identifiers or com.apple.developer.ubiquity-kvstore-identifier (in your entitlements settings visible in Xcode) to begin with anything other than what you have in your provisioning profile, Xcode will complain as shown in Figure 10.


Figure 10: Xcode error about invalid entitlements

It is clear that Apple uses the Bundle Seed ID (Team ID) to constrain access to user data in iCloud between different organizations. As discussed earlier, if someone were to get Apple’s provisioning portal to issue a provisioning profile with someone else’s Team ID, they could write Apps that can (at least locally) have access to the user’s iCloud data since their local iCloud file:// mapping will coincide.

Do not store critical information in iCloud containers, including session data. iCloud data is stored locally and synced to the iCloud infrastructure. Users often have multiple devices (iPhone, iPod Touch, iPad, MacBook, iMac), so their iCloud data will be automatically synced across devices. If a malicious entity were to temporarily gain access to the file-system (by having physical access or by implanting malware), he or she could gain access to the local iCloud containers (/private/var/mobile/Library/Mobile Documents/ in iOS and ~/Library/Mobile Documents/ in OS X). It is therefore a good idea not to store critical information such as session tokens, passwords, or personally identifiable financial and healthcare data.

Do not trust data in your iCloud to commit critical transactions. As discussed in the prior paragraph, an attacker with temporary access to a user’s file system can access iCloud documents stored locally. Note that the attacker can also edit or add files into the iCloud containers and the changes will be synced across devices.


Figure 11: Sample medical app that leverages iCloud to store patient data

Assume a hospital were to deploy an iCloud enabled medical app to be used by doctors, such as the one shown in Figure 11. If an attacker were to gain access to the doctor's MacBook Air running OS X, for example, they could look at the local filesystem:

$ cd ~/Library/Mobile\ Documents/46Q6HN4L88~com~hospital~app/Documents
$ ls
Allergies.txt                  1.TIFF
$ cat /dev/null > Allergies.txt
$ cp ~/Downloads/1.TIFF 1.TIFF

Once the attacker has issued the commands above, the doctor’s iCloud container will be updated with the modified information across all devices. In this example, the attacker has altered a particular patient’s record to remove listed allergies and replace the X-Ray image.


Figure 12: Updated medical record after intruder gains temporary access to Doctor’s Macbook Air

The doctor will see the updated record when the medical app is accessed after the attacker makes these changes on the doctor’s Macbook Air (Figure 12).

Store files in the Documents directory. Users can delete individual files in their iCloud accounts if they are stored in the ‘Documents’ directory:

NSURL *ubiquitousPackage = [[ubiq URLByAppendingPathComponent:@"Documents"] URLByAppendingPathComponent:kFILENAME];

Other files will be treated as data and can only be deleted all at once. Storing files in Documents also allows users to notify you if a bug in your application is causing too much data to be written to iCloud, which can exhaust users' storage quotas and thus create a denial of service condition.

Take care to handle conflicts appropriately. Documents that are edited on multiple devices are likely to cause conflicts. Depending upon the logic of your application code, it is important to make sure you handle these conflicts so that the integrity of the user’s data is preserved.
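
As one illustration of what conflict handling can look like, here is a minimal, hedged sketch that resolves a conflict by simply keeping the current version (a "last writer wins" policy). It assumes a UIDocument subclass instance named doc; a real application may need to merge versions or ask the user instead.

[sourcecode language="objc"]
// Hedged sketch: crude "keep the current version" conflict resolution for a
// UIDocument instance ("doc" is assumed). Production code may need to merge
// the conflicting versions or prompt the user.
if (doc.documentState & UIDocumentStateInConflict) {
    NSError *error = nil;
    // Discard every version other than the current one...
    [NSFileVersion removeOtherVersionsOfItemAtURL:doc.fileURL error:&error];
    // ...and mark any remaining conflict versions as resolved.
    for (NSFileVersion *version in
         [NSFileVersion unresolvedConflictVersionsOfItemAtURL:doc.fileURL]) {
        version.resolved = YES;
    }
}
[/sourcecode]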

Understand that Apple has the capability to see your users' iCloud data. Data is encrypted in transit between the local device and the iCloud infrastructure. However, note that Apple has the capability to look at your users' data. There is a low probability that Apple would choose to do this, but depending upon your business, there may be regulatory and legal issues that prohibit storage of certain data in iCloud.

iOS sandboxing vulnerabilities may be exploited by rogue apps. Try putting the string @"../../Documents" in URLByAppendingPathComponent, or editing your container identifier in your entitlements to contain ".." or other special characters. You will note that iOS either traps your attempt at runtime or replaces special characters that could otherwise cause an app to break out of its local iCloud directory. If someone were to find a vulnerability in iOS sandboxing or file parsing mechanisms, it is possible they could leverage it to build a rogue app that is able to access another app's iCloud data.

These security principles also apply to key-value data storage. The iCloud Storage APIs also allow the storage of key-value data in addition to documents. The security tips outlined in this article also apply to key-value storage APIs.

Watch out for iCloud backups. As presented in the earlier section, the user can choose to back up his or her phone data into iCloud. This includes the Documents/ directory within the app sandbox (note: this is not the Documents folder created as part of the iCloud container, but the Documents directory in the application's own sandbox). If there is critical information you do not wish to have backed up, move it to Library/Caches. You may also wish to leverage the addSkipBackupAttributeToItemAtURL method to mark specific files and directories that should not be backed up.
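
The commonly published implementation of that method (from Apple's guidance for iOS 5.0.1) sets the com.apple.MobileBackup extended attribute on the item; a hedged sketch is shown below. Note that on later iOS releases the NSURLIsExcludedFromBackupKey resource value is the documented replacement for this attribute.

[sourcecode language="objc"]
#include <sys/xattr.h>

// Hedged sketch of the helper referenced above (as published for iOS 5.0.1):
// mark a file or directory so it is skipped by iCloud/iTunes backups by
// setting the com.apple.MobileBackup extended attribute.
- (BOOL)addSkipBackupAttributeToItemAtURL:(NSURL *)URL
{
    const char *filePath = [[URL path] fileSystemRepresentation];
    const char *attrName = "com.apple.MobileBackup";
    u_int8_t attrValue = 1;
    int result = setxattr(filePath, attrName, &attrValue, sizeof(attrValue), 0, 0);
    return result == 0;
}
[/sourcecode]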

 

I hope this article contained information to help you and your organization think through security issues and principles surrounding iCloud. The ultimate goal is to enable technology, but in a way that is cognizant of the associated risks. Feel free to get in touch if you have any comments, questions, or suggestions.

Password Tracking in Malicious iOS Apps

In this article, John Bielich and Khash Kiani introduce OAuth, and demonstrate one type of approach in which a malicious native client application can compromise sensitive end-user data.

Earlier this year, Khash posted a paper entitled: Four Attacks on OAuth – How to Secure Your OAuth Implementation that introduced a common protocol flow, with specific examples and a few insecure implementations.  For more information about the protocol, various use cases and key concepts, please refer to the mentioned post and any other freely available OAuth resources on the web.

This article assumes that the readers are familiar with the detailed principles behind OAuth, and that they know how to make GET and POST requests over HTTPS.  However, we will still provide a high-level introduction to the protocol.

OAuth

OAuth is a user-centric authorization protocol that is gaining popularity within the social networking space, and amongst other resource providers such as Google and SalesForce. Furthermore, sites such as Facebook are increasingly positioning themselves as Single Sign-On (SSO) providers for other sites using OAuth as the platform. This makes credentials from such providers extremely high-value targets for attackers.

OAuth is a permission-based authorization scheme.

It allows a user to grant access to a third-party application to access his or her protected content hosted by a different site.  This is done by having the user redirected to the OAuth provider and authenticating on that server, thereby bypassing the need for the client application to know the credentials.  The main goal this protocol attempts to achieve is defeating the password anti-pattern in which users share their credentials with various third-party applications for their service offerings.

OAuth and Embedded Mobile Applications

In mobile operating systems such as iOS, there are controls that can be embedded into the application that enable the login page to be served up within the application itself (in this example, UIWebView). This is often done by client applications to provide a good user experience. However, this approach breaks the trust context by not using a web browser (Safari) external to the app, and several of the browser's security indicators and the OAuth redirect mechanisms that facilitate user authorization are not available inside an embedded view.

The ideal application flow typically consists of the following steps:

1.     The client application requests a specific scope of access.

2.     The client application switches the user to an external web browser, and displays the OAuth authentication page, asking for consent and user permission to grant access to the client application.

3.     Upon user authentication and approval, the client application receives an Access Token that can be leveraged to access protected resources hosted by the Service Provider.

4.     The callback feature of OAuth is typically leveraged to call the local client app back in this scenario (a minimal sketch of steps 2 and 4 follows below).
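
As a rough illustration of steps 2 and 4, the hedged sketch below hands authorization off to Safari and receives the callback through a custom URL scheme. The endpoint URL, CLIENT_ID, SCOPE, and the myapp:// scheme are placeholders, not any specific provider's API.

[sourcecode language="objc"]
// Hedged sketch of steps 2 and 4: authorize in Safari, not in an embedded view.
// The endpoint, CLIENT_ID, SCOPE, and "myapp" scheme are placeholders.
- (void)beginAuthorization
{
    NSString *authorize =
        @"https://provider.example.com/oauth/authorize"
        @"?client_id=CLIENT_ID&response_type=token&scope=SCOPE"
        @"&redirect_uri=myapp%3A%2F%2Foauth-callback";
    [[UIApplication sharedApplication] openURL:[NSURL URLWithString:authorize]];
}

// App delegate: Safari redirects back via the custom scheme after the user
// authenticates and consents; the client application never sees the password.
- (BOOL)application:(UIApplication *)application openURL:(NSURL *)url
  sourceApplication:(NSString *)sourceApplication annotation:(id)annotation
{
    if ([[url scheme] isEqualToString:@"myapp"]) {
        // parse the access token out of url.fragment or url.query here
        return YES;
    }
    return NO;
}
[/sourcecode]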

Building a Malicious Google OAuth Client

As mentioned, the main purpose behind OAuth is to defeat the password anti-pattern, preventing identity attacks through a delegation model where users do not share their Server credentials with third-party Client applications.  However, by using embedded web components with OAuth in native mobile applications, we can provide an attack landscape for malicious rich clients to easily compromise user credentials. This compromise would be undetected by either the user or the service provider, and is especially sneaky, as it provides the user with a false sense of safety in the authentication workflow.

In this section, we will demonstrate a malicious iOS application using Google's own OAuth sample iOS application. Our simple modifications will steal the user's credentials as they log in through Google's authorization endpoint.

The Attack

This sample exploit consists of the following overall approach:

1.     Downloading the native OAuthSampleTouch mobile Google application.

2.     Using Xcode and the iOS SDK, modify the UIWebViewDelegate's webView:shouldStartLoadWithRequest:navigationType: callback method to intercept the login page before the POST request is sent.

3.     Capture the credentials out of the HTTP body.

4.     Use the captured credentials. In this case, we simply output the values. We could easily store or send the credentials in the background to another destination.

The Attack Details

The first step is to download, compile, and run OAuthSampleTouch, which comes with the gtm-oauth code. This is the sample app that uses the Google framework we are compromising.

figure 1 – install OAuthSampleTouch

Secondly, find the GTMOAuthViewControllerTouch. This class handles the interaction for the sample app, including the UIWebView for authentication.

figure 2 – edit GTMOAuthViewController

Thirdly, insert the malicious code into webView:shouldStartLoadWithRequest:navigationType:, which processes each request before it is sent from your device (a sketch follows the list below). This code effectively does the following:

  • Grabs the HTTP BODY and assigns it to a string
  • Creates an array of string components separated by the “&” token to get a name/value pair per array element
  • Iterates through the values, grabbing the “Email” and “Passwd” name/value pairs, which are Google’s username / password fields
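
A hedged approximation of the inserted code is shown below (the exact code used in the demo appears in the screenshot that follows); it simply inspects the outgoing POST body inside the delegate method the sample app already implements.

[sourcecode language="objc"]
// Hedged sketch of the malicious change described above: inspect the outgoing
// POST body and pull out Google's "Email" and "Passwd" form fields.
- (BOOL)webView:(UIWebView *)webView
        shouldStartLoadWithRequest:(NSURLRequest *)request
        navigationType:(UIWebViewNavigationType)navigationType
{
    NSData *bodyData = [request HTTPBody];
    if (bodyData == nil) return YES;

    NSString *body = [[[NSString alloc] initWithData:bodyData
                                            encoding:NSUTF8StringEncoding] autorelease];
    for (NSString *pair in [body componentsSeparatedByString:@"&"]) {
        if ([pair hasPrefix:@"Email="] || [pair hasPrefix:@"Passwd="]) {
            NSLog(@"captured: %@", pair); // a real attacker could store or exfiltrate this
        }
    }
    return YES; // let the request proceed so nothing looks unusual to the user
}
[/sourcecode]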

figure 3- insert evil code in the webView:shouldStartLoadWithRequest:navigationType

Fourthly, we authenticate and grant the application access.

figure 4 – OAuth process with an embedded view

And lastly, we finish authenticating, granting access, and view the compromised credentials.

figure 5 – outputting credentials

Solutions

The best solution for Client application developers is to follow the recommended flow and use an external browser (in this case, Safari).  Although it might diminish the user experience, this approach will allow the user to be assured that the authentication process is outside the context of the application altogether.

From a user's perspective, do not trust native OAuth applications that do not follow the recommended flow and that capture your credentials inside the app rather than in a trusted, neutral 3rd party application such as Safari.

And lastly, the protocol needs to mature its support for native mobile applications: a stricter client authentication process with consumer credentials, better browser API support, and a way to properly notify OAuth providers of unauthorized access.

About the Authors

John Bielich is a security consultant with  14 years of experience building and securing web-based software applications, as well as over one year in iOS development and security. He can be reached at:  john@codeshuffle.com

Khash Kiani is a principal security consultant and researcher with over 13 years of experience in building and securing software applications for large defense, insurance, retail, technology, and health care organizations. He specializes in application security integration, penetration testing, and social-engineering assessments. Khash currently holds the GIAC GWAPT, GCIH, and GSNA certifications, has published papers and articles on various application security concerns, and spoken at Blackhat US. He can be reached at:  khash@thinksec.com

If you liked this post check out SANS' new class on Secure iOS App Development.

Apple iOS Push Notifications: Security Implications, Abuse Scenarios, and Countermeasures

This is a guest post from security researcher Nitesh Dhanjani. Nitesh will be giving a talk on “Hacking and Securing Next Generation iPhone and iPad Apps” at SANS AppSec 2011.

Millions of iOS users and developers have come to rely on Apple’s Push Notification Service (APN). In this article, I will briefly introduce details of how APN works and present scenarios of how insecure implementations can be abused by malicious parties.

Apple’s iOS allows some tasks to truly execute in the background when a user switches to another app (or goes back to the home screen), yet most apps will return and resume from a frozen state right where they left off. Apple’s implementation helps preserve battery life by providing the user the illusion that iOS allows for full-fledged multi-tasking between 3rd party apps.

This setup makes it hard for apps to implement features that rely on full-fledged multi-tasking. For example, an Instant Messaging app will want to alert the user of a new incoming message even if the user is using another app. To enable this functionality, the APN service can be used by app vendors to push notifications to the user’s device even when the app is in a frozen state (i.e. the user is using another app or is on the home screen).


Figure 1: Badge and alert notification in iOS

There are 3 types of push notifications in iOS: alerts, badges, and sounds. Figure 1 shows a screen-shot illustrating both an alert and a badge notification.

iOS devices maintain a persistent connection to the APN servers. Providers, i.e. 3rd party app vendors, must connect to the APN to route the notification to a target device.

Implementing Push Notifications

So how can an app developer implement push notifications? For explicit details, I’d recommend going through Apple’s excellent Local and Push Notifications Programming Guide. However, I will briefly cover the steps below with emphasis on steps that have interesting security implications and tie them to abuse cases and recommendations.

1. Create an App ID on Apple’s iOS Provisioning Portal
This step requires that you sign up for a developer account ($99). This is straightforward: just click on App IDs in the provisioning portal and then select New App ID. Next, enter text in the Description field that describes your app. The Bundle Seed ID is used to signify whether you want multiple apps to share the same Keychain store.

The Bundle Identifier is where it gets interesting: you need to enter a unique identifier for your App ID. Apple recommends using a reverse-domain name style string for this.


Figure 2: iOS provisioning portal rejects “com.facebook.facebook” Bundle Identifier

The Bundle Identifier is significant from a security perspective because it makes its way into the provider certificate we will create later (also referred to as the topic). The APN trusts this field in the certificate to figure out which app on the target user's device to convey the push message to. As Figure 2 illustrates, an attempt to create an App ID with the Bundle Identifier of com.facebook.facebook is promptly refused as a duplicate, i.e. the popular Facebook iOS app is probably using this in its certificate. Therefore, if a malicious entity were to bypass the provisioning portal's restrictions against re-using an existing Bundle Identifier, he or she could possibly influence Apple to provision a certificate that can be abused to send push notifications to users of another app (but the malicious user would still need to know the target user's device token, discussed later in this article).

2. Create a Certificate Signing Request (CSR) to have Apple generate an APN certificate
In this step, Keychain on the desktop is used to generate a public and private cryptographic key pair. The private key stays on the desktop. The public key is included in the CSR and uploaded to Apple. The APN certificate that Apple provides back is specific to the app and tied to a specific App ID generated in step 1.

3. Create a Provisioning Profile to deploy your App into a test iOS device with push notifications enabled for the App ID you selected
This step is pretty straight forward and described in Apple’s documentation linked above.

4. Export the APN certificate to the server side
As described in Apple’s documentation, you can choose to export the certificate to .pem format. This certificate can then be used on the server side of your app infrastructure to connect to the APN and send push notifications to targeted devices.

5. Code iOS application to register for and respond to notifications
Your iOS app must register with the device (which in turn registers with the APN) to receive notifications. The best place to do this is in applicationDidFinishLaunching:, for example:

[sourcecode language="objc"]
[[UIApplication sharedApplication] registerForRemoteNotificationTypes:(UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound)];
[/sourcecode]

The code above will make the device initiate a registration process with the APN. If this succeeds, the didRegisterForRemoteNotificationsWithDeviceToken: application delegate is invoked:

[sourcecode language="objc"]
- (void)application:(UIApplication *)app didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)devToken {
    const void *devTokenBytes = [devToken bytes];
    self.registered = YES;
    [self sendProviderDeviceToken:devTokenBytes]; // custom method
}
[/sourcecode]

Notice that the delegate is passed the deviceToken. This token is NOT the same as the UDID, which is an identifier specific to the hardware. The deviceToken is specific to the instance of the iOS installation, i.e. it will migrate to a new device if the user restores a backup in iTunes onto the new device. The interesting thing to note here is that the deviceToken is static across apps on the specific iOS instance, i.e. the same deviceToken will be returned to other apps that register for push notifications.

The sendProviderDeviceToken: method sends the deviceToken to the provider (i.e. your server side implementation) so the provider can use it to tell the APN which device to send the targeted push notification to.

In the case where your application is not running and the iOS device receives a remote push notification from the APN destined for your app, the didFinishLaunchingWithOptions: delegate is invoked and the message payload is passed in; you would handle and process the notification in this method. If your app is already running, the didReceiveRemoteNotification: method is called instead (you can check the applicationState property to figure out whether the application is active or in the background and handle the event accordingly).
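
A hedged sketch of the second case is shown below, using the applicationState check mentioned above; the logging calls are placeholders for whatever your app actually does with the payload.

[sourcecode language="objc"]
// Hedged sketch: handling a push while the app is running. The logging calls
// stand in for your app's real handling logic.
- (void)application:(UIApplication *)application
        didReceiveRemoteNotification:(NSDictionary *)userInfo
{
    if (application.applicationState == UIApplicationStateActive) {
        // The app was in the foreground: inform the user in-app, and do not
        // silently modify data (see best practice 3 below).
        NSLog(@"Push received while active: %@", userInfo);
    } else {
        // The app was brought to the foreground by the notification.
        NSLog(@"Push received from the background: %@", userInfo);
    }
}
[/sourcecode]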

6. Implement provider communication with the APN
With the provider keys obtained in step 4, you can implement server side code to send a push notification to your users.


Figure 3: (enhanced) Notification format for push notification payloads

As illustrated in Figure 3, you will need the deviceToken of the specific user you want to send the notification to (this is the token you would have captured from the device invoking your implementation of the sendProviderDeviceToken: method described in Step 5). The Identifier is an arbitrary value that identifies the notification (you can use this to correlate an error response; please see Apple's documentation for details). The actual payload is in JSON format.
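
For reference, a minimal example of what the JSON payload portion might look like is shown below; the alert text, badge count, and sound name are arbitrary values chosen for illustration.

[sourcecode language="javascript"]
{
    "aps" : {
        "alert" : "You have a new message",
        "badge" : 1,
        "sound" : "default"
    }
}
[/sourcecode]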

Security Best Practices

Now that we have established details on implementing push-notifications into custom apps, let’s discuss applicable security best practices along with abuse cases.

1. Do not send company confidential data or intellectual property in the message payload
Even though end points in the APN architecture are TLS encrypted, Apple is able to see your data in clear-text. There may be legal ramifications of disclosing certain types of information to third-parties such as Apple. And I bet Apple would appreciate it too (they wouldn’t want the liability).

2. Push delivery is not guaranteed so don’t depend on it for critical notifications
Apple clearly states that the APN service is best-effort delivery.


Figure 4: Do not rely on push for critical notifications

As shown in Figure 4, the push architecture should not be relied upon for critical notifications. In addition, iPhones that are not connected to cellular data (or that have low to no signal) may not receive push notifications while the display has been off for a period of time, since Wi-Fi is often automatically turned off to preserve battery.

3. Do not allow the push notification handler to modify user data
The application should not delete data as a result of the application launching in response to a new notification. An example of why this is important is the situation where a push notification arrives while the device is locked: the application is immediately invoked to handle the pushed payload as soon as the user unlocks the device. In this case, the user of your app may not have intended to perform any transaction that results in the modification of his or her data.

4. Validate outgoing connections to the APN
The root Certificate Authority for Apple’s certificate is Entrust. Make sure you have Entrust’s root CA certificate so you are able to verify your outgoing connections (i.e. from your server side architecture) are to the legitimate APN servers and not a rogue entity.

5. Be careful with unmanaged code
Be careful with memory management and perform strict bounds checking if you are constructing the provider payload outbound to the APN using memory handling APIs in unmanaged programming languages (example: memcpy).
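
As an illustration, here is a hedged sketch of building the enhanced notification frame with explicit length checks before every memcpy. The field layout (command, identifier, expiry, token length, token, payload length, payload) and the 256-byte payload limit are assumptions based on the commonly documented binary interface of that era; verify them against Apple's documentation before relying on this.

[sourcecode language="objc"]
#include <string.h>
#include <stdint.h>
#include <stdbool.h>
#include <arpa/inet.h>

// Hedged sketch: assemble the enhanced APN frame into "out", refusing to
// write anything if the destination buffer or field sizes do not add up.
static bool buildEnhancedFrame(uint8_t *out, size_t outLen,
                               const uint8_t *token, size_t tokenLen,
                               const char *jsonPayload,
                               uint32_t identifier, uint32_t expiry)
{
    size_t payloadLen = strlen(jsonPayload);
    size_t needed = 1 + 4 + 4 + 2 + tokenLen + 2 + payloadLen;
    if (tokenLen != 32 || payloadLen > 256 || outLen < needed)
        return false;                            // refuse rather than overflow

    size_t off = 0;
    out[off++] = 1;                              // command byte: enhanced format
    uint32_t nid = htonl(identifier);
    memcpy(out + off, &nid, sizeof(nid));   off += sizeof(nid);
    uint32_t nexp = htonl(expiry);
    memcpy(out + off, &nexp, sizeof(nexp)); off += sizeof(nexp);
    uint16_t ntl = htons((uint16_t)tokenLen);
    memcpy(out + off, &ntl, sizeof(ntl));   off += sizeof(ntl);
    memcpy(out + off, token, tokenLen);     off += tokenLen;
    uint16_t npl = htons((uint16_t)payloadLen);
    memcpy(out + off, &npl, sizeof(npl));   off += sizeof(npl);
    memcpy(out + off, jsonPayload, payloadLen);
    return true;
}
[/sourcecode]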

6. Do not store your SSL certificate and list of deviceTokens in your web-root
I have come across instances where organizations have inadvertently exposed their Apple signed APN certificate, associated private key, and deviceTokens in their web-root.


Figure 5: Screen-shot of APN certificates and deviceTokens being exposed

The illustration in Figure 5 is a real screen-shot of an iOS application vendor inadvertently exposing their APN certificates as well as a PHP script which lists all deviceTokens of their customers.

 
Figure 6: Scenario depicting rogue push notifications targeting the Facebook and CNN apps once their provider certificates have been compromised

In this abuse case, any malicious entity who is able to get at the certificates and list of deviceTokens will be able to send arbitrary push notifications to this iOS application vendor’s customers (see Figure 6). For example, the jpoz-apns Ruby gem can be used to send out concurrent rogue notifications:

[sourcecode language="ruby"]
APNS.host = 'gateway.push.apple.com'
APNS.pem  = '/path/to/pwn3d/pem/file'
APNS.pass = ''
APNS.port = 2195

stolen_dtokens.each do |dtoken|
  APNS.send_notification(dtoken, 'pwn3d')
end
[/sourcecode]

I would also highly recommend protecting your cert with a passphrase (it is assumed to be null in the above example). And it goes without saying that you should also keep any sample server side code that has the passphrase embedded in it out of your web-root. In fact, there is no good reason why any of this information should even reside on a host that is accessible from the Internet (incoming) since the provider connections to the APN need to be outbound only.

Conclusion

I feel Apple has done a good job of thinking through security controls to apply in the APN architecture. I hope the suggestions in this article help you to think through how to make sure the APN implementation in your app and architecture is secure from potential abuses.

If you liked this post check out SANS’ new course on Secure iOS App Development.

What’s in Your iOS Image Cache?

Backgrounding and Snapshots

In iOS, when an application moves to the background, the system takes a screen shot of the application's main window. This screen shot is used to animate transitions when the app is reopened. For example, pressing the home button while using the logon screen of the Chase app results in the following screen shot being saved to the application's Library/Caches/Snapshots/com.chase directory.

Figure 1: Snapshot showing cached information

Example Application

To further illustrate this point take the following profile page from a fictitious bank app which displays sensitive information like the user’s account number, balance, and secret question/answer.

Figure 2: Application that utilizes sensitive information

If the user presses the home button while viewing this screen, a snapshot of the window will be saved to the application's Snapshots directory. If you run this code in the iOS Simulator, the snapshot is stored in the ~/Library/Application Support/iPhone Simulator/4.2/Applications/<APP_ID>/Library/Caches/Snapshots/com.yourcompany.MyBank directory, where <APP_ID> is a randomly generated string.

Hiding Sensitive Data

The iOS Application Programming Guide states that sensitive information should be removed from views before moving to the background. Specifically, it states that when “the applicationDidEnterBackground: method returns, the system takes a picture of your application’s user interface…If any views in your interface contain sensitive information, you should hide or modify those views before the applicationDidEnterBackground: method returns.”

Fortunately, the code for hiding the sensitive fields in the fictitious “My Bank” application is very straightforward. In the delegate you can simply mark the sensitive fields as hidden:

[sourcecode language="objc"]
- (void)applicationDidEnterBackground:(UIApplication *)application {
    viewController.accountNumberField.hidden = YES;
    viewController.balanceField.hidden = YES;
    viewController.dobField.hidden = YES;
    viewController.maidenNameField.hidden = YES;
    viewController.secretQuestionField.hidden = YES;
    viewController.secretAnswerField.hidden = YES;
}
[/sourcecode]

Of course, you also need to make the fields visible again when the app becomes active, using the following code in applicationDidBecomeActive:

[sourcecode language="objc"]
- (void)applicationDidBecomeActive:(UIApplication *)application {
    viewController.accountNumberField.hidden = NO;
    viewController.balanceField.hidden = NO;
    viewController.dobField.hidden = NO;
    viewController.maidenNameField.hidden = NO;
    viewController.secretQuestionField.hidden = NO;
    viewController.secretAnswerField.hidden = NO;
}
[/sourcecode]

Adding this code to the delegate results in the following screen shot (without sensitive data) being taken when the home button is pressed.

Figure 3: Snapshot showing that sensitive data is not displayed (border added for display purposes)

Preventing Backgrounding

Instead of hiding or removing sensitive data, you can also prevent backgrounding altogether by setting the "Application does not run in background" property in the application's Info.plist file (this adds the UIApplicationExitsOnSuspend key to the plist). Setting this property results in applicationWillTerminate: being called and prevents the screenshot from being taken at all.
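
In the raw Info.plist this corresponds to the following key; the value shown assumes you want the app to terminate rather than suspend.

[sourcecode language="xml"]
<!-- "Application does not run in background" in Info.plist -->
<key>UIApplicationExitsOnSuspend</key>
<true/>
[/sourcecode]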


Figure 4: Screenshot showing plist configuration to prevent backgrounding

Summary

Sensitive data can be inadvertently saved when an app moves to the background. Developers should mitigate this by identifying sensitive fields and hiding them in applicationDidEnterBackground:, or by preventing backgrounding altogether.

About

Frank Kim is the curriculum lead for application security at the SANS Institute and the author of DEV541 Secure Coding in Java. If you liked this post check out SANS’ new class on Secure iOS App Development.

Secure Coding iPhone and iPad Apps Against MiTM

This is a guest post from security researcher Nitesh Dhanjani. Nitesh will be giving a talk on “Hacking and Securing Next Generation iPhone and iPad Apps” at SANS AppSec 2011.

Many iOS applications use HTTP to connect to server side resources. To protect user-data from being eavesdropped, iOS applications often use SSL to encrypt their HTTP connections.

In this article, I will present sample Objective-C code to illustrate how HTTP(S) connections are established and how to locate insecure code that can leave the iOS application vulnerable to Man in the Middle attacks. I will also discuss how to configure an iOS device to allow for interception of traffic through an HTTP proxy for testing purposes.

A Simple App Using NSURLConnection

The easiest way to initiate HTTP requests in iOS is to utilize the NSURLConnection class. Here is sample code from a very simple application that takes in a URL from an edit-box, makes a GET request, and displays the HTML obtained.

Please note that the code in this particular example is mostly from Apple's wonderful tutorial on how to use NSURLConnection.

[sourcecode language="objc"]
//This IBAction fires when the user types in a URL and presses GO
- (IBAction) urlBarReturn:(id)sender
{
    //htmlOutput is the UITextView that displays the HTML
    htmlOutput.text=@"";

    //urlBar is the UITextField that contains the URL to load
    NSURLRequest *theRequest=[NSURLRequest requestWithURL:[NSURL URLWithString:urlBar.text]
                                              cachePolicy:NSURLRequestUseProtocolCachePolicy
                                          timeoutInterval:60.0];

    NSURLConnection *theConnection=[[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

    if(!theConnection)
        htmlOutput.text=@"failed";
}

- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
{
    //receivedData is of type NSMutableData
    [receivedData setLength:0];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [receivedData appendData:data];

    NSString *tempString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];

    htmlOutput.text = [NSString stringWithFormat:@"%@%@",htmlOutput.text,tempString];

    [tempString release];
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
{
    [connection release];

    [receivedData release];

    NSLog(@"Connection failed! Error: %@ %@",
          [error localizedDescription],
          [[error userInfo] objectForKey:NSURLErrorFailingURLStringErrorKey]);

    htmlOutput.text=[NSString stringWithFormat:@"Connection failed! Error %@ %@",[error localizedDescription],
                     [[error userInfo] objectForKey:NSURLErrorFailingURLStringErrorKey]];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    NSLog(@"Succeeded! Received %d bytes of data",[receivedData length]);

    [connection release];

    [receivedData release];
}
[/sourcecode]

The result is a simple iOS application that fetches HTML code from a given URL.


Figure 1: Simple iOS App using NSURLConnection to fetch HTML from a given URL.

In the screen-shot above, notice that the target URL is https. NSURLConnection seamlessly establishes an SSL connection and fetches the data. If you are reviewing the source code of an iOS application for your organization to locate security issues, it makes sense to analyze code that uses NSURLConnection. Make sure you understand how the connections are being initiated, how user input is utilized to construct the connection requests, and whether SSL is being used or not. While you are at it, you may also want to watch for NSURL* in general, including invocations of objects of type NSHTTPCookie, NSHTTPCookieStorage, NSHTTPURLResponse, NSURLCredential, NSURLDownload, etc.

Man in the Middle

74.125.224.49 is one of the IP addresses bound to the host name www.google.com. If you browse to https://74.125.224.49, your browser should show you a warning due to the fact that the Common Name field in the SSL certificate presented by the server (www.google.com) does not match the host+domain component of the URL.


Figure 2: Safari on iOS warning the user due to mis-match of the Common Name field in the certificate.

As presented in the screen-shot above, Safari on iOS does the right thing by warning the user in this situation. Common Name mis-matches and certificates that are not signed by a recognized certificate authority can be signs of a Man in the Middle attempt by a malicious party that is on the same network segment as that of the user or within the network route to the destination.


Figure 3: NSURLConnection‘s connection:didFailWithError: delegate is invoked to throw a similar warning.

The screenshot above shows what happens if we attempt to browse to https://74.125.224.49 using our sample App discussed earlier: the connection:didFailWithError: delegate is called indicating an error, which in this case warns the user of the risk and terminates.

This is fantastic. Kudos to Apple for thinking through the security implications and presenting a useful warning message to the user (via NSError).

Unfortunately, it is quite common for application developers to over-ride this protection for various reasons: for example, if the test environment does not have a valid certificate and the code makes it to production. The code below is enough to over-ride this protection outright:

[sourcecode language="objc"]
- (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace
{
    return [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust];
}

- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    [challenge.sender useCredential:[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust] forAuthenticationChallenge:challenge];
}
[/sourcecode]

The details on this code are available from this stackoverflow post. There is also a private method of NSURLRequest called setAllowsAnyHTTPSCertificate:forHost: that can be used to over-ride the SSL warnings, but apps that use it are unlikely to get through the App Store approval process (Apple prohibits invocations of private APIs).

If you are responsible for reviewing your organization’s iOS code for security vulnerabilities, I highly recommend you watch for such dangerous design decisions that can put your client’s data and your company’s data at risk.

Intercepting HTTPS traffic using an HTTP Proxy

As part of performing security testing of applications, it is often useful to intercept the HTTP traffic being sent by the application. Applications that use NSURLConnection's implementation as-is will reject your local proxy's self-signed certificate and terminate the connection. You can get around this easily by installing the HTTP proxy's self-signed certificate as a trusted certificate on your iOS device [Note: this is not a loophole that defeats the precautions mentioned above; in this case we have access to the physical device and are legitimately installing the self-signed certificate].

If you are using Burp Proxy or Charles Proxy, all you need to do is place the self-signed cert at an HTTP location and browse to it. Instructions for Burp Proxy and instructions for Charles Proxy are available on their respective web sites.

Once you have your iOS device or simulator set up with the self-signed certificate of your HTTP proxy, you should be able to intercept HTTPS connections that would otherwise terminate. This is useful for fuzzing, analyzing, and testing iOS applications for security issues.

If you liked this post check out SANS’ new class on Secure iOS App Development.

How Not to Store Passwords in iOS

The WordPress iOS App

I was looking for an open source iOS application and quickly came across the WordPress app. Once you log in to your WordPress blog via the app, your credentials are stored on the device itself. If done correctly this is not necessarily a bad thing. However, the WordPress app's implementation leaves a bit to be desired. Take a look at the following snippet of code from WPcomLoginViewController.m and, in Spot the Vuln fashion, see if you can find the issue.

Update: The WordPress for iOS team has already committed a change for the next release to address the issue. Check out their comments below.

[sourcecode language="objc"]
// ...snip...

- (void)saveLoginData {
    if(![username isEqualToString:@""])
        [[NSUserDefaults standardUserDefaults] setObject:username forKey:@"wpcom_username_preference"];
    if(![password isEqualToString:@""])
        [[NSUserDefaults standardUserDefaults] setObject:password forKey:@"wpcom_password_preference"];

    if(isAuthenticated)
        [[NSUserDefaults standardUserDefaults] setObject:@"1" forKey:@"wpcom_authenticated_flag"];
    else
        [[NSUserDefaults standardUserDefaults] setObject:@"0" forKey:@"wpcom_authenticated_flag"];
    [[NSUserDefaults standardUserDefaults] synchronize];
}

- (void)clearLoginData {
    [[NSUserDefaults standardUserDefaults] removeObjectForKey:@"wpcom_username_preference"];
    [[NSUserDefaults standardUserDefaults] removeObjectForKey:@"wpcom_password_preference"];

    [[NSUserDefaults standardUserDefaults] removeObjectForKey:@"wpcom_authenticated_flag"];
    [[NSUserDefaults standardUserDefaults] synchronize];
}

- (BOOL)authenticate {
    if([[WPDataController sharedInstance] authenticateUser:WPcomXMLRPCUrl username:username password:password] == YES) {
        isAuthenticated = YES;
        [self saveLoginData];
    }
    else {
        isAuthenticated = NO;
        [self clearLoginData];
    }
    return isAuthenticated;
}
[/sourcecode]

As you can see from the saveLoginData method above, NSUserDefaults' standardUserDefaults is used to store the username, password, and an authenticated flag after the user successfully signs on. To see this code in action, simply log on to the app, then go to Settings -> WordPress on the device to see your stored credentials, as shown in the following screen shot.

The problem is that standardUserDefaults stores information in a plist file in plain text.

If you’re using the iOS Simulator to run the WordPress app the org.wordpress.plist file is stored at ~/Library/Application Support/iPhone Simulator/4.2/Applications/<APP_ID>/Library/Preferences where <APP_ID> is a randomly generated string.

If you're running the app on an actual iOS device and perform a backup (I did this with my iPad), the plist file is located at ~/Library/Application Support/MobileSync/Backup/<DEVICE_ID> where <DEVICE_ID> is a random string.

The plist file will also have a random name but you can simply run grep wpcom_password_preference * to find the file that contains your password.


Figure 1: Screenshot showing the plaintext credentials stored in the plist file

Using the Keychain

Sensitive data like passwords and keys should be stored in the Keychain. Apple’s Keychain Services Programming Guide states that a “keychain is an encrypted container that holds passwords for multiple applications and secure services. Keychains are secure storage containers, which means that when the keychain is locked, no one can access its protected contents”. Moreover, in iOS, each application only has access to its own keychain items.

You interact with the Keychain by passing in a dictionary of key-value pairs that you want to find or create. Each key represents a search option or an attribute of the item in the keychain. For example, the following code shows how you can add a new item to the keychain.

[sourcecode language="objc"]
NSMutableDictionary *query = [NSMutableDictionary dictionary];
[query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
[query setObject:account forKey:(id)kSecAttrAccount];
[query setObject:(id)kSecAttrAccessibleWhenUnlocked forKey:(id)kSecAttrAccessible];
[query setObject:[inputString dataUsingEncoding:NSUTF8StringEncoding] forKey:(id)kSecValueData];

OSStatus error = SecItemAdd((CFDictionaryRef)query, NULL);
[/sourcecode]

The code first creates a NSMutableDictionary then simply sets the appropriate key-value pairs in the dictionary. We start by adding the kSecClass key with the value kSecClassGenericPassword which, as the name implies, means that we are adding a generic password to the keychain. The kSecAttrAccount key specifies the account whose password we’re storing. In our case it would be for the “iostesting” username we saw above. The kSecValueData key specifies the password we want to store in the keychain. In our case the inputString variable would contain the password “princess123” that is in the screen shot above. Finally, calling the SecItemAdd function adds the data to the keychain.

Retrieving, updating, and deleting items from the keychain can be done using other functions [1]. To see how it works, feel free to use this sample code that is based on work by Michael Mayo.
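
For completeness, here is a hedged sketch of reading the item back with SecItemCopyMatching; it assumes the same account variable used when the item was added, and the release/autorelease calls assume manual reference counting.

[sourcecode language="objc"]
// Hedged sketch: read the generic password back for the same "account".
NSMutableDictionary *query = [NSMutableDictionary dictionary];
[query setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass];
[query setObject:account forKey:(id)kSecAttrAccount];
[query setObject:(id)kCFBooleanTrue forKey:(id)kSecReturnData];
[query setObject:(id)kSecMatchLimitOne forKey:(id)kSecMatchLimit];

CFTypeRef result = NULL;
OSStatus error = SecItemCopyMatching((CFDictionaryRef)query, &result);
if (error == errSecSuccess && result != NULL) {
    NSString *password = [[[NSString alloc] initWithData:(NSData *)result
                                                encoding:NSUTF8StringEncoding] autorelease];
    // Avoid logging the secret itself; the length is enough to confirm it worked.
    NSLog(@"retrieved password of length %u", (unsigned)[password length]);
    CFRelease(result);
}
[/sourcecode]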

Protection Attributes

When you are adding or updating keychain items via the SecItemAdd or SecItemUpdate functions you should also specify the appropriate protection attribute. You may have noticed the kSecAttrAccessible key with the value kSecAttrAccessibleWhenUnlocked in the code above. This specifies that the keychain item can only be accessed while the device is unlocked. In total, there are six constants which were introduced in iOS 4.0 that you can use to specify when an item in the keychain should be readable.

Attribute: when the data is accessible

kSecAttrAccessibleWhenUnlocked: Only accessible when the device is unlocked.
kSecAttrAccessibleAfterFirstUnlock: Accessible while locked, but if the device is restarted it must first be unlocked once for the data to be accessible again.
kSecAttrAccessibleAlways: Always accessible.
kSecAttrAccessibleWhenUnlockedThisDeviceOnly: Only accessible when the device is unlocked. Data is not migrated via backups.
kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly: Accessible while locked, but if the device is restarted it must first be unlocked once for the data to be accessible again. Data is not migrated via backups.
kSecAttrAccessibleAlwaysThisDeviceOnly: Always accessible. Data is not migrated via backups.

 

Never use kSecAttrAccessibleAlways. If you are storing user data in the keychain it’s safe to assume that it should be protected.

In general, you should always use at least kSecAttrAccessibleWhenUnlocked to ensure that data is not accessible when the device is locked. If you have to access data while the device is locked use kSecAttrAccessibleAfterFirstUnlock.

The three …ThisDeviceOnly attributes should only be used if you are sure that you never want to migrate data from one device to another.

Summary

Mobile development brings both familiar and new challenges for creating secure applications. The iOS platform provides a number of strong security features with the Keychain and protection attributes introduced in iOS 4.0. As usual, it is up to developers, development teams, and security professionals to make sure that user data is handled securely and appropriate APIs are used correctly.

About

Frank Kim is the curriculum lead for application security at the SANS Institute and the author of DEV541 Secure Coding in Java. If you liked this post check out SANS’ new class on Secure iOS App Development.

[1] See the Keychain Services Reference for full details on the API

UI Spoofing Safari on the iPhone

This is the second in a series of guest posts from security researcher Nitesh Dhanjani. His first post was on Insecure Handling of URL Schemes in Apple’s iOS.  Nitesh will be giving a talk on “Hacking and Securing Next Generation iPhone and iPad Apps” at SANS AppSec 2011.

Popular web browsers today do not allow arbitrary websites to modify the text displayed in the address bar or to hide the address bar (some browsers may allow popups to hide the address bar, but in such cases the URL is then displayed in the title of the window). The reasoning behind this behavior is quite simple: if browsers can be influenced by arbitrary web applications to hide the URL or to modify how it is displayed, then malicious web applications can spoof user interface elements to display arbitrary URLs, thus tricking the user into thinking he or she is browsing a trusted site.

I’d like to call your attention to the behavior of Safari on the iPhone via a proof-of-concept demo. If you have an iPhone, browse to the following demo and keep an eye on the address bar:

http://www.dhanjani.com/iphone-safari-ui-spoofing/

For those who do not have an iPhone available, here is a video

And here are two images detailing the issue:

Figure 1: The image on the left shows the rendered page, which displays the ‘fake’ URL bar while the real URL bar is hidden above. The image on the right shows the real URL bar, which becomes visible once the user scrolls up.

Notice that the address bar stays visible while the page loads, but disappears as soon as rendering is complete. Perhaps this gives the user some time to notice, but it is not a reliable control (and I don’t think Apple intended it to be).

I contacted Apple about this issue, and they let me know that they are aware of the implications but do not yet know when or how they will address it.

I have two main thoughts on this behavior, outlined below:

1. Precious screen real estate on mobile devices

This is most likely the primary reason why the address bar disappears upon page load on the iPhone. Note that on the iPhone, this only happens for websites that follow directives in HTML to advertise themselves as mobile sites (see the source of the index.html in the demo site above for example).

Since the address bar in Safari occupies considerable real estate, Apple may consider displaying or scrolling the current domain name right below the universal status bar (i.e. below the carrier and time stamp). Positioning the current domain context in a location that cannot be altered by the rendered web content would give users an indication similar to the one browsers such as IE and Chrome provide by highlighting the current domain being rendered.

2. The consequences of full screen apps in iOS using UIWebView

Desktop operating systems most often launch the default web browser when an http or https handler is invoked, even though they also provide interface elements that applications can use to render web content inline.

However, in the case of iOS, since most applications are full screen, it is in the interest of application designers to keep users immersed within their application instead of yanking them out into Safari to render web content. Given this situation, it becomes vital for iOS to provide consistency so the user can be assured of the domain from which web content is being rendered.

To render web content within an application, all a developer has to do is use the UIWebView class. It can be as simple as a single line of code such as [webView loadRequest:requestObj];, where requestObj contains the URL to render.
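
For instance, a minimal sketch of the pattern, assuming it runs inside a view controller (the URL here is just a placeholder):

// Load arbitrary remote content inside the app's own view hierarchy.
NSURL *url = [NSURL URLWithString:@"http://example.com/some/shortened/link"];
NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];

UIWebView *webView = [[[UIWebView alloc] initWithFrame:self.view.bounds] autorelease];
[self.view addSubview:webView];
[webView loadRequest:requestObj];
// Note: nothing in this code tells the user what domain the content came from.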

Figure 2: Twitter App rendering web content on the iPad.

The screenshot above illustrates web content rendered by the fantastic Twitter app on the iPad. To create this screenshot, I launched the Twitter app on the iPad, selected a tweet from @appleinsider, and clicked on the URL http://dlvr.it/9D81j in the tweet. Notice that the URL of the actual page being rendered is nowhere to be seen.

In such cases, it is clear that developers of iOS applications need to make sure they display the ultimate domain from which the application renders web content. A welcome addition would be default behavior on the part of UIWebView that displays the current domain context in a designated and consistent location.
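
Until such default behavior exists, one hedged sketch of what an application could do today is to implement the UIWebViewDelegate callbacks and mirror the current host into a label that the rendered content cannot alter (domainLabel is an illustrative property name, not an existing API, and a production implementation would need to distinguish top-level navigations from subframe loads):

// Keep a UILabel, positioned outside the web view, updated with the host
// of every request the web view is about to load.
- (BOOL)webView:(UIWebView *)webView
    shouldStartLoadWithRequest:(NSURLRequest *)request
                navigationType:(UIWebViewNavigationType)navigationType
{
    self.domainLabel.text = [[request URL] host];
    return YES;
}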

Given how rampant phishing and malware attempts are these days, I hope Apple chooses not to allow arbitrary web applications to scroll the real Safari address bar out of view. In the case of applications that utilize UIWebView, I recommend a designated screen location, alterable only by iOS, that displays the domain from which web content is being rendered when serving requests via calls to UIWebView. That said, I do realize how precious real estate is on mobile devices, and if Apple chooses to come up with a better way of addressing this issue, I would welcome that as well.

If you liked this post check out SANS’ new class on Secure iOS App Development.

Insecure Handling of URL Schemes in Apple’s iOS

This is a guest post from security researcher Nitesh Dhanjani.  Nitesh will be giving a talk on “Hacking and Securing Next Generation iPhone and iPad Apps” at SANS AppSec 2011

In this article, I will discuss the security concerns I have regarding how URL Schemes are registered and invoked in iOS.

URL Schemes, as Apple refers to them, are URL Protocol Handlers that can be invoked by the Safari browser. They can also be used by applications to launch other applications to perform certain transactions, but this use case isnʼt relevant to the scope of this discussion.

In the URL Scheme Reference document, Apple lists the default URL Schemes that are registered within iOS. For example, the tel: scheme can be used to launch the Phone application. Now, imagine if a website were to contain the following HTML rendered to someone browsing using Safari on iOS:

<iframe src="tel:1-408-555-5555"></iframe>

In this situation, Safari displays an authorization dialog:


Figure 1: Dialog box in Safari requesting for user authorization to launch the Phone application

Fantastic. A malicious website should not be able to initiate a phone call without the userʼs explicit permission. This is the right behavior from a security perspective.

Now, let us assume the user has Skype.app installed. Let us also assume that the user has launched Skype in the past and that application has cached the userʼs credentials (this is most often the case: users on mobile devices donʼt want to repeatedly enter their credentials so this is not an unfair assumption). Now, what do you think happens when a malicious site renders the following HTML?

<iframe src="skype://14085555555?call"></iframe>

In this case, Safari throws no warning and yanks the user into Skype, which immediately initiates the call. The security implications of this are obvious, including the additional abuse case where a malicious site can make Skype.app call a Skype ID (presumably the attacker’s), whose owner can then uncloak the victim’s identity by noting the victim’s Skype ID on the incoming call.


Figure 2: Skype automatically initiating a call on iOS after being invoked by a malicious website

I contacted Appleʼs security team to discuss this behavior, and their stance is that the onus is on the third-party applications (such as Skype in this case) to ask the user for authorization before performing the transaction. I also contacted Skype about this issue, but I have not heard back from them.

I do agree with Apple that third-party applications should also take part in ensuring authorization from the user, yet their stance leaves the following concerns unaddressed.

Third party applications can only ask for authorization after the user has already been yanked out of Safari. A rogue website, or a website whose client code may have been compromised by a persistent XSS, can yank the user out of the Safari browser. Since applications on iOS run in full-screen mode, this can be an annoying and jarring experience for the user.

Third party applications can only ask the user for authorization after they have been fully launched. To register a URL Scheme, application developers need to alter their Info.plist file. For example, here is a section from Skypeʼs Info.plist:

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>com.skype.skype</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>skype</string>
        </array>
    </dict>
</array>

[Note: If you have a jailbroken iOS device and would like to obtain the list of URL Schemes for applications you have downloaded from the App Store, just copy over the Info.plist files for the applications onto your Mac and run the plutil tool to convert them to XML: plutil -convert xml1 Info.plist]

Next, the application needs to implement handling of the message in its delegate. For example:

- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
{
    // Ask the user for authorization
    // Validate the URL and perform the transaction
    return YES;
}

Unlike the tel: handler, which enjoys the special privilege of asking the user for authorization before yanking him or her away from the browsing session in Safari, third-party applications can only request authorization after they have been fully launched.

A solution to this issue would be for Apple to allow third-party applications to register their URL schemes along with prompt strings, so that Safari can ask the user for authorization prior to launching the external application.
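
In the meantime, a hedged sketch of how a third-party application can at least prompt once it has been launched might look like the following (pendingURL and performTransactionWithURL: are hypothetical names for illustration, not Skype's actual code):

- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
{
    // Hold on to the URL and ask the user before acting on it.
    self.pendingURL = url;
    UIAlertView *alert = [[[UIAlertView alloc]
            initWithTitle:@"Allow this action?"
                  message:[url absoluteString]
                 delegate:self
        cancelButtonTitle:@"Cancel"
        otherButtonTitles:@"Allow", nil] autorelease];
    [alert show];
    return YES;
}

- (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex
{
    if (buttonIndex != alertView.cancelButtonIndex) {
        // Hypothetical helper that validates the URL and performs the transaction.
        [self performTransactionWithURL:self.pendingURL];
    }
    self.pendingURL = nil;
}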

Should Apple audit the security implications of registered URL schemes as part of its App Store approval process? Apple’s tel: handler causes Safari to ask the user for authorization before placing phone calls. The most logical explanation for this behavior is that Apple is concerned about its customers’ security and does not want rogue websites to be able to place arbitrary phone calls using a customer’s device.

However, since the Skype application allows for such an abuse case to succeed, and given that Apple goes to great lengths to curate applications before allowing them to be listed in the App Store, should Apple begin to audit applications for security implications of exposed URL Schemes? After all, Apple is known to reject applications that pose a security or privacy risk to their users, so why not demand secure handling of transactions invoked by URL Schemes as well?

Skype is just one example; there are many additional third-party applications that register URL Schemes that can be invoked remotely by Safari without any interaction from the user. See the following websites for a comprehensive list: http://wiki.akosma.com/IPhone_URL_Schemes and http://handleopenurl.com/scheme.

The list of registered URL Schemes is not available to the average user. You can enumerate all the Info.plist files on your iPhone to ascertain the URL schemes your iPhone or iPad may respond to, assuming you have jailbroken your device. However, it may make sense for this list to be available in the Settings section of iOS so users can see what schemes their device responds to that can be invoked by arbitrary websites. Perhaps this will mostly appeal to advanced users, yet I feel it will help keep application designers disciplined, in the same way the location-usage notification in iOS does. It would also make it easier for enterprises to figure out which third-party applications to provision on employee devices, based on whether any badly designed URL schemes may place company data at risk.

Note that in order to create a registry of exposed URL schemes, Apple cannot simply parse information from Info.plist, because it only contains the initial protocol string. In other words, the skype: handler responds to skype://[phone_or_id]?call and skype://[phone_or_id]?chat, but only the skype: protocol is listed in Info.plist, while the actual parsing of the URL is performed in code. Therefore, in order to implement this proposed registry system, Apple would have to require developers to disclose all URL patterns within a file such as Info.plist.

I feel the risk posed by how URL Schemes are handled in iOS is significant because it allows external sources to launch applications without user interaction and perform registered transactions. Third party developers, including developers who create custom applications for enterprise use, need to realize their URL handlers can be invoked by a user landing upon a malicious website and not assume that the user authorized it. Apple also needs to step up and allow the registration of URL Schemes that can instruct Safari to throw an authorization request prior to yanking the user away into the application.

Given the prevalence and ever-growing popularity of iOS devices, we have come to entrust Apple’s platform with our personal, financial, and health-care data. As such, we need to make sure both the platform and the custom applications running on iOS devices are designed securely. I hope this writeup helped increase awareness of the need to implement URL Schemes securely and of what Apple can do to assist in making this happen.

If you liked this post check out SANS’ new class on Secure iOS App Development.