Smart Mobility: Mobile Application Security

Part of the Smart Week, the Smart Mobility conference has started this morning. I am speaking in the afternoon, about a comparison of access control in mobile application frameworks. In support of this talk, I will write a few posts on this topic, and also try to follow a bit of what happens at the conference.

This first post is about the complexity of mobile application security. Without even trying to find solutions to anything, I will describe why this problem is difficult, and why it is really difficult for any actor to propose a good solution.

First, let’s start with a little game. Do you think that the following things should be allowed for mobile applications?

  • A mapping application that continuously sends your position to a remote server to provide guidance.
  • An application that can modify its behavior without any notice to the operator or end-user.
  • An application that can be granted more permissions than the ones it was signed for.
  • A signed application that includes a well-known virus.

The answers after the jump.

Of course, all of this has happened, and we can’t blame it on small start-up companies:

  • Google Maps offers to track you using your embedded GPS or, if you don’t have one, using the identification of mobile cells. Google warns you that the application will generate a lot of data traffic, but not really that you are being tracked by a company with a reputation for exploiting every bit of data that goes through its servers.
  • Yahoo has a widget framework, in which widgets can be updated dynamically during their execution, without telling anybody. I don’t want to single out Yahoo, because I am sure that many widget frameworks allow this in one way or another. The main idea here is that, if code is considered as data, it becomes difficult to control it.
  • I will not tell you the details, but there is a very simple way to assign permissions to a signed Java application, even if it has not requested them. Just perfect for Trojan horses!
  • In Windows Mobile, the signature of an application simply verifies the identity of the person/entity who submitted the application. Of course, in such conditions, malware gets signed, both by mistake and on purpose.

If we move to principles, there are a few common assumptions that don’t hold very well:

  • Signatures are safe. In cryptographic terms, signatures are difficult to forge. But the only thing that a signature tells you is that somebody (with access to a private key) has signed a given content. A signature by itself does not provide any guarantee about the content.
  • Interpreted applications always run in a sandbox. Usually, this is quite true. However, running in the sandbox only means that the application will do what it has been authorized to do. If it gets extended permissions, it may be authorized to wipe all your contacts or to block your SIM card. Who knows?
  • Applications cannot be updated. This is mostly true for native applications, and also for “standard” interpreted applications like Java or .Net. However, if we move to widget frameworks, this doesn’t hold any more, and updates are in fact quite frequent (and quite practical, in most cases).
  • Servers should be trusted. If you still think that, it is time to get some security training. Clients should not trust servers more than servers trust clients. Mutual authentication should be the norm.
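The first point deserves emphasis: a signature proves origin, not harmlessness. The sketch below, using the standard `java.security` API (the class name and payload are made up for illustration), shows that any content, however malicious, verifies perfectly when signed with a valid key:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignedAnything {
    public static void main(String[] args) throws Exception {
        // Key pair standing in for a developer's signing key
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // The signed "application" can be anything, including malware
        byte[] app = "malicious payload".getBytes();

        // Whoever holds the private key can sign any content...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(app);
        byte[] sig = signer.sign();

        // ...and verification only proves who signed it, not that it is safe
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(app);
        boolean valid = verifier.verify(sig);
        System.out.println(valid); // prints "true": a valid signature over anything
    }
}
```

The check succeeds because the signature binds the key to the bytes, nothing more; whether those bytes are benign is entirely outside the cryptography.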

In order to understand the security of anything, it is very important to understand who the stakeholders are. In the case of mobile security, this is very much the heart of the problem. The most commonly mentioned actors are:

  • The device manufacturer. There is always at least one device manufacturer, and in some cases, there may be more than one (for instance if you consider the SIM card as a device).
  • The mobile operator. The mobile operator is present and unique in most cases. In some cases, for instance when there is an MVNO, there may be more than one; in some other cases, for instance with an iPod Touch, there may be none.
  • The end user. There is always an end-user, who is the person in front of the screen and keyboard. For simplicity, we will only consider single-user devices, since this use case is by far the most common on mobile phones.
  • The application providers. An application provider is responsible for an application. Application providers may be specialized actors, or they may just be a role of another actor.

The device manufacturers and mobile operators are fairly well-known actors, and their roles are quite natural (respectively, providers of a device and of a mobile network service). The two other actors are not as clearly defined. Let’s start by the application providers. An application provider may be:

  • A device manufacturer. Today’s devices usually come with many bundled applications, to manage contacts, agenda, and more, or to offer a few basic games. The manufacturer then acts as application provider for these applications.
  • A mobile operator. Many devices sold through operators are customized by the operator. Initially, operators simply customized the look and feel, but they are now embedding more and more applications. For these applications, usually branded by the operator, the operator acts as application provider.
  • Third parties. Naturally, applications can in most cases be selected and loaded by the end-user. This selection may happen on the manufacturer or operator portal, but most likely, the applications will come from somewhere else, for instance from a Web site that offers an application to install.

The end user also seems easy to define as the guy who uses the device. However, this end user has relationships with all the other actors:

  • As a mobile service subscriber. As such, the end user is under contract with a mobile operator.
  • As a device owner. This is a simple relationship, but the end-user usually has accepted some usage terms when purchasing the device.
  • As an application user/service subscriber. This relationship comes from buying an application, or subscribing to a service that includes an application (for instance, a chat service).

You may consider that distinguishing between these roles is not important. However, it becomes important when you consider the assets that are at stake in the relationships, as each relationship involves different assets.


The next important item in order to reason about security is the list of the assets to be protected. Every actor needs to protect its own assets.

Let’s start with the device manufacturer, which is an easy one. The manufacturer’s assets are usually embedded in the device itself. The first such asset is the device’s design and code, whose integrity must be protected on the device itself. Confidentiality is also desirable, but it is often difficult to achieve on the device (and elsewhere, actually). One of the challenges for device manufacturers is that the code embedded in a mobile device often comes from several providers. On top of the code, the device usually manages some data, whose integrity needs to be protected as well. Some of these requirements, such as the protection of the IMEI, are actually legal requirements, as this number uniquely identifies a device and is used as a countermeasure against theft. Finally, most devices include cryptographic keys. Some of these keys are public keys, used to verify signatures, whose integrity needs to be protected. Others are secret keys, for instance used to encrypt data on the device, whose confidentiality also needs to be protected.

The operator’s assets are also very important. The main asset of the operator is the network and all network-related information. The access to the network must be controlled at all times, in order to verify that the device/user is actually allowed to use the network, and to perform a given operation on that network. This of course requires, among other things, some authentication, which involves sensitive cryptographic keys. Operators don’t store these directly on the device; instead, they store their sensitive information on the SIM card, which they own, and which is inserted in the phone. The same SIM card may also be used to store other sensitive information; in particular, it is used to store the public key certificates that the operator uses to verify the signatures of mobile applications.

From the application provider’s point of view, the code of the application is an important asset, whose integrity needs to be protected. Application providers would also like to protect the confidentiality of their code, but this is more difficult to achieve in practice. The application’s data sometimes needs to be protected, but this protection is under the direct responsibility of the application itself, as the platform can only provide limited guarantees. The platform usually only guarantees the integrity of meta-data, such as the file descriptors in a filesystem, which cannot be accessed by applications. Confidentiality is often difficult to achieve, unless it is managed by some secure element, like the SIM card.
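Since the platform offers only limited guarantees, an application that wants confidentiality for its data has to provide it itself. A minimal sketch using the standard `javax.crypto` API follows; it is illustrative only (a real deployment would keep the key in a secure element such as the SIM card, not in memory, and a 2008-era handset runtime would not offer this exact API):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AppDataProtection {
    public static void main(String[] args) throws Exception {
        // Application-owned secret key (in memory only for this sketch)
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] plain = "application data".getBytes();

        // Encrypt before writing to storage the platform does not protect
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] stored = enc.doFinal(plain);

        // Decrypt on read; GCM also detects tampering, covering integrity
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        boolean roundTrip = Arrays.equals(dec.doFinal(stored), plain);
        System.out.println(roundTrip); // prints "true"
    }
}
```

The point is the division of labor: the platform protects its own metadata, while confidentiality and integrity of the application’s data are the application’s job.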

The end-user also has many assets. As a mobile subscriber, the end user expects the privacy of subscription information, such as the MSISDN (phone number), to be preserved. Similarly, location information, for instance the current cell identifier, needs to be considered confidential. As a device user, the end user wants to protect the privacy of private data (pictures, videos, messages), as well as the confidentiality of location information and of the device identifier. As an application user, the integrity of the application data should be protected, as well as its confidentiality in some cases (depending on the application itself). These constraints apply both to the information stored on the device and to the information the device transmits.


Now that we have identified the stakeholders, their assets, and the security constraints on these assets, the next step is to look at the threats on these assets, as perceived by the various stakeholders. Some of the threats are universally recognized. Phone theft is one such threat: all actors agree that this needs to be countered, even though some actors may actually gain from it. Similarly, everybody agrees that an end-user who steals airtime is a threat, as well as an application that steals data from the end-user (for instance, a game that copies the user’s contacts). Some other threats are less obvious. An end-user using content without DRM is usually not considered as a problem, and applications that disclose sensitive user data are also widely accepted. In each case, the definition of threats greatly depends on the point of view of the actor. Building an exhaustive catalog of threats is difficult, but we can look at a few specific ones.

Let’s first consider stolen phones. The end-user bears most of the costs related to phone theft. When it occurs, the user loses a device, but also the assets on the phone. In addition, it may be possible for the thief to actually (mis)use the applications on the phone. This particular issue needs to be addressed by application providers, by providing a way to limit the abuse of applications (for instance by blocking them), and by being able to replace an application (and its data?) on a new device. The theft issue can also be addressed by the network operator, for instance by being able to replace the phone with a phone of the same configuration.

The next threat we can consider is the abuse of the network by customers. The obvious victim here is the network operator, who at least loses some business. In the worst case, the integrity of the network could be compromised. Such a threat is normally impossible at the application level, or it would at least require a considerable amount of permissions. The standard APIs defined in application frameworks usually don’t allow applications to circumvent network access controls. This means that such threats will rely on malware that directly accesses low-level features, and consequently, that interpreted frameworks are better protected, because they usually don’t provide any way to access these low-level features. There are other cases in which network abuse can be a threat to the end-user. If an application (malware) includes a Trojan horse that transforms a phone into a spambot, the end-user may not be aware that the application is using the network, at least until his next bill arrives. However, even in such a case, and especially if the attack is widespread, it is quite likely that operators will bear at least a part of the attack’s cost, as they will need to waive some of the costs incurred by their customers.

Another kind of threat is the abuse of an application by an end-user. For instance, a customer may access protected content illegally by intercepting an information stream after decryption by the application, or by modifying the application to make it send the protected content to another application. Such attacks obviously target application providers, or more widely, content providers. Protection may here come from signatures and integrity protections, but basically, the application providers need to rely on the mobile device’s application framework and basic platform to provide security services.

The last threat we will consider is an application that discloses data. The victim is in many cases the end-user, and possibly the operator. The data that may be leaked includes operator data, such as connection-related data or network data, or more likely end-user data such as location information, contact information, pictures, videos, etc. The two potential issues here are the privacy/confidentiality of the information disclosed, and the potential legal issues linked to the disclosure and/or aggregation of the data (for instance, in France, large databases need to be declared to a control authority).

What about the application?

The idea here has been to show the complexity of the security constraints on mobile applications. The main issue is that mobile applications may be both the attacker and the victim, depending on the situation.

If you consider a mobile application as a potential attacker, there are things that you can do. First, the application framework may enforce a lot of protections, for instance based on access control. Then, the operator may impose a security policy, and require the application provider to prove the innocuity of the application before it gets signed. Third-party laboratories may be used in such a certification scheme.
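The access-control idea can be reduced to a very small sketch: the framework keeps the set of permissions granted to an application and checks it before every sensitive operation. The `Sandbox` class and the permission names below are purely illustrative, not any real framework’s API:

```java
import java.util.Set;

public class Sandbox {
    private final Set<String> granted;

    Sandbox(Set<String> granted) {
        this.granted = granted;
    }

    // The framework calls this before every sensitive operation
    void checkPermission(String perm) {
        if (!granted.contains(perm)) {
            throw new SecurityException("denied: " + perm);
        }
    }

    static boolean allowed(Sandbox app, String perm) {
        try {
            app.checkPermission(perm);
            return true;
        } catch (SecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // This application was only granted network access
        Sandbox app = new Sandbox(Set.of("net.http"));
        boolean httpAllowed = allowed(app, "net.http");
        boolean contactsAllowed = allowed(app, "contacts.read");
        System.out.println(httpAllowed + " " + contactsAllowed); // prints "true false"
    }
}
```

The whole model stands or falls on how permissions get granted in the first place, which is exactly where the signing and policy questions above come in.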

If you consider the mobile application as the victim, the options are more limited. In particular, the application is responsible for protecting its assets, and the security policy is necessarily application-dependent, and necessarily relies on the application’s specification. In such a case, it is much more difficult to verify that the policy is enforced, and most application providers are not ready to assume the costs associated with such a certification.

In future posts, we will look at permissions and access control mechanisms, and also at certification schemes that are available to application providers and mobile operators.


  • In relation to code signing, this is a general issue not only on Windows Mobile but on any platform. In the best case, code signing provides some degree of confidence that the signer is accountable for what has been signed. But the signing process isn’t a warranty that the application is free of viruses, or even that it works correctly!

  • Sure, it is possible to get malware signed in any framework; that is precisely the point I want to make.

    There are two reasons why I singled out Windows Mobile: first, it is a native platform, in which it is much easier to write a virus; second, and more importantly, its signing process used to only authenticate the developer, without paying any attention to the application. But of course, it is possible to do the same thing in Symbian. Fortunately, it is a bit more difficult.
