The distribution of mobile applications heavily relies on digital signatures: applications must be signed before they are distributed. The problem with signatures is that we have often been warned that “we should not trust applications that have not been signed”. Although this is absolutely true, and although anybody who has followed a Logic 101 class knows that it says nothing about signed applications, most end users tend to understand it as “applications that have been signed can be trusted”. And of course, this is not true, at least not always.
There was a case recently in which a piece of Symbian malware was signed by the Symbian Signed program. This case is rather sad, because it somehow gives arguments in favor of security by secrecy, obscurity, and proprietary measures. Symbian has a transparent process, in at least two ways:
- The antivirus tools used to scan the programs are publicly known, which allows malware developers to make sure their code goes undetected.
- Symbian Signed is able to revoke signatures, but this revocation is managed in a way that allows users to ignore it.
Of course, such problems don’t happen with obscure, proprietary programs. Apple doesn’t say anything about their application certification program, and they claim to have the ability to disable any application on any phone (at least as soon as it connects to iTunes or the application store). Even Amazon has shown that they could unilaterally decide to recall a fraudulent book. Don’t get me wrong, I still largely prefer open mobile solutions, but closed systems score one point here.
So, we get back to the original question: Should we sign malware?
And the answer is … yes, of course.
To justify this answer, we need to get back to the basics. What guarantees do we get through a signature?
- Some knowledge about the developer’s identity. In order to submit to most programs, including the Symbian Signed program, you need to sign your own code with a certificate obtained against a proof of identity. Actually, that’s a part of the problem that nobody mentioned in the latest Symbian issue: how did the malware developer hide, and can we close the loophole?
- Some knowledge about the fact that the application follows some rules. This knowledge can be obtained through automated tools, like antivirus tools or static code analysis tools, or through a human analysis, like interactive testing or code review. Usually, the analysis is mostly automated, because human analysis is far more expensive (by a factor of 100 or 1,000).
These guarantees usually don’t mean that the application isn’t malware. Let’s consider my favorite example: how to make sure that an application does not use my Internet connection to spam my entire contact base? Well, in most cases, you can’t. You may be able to know through permission requests that this application needs to read your contacts and access the Internet, but you will have no idea about the relationship between the two.
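On Android, for instance, all the user gets to judge from is a pair of permission declarations; an illustrative manifest fragment like this one shows the two capabilities side by side, but says nothing about whether they are ever combined:

```xml
<!-- Illustrative AndroidManifest.xml fragment. The user sees that the
     application can read contacts AND reach the Internet, but nothing
     about whether contact data ever flows to the network. -->
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.INTERNET" />
```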
There is hope that, eventually, some tools will be available that will allow us to prove that an application is not malware (or at least that it does not do anything from a list of known bad things), but we are not there today.
Static analysis addresses a part of the problem: it allows developers to prove that their applications don’t violate a predefined set of rules. This sounds fine at first: the burden of proof belongs to the developer, which seems fair. However, the problem is in the rules. With today’s static analysis technology, if you want to prove that an application does not dump your contacts on the Internet, you must basically prove that your application uses the Internet or uses your contacts, but never both. At the research level, some people may even be able to prove that your contacts data can never get to the Internet in any way. But what about real life?
In real life, if an application uses your contacts and the Internet, it is also likely to send an e-mail to one of the contacts. In that case, part of the contact (the e-mail address) actually goes onto the Internet, and things become really, really hard to prove.
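The difficulty can be illustrated with a toy taint-tracking model (a sketch in Python, not any real analysis tool; all names are made up): contact data is marked as tainted, anything derived from it stays tainted, and the analyzer flags any tainted value reaching a network sink. At this level of precision, a legitimate “send one e-mail to a chosen contact” flow is indistinguishable from a full contact dump.

```python
# Toy taint-tracking model: values originating from private data carry
# a taint that propagates to anything derived from them.

class Tainted:
    """Wraps a value that originated from private data (e.g. contacts)."""
    def __init__(self, value):
        self.value = value

def read_contact():
    # Reading the address book yields tainted data.
    return Tainted("alice@example.com")  # hypothetical contact entry

def send_to_network(data):
    # The analyzer flags ANY tainted value reaching a network sink.
    if isinstance(data, Tainted):
        return "VIOLATION: private data reaches the network"
    return "ok"

# Legitimate use case: e-mail a single user-chosen contact...
address = read_contact()
# ...but the analyzer cannot tell this apart from spamming the whole
# address book: both flows move tainted data to the same sink.
print(send_to_network(address))
```

Making the rule precise enough to accept the first flow and reject the second is exactly where today’s tools fall short.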
Now, is this problem impossible to solve? Of course not. However, the solution is hard to get to. The idea here is to design APIs so that they are more difficult to misuse; a direct consequence is that static analysis will work better, making it much simpler for developers to prove that their applications can’t do harm.
To get back to our example, one solution is to use a sendMailToContact API that starts an interaction (managed at system level) to select an e-mail address, and then sends a message to this address, without ever letting the application know about the address itself. Such an API addresses the needs of most applications, while making it very easy to prove that contacts aren’t dumped on the Internet: the application never actually accesses your contacts.
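A minimal sketch of this idea (in Python rather than a real mobile SDK; the System class and its internals are hypothetical): the system mediates the recipient selection, and the application only ever learns whether the send succeeded, never the address.

```python
# Sketch of a system-mediated mail API. The contact list lives on the
# system side; the application never sees an address.

class System:
    # System-private data, never exposed to the application.
    _contacts = ["alice@example.com", "bob@example.com"]

    @classmethod
    def sendMailToContact(cls, subject, body):
        # A real system would open a contact-picker UI here; this sketch
        # simulates the user choosing the first contact.
        chosen = cls._contacts[0]
        cls._deliver(chosen, subject, body)
        # The application learns only that the send succeeded.
        return True

    @classmethod
    def _deliver(cls, address, subject, body):
        # Stand-in for the system's actual mail transport.
        print(f"mail sent to <hidden recipient>: {subject}")

# Application code: no contact data ever enters the application, so
# "contacts are dumped on the Internet" is trivially impossible.
ok = System.sendMailToContact("Hello", "Sent via the system picker")
print(ok)  # True
```

The design choice is the key point: by narrowing the API to the actual use case, the property to prove becomes structural rather than a matter of data-flow analysis.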
Of course, this will never work for all applications. If you want to replace the default contact management application on your Android device, you will have to allow this application to directly access your contacts, and also most likely to access the Internet. In fact, you will even want this application to dump (back up) your contacts on the Internet. But most importantly, you are quite likely to use an application that has been developed by a well-established company or community, and that you trust.
So, once again, as of today, we will most likely keep signing malware. However, when this happens, we should at least have some person or some company that can be held accountable for it. That’s what the signature processes bring us.