I have just attended the final review of the S3MS project, a research and development project partially funded by the European Commission in which my company was involved.
The central concept of the S3MS project is Security-by-Contract™. The idea is that application developers ship, with their application, a security contract: a set of security properties that the application guarantees it will comply with. A large variety of properties can be expressed in S3MS, including the following:
- The application does not send more than 5 SMSs per session.
- The application does not send SMSs to premium numbers.
- The application only uses secure network connections after accessing the user’s contacts.
- The application does not disclose user contacts over network connections.
- The application does not use network connections while roaming.
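Properties like these are typically expressed as security automata that watch security-relevant events and reject any execution that would violate the property. As a minimal sketch (the class and event names here are illustrative, not the actual S3MS notation), the first property above could look like this:

```java
// Sketch of one contract property ("no more than 5 SMSs per session")
// as a security automaton. Names are hypothetical, not the S3MS API.
public class SmsQuotaProperty {
    private static final int MAX_SMS_PER_SESSION = 5;
    private int smsSent = 0;

    /** Called before each security-relevant action; returns false to block it. */
    public boolean allow(String event) {
        if ("SEND_SMS".equals(event)) {
            if (smsSent >= MAX_SMS_PER_SESSION) {
                return false; // a sixth SMS would violate the property
            }
            smsSent++;
        }
        return true; // other events are irrelevant to this property
    }
}
```

The same automaton shape works both for static verification (prove no trace reaches the blocking state) and for runtime monitoring when the proof fails.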
The really interesting thing in the S3MS approach is that we have tried to maximize both the range of properties that can be specified and the level of automation that can be provided. With such an approach, it becomes possible to have applications that use all kinds of sensitive functions, while keeping the certification process simple.
The S3MS project has worked on both Java (MIDP) and .NET (Compact), and has produced many results, including the following:
- Static analysis can be used to verify that an application complies with its contract. Of course, some properties cannot be verified statically, for instance those that count uses of a resource.
- When static analysis fails, inlining can be used. This consists of modifying the application code to include security checks corresponding to the contract properties that could not be proved statically. Inlining can be performed on the target device during the application provisioning process.
- It is possible to perform a match between an application’s contract and a policy (which may be a device policy, an operator policy, a user policy, etc.).
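To give an idea of what contract–policy matching means, here is a deliberately simplified sketch. A real matcher would check something like language inclusion between the contract's and the policy's automata; here each property is reduced to an identifier, and a contract matches a policy when it guarantees every property the policy demands (all names are illustrative):

```java
import java.util.Set;

// Simplified sketch of contract/policy matching. In S3MS the check is
// between behavioural specifications; here properties are just labels.
public class ContractMatcher {
    /** A contract matches a policy if it guarantees everything the policy requires. */
    public static boolean matches(Set<String> contractGuarantees,
                                  Set<String> policyRequirements) {
        return contractGuarantees.containsAll(policyRequirements);
    }
}
```

The same check can be run against several policies in turn (device, operator, user), rejecting the application as soon as one fails to match.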
Here is an idea of how an application provisioning process could go for a Java application, combining all these technologies:
- Load the application’s contract with the JAD file.
- Check that the application contract matches the various policies on the device. In case of mismatch, reject the application or, in some cases, ask the user.
- Load the application itself.
- Perform static analysis (using a Web service), or verify the signatures generated after a previous successful static analysis.
- For the properties that could not be statically verified, inline the code that will perform the checks dynamically.
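To make the last step concrete, here is a sketch of what inlined code could look like. Real inlining rewrites the application's bytecode, not its source, and the names below are hypothetical; the point is that each call to a sensitive API is replaced by a guarded version enforcing the properties that could not be proved statically:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an application after inlining (hypothetical names).
// Each direct sendSms(...) call site is rewritten to checkedSendSms(...).
public class InlinedClient {
    static final int MAX_SMS = 5;
    static int smsSent = 0;
    static List<String> outbox = new ArrayList<>();

    // Original sensitive API call, unchanged.
    static void sendSms(String number) { outbox.add(number); }

    // What the inliner emits in place of each direct call.
    static void checkedSendSms(String number) {
        if (number.startsWith("1-900"))   // "no premium numbers" property
            throw new SecurityException("premium number blocked");
        if (smsSent >= MAX_SMS)           // "at most 5 SMSs per session" property
            throw new SecurityException("SMS quota exceeded");
        smsSent++;
        sendSms(number);
    }
}
```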
The last point can be tricky in some cases, because it involves changes to the program itself. This makes it impossible to perform further signature verifications (for instance, to verify the integrity of the program). It may therefore be necessary to compute a new signature for the modified program, using a local key.
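Re-signing with a local key is straightforward with the standard Java security APIs. A minimal sketch, assuming an RSA device key (the class name is illustrative):

```java
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of re-signing an inlined program with a device-local key,
// so later integrity checks succeed on the modified code.
public class LocalResigner {
    /** Sign the (possibly inlined) program bytes with the local private key. */
    public static byte[] sign(byte[] program, PrivateKey key)
            throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(program);
        return s.sign();
    }

    /** Verify program integrity against the local signature. */
    public static boolean verify(byte[] program, byte[] sig, PublicKey key)
            throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(program);
        return s.verify(sig);
    }
}
```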
The project does not claim to have solved all issues, and we will see which ones remain open. However, it proposes an innovative way to handle security certification, in which operators and end users can define their own requirements, developers can define the properties that they guarantee on their application, and the entire deployment process can be automated (i.e., cheap). This approach is a nice alternative to the current certification models (which are not systematic, and in which application developers cannot really specify their application’s properties), while allowing applications to safely access sensitive features (which is a problem for the systems that don’t include any kind of certification, as operators may want to restrict/forbid access to some features).
In terms of the unresolved issues, here are the most interesting ones:
- How to interact with the end user? How to define the end user’s security policy?
- How to organize the flow of actions in order to make things easy for developers (good integration in their process, low certification price), while keeping high security levels?
- What to do when everything fails? When an application’s contract does not match a policy? When an application’s code does not match its contract?
Of course, this approach can be used for Java. It could for instance be integrated in the Java Verified program, which already requires developers to fill out a security questionnaire. It could also be used for .NET, and provide an interesting alternative to the “This application is not signed. It may screw up your phone. Install it anyway?” kind of user prompt. But the most interesting targets for the approach are the systems in which developers enjoy a great level of freedom, like Android and the iPhone. In such frameworks, Security-by-Contract™ could bring the security that operators and users may need, without putting new limits on application developers.
Of course, Java Card 3 would also be an interesting target, but this would mean that applications of unknown origin can be freely downloaded into a card, which does not seem likely to happen any time soon.