Inside Java Card: From APDUs to CAP File and Interoperability

As promised in the previous post, here are a few Java Card stories. Over the almost 20 years of Java Card history, many design decisions have been made about the product, some successful, some less so; these are the stories behind a few of those discussions and decisions.

API vs. APDU

Before Java Card, a smart card specification consisted of a set of APDU commands, with a precise syntax and, in most cases, a precise semantics associated with that syntax. With Java Card, for the first time, an API was designed to run inside the card.
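
To make the contrast concrete, here is a minimal sketch of that on-card API: a toy applet that dispatches on the instruction byte of incoming APDUs. The applet class and its INS value are made up for illustration; the framework types (Applet, APDU, ISO7816, ISOException, Util) are the standard javacard.framework API.

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.Util;

    public class HelloApplet extends Applet {

        // Hypothetical instruction byte for our single command.
        private static final byte INS_GET_GREETING = (byte) 0x10;

        // "Hello" in ASCII.
        private static final byte[] GREETING = {
            (byte) 0x48, (byte) 0x65, (byte) 0x6C, (byte) 0x6C, (byte) 0x6F
        };

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new HelloApplet().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return; // nothing special to do on SELECT
            }
            byte[] buffer = apdu.getBuffer();
            switch (buffer[ISO7816.OFFSET_INS]) {
            case INS_GET_GREETING:
                // Copy the response into the APDU buffer and send it.
                Util.arrayCopyNonAtomic(GREETING, (short) 0, buffer,
                        (short) 0, (short) GREETING.length);
                apdu.setOutgoingAndSend((short) 0, (short) GREETING.length);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }

Note that even here, the contract with the outside world is still the APDU: the class, instruction, and status bytes are fixed by the protocol, and the API only structures their handling on the card.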

At that time, some of us believed that this would rapidly lead to a paradigm shift in the card world, with the definition of card products moving from the definition of APDUs (which are low-level communication protocol items) to the nobler definition of an API, closer to a functional specification. That’s why we worked on the definition of the Java Card RMI protocol, whose objective was to allow interfaces to cards to be defined independently of the underlying protocol.
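
As a sketch of what we had in mind, here is a hypothetical JCRMI-style remote interface; the Wallet example is mine, not from any specification. Java Card RMI reused subsets of java.rmi.Remote and java.rmi.RemoteException, and on the card the implementing class would typically extend javacard.framework.service.CardRemoteObject and be dispatched by an RMIService.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // The card exposes methods, not APDU commands; the client invokes
    // them and the JCRMI layer takes care of the wire format.
    public interface Wallet extends Remote {
        short getBalance() throws RemoteException;
        void debit(short amount) throws RemoteException;
    }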

Well, we were wrong. Maybe not on the principles, but most definitely in practice. Nobody adopted the RMI protocol, and APDU protocols are still being defined today. In the world of smart cards, this makes sense, since low-level considerations remain a significant challenge (for instance, performance issues in some contactless transactions). In payment, mobile, and identity, APDU standards remain the norm everywhere.

Why did that happen? The answer is complex, but one major factor is surely that Java Card remains a matter for obscure security specialists, and that smart card protocols are exposed to only a few developers. In such a context, there is no real need to simplify the interfaces exposed to developers.

What about new markets, though? Are there good reasons to define new APIs rather than new protocols? We can consider two markets here: the mobile TEE and IoT. In both cases, the market is sufficiently new to avoid compatibility issues. Even if standards like GlobalPlatform’s exist, they are not yet strong enough to be real obstacles.

Let’s start with the mobile TEE. The idea here is to isolate the security functionality of sensitive smartphone applications. At first glance, this looks like a fairly open model, in which a developer would split an application into a security-sensitive part, which goes into the TEE, and a non-sensitive part, which runs on the “normal” OS. Well, that would work if developers had a clue about how to do this, but it sure isn’t the case. That’s why TEE providers are targeting “security providers”, i.e., developers who work on security products. However, on closer inspection, the requirements are always the same: authenticating users, protecting stored data (including keys and other credentials), and getting access to a little crypto to implement proprietary protocols. Most developers do not in any way need a full-fledged TEE with applications and more. All they need is a piece of trusted software that they can leverage in their applications without having to dive too deep into the security part.

This may sound good to a supporter of APIs, but I am not sure it is, as “normal” developers need a generic API for their apps. The way these APIs are implemented is not their problem, and it is quite likely that at least part of the implementation will live on the “normal” side. In the end, the usual suspects (security specialists) will be designing the interface between the normal applications and the TEE, and they may be tempted to “optimize” things with low-level protocols.
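
To pin down what such a generic API could look like, here is a purely hypothetical sketch of the three recurring requirements listed above; none of these names come from any real TEE or GlobalPlatform API.

    // Hypothetical facade over TEE-backed services; every name here is
    // illustrative, not taken from any existing API.
    public interface TrustedServices {

        // Authenticate the device user (PIN, biometrics, ...).
        boolean authenticateUser();

        // Store and retrieve data under TEE-protected keys.
        void putProtected(String alias, byte[] data);
        byte[] getProtected(String alias);

        // Just enough crypto to implement a proprietary protocol.
        byte[] sign(String keyAlias, byte[] message);
    }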

IoT is different, though. It is interesting for at least two reasons: (1) there is no history and there are no backward compatibility requirements, so why bother with APDUs? and (2) there is a wide variety of devices and applications. In addition, in IoT we are quite likely to see situations where there is no secure element, so new frameworks have to be designed. I am not sure where this is going, but I will certainly follow what my ex-colleagues from Oracle are doing with it.

CAP file vs. class file

This debate was very active during the design of Java Card 3.0 Connected, in which the class file ended up being selected. I was against that decision then, and I am still against it today. The class file is very nice, but it was not designed with smart cards and other constrained environments in mind, so it makes perfect sense to design a more optimized format for those environments: the CAP file.

Here are a few good reasons to do so:

  • Optimization. That’s the obvious one. We can optimize for size by reducing overhead, and we can also optimize loading and linking times by exploiting Java Card’s restrictions on dynamic loading.
  • Specialized instruction set. This may be considered a special kind of optimization, but Java Card uses a specific instruction set, which in particular includes a few combined instructions that make programs smaller (see the sketch after this list). This path has definitely not been fully exploited in Java Card.
  • Enhanced provability. Java binaries are verifiable, and Java Card could benefit from additional, specific verifications, for instance related to the management of objects. However, Java Card has never exploited this, and bytecode verification has remained somewhat of an issue throughout the years.
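
To illustrate the specialized instruction set: the JCVM is 16-bit oriented, so for the method below, javac emits int-based class-file bytecode (iload, iadd), while the CAP converter maps it onto the more compact short-typed instructions (sload, sadd). The specification also defines combined instructions such as getfield_s_this, which folds the common aload_0 + getfield pair into a single opcode. The exact mapping is the converter’s business; this is only a sketch of the idea.

    public final class ShortMath {
        // The explicit (short) cast is required because Java promotes
        // arithmetic operands to int; it is also what lets the converter
        // keep the whole computation in short-typed instructions.
        public static short add(short a, short b) {
            return (short) (a + b);
        }
    }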

On the other hand, from my biased point of view, the only reason to use class files would be to use the same format as everybody else. But since we are not running the same programs, what is the point? Well, there is at least one: with class files, there would be no debate about the suitability of bytecode verifiers, as no one disputes that the Java bytecode verifier is correct.

So, I hope that the CAP file remains in the future, and that it evolves to integrate more aggressive optimizations, as well as a more sophisticated set of analyses that leverage the specific nature of Java Card code.

Interoperability vs. differentiation

This has been a recurrent discussion item at the Java Card Forum: what should be standardized, and what should be left as a differentiator between the actors? Here are a few interesting topics:

  • Basic functionality should be interoperable. That’s been OK for a long time, and nobody has dared go back on that one, at least officially. However, there have been some tensions, in particular around the SIM Toolkit. With proactive commands, it is possible to interrupt the execution of a SIM Toolkit routine while waiting for a response from the mobile, and during that time, other commands can be processed (see the sketch after this list). This suspension mechanism, which is essential to the SIM Toolkit spec, has never been formalized in the Java Card platform, leaving an obvious interoperability gap.
  • Performance is for differentiation. Another obvious one, although vendors have regularly stymied every effort to define a benchmark for Java Card, such as the Mesure project. Performance is a differentiator, it seems, as long as nobody measures it properly.
  • Size is for differentiation. That sounds obvious too, but it has led to barrels of fun in real life. Obviously, different platforms, even when running on the exact same chip, don’t occupy the same space, so the space left for applications and their data differs. Well, that’s expected, and it is totally static, so it can easily be taken into account. Things get more fun when we get to objects. Platforms use headers of different sizes, they don’t implement redundancy in the same way, and they don’t manage memory in the same way. In the end, the same set of objects doesn’t occupy the same memory space on two different platforms, and this has been a real issue when cards are almost full.
  • Security is not much for differentiation. At first, this looks like a good opportunity for differentiation, as vendors want to be able to claim that their products are “more secure” than their competitors’. As an evaluator, I noticed the wide differences in security features between products, and I implicitly supported this view. I now believe that, for security, standards are more important than differentiation. If we consider the mobile payment vertical, some people will claim that the combination of private platform certifications by Visa and others with formal Common Criteria certifications brings the desired level of interoperability. Yet, in many cases, certification authorities want to see an application certified on every target platform, and rightly so: the various platforms do not provide the same guarantees on individual functions; they achieve the desired global level of security, but they protect individual functions differently. To reduce this gap, I have pushed for many years for security interoperability to be included in Java Card, first by defining security annotations, and more recently by defining dedicated security APIs. The Protection Profile for Java Card 3.0.5 has not been released yet, so I can’t tell for sure, but I am afraid that the measures included in the latest specification will still fall short of providing sufficient interoperability.
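
Back to the SIM Toolkit suspension point above, here is a sketch of where it occurs, assuming the sim.toolkit API of 3GPP TS 43.019; the qualifier, coding scheme, and text handling are illustrative.

    import sim.toolkit.ProactiveHandler;

    public class DisplaySketch {

        // Sends a DISPLAY TEXT proactive command to the mobile.
        static void showText(byte[] text) {
            ProactiveHandler proHdlr = ProactiveHandler.getTheHandler();
            // 0x00: normal priority; 0x04: 8-bit data coding scheme.
            proHdlr.initDisplayText((byte) 0x00, (byte) 0x04,
                    text, (short) 0, (short) text.length);
            // send() blocks until the mobile returns TERMINAL RESPONSE;
            // while the applet is suspended here, the card can process
            // other commands. This is the suspension mechanism that the
            // Java Card platform never formalized.
            proHdlr.send();
        }
    }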

Another interesting thing about interoperability, as we move toward a world where Java Card may be used on very different platforms: when we defined the Connected Edition, it was very different from the Classic Edition, but we nevertheless stated a very stringent interoperability requirement: all Classic applications must run on a Connected platform.

As we may be moving toward another split of Java Card platforms with the rise of RAM-based platforms, I would like to see less stringent requirements. If we consider that Java Card Next will include a Classic and a RAM edition, I would require the following:

  • Java Card Next Classic is just the next version following Java Card 3.0.5 Classic. Therefore, backward compatibility must be strict: all Java Card 3.0.5 Classic applications must run unchanged on a Java Card Next Classic platform, even without a recompile.
  • Java Card Next RAM is a new platform. Applications developed specifically for this platform are only expected to run on that platform.
  • Java Card Next Classic and Java Card Next RAM must be interoperable. It must be possible to write an application that runs on all Java Card Next platforms (Classic and RAM).

For an application provider, this would mean that the following options are available:

  • Keep existing applications, and only support their deployment on Java Card Next Classic platforms (in addition to all previous Classic platforms). This could for instance make perfect sense for payment cards or identity cards, which will remain based on classical smart card chips.
  • Write new applications specifically for the RAM platform, and only run them on Java Card Next RAM platforms. This is suitable for applications without a strong history, which need to leverage the specific features of RAM platforms, for instance on a mobile TEE.
  • Rewrite or write applications that work on both platforms. This is suitable for new applications that target a wide range of devices, as in IoT, where the same application could run on many different architectures.

Interoperability discussions are among the most obscure and technical in the Java Card community. It is a bit like politics: it is very easy to make broad statements that sound definitive, but the reality is closer to economics, very complex and requiring a deep technical understanding (and, like economics, it is not a hard science, so debates can be very long). I’ll miss these discussions.
