The Off-Card Bytecode Verifier is fine, thank you!

REWRITTEN on 23 Nov. 2013.

A few weeks ago, a friend sent me a link to the Cardis program, with the message “A bug in the verifier?”. Looking at the program, I saw a paper entitled Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier, by Thales Communications and Security. This sounded like bad news, so I got a copy of the paper and read it.

The good news is that there is no bug. Nevertheless, reading the paper made me unhappy. As a consequence, the first version of this blog post was a flame, in which I called the paper “dishonest, not innovative, and misleading”.

Publishing the flame triggered some discussions, with people at Thales and with others, in particular Jean-Louis Lanet of XLIM. They shared their views on the topic with me, and I have decided to revise my original flame into a more measured and constructive post.

Dishonest, or just a bad title.

As mentioned above, the paper’s title is “Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier”. However, section 3.2 of the paper surprisingly starts with the sentence: “The standard Oracle bytecode verifier detects the underflow attack”. There is no other reference to a verifier that would fail to detect the attack, and that makes the title, or at least its “undetected” part, sound really dishonest.

More specifically, it seems that the objective of the paper is not to point to an issue in the off-card verifier, as the title suggests, but rather to point out that there exist ways to bypass verification. A revised title in the proceedings would do just fine.

Not innovative, or not published.

The paper describes an attack that uses the dup_x instruction to perform a stack underflow. The underflow allows the authors to access the content of a stack frame, including the current context, and to change it. The question is: is this a new attack? As a virtual machine implementer and evaluator, I would immediately say no. The attack comes to mind very easily as soon as you know the contents of a stack frame (as an implementer does). It was therefore in my “bag of tricks” when I was an evaluator, and it was quite a practical one.
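To make the mechanism concrete, here is a minimal Java model of such an underflow. This is a sketch under loud assumptions: the frame layout (two header words sitting just below the operand stack in the same array), the names, and the values are mine for illustration, not the layout of any particular card nor the exact code from the paper.

```java
// Toy model of a stack frame: many VM implementations keep the frame
// header (return address, current context, ...) in the same memory area
// as the operand stack. Layout and values here are purely illustrative.
public class UnderflowDemo {

    // frame[0..1] = header words, frame[2..] = operand stack growing upward
    static short[] frame = { 0x7F3A /* fake return address */,
                             0x000F /* fake context */,
                             0, 0, 0, 0 };
    static int sp = 2;  // stack pointer; 2 = empty operand stack

    // Naive dup_x-like instruction: duplicates the top m words
    // without checking that m words actually exist on the stack.
    static void dupX(int m) {
        for (int i = 0; i < m; i++) {
            frame[sp + i] = frame[sp - m + i];  // may read below the stack base!
        }
        sp += m;
    }

    public static void main(String[] args) {
        frame[sp++] = 0x1122;  // one legitimate operand on the stack
        dupX(3);               // asks for 3 words while only 1 is present:
                               // the copy reaches down into the frame header
        for (int i = 3; i < sp; i++) {
            System.out.printf("leaked stack word: 0x%04X%n", frame[i]);
        }
        // Prints 0x7F3A and 0x000F among others: the header words are now
        // on the operand stack, where bytecode can read and rewrite them.
    }
}
```

On a defensive VM, the same sequence would simply be refused; I come back to that option below.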

Now, the smart card industry is secretive, and evaluators are even more secretive. Very few actual attacks have been described in scientific papers. It seems that the only previously published attack to mention a stack underflow is the EMAN2 attack described by Bouffard et al. at Cardis in 2011. However, they only abuse the return address, labeling the current context an “unknown value”. In that light, this paper does describe something that has not been published before. For this one, my apologies, as I forgot the secrecy of the industry, and kudos for actually publishing the attack rather than keeping it secret.

So now the community knows that, on some cards, unverified bytecode can allow an attacker to run attacker-provided code in a chosen context. This is about as bad as it gets, and from now on, additional attacks will fall into my “tricks” category, in which an attacker or evaluator uses a new illegal bytecode combination because the previously known ones don’t work on a given card. I hope that we won’t see more scientific papers about such tricks (see Jean-Louis Lanet’s second comment about this).

Misleading, or missing its target.

Of course, since the off-card verifier catches all of these logical attacks, they are not supposed to occur in real life, where bytecode verifiers are systematically used.

Let’s make a parallel between two typical components, RSA and a virtual machine. RSA is a crypto algorithm, a function RSA(K,M) that takes as input a key K and a message M, and produces a result. Similarly, a virtual machine is a function VM(P,I) that takes a program P (in bytecode) and some input data I, and produces a result. The RSA and VM functions share something important: the key K and the program P need to obey some properties. If they don’t, the algorithms don’t work.

If the RSA prime factors are not prime numbers, or if the program provided to the VM is not valid, the results may not be satisfactory: although they are functionally correct, they will not have the expected security properties. In order to avoid bad keys and bad programs, we use appropriate algorithms to perform primality checks and to verify bytecode. Just as it is important to ensure that an attacker cannot tamper with the key generation scheme, it is important to ensure that bytecode is verified before it is executed.
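To illustrate the parallel, here is a minimal Java sketch of the “check the input before using it” pattern on both sides of the analogy. The toy opcode set and its stack effects are assumptions of mine; a real bytecode verifier tracks types, control flow, and much more, not just stack depth.

```java
import java.math.BigInteger;

// The same pattern on both sides of the analogy: validate K before RSA,
// validate P before the VM. The toy opcodes are illustrative only.
public class InputChecks {

    // Key-side check: is this would-be RSA factor (probably) prime?
    static boolean validKeyFactor(BigInteger p) {
        return p.isProbablePrime(100);
    }

    // Program-side check: simulate stack depth and reject any program
    // that underflows. Opcodes: 0 = push (+1), 1 = pop (-1),
    // 2 = dup-like (+1, requires at least one word on the stack).
    static boolean validProgram(int[] program) {
        int depth = 0;
        for (int op : program) {
            switch (op) {
                case 0: depth++; break;
                case 1: depth--; break;
                case 2: if (depth < 1) return false; depth++; break;
                default: return false;      // unknown opcode
            }
            if (depth < 0) return false;    // stack underflow detected
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(validKeyFactor(new BigInteger("15")));   // false: 3 * 5
        System.out.println(validProgram(new int[] { 0, 1, 1 }));    // false: underflow
        System.out.println(validProgram(new int[] { 0, 2, 1, 1 })); // true
    }
}
```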

But is it sufficient? Well, it may be if you run your VM or your RSA on a device that is physically protected from attackers. On a smart card, however, it isn’t, at least for RSA: many attacks on smart card implementations of RSA have been published, using fault induction, power analysis, and more. The VM is just as sensitive to these attacks; in particular, combined attacks can be used to dynamically create illegal code on a card, as mentioned in the paper.
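To see why verifying the bytecode beforehand cannot help against such attacks, here is a combined-attack sketch that reuses the toy verifier above. The single-bit fault model is an assumption for illustration: the program is perfectly legal when the off-card verifier sees it, and only becomes illegal at runtime, on the card, after the fault.

```java
// Combined-attack sketch, reusing InputChecks.validProgram from above.
public class CombinedAttackDemo {
    public static void main(String[] args) {
        int[] program = { 0, 2, 1, 1 };  // push, dup, pop, pop: legal
        System.out.println(InputChecks.validProgram(program));  // true:
        // this is the version the off-card verifier accepts

        program[0] ^= 1;  // induced fault: one bit flip turns push into pop
        System.out.println(InputChecks.validProgram(program));  // false:
        // the faulted program underflows, but it was created on the card,
        // at runtime, so no off-card verifier ever saw it
    }
}
```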

If we continue the analogy, RSA is still used on cards, even though attacks exist. During security evaluations, the robustness of the implementation is verified by a lab, which ensures that well-known attacks don’t work. Naturally, the same must be done with the VM: a high-security implementation must include significant countermeasures against attacks on the VM, such as combined attacks. The implementers can choose their countermeasures: they can include additional checks in the virtual machine, attempt to detect the faults, do both, or anything else, as long as it works.
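As an illustration of the “additional checks” option, here is a defensive variant of the naive dupX, meant to live next to it in the UnderflowDemo class from the first sketch. The bounds and the exception are again my assumptions; real countermeasures are implementation-specific and usually combined with fault detection, redundancy, or stack canaries.

```java
// Defensive variant of UnderflowDemo.dupX: the interpreter refuses any
// access outside the operand stack. Toy model; real cards do more.
static void dupXDefensive(int m) {
    final int STACK_BASE = 2;           // first operand slot in 'frame'
    if (m < 1 || m > 4) {               // dup_x duplicates 1 to 4 words
        throw new SecurityException("illegal dup_x operand");
    }
    if (sp - m < STACK_BASE) {          // would read into the frame header
        throw new SecurityException("operand stack underflow");
    }
    if (sp + m > frame.length) {        // would write past the stack top
        throw new SecurityException("operand stack overflow");
    }
    for (int i = 0; i < m; i++) {
        frame[sp + i] = frame[sp - m + i];  // safe: stays inside the stack
    }
    sp += m;
}
```

With this check in place, the sequence from the first sketch throws instead of leaking the header; whether to throw, mute the card, or do something else entirely is precisely the kind of choice left to the implementer.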

According to my discussion with Thales, this is more or less the point that the paper tries to make, in particular in its last part. However, the paper insists on the malicious application rather than on the attacks, especially in its last sentence:

Finally, this paper shows that during open platform evaluations, it is necessary to take into account malicious applications, and make detailed analysis of each requirement included in the platform guidance.

I definitely agree on the platform guidance aspect. Requiring bytecode verification is not sufficient, as the management of the verification context is also important to avoid logical attacks. However, during evaluations, malicious applications are not the real issue; attacks on the virtual machine are. In that context, malicious applications are for now tied to combined attacks, where a fault triggers the logical attack. What the community often fails to understand is that, on a smart card, a virtual machine is a component like any other: it can be subject to physical attacks. And that’s the reason why specific countermeasures are required.

So what?

At Cardis, the proceedings are collected after the conference, so authors have an opportunity to revise their papers before publication. After our exchanges, I hope that some revisions will be made, in particular to the title and to the last part (making a stronger point about evaluation requirements). And so, I have replaced my flame with this.

