The Off-Card Bytecode Verifier is fine, thank you!

REWRITTEN on 23 Nov. 2013.

A few weeks ago, a friend sent me a link to the Cardis program, with the message “A bug in the verifier?”. Looking at the program, I saw a paper entitled Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier, by Thales Communications and Security. This sounded like bad news, so I got a copy of the paper and read it.

The good news is that there is no bug. Nevertheless, reading the paper made me unhappy. As a consequence, the first version of this blog post was a flame, in which I called the paper “dishonest, not innovative, and misleading”.

Publishing the flame triggered some discussions, with people at Thales and with others, in particular Jean-Louis Lanet of XLIM. They shared their views on the topic with me, and I have decided to revise my original flame into a more mundane and constructive post.

Dishonest, or just a bad title.

As mentioned above, the paper’s title is “Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier”. However, section 3.2 of the paper surprisingly starts with the sentence: “The standard Oracle bytecode verifier detects the underflow attack”. There is no other reference to a verifier that would fail to detect that attack, which makes the title sound really dishonest, at least the “undetected” part of it.

More specifically, it seems that the objective of the paper is not to point to an issue in the off-card verifier, as the title suggests, but rather to point out that there exist ways to bypass verification altogether. A revision of the title in the proceedings would do just fine.

Not innovative, or not published.

The paper describes an attack that uses the dup_x instruction to perform a stack underflow. This stack underflow allows the authors to access the content of a stack frame, including the current context, and to change it. The question is: is this a new attack? As a virtual machine implementer and evaluator, I would immediately say no. This attack very easily comes to mind as soon as you know the content of a stack frame (as an implementer does). It was therefore in my “bag of tricks” when I was an evaluator, and was even quite a practical one.
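The idea can be sketched with a toy model. The frame layout, field names, and instruction below are hypothetical placeholders, not real Java Card internals (actual frame layouts are implementation-specific): the point is only that a dup-style instruction whose operand-count is not checked against the current stack depth will copy words from below the operand stack, i.e. from the frame header where a VM keeps context information.

```python
# Toy model of a stack frame and an unchecked dup-style instruction.
# Layout and values are illustrative only; real cards differ.

class Frame:
    def __init__(self, context, return_addr):
        # Frame memory, low to high: header (context, return address),
        # then the operand stack growing upward.
        self.mem = [context, return_addr]
        self.stack_base = len(self.mem)

    def push(self, word):
        self.mem.append(word)

    def pop(self):
        return self.mem.pop()

    def dup_x_unchecked(self, m):
        # Duplicate the top m words onto the stack. A verifier would
        # reject m > current stack depth; an unchecked VM copies words
        # from below stack_base, i.e. from the frame header.
        self.mem.extend(self.mem[-m:])

frame = Frame(context=0xC1, return_addr=0xBEEF)
frame.push(0xAA)              # operand stack depth is only 1
frame.dup_x_unchecked(3)      # asks for 3 words: underflows into the header
frame.pop()                   # the duplicated 0xAA
leaked_ret = frame.pop()      # copy of the return address
leaked_ctx = frame.pop()      # copy of the current context, now readable
assert (leaked_ctx, leaked_ret) == (0xC1, 0xBEEF)
```

A write through the same out-of-bounds offsets would likewise let the attacker change the stored context, which is what makes the underflow more than a simple information leak.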

Now, the smart card industry is secretive, and evaluators are even more secretive. Very few actual attacks have been described in scientific papers. The only published attack that mentions a stack underflow seems to be the EMAN2 attack described by Bouffard et al. at Cardis in 2011. However, they only abuse the return address, labeling the current context an “unknown value”. In that light, this paper does describe something that has not been published before. For this one, my apologies, as I forgot the secrecy of the industry, and kudos for actually publishing the attack rather than keeping it secret.

So now, the community knows that, on some cards, unverified bytecode can allow an attacker to run attacker-provided code in a chosen context. This is about as bad as it gets, and from now on, additional attacks will fall into my “tricks” category, in which an attacker/evaluator uses a new illegal bytecode combination because the previously known ones don’t work on a given card. I hope that we won’t see more scientific papers about such tricks (see Jean-Louis Lanet’s second comment about this).

Misleading, or missing its target.

Of course, since the off-card verifier catches all of these logical attacks, they are not supposed to occur in real life, where bytecode verifiers are systematically used.

Let’s make a parallel between two typical components, RSA and a virtual machine. RSA is a crypto algorithm, a function RSA(K,M) that takes as input a key K and a message M, and produces a result. Similarly, a virtual machine is a function VM(P,I), where P is a program (in bytecode) and I some input data, and produces a result. The RSA and VM functions share something important: the key K and the program P need to obey some properties. If they don’t, the algorithms don’t work.

If the RSA prime factors are not prime numbers, or if the program provided to the VM is not valid, their results may not be satisfactory: although they are functionally correct, they will not have the expected security properties. In order to avoid bad keys and bad programs, we use appropriate algorithms to perform primality checks, and to verify bytecode. Just as it is important to ensure that an attacker cannot tamper with the key generation scheme, it is important to ensure that bytecode is verified before it is executed. But is that sufficient?
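The parallel can be sketched as two guard functions that sit in the same place in each pipeline. Both checks below are deliberately naive stand-ins (trial-division primality, a made-up one-opcode “verifier”); they only illustrate where input validation belongs, not how real primality testing or bytecode verification works.

```python
def is_prime(n):
    # Naive trial division -- a stand-in for a real primality test.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def make_rsa_modulus(p, q):
    # Guard: refuse key material lacking the required property.
    if not (is_prime(p) and is_prime(q)):
        raise ValueError("factors must be prime")
    return p * q

KNOWN_OPCODES = {0x00, 0x01, 0x3F}   # hypothetical opcode set

def run_program(bytecode):
    # Guard: refuse programs that fail (a caricature of) verification.
    if any(op not in KNOWN_OPCODES for op in bytecode):
        raise ValueError("bytecode rejected by verifier")
    return "executed"

assert make_rsa_modulus(11, 13) == 143
assert run_program([0x00, 0x01]) == "executed"
```

In both cases the guard protects a property the algorithm silently relies on; the question the post raises is what happens when an attacker can act *after* the guard has run.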

Well, that may be sufficient if you run your VM or your RSA on a device that is physically protected from attackers. But on a smart card, it isn’t sufficient, at least for RSA. There have been many attacks published on smart card implementations of RSA, using fault induction, power analysis, and more. The VM is just as sensitive to these attacks; in particular, combined attacks can be used to dynamically create illegal code on a card, as mentioned in the paper.

If we continue the analogy, RSA is still used on cards, even though attacks exist. During security evaluations, the robustness of the implementation is verified by a lab, which ensures that well-known attacks don’t work. Naturally, the same thing must be done with the VM: a high-security implementation must include significant countermeasures against attacks on the VM, such as combined attacks. Implementers have the choice of countermeasures: they can include additional checks in the virtual machine, attempt to detect the faults, both, or anything else, as long as it works.

According to my discussion with Thales, this is more or less the point that the paper tries to make, in particular in its last part. However, the paper insists on the malicious application rather than on the attacks, especially in its last sentence:

Finally, this paper shows that during open platform evaluations, it is necessary to take into account malicious applications, and make detailed analysis of each requirement included in the platform guidance.

I definitely agree on the platform guidance aspect. Requiring bytecode verification is not sufficient, as the management of the verification context is also important to avoid logical attacks. However, during evaluations, malicious applications are not the real issue, attacks on the virtual machine are. In that context, malicious applications are for now tied to combined attacks, where a fault triggers the logical attack. What the community often fails to understand is that, on a smart card, a virtual machine is a component like any other: it can be subject to physical attacks. And that’s the reason why specific countermeasures are required.

So what?

In Cardis, the proceedings are collected after the conference, so authors have an opportunity to revise their paper before publication. After our exchanges, I hope that some revisions will be made before publication, in particular on the title and on the last part (making a stronger point about evaluation requirements). And so, I replaced my flame with this.


  • I just had a discussion with Nathalie Feyt, Emilie’s boss. She asked me to comment on or update this blog post, and I chose to comment.

    First, she didn’t like the fact that I qualified the paper as “dishonest”. I stand by the fact that the title can be considered dishonest, but I of course admit that this may not reflect the authors’ intention, and could just be bad wording.

    Then, about the lack of innovation, she claims that no previously published attack allows the attacker to modify the current context, and that it is important to let the community know that this is possible. I am surprised that this can be new to people in the field, but I admit that I haven’t seen it in research publications so far. However, this attack has been known in the industry forever, and used in many evaluations.

    Finally, and most interestingly, we discussed the objective of the paper. If I understood well, the objective here is to convince people from the industry that the bytecode verifier is not by itself a sufficient protection for a virtual machine.

    Here, we completely agree. The bytecode verifier checks that an applet functionally complies with the spec. It is somewhat similar to checking that a DES key is not one of the known weak keys. The virtual machine is a component of the smart card operating system. Just like the DES implementation or any other component, it can be attacked. And just like the DES implementation, a secure virtual machine needs to include countermeasures.

    Nathalie suggested making editorial changes to the paper, and these would be most welcome. As we agree on the conclusions, let’s make sure that the message is brought across properly. I will try to reinforce it soon with another blog post.

  • Dear Eric,

    I have also had the opportunity to read the paper. You are right, the title does not reflect the content: they do not bypass the BCV. On that point I agree with you.

    But I disagree on the NOT INNOVATIVE section. I have never seen any publication on it except the paper on the EMAN2 attack. At the time, we paid a lot of attention to the state of the art without finding any paper related to an underflow attack on Java Card. Perhaps I did not use Google Scholar enough, but I also searched Barbu’s PhD thesis without any success. Probably in the industry you share such information, but from an academic point of view it is unpublished work. A trick, you said? Hmm, I tried it, and it runs on several cards. Maybe a generic trick? Countermeasures are known, you said, but it looks like some card designers forgot to implement them, no?


  • Jean-Louis,

    Thanks for your comment. On the innovative section, I still consider that the paper introduces little innovation on top of EMAN2.

    And I also agree that not all cards include significant security measures on the VM. But then, not all cards are designed to withstand the most sophisticated attacks.

  • Jean-Louis Lanet wrote:


    We still have a good lesson to learn from this paper: think globally, not locally. Let me explain. The EMAN2 attack exploited the absence of checks on local variables, but the problem was not about locals. We were using what Aurelien pointed out in his PhD: ROP (Return-Oriented Programming), modifying the control flow of the return mechanism to jump where you need. In fact, there is an asset whose integrity must be protected, i.e. the return address. With our preferred Java Card (from an attacker’s POV), in 2012 the smart card manufacturer modified its firmware in such a way that our attack did not run anymore. Bad news? Not at all. They just modified the card in a way that made the attack path through the locals a dead end. A trick, no more no less. We just found another way to access the return address, with less knowledge required from the attacker, and it ran again. A weak countermeasure… a trick.

    Designing countermeasures in a bottom-up approach is not a good idea. Think top-down, think globally. Find the asset and protect it, whatever the path is. Maybe there are implementation or economic constraints that lead card designers to think like this. But from a scientific POV it is not satisfactory. This is the real lesson from this paper.
