REWRITTEN on 23 Nov. 2013.
A few weeks ago, a friend sent me a link to the Cardis program, with the message “A bug in the verifier?”. Looking at the program, I saw a paper entitled Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier, by Thales Communications and Security. This sounded like bad news, so I got a copy of the paper and read it.
The good news is that there is no bug. Nevertheless, reading the paper made me unhappy. As a consequence, the first version of this blog post was a flame, in which I called the paper “dishonest, not innovative, and misleading”.
Publishing the flame triggered some discussions, with people at Thales and with others, in particular Jean-Louis Lanet of XLIM. They shared their views on the topic with me, and I have decided to revise my original flame into a more mundane and constructive post.
Dishonest, or just a bad title.
As mentioned above, the paper’s title is “Manipulating frame information with an Underflow attack undetected by the Off-Card Verifier”. However, section 3.2 of the paper surprisingly starts with the sentence: “The standard Oracle bytecode verifier detects the underflow attack”. There is no other reference to a verifier that would not detect that attack, and that makes the title sound really dishonest, at least the “undetected” part of it.
More specifically, it seems that the objective of the paper is not to point to an issue in the off-card verifier, as the title suggests, but rather to point out that there exist ways to bypass verification altogether. A revised title in the proceedings would do just fine.
Not innovative, or not published.
The paper describes an attack that uses the dup_x instruction to perform a stack underflow. This underflow allows the authors to access the content of a stack frame, including the current context, and to change it. The question is: is this a new attack? As a virtual machine implementer and evaluator, I would immediately say no. This attack comes to mind very easily as soon as you know the content of a stack frame (as an implementer does). It was therefore in my “bag of tricks” when I was an evaluator, and was even quite a practical one.
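To make the mechanism concrete, here is a minimal Python model of such an underflow. It is purely illustrative: the frame layout, the word values, and the dup_x parameters are my own assumptions, not the actual Java Card VM implementation. The idea is that, in an unchecked VM, a dup_x that asks for more words than the operand stack holds reaches into the frame bookkeeping area below it:

```python
class Frame:
    """Toy model of a frame: bookkeeping words (owner context, return
    address) sit in the same memory area, just below the operand stack."""

    def __init__(self, context, return_addr):
        self.memory = [context, return_addr]  # frame info, below the stack
        self.base = len(self.memory)          # operand stack starts here

    def push(self, word):
        self.memory.append(word)

    def pop(self):
        return self.memory.pop()

    def dup_x(self, m, n):
        # Naive dup_x: duplicate the top m words and insert the copies
        # n words below them (n == 0 means on top of the stack).
        # Nothing checks m against self.base, so the "top m words" can
        # include frame bookkeeping data: a read underflow.
        top = self.memory[-m:]
        insert_at = len(self.memory) - n
        self.memory[insert_at:insert_at] = top

# The attacker's applet has pushed a single word, then executes an
# illegal dup_x that asks for two:
f = Frame(context=0xA5, return_addr=0x8010)
f.push(0x1234)
f.dup_x(2, 0)      # verified bytecode could never reach this state
f.pop()            # 0x1234, the attacker's own word
leaked = f.pop()   # 0x8010: the frame's return address, now readable
```

The same out-of-bounds insertion, with a non-zero depth, can just as easily overwrite the frame data, which is how the context gets changed rather than merely read.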
Now, the smart card industry is secretive, and evaluators are even more secretive. Very few actual attacks have been described in scientific papers. The closest published attack to mention a stack underflow seems to be the EMAN2 attack described by Bouffard et al. at Cardis in 2011; however, they only abuse the return address, labeling the current context an “unknown value”. In that light, this paper does describe something that had not been published before. For this one, my apologies, as I forgot the secrecy of the industry, and kudos for actually publishing the attack rather than keeping it secret.
So the community now knows that, on some cards, unverified bytecode can allow an attacker to run attacker-provided code in a context of the attacker's choosing. This is about as bad as it gets, and from now on, additional attacks will fall into my “tricks” category, in which an attacker/evaluator uses a new illegal bytecode combination because the previously known ones don't work on a given card. I hope that we won't see more scientific papers about such tricks (see Jean-Louis Lanet's second comment about this).
Misleading, or missing its target.
Of course, since the off-card verifier catches all of these logical attacks, they are not supposed to occur in real life, where bytecode verifiers are systematically used.
Let’s make a parallel between two typical components, RSA and a virtual machine. RSA is a crypto algorithm: a function RSA(K, M) that takes as input a key K and a message M, and produces a result. Similarly, a virtual machine is a function VM(P, I), where P is a program (in bytecode) and I is some input data, and it produces a result. The RSA and VM functions share something important: the key K and the program P need to obey some properties. If they don’t, the algorithms don’t work.
If the RSA prime factors are not prime numbers, or if the program provided to the VM is not valid, their results may not be satisfactory: although they are functionally correct, they will not have the expected security properties. In order to avoid bad keys and bad programs, we use appropriate algorithms to perform primality checks and to verify bytecode. Just as it is important to ensure that an attacker cannot tamper with the key generation scheme, it is important to ensure that bytecode is verified before it is executed. But is that sufficient?
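To illustrate the kind of property a verifier enforces statically, here is a toy stack-depth check in Python. This is a deliberately simplified sketch, not the Oracle verifier: the opcodes and their stack effects are made up for the example. Each instruction declares how many words it pops and pushes, and the check rejects, before a single instruction runs, any program that could pop below an empty stack:

```python
# (pops, pushes) per toy opcode -- illustrative, not real JCVM opcodes
EFFECTS = {
    "push_const": (0, 1),
    "pop":        (1, 0),
    "add":        (2, 1),
    "dup":        (1, 2),
}

def verify(program):
    """Track the operand stack depth symbolically; reject any program
    that could underflow its frame."""
    depth = 0
    for op in program:
        pops, pushes = EFFECTS[op]
        if depth < pops:      # would read below the frame boundary
            return False      # reject before execution
        depth = depth - pops + pushes
    return True

assert verify(["push_const", "dup", "add"])      # well-formed: accepted
assert not verify(["push_const", "pop", "pop"])  # underflows: rejected
```

A real verifier tracks types and branch targets as well as depth, but the principle is the same: like a primality check on a candidate RSA key, it is a pre-condition on the input, not a run-time defense.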
Well, that may be sufficient if you run your VM or your RSA on a device that is physically protected from attackers. But on a smart card, it isn’t sufficient, at least for RSA. There have been many attacks published on smart card implementations of RSA, using fault induction, power analysis, and more. The VM is just as sensitive to these attacks; in particular, combined attacks can be used to dynamically create illegal code on a card, as mentioned in the paper.
If we continue the analogy, RSA is still used on cards even though attacks exist. During security evaluations, the robustness of the implementation is verified by a lab, which ensures that well-known attacks don’t work. Naturally, the same must be done with the VM: a high-security implementation must include significant countermeasures against attacks on the VM, such as combined attacks. Implementers are free to choose their countermeasures; they can add checks to the virtual machine, attempt to detect the faults, do both, or anything else, as long as it works.
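As one concrete example of the first option, a VM can re-check the stack bound at run time on every stack-manipulating instruction, so that even code that escaped verification, or was corrupted by a fault, cannot reach the frame data. Here is a Python sketch of such a runtime guard; the frame model and the dup_x instruction are illustrative assumptions, not a real implementation:

```python
class CheckedFrame:
    """Defensive variant: frame bookkeeping is kept out of band, and
    every stack-moving instruction re-validates its operands."""

    def __init__(self, context, return_addr):
        self.frame_info = [context, return_addr]  # not reachable via stack
        self.stack = []

    def push(self, word):
        self.stack.append(word)

    def dup_x(self, m, n):
        # Runtime guard: the instruction may only touch words that the
        # current frame actually owns, whatever the verifier said.
        if m > len(self.stack) or n > len(self.stack):
            raise RuntimeError("stack underflow detected")
        top = self.stack[-m:]
        insert_at = len(self.stack) - n
        self.stack[insert_at:insert_at] = top

f = CheckedFrame(context=0xA5, return_addr=0x8010)
f.push(0x1234)
try:
    f.dup_x(2, 0)        # the same illegal sequence as the attack
except RuntimeError:
    pass                 # stopped at run time; a card would typically mute
```

The cost of such checks on every instruction is exactly the trade-off implementers have to weigh against fault detection and other defenses.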
According to my discussion with Thales, this is more or less the point that the paper tries to make, in particular in its last part. However, the paper insists on the malicious application rather than on the attacks, especially in its last sentence:
Finally, this paper shows that during open platform evaluations, it is necessary to take into account malicious applications, and make detailed analysis of each requirement included in the platform guidance.
I definitely agree on the platform guidance aspect. Requiring bytecode verification is not sufficient, as the management of the verification context is also important to avoid logical attacks. However, during evaluations, malicious applications are not the real issue; attacks on the virtual machine are. In that context, malicious applications are for now tied to combined attacks, in which a fault triggers the logical attack. What the community often fails to understand is that, on a smart card, a virtual machine is a component like any other: it can be subject to physical attacks. And that is why specific countermeasures are required.
At Cardis, the proceedings are collected after the conference, so authors have an opportunity to revise their paper before publication. After our exchanges, I hope that some revisions will be made, in particular to the title and to the last part (making a stronger point about evaluation requirements). And so, I replaced my flame with this.