Java Card software attacks

There have been two papers at SSTIC’16 that outline the limits of bytecode verification in the context of Java Card. One of the papers, by Guillaume Bouffard and Julien Lancia, describes a bug found in Oracle’s bytecode verifier through fuzzing (yes, it’s been fixed). The second one, by Jean Dubreuil, outlines several logical and combined attacks through legal applets.

I will refrain from commenting on the first paper because of my previous involvement with Oracle’s Java Card technology, but I would like to make a few comments on the second one.

First, this paper is very well written, and the attacks are really easy to understand, at least if you have the appropriate background. The attacks are all interesting, and some of them, in particular on the logical side, seem a little too easy, which may hint at quality issues in some places.

I have two remarks, though.

First, the general comment about logical attacks that

the correctness of the implementation of the JCVM embedded on sensitive products must be carefully checked

is OK, but the following statement about the bytecode verifier (BCV) sounds strange:

Today, a huge part of the security of the products relies on the robustness of the BCV but these attacks demonstrate that the BCV is not always sufficient enough to protect the JCVM against this type of software attacks.

A BCV reasons about the application, and makes no claim that the application will run securely on a buggy JCVM implementation. There is nothing to demonstrate here, because the BCV was never intended to protect against this type of software attack. Let me stress that there is nothing to be done at the level of the BCV to counter these attacks; what needs to be verified here is the JCVM implementation.
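
To make this concrete, here is a minimal sketch of such a situation (the server AID, the Mimic class, and the server applet are all hypothetical, and this is an illustration rather than a working exploit): a perfectly legal client applet that the BCV accepts, and whose safety rests entirely on the runtime firewall check the JCVM must perform on the cast.

    package probe;

    import javacard.framework.AID;
    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.JCSystem;
    import javacard.framework.Shareable;

    // Local class whose layout mimics the server's implementation object.
    // It only exists to show what a missing firewall check on checkcast
    // would expose.
    class Mimic implements Shareable {
        short secret;
    }

    public class CastProbe extends Applet {

        // Hypothetical AID of a server applet living in another context.
        private static final byte[] SERVER_AID = {
            (byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01
        };

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new CastProbe().register();
        }

        public void process(APDU apdu) {
            AID server = JCSystem.lookupAID(SERVER_AID, (short) 0,
                                            (byte) SERVER_AID.length);
            // A reference to an object owned by the server's context.
            Shareable sio =
                JCSystem.getAppletShareableInterfaceObject(server, (byte) 0);

            // The BCV accepts this checkcast: statically it is just a cast
            // from Shareable to a class implementing Shareable. It is the
            // JCVM firewall that must reject it at run time, because the
            // object is owned by another context. On a buggy JCVM the cast
            // succeeds and the applet reads the server's fields directly.
            Mimic m = (Mimic) sio;
            short leaked = m.secret;   // only reachable on a buggy JCVM
        }
    }

The BCV has no way to flag this applet as malicious; whether the cast is rejected depends solely on how faithfully the firewall is implemented in the JCVM.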

The second remark is about a sentence in the conclusion, which states that

tests like TCK are not designed to find this kind of bugs in implementations.

That’s true for the stack overflow bug, which is an implementation bug. However, it is not obvious for the two other bugs:

  • The firewall check on a cast should be tested by the TCK. I am actually surprised that it isn’t, but I don’t have access to the TCK these days.
  • Similarly, the implementation bug on the switch instructions is also a compliance issue, and should somehow be caught by the TCK (it is more of a corner case, though, so it raises the question of required TCK coverage); a minimal probe sketch follows this list.
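
For the switch corner case, a compliance probe does not need to be elaborate. The sketch below (method name and values chosen arbitrarily for illustration) is the kind of fully specified, legal code a TCK-style test could run on card: the expected results follow directly from the specification, so any deviation observed points to a JCVM implementation bug rather than anything a BCV could catch.

    // Hypothetical TCK-style probe: exercises tableswitch/lookupswitch with
    // boundary and sparse short values. Expected results are fixed by the
    // specification; a deviation on card is an implementation bug.
    static short probeSwitch(short v) {
        switch (v) {
            case (short) 0x8000: return (short) 1;  // most negative short
            case (short) 0xFFFF: return (short) 2;  // -1
            case 0:              return (short) 3;
            case 0x7FFF:         return (short) 4;  // most positive short
            default:             return (short) 0;
        }
    }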

In the end, I agree that labs should have a library of malicious applets of all kinds to use in evaluations, but compliance issues should, as much as possible, be checked by the TCK.
