JC101-13C: Testing Java Card applications

My first intention was to continue my security tutorial by showing how we can include countermeasures in the password manager. However, since then, my colleagues tried to use this application in a training session, and I had to face a hard truth: my programs are buggy. Some of these bugs are not even subtle; others are far from obvious.

The reason for this is well-known: I haven’t taken the time to test the applications. In fact, this was done on purpose, because I wanted to have a few bugs to hunt when I came to this post about testing. It seems that I have done a good job of leaving bugs, so testing will come before security. In addition, I have to admit that this is what I actually recommend to all developers: since adding security countermeasures makes code difficult to read (and therefore to debug), it is better to do as much debugging as possible before inserting these countermeasures.

So, let’s discuss how smart card applications are tested today, and how Java Card applications should be tested. There are many different kinds of tests, depending on their intended uses:

  • Developer/regression tests. These tests are usually written by developers for their own use. They don’t include much diagnostic output, because they are not expected to fail often. In a typical testing session, only a few tests (corresponding to the modified features) are expected to fail, and the cause is usually simple to find.
  • Compliance tests. These tests are usually intended to test compliance to a specification. Their coverage varies with the level of paranoia of their author, but they typically focus mostly on nominal cases, i.e., on expected commands, successful or not. These tests are intended to be run on unknown implementations, so failures must be easy to analyze.
  • Security tests. These tests go beyond compliance tests, and their objective is to identify a vulnerability that can be exploited in attacks. The focus of these tests therefore is on failure cases, usually with unexpected inputs, combinations, sequences, etc.

We will here look at middle-of-the-road functional tests: tests whose failures can be analyzed, and which provide a good level of coverage.

Typical smart card tests

Typical smart card test suites are made of a series of APDU commands to be sent to the application, each combined with an expected response. The testing process simply consists of sending each APDU command and then checking the response.
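In its simplest form, this send-and-check step can be sketched with the standard javax.smartcardio classes. The AID bytes below are purely illustrative, and the response is built from raw bytes so the sketch runs without a reader; with a real card, it would come from CardChannel.transmit.

```java
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;

public class ApduCheck {
    /** Returns true when the response carries the expected status word. */
    public static boolean expectSW(ResponseAPDU response, int expectedSW) {
        return response.getSW() == expectedSW;
    }

    public static void main(String[] args) {
        // A SELECT-by-AID command (CLA=00, INS=A4, P1=04, P2=00); the AID
        // bytes are only an illustration, not a real applet's AID.
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01};
        CommandAPDU select = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid);

        // With a real reader, the response would come from
        // CardChannel.transmit(select); here it is built from raw bytes
        // so the check can run without hardware.
        ResponseAPDU response = new ResponseAPDU(new byte[] {(byte) 0x90, 0x00});

        // The check itself: compare the status word to the expected value.
        if (!expectSW(response, 0x9000)) {
            throw new AssertionError("unexpected SW for " + select);
        }
    }
}
```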

However, this process has some limits:

  • Smart card protocols often involve cryptographic computations with diversified keys, so precomputing commands and responses is not possible.
  • Some tests require the application to be in a given state, or need to be inserted in a sequence, which means that several commands may be required.
  • Some commands have an effect beyond the response (for instance, a PUT KEY command is expected to modify a key); testing such a command also involves verifying its effect through other commands.

All these problems are rather easy to deal with:

  • Script languages are used in order to wrap the commands and unwrap the responses, to deal with the cryptographic protocols, and to parse responses.
  • Each test case includes a preamble, a sequence of commands that sets the application in the appropriate initial state before running the test.
  • Each test case also includes a postamble, which is a sequence of commands that tests the result of the test case, and puts the application back to a well-known state.

Of course, there can be many outcomes to running such test cases:

  • The test can succeed if all commands in the sequence have the expected result.
  • The test can fail if the result of the tested command is an expected incorrect result (usually, the wrong status word).
  • The test can fail if the result of the tested command is an unexpected incorrect result.
  • The test can fail if the result of the postamble commands that check the effect of the command is incorrect.
  • The test can be inconclusive if one of the commands in the preamble or the postamble fails or returns an unexpected result.
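The preamble/test/postamble structure and these outcomes can be made concrete in a small harness. Everything below is a hypothetical sketch: the Transport interface stands in for the card connection, commands are reduced to string labels, and only status words are compared.

```java
import java.util.List;

public class TestCaseRunner {
    public enum Outcome { PASS, FAIL, INCONCLUSIVE }

    /** Abstracts the card connection: send a command, get a status word. */
    public interface Transport {
        int send(String command);
    }

    public static Outcome run(Transport card,
                              List<String> preamble,
                              String testCommand, int expectedSW,
                              List<String> postamble) {
        // Preamble: bring the application into the initial state. A mismatch
        // here means the test could not even be set up, not that it failed.
        for (String cmd : preamble) {
            if (card.send(cmd) != 0x9000) return Outcome.INCONCLUSIVE;
        }
        // The tested command itself: a wrong status word is a failure,
        // whether the expected result was a success or a known error SW.
        if (card.send(testCommand) != expectedSW) return Outcome.FAIL;
        // Postamble: verify the command's side effects and restore a
        // well-known state. A wrong result here is also a failure; a finer
        // harness could report cleanup-only mismatches as inconclusive.
        for (String cmd : postamble) {
            if (card.send(cmd) != 0x9000) return Outcome.FAIL;
        }
        return Outcome.PASS;
    }
}
```

A stub Transport (for instance, a lambda returning fixed status words) is enough to exercise all three outcomes on a desktop JVM before pointing the harness at a real card.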

Just as for the specification of the application, it is recommended to write detailed definitions of the test cases, especially if they are expected to be used on several implementations. The main reason here is that it makes failures easier to interpret. For a testing lab, the precision of these definitions is very important.

Java Card testing

Java Card is simply a way to develop smart card applications. Most Java Card testing therefore consists of standard smart card testing: each test case consists of a sequence of APDUs that are sent to the card. This works quite well, but the fact that the application is written in Java opens new possibilities for testing, especially for developers.

With Java Card, unit testing becomes possible. Of course, there is no real hope of running JUnit on a card, because Java Card is missing many features that JUnit requires. Nevertheless, it is possible to write unit tests for Java Card. The main issue here is dealing with the generation of appropriate CAP files, and with the management of code: test code must be in the same CAP file for testing, but it must also be excluded from the final code. This would be quite easy with the appropriate development tools, but such tools are not really available for Java Card.
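One pragmatic workaround, while waiting for better tools, is to keep card-independent logic in methods that reference nothing from javacard.framework, so they can be unit-tested on a desktop JVM and then copied unchanged into the applet. The helper below is hypothetical, not part of any Java Card API; it mirrors the kind of byte-range comparison a password manager would use, written with short indices as on-card code would be.

```java
public class PinUtil {
    /** Compares two byte ranges, visiting every byte even after the first
     *  mismatch (the constant-time style usually wanted for PIN checks).
     *  Hypothetical helper: card-independent, so it runs on a plain JVM. */
    public static boolean equalRanges(byte[] a, short aOff,
                                      byte[] b, short bOff, short len) {
        byte diff = 0;
        for (short i = 0; i < len; i++) {
            diff |= (byte) (a[aOff + i] ^ b[bOff + i]);
        }
        return diff == 0;
    }

    // A desktop-side unit test: plain assertions instead of JUnit, since
    // the point is only that the logic runs off-card unchanged.
    public static void main(String[] args) {
        byte[] stored = {1, 2, 3, 4};
        byte[] good   = {1, 2, 3, 4};
        byte[] bad    = {1, 2, 9, 4};
        if (!equalRanges(stored, (short) 0, good, (short) 0, (short) 4))
            throw new AssertionError("equal PINs must match");
        if (equalRanges(stored, (short) 0, bad, (short) 0, (short) 4))
            throw new AssertionError("different PINs must not match");
    }
}
```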

Nevertheless, in the next post, we will look at the definition of tests for our Java Card application, including unit tests, and standard smart card test scripts.


  • Shawn Willden wrote:

    So, it looks like you never got around to writing those columns on testing? Too bad… I’d love some good ideas about how to test Javacard applets right about now.

  • ch murali wrote:

    Good information, can u share me the next post pls

  • I am back at it, and this incomplete testing section bugs me. Yet, it will take a few months before I publish the end of this…

  • ch murali wrote:

    And good to have some information for Black Box testers…how to test these applications.
