In the coming posts, we will define a secure channel protocol from scratch as an example, and provide an implementation for it. This example will also serve as a way to introduce the cryptographic mechanisms available in Java Card.
Be careful: this is not a tutorial on cryptography. I am not a cryptography expert, and I will just repeat basic ideas. If you want to learn about cryptography, consider getting a book like this Bruce Schneier Compilation.
The shared secret
An essential part of most secure channel protocols is the shared secret (in our case, a cryptographic key), and the way in which this secret is protected from disclosure. An attacker is likely to try to obtain the secret in order to break the protocol. There are two very important properties of a protocol and its keys:
- Strength. The key strength is usually measured by the number of possible values for the key, which can be considered proportional to the time required to brute-force a single key.
- Resilience. The resilience of a protocol is about the consequences of a successful attack. If Charlie is able to find the encryption key for a message, will he be able to decipher all messages between Alice and Bob? Will he also be able to decipher the messages exchanged by all users of the same protocol? If a protocol is able to limit the consequences of a successful attack, it is considered resilient.
Let’s start with strength, by studying an example. The DES algorithm uses an 8-byte key, in which only 7 bits per byte are actually used (the last bit of each byte is a parity bit, a kind of checksum). This makes 56 bits, and the number of possible keys is:
2^56 = 72057594037927936 ~ 7.2 x 10^16
This looks like a large number, but today’s computers are fast. If we consider that it takes one microsecond to try a key on a message, we then need 7.2×10^10 seconds, or 2×10^7 hours, or about 2,284 years. At first, this sounds good enough, as nobody expects their application to last that long. But then, think about a hacker who controls a botnet of 100,000 computers (apparently, these are quite common). With that many computers sharing the work, it takes less than 10 days to break a key. And that is far too short for any serious application.
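The arithmetic above is easy to check with a few lines of plain Java, using the same back-of-the-envelope assumption as the text (one microsecond per key tried):

```java
public class DesBruteForce {
    static final long DES_KEYS = 1L << 56; // 2^56 possible DES keys

    // Seconds needed to try every key at one microsecond per attempt,
    // with the work split evenly across `machines` computers.
    static double secondsToBreak(long machines) {
        return DES_KEYS / 1_000_000.0 / machines;
    }

    public static void main(String[] args) {
        System.out.printf("one machine: about %.0f years%n",
                secondsToBreak(1) / (365.0 * 24 * 3600));
        System.out.printf("100,000-machine botnet: about %.1f days%n",
                secondsToBreak(100_000) / (24 * 3600));
    }
}
```

Of course, real attack hardware does not try keys at a flat microsecond each, so treat these as orders of magnitude, not predictions.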
That’s where resilience becomes important. Let’s say that Charlie has discovered the key used by Alice to encipher a message. If the same key is used for every communication in the protocol, he may be able to decipher all messages exchanged by anybody who uses the protocol. That would definitely be a bad protocol, and an interesting way for an attacker to keep a botnet busy. There are two classical ways to limit these consequences and make the protocol more resilient:
- Session keys. The idea here is to generate a new key for every session, somehow derived from the shared secret. That way, if Charlie breaks a session key, it only applies to a single session. If measures are taken to ensure that a session only lasts a limited time (a few seconds are sufficient for most sessions), then the attacker has to be very fast (in our example, even a botnet of a billion computers would still need over a minute to break a DES key, and that is not realistic today).
- Diversified keys. The idea here is to generate a master key that is known only by a centralized entity (let’s say, a bank), and to compute a diversified key for every card, for instance by using the card’s number. That way, if Charlie breaks a message sent by Alice, the key he obtains is only good for the messages sent by Alice, and not for those sent by anybody else.
Principles of the solution
Now, let’s see what we can do in our case. We will combine a few measures:
- We will use a triple-DES key, i.e., a 16-byte key from which 112 bits are used (you can do the computation that shows that such a key is quite difficult to break).
- We will use diversified keys, i.e., we will consider that the key used by each card application instance is in fact computed from a master key and from an instance identification number.
- We will use session keys, i.e., we will compute a specific key for each session, in order to minimize the use of the actual card keys.
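As a sanity check on the first measure, the same back-of-the-envelope computation applied to the 112 effective bits of a two-key triple-DES key puts the brute-force time comfortably beyond any botnet:

```java
public class TripleDesStrength {
    // Years needed to try all 2^112 effective two-key triple-DES keys,
    // at one microsecond per attempt, split across `machines` computers.
    static double yearsToBreak(double machines) {
        double keys = Math.pow(2, 112);        // ~5.2 x 10^33 keys
        double seconds = keys / 1e6 / machines;
        return seconds / (365.0 * 24 * 3600);
    }

    public static void main(String[] args) {
        // Even the 100,000-computer botnet needs on the order of 10^15 years.
        System.out.printf("%.1e years%n", yearsToBreak(100_000));
    }
}
```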
Stating this is a good start, but we have all read horror stories about little details that led to the demise of some cryptographic protocol, so there are more things to consider.
The objective of key diversification is to use a single “big” secret to secure a large number of application instances. As mentioned previously, the typical example is a bank, or payment system, in which the master secret is known by the bank, but individual customers only get a diversified key, computed from the master secret and from their account number.
Of course, in a case like that, an attacker may try to guess the master secret by breaking the diversification function. Since the account number (or any number used to diversify a key) must be considered public, the complexity of breaking the master secret only depends on the size of the secret and the strength of the diversification function. Let’s consider a few functions:
- Triple-DES encryption. The idea here is that the diversified key is the result of the encryption of the diversification data by the master key. With today’s technology, this is not bad.
- Triple-DES encryption, after a XOR with a secret random number (or any other secret transformation of the diversification data). This sounds like a good idea, because it makes it impossible for an attacker to check that they have indeed found the right key (since they cannot compare the result of decrypting the diversified key with the diversification data). In fact, it simply forces them to attack two diversified keys at once (working out why is left as an exercise).
- RSA signature. The idea is the same as the first one, except that the encryption is here performed using an RSA private key. Making the public key available would allow anybody to verify the validity of a diversified key, but this is not interesting (these keys are supposed to be secret). It is therefore better not to disclose the public key.
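To make the first option concrete, here is a hypothetical host-side sketch using the standard JCE `DESede` cipher. The class and method names, the 16-byte two-key layout, and the use of a padded instance identifier as diversification data are all illustrative choices of mine, not part of any specification:

```java
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class KeyDiversifier {
    // Derives a 16-byte diversified key by enciphering 16 bytes of
    // diversification data (e.g. the padded instance identifier)
    // under the 16-byte master key, using two-key triple DES.
    public static byte[] diversify(byte[] masterKey16, byte[] data16) {
        try {
            // JCE expects a 24-byte DESede key; for two-key triple DES,
            // the third 8-byte block repeats the first one (K1, K2, K1).
            byte[] k24 = new byte[24];
            System.arraycopy(masterKey16, 0, k24, 0, 16);
            System.arraycopy(masterKey16, 0, k24, 16, 8);
            Cipher cipher = Cipher.getInstance("DESede/ECB/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(k24, "DESede"));
            return cipher.doFinal(data16); // two 8-byte blocks -> 16-byte key
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note that this code would run on the issuer's side, which holds the master key; the card only ever sees the result.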
We have seen here that designing such a function is not as obvious as it seems. Among the three solutions presented above, I would personally prefer the third one, mostly because it relies on a different technology than the rest of the protocol, which could make it more resilient: if triple-DES is ever broken, this will have no consequences on RSA, so the diversification algorithm will remain valid.
Anyway, the diversification algorithm is outside the scope of our discussion here, since we are only going to implement the card side of the protocol, where the key simply arrives already diversified.
Session key computation
Some of the security issues for session key computation algorithms are similar to those encountered with key diversification algorithms, in particular the fact that it must be difficult to get from a session key back to the instance key.
In addition, session keys are also used to ensure that it is not possible to “replay” a transaction, which means that it must not be possible to generate the same key twice. Since such a replay attack must be impossible for both parties, we will need to use some random data coming from each party.
Finally, the session keys are normally computed at the beginning of a session, as part of the mutual authentication. Since the computation of the keys requires both sides to perform computations using the shared secret, this works out fine.
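As a sketch of what this could look like, a session key can be derived by enciphering the concatenation of one challenge from each party under the instance key. I am again using the desktop JCE `DESede` cipher as a stand-in for the card’s cryptographic API, and the 8-byte challenge sizes and method names are my own assumptions, not a specified protocol:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class SessionKeys {
    // Derives a 16-byte session key by enciphering hostChallenge||cardChallenge
    // (8 random bytes from each party) under the instance's diversified key.
    // Fresh randomness from both sides ensures the same key is never produced twice.
    public static byte[] derive(byte[] instanceKey16,
                                byte[] hostChallenge8, byte[] cardChallenge8) {
        try {
            byte[] k24 = new byte[24]; // two-key triple DES laid out as K1, K2, K1
            System.arraycopy(instanceKey16, 0, k24, 0, 16);
            System.arraycopy(instanceKey16, 0, k24, 16, 8);
            byte[] data = new byte[16];
            System.arraycopy(hostChallenge8, 0, data, 0, 8);
            System.arraycopy(cardChallenge8, 0, data, 8, 8);
            Cipher cipher = Cipher.getInstance("DESede/ECB/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(k24, "DESede"));
            return cipher.doFinal(data);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        byte[] host = new byte[8], card = new byte[8];
        rng.nextBytes(host);
        rng.nextBytes(card); // in the real protocol, this would come from the card
        byte[] sessionKey = derive(new byte[16], host, card);
        System.out.println("session key length: " + sessionKey.length);
    }
}
```

Both sides run the same derivation, so proving knowledge of the resulting session key is what authenticates each party to the other.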
Next time, we will try to map our protocol to APDU commands.