Fiction (maybe): Who will refuse to break a secure element?

Apple is refusing to break an iPhone for the FBI. I believe that they are right to do so, but also that this position isn’t an easy one for everybody to hold. So, here is a little fiction (well, I think it is fiction) about this.

The iPhone is a secure device, so the best way for Apple to refuse to break the phone is to claim that they can’t do it. Here is the story line (a sketch in code follows the list):

  • The iPhone includes a secure element.
  • The security code is stored and verified in the secure element.
  • The phone encryption key is also stored in the secure element.
  • In order to get that key from the secure element, the security code must be presented.
  • We don’t know how to break into the secure element, so we can’t bypass that security mechanism.

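To make the story line concrete, here is a minimal sketch of such a gatekeeper applet, in Java Card style. This is fiction, not Apple’s code: the KeyVault name, the instruction bytes, and the provisioning details are invented; only the OwnerPIN mechanics (on-card retry counter, check before release) come from the real Java Card API.

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.OwnerPIN;
    import javacard.framework.Util;

    // Fictional gatekeeper applet: the code check and the key live on-card,
    // and the key only leaves after a successful check.
    public class KeyVault extends Applet {
        private static final byte INS_VERIFY  = (byte) 0x20; // present the code
        private static final byte INS_GET_KEY = (byte) 0xB0; // request the key
        private static final short KEY_LENGTH = (short) 16;

        private OwnerPIN pin;   // on-card retry counter, no host involvement
        private byte[] key;     // the phone encryption key (made up here)

        private KeyVault() {
            pin = new OwnerPIN((byte) 10, (byte) 8); // 10 tries, up to 8 digits
            key = new byte[KEY_LENGTH];
            // ... real code and key values would be set at personalization
            register();
        }

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new KeyVault();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return;
            }
            byte[] buf = apdu.getBuffer();
            switch (buf[ISO7816.OFFSET_INS]) {
            case INS_VERIFY: {
                short len = apdu.setIncomingAndReceive();
                // The comparison happens inside the secure element; the host
                // only learns success or failure.
                if (!pin.check(buf, ISO7816.OFFSET_CDATA, (byte) len)) {
                    ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
                }
                break;
            }
            case INS_GET_KEY:
                if (!pin.isValidated()) {
                    ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
                }
                Util.arrayCopyNonAtomic(key, (short) 0, buf, (short) 0, KEY_LENGTH);
                apdu.setOutgoingAndSend((short) 0, KEY_LENGTH);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }

The point is that both the retry counter and the comparison live inside the secure element; the host never sees the code or the key until the check passes.
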
This story is a bit technical, but it is true from the point of view of Apple and “standard” developers. However, the last statement (we don’t know how to break into the secure element) may not hold for everybody.

First, regardless of the secure element, physical attacks are feasible, and there are many kinds. Some of them, like power analysis, do not even require destroying the chip in any way. Of course, many such attacks do require “opening” the secure element to expose the chip, and other manipulations that may destroy it. For the FBI, who wants to attack a single chip that they must not destroy, this is a serious constraint.
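
As an aside on what power analysis and its timing cousins exploit, here is a toy example in plain Java (names invented for illustration): a naive comparison whose running time, and thus power profile, depends on the secret, next to the constant-time variant that careful implementations use.

    // Toy illustration of data-dependent behavior, the raw material of
    // side-channel attacks. Not from any real implementation.
    public class SideChannelDemo {

        // Naive check: returns as soon as a byte differs, so the time taken
        // (and the power consumed) reveals how many leading bytes matched.
        static boolean naiveCheck(byte[] candidate, byte[] secret) {
            if (candidate.length != secret.length) return false;
            for (int i = 0; i < secret.length; i++) {
                if (candidate[i] != secret[i]) return false; // early exit leaks
            }
            return true;
        }

        // Constant-time check: always scans the whole array, accumulating
        // differences, so observable behavior does not depend on the secret.
        static boolean constantTimeCheck(byte[] candidate, byte[] secret) {
            if (candidate.length != secret.length) return false;
            int diff = 0;
            for (int i = 0; i < secret.length; i++) {
                diff |= candidate[i] ^ secret[i];
            }
            return diff == 0;
        }

        public static void main(String[] args) {
            byte[] secret = {1, 2, 3, 4};
            System.out.println(naiveCheck(new byte[] {1, 2, 0, 0}, secret));        // false
            System.out.println(constantTimeCheck(new byte[] {1, 2, 3, 4}, secret)); // true
        }
    }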

Then, the secure element on the iPhone is most likely based on Java Card, which means that it is possible to load applications on it. Of course, Java Card includes security measures, so the application would need to be a malicious one; well, malicious applications exist, and here, we are working with software, which makes a big difference. Now, here is the checklist for the FBI:

  1. Get Apple to provide 100 secure elements configured exactly like the targeted one, with the card management keys (required to load an app).
  2. Get some hacker to develop a malicious application that gets the value of the code, bypasses the code check, or gets the value of the encryption key, … (see the sketch after this list).
  3. Establish an attack procedure and test it on actual phones.
  4. Get Apple to provide the management key for the targeted phone.
  5. Run the attack on the targeted phone, and bingo!

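To see what step 2 buys, assume the malicious applet manages to neutralize the retry counter. A 4-digit code then leaves at most 10,000 candidates, a trivial search. The sketch below is purely illustrative; tryUnlock() is a made-up stand-in for whatever interface the attack applet would expose, not a real API.

    // Hypothetical payoff of step 2: if the malicious applet can read or
    // reset the retry counter, a 4-digit code falls to exhaustive search.
    public class BruteForce {
        public static void main(String[] args) {
            for (int code = 0; code <= 9999; code++) {  // at most 10,000 tries
                String candidate = String.format("%04d", code);
                if (tryUnlock(candidate)) {
                    System.out.println("Code found: " + candidate);
                    return;
                }
            }
        }

        // Placeholder, not a real API: in the fiction, this would drive the
        // malicious applet loaded in step 2.
        private static boolean tryUnlock(String candidate) {
            return "1234".equals(candidate); // stand-in for the on-card check
        }
    }
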
This checklist is slightly different. Apple still needs to collaborate, but not at the same level. Step 4, in particular, is about providing a key that can only be used on the terrorist’s phone. Step 1 is more problematic, since it requires Apple to support hacking efforts on their own phones.

There is also a need to get a hacker. This may not be as difficult as it sounds, because as far as I know, the population of hackers capable of doing such a thing is small but includes:

  • security evaluators working at labs who necessarily have ties to NIST (very close to NSA, here), or to the equivalent in another country;
  • people working in various companies and institutions who depend greatly on government contracts;
  • and a few real hackers, who could do some work for money, fame, or both.

All these people, for various reasons, are far more vulnerable than Apple, and it is quite likely that the FBI will be able to find one willing to cooperate.

And then, what’s left? Well, some secure elements and Java Card implementations are really, really good, and they will resist most attacks from most attackers. Luckily, the handful of people who may be able to break these implementations may not be willing to do so.

Still, it sounds like a good idea to support Apple here.

Final thought

This is now a problem that every security professional or company involved in the development, testing, or evaluation of consumer or industrial devices must think about: How will I react to injunctions from law enforcement? Should I make sure that I don’t know how to hack my device?
