A Few Thoughts on Cryptographic Engineering

If you’re interested in technology/privacy issues, then you probably heard last week’s big news out of the Boston Marathon case. It comes by way of former FBI agent Tim Clemente, who insists that our government routinely records all domestic phone calls.

Clemente’s claim generated lots of healthy skepticism. This isn’t because the project is technically infeasible (the numbers mostly add up), or because there’s no precedent for warrantless wiretapping. To me the most convincing objection was simple: it’d be hard to keep secret.* Mostly for boring phone company reasons.

But this led to another interesting discussion. What if we forget local phone eavesdropping and focus on an ‘easier’ problem: tapping only cellular phone calls?

Cellular eavesdropping seems a lot more tractable, if only because mobile calls are conducted on a broadcast channel. That means you can wiretap with almost no carrier involvement. In fact there’s circumstantial evidence that this is already happening — just by different parties than you’d think. According to a new book by reporters Marc Ambinder and Dave Brown:

The FBI has quietly removed from several Washington, D.C.–area cell phone towers, transmitters that fed all data to wire rooms at foreign embassies.

This raises a few questions: once you’ve tapped someone’s cellular signals, what do you do with the data? Isn’t it all encrypted? And how easy would it be for the US or some foreign government to lay their hands on the plaintext?

All of which serves as a wonderful excuse to noodle about the state of modern cellular encryption. Be warned that this is not going to be a short post! For those who don’t like long articles, here’s the TL;DR: cellular encryption is a whole lot worse than you think.

GSM

GSM is the granddaddy of all digital cellular protocols, and it remains one of the most popular protocols in the world. One thing that makes GSM special is its call encryption capability: the protocol is designed to encrypt all calls between the handset and the local tower.

Call encryption is facilitated by a long-term secret key (call it K) that’s stored within the tamper-resistant SIM card in your GSM phone. Your carrier also has a copy of this key. When your GSM phone connects to a tower, the parties execute the following authentication and key agreement protocol:

GSM authentication and key agreement protocol (source). MS represents the ‘mobile station’ (phone) and HLR is the ‘home location register’, a central database. The MS and HLR combine a long-term secret Ki with a random nonce RAND to create the shared communication key Kc. A3 and A8 are typically implemented using the COMP128 function.

The interaction above serves two purposes: first, both the phone and carrier (HLR) derive a pair of values that will be used for authentication and key agreement. This includes a session key Kc as well as a short authentication token SRES that the phone will use to authenticate itself to the tower. Derivation is performed using two functions (A3 and A8) that accept the long-term secret K and a random nonce RAND.
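
To make the flow concrete, here’s a minimal Python sketch of that challenge-response exchange. The real A3/A8 functions (typically COMP128) are secret and carrier-specific, so the sketch substitutes HMAC-SHA256 as a stand-in keyed function; only the message flow and the output sizes (32-bit SRES, 64-bit Kc) track the actual protocol.

```python
import hmac, hashlib, os

def a3_a8(k: bytes, rand: bytes) -> tuple[bytes, bytes]:
    """Stand-in for the proprietary A3/A8 functions (typically COMP128).
    Returns (SRES, Kc) derived from the long-term secret K and nonce RAND."""
    digest = hmac.new(k, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]   # SRES is 32 bits, Kc is 64 bits

# Carrier side (HLR): choose a random challenge, precompute expected values.
K = os.urandom(16)                    # long-term secret shared by SIM and HLR
RAND = os.urandom(16)                 # challenge sent to the phone in the clear
expected_sres, tower_kc = a3_a8(K, RAND)

# Phone side (MS): the SIM computes the same values from its copy of K.
phone_sres, phone_kc = a3_a8(K, RAND)

# Tower verifies SRES; both sides now share the session key Kc for A5/x.
assert hmac.compare_digest(phone_sres, expected_sres), "authentication failed"
assert phone_kc == tower_kc
print("authenticated; shared Kc =", tower_kc.hex())
```

Notice that the tower never proves anything to the phone. That missing direction is exactly the lack of mutual authentication exploited in attack #1 below.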

There are a handful of well-known problems with the GSM security protocols. In no particular order, they are:

  1. Lack of tower authentication. GSM phones authenticate to the tower, but the tower doesn’t authenticate back. This means that anyone can create a ‘fake’ tower that your phone will connect to. The major problem here is that in GSM, the tower gets to pick the encryption algorithm! That means your attacker can simply turn encryption off (by selecting the encryption ‘algorithm’ A5/0) and route the cleartext data itself. In theory your phone is supposed to alert you to this kind of attack, but the SIM chip contains a bit that can deactivate the warning. And (as researcher Chris Paget discovered) carriers often set this bit.
  2. Bad key derivation algorithms. The GSM ciphers were developed using the ‘make stuff up and pray nobody sees it‘ school of cryptographic algorithm design. This is a bad approach, since it’s incredibly hard to keep algorithms secret — and when they do leak, they tend to break badly. This was the case for the original A3/A8 algorithms, which are both implemented using a single function called COMP128-1. Unfortunately COMP128 turns out to be seriously broken — to the point where you can clone a user’s SIM key in as few as 8 queries.
  3. Bad encryption algorithms. Fortunately it’s easy to replace COMP128-1 by swapping out your SIM card. Unfortunately the situation is much worse for GSM’s A5/1 call encryption cipher, which is embedded in the hardware of most handsets and tower equipment. A5/1 was leaked around the same time as COMP128 and rapidly succumbed to a series of increasingly powerful attacks. Today it’s possible to conduct an efficient known-plaintext attack on A5/1 using a series of rainbow tables you can obtain from BitTorrent. The upshot is that most A5/1 calls can now be decrypted on a high-end PC.
  4. Terrible encryption algorithms. But it’s actually worse than that. GSM phones support an ‘export weakened‘ variant called A5/2, which is so weak you can break it in real time. The worst part is that A5/2 uses the same key as A5/1 — which means an active attacker (see #1 above) can briefly activate the A5/2 mode, crack it to recover the encryption key, then switch back to A5/1 with a known key (there’s a rough sketch of this flow just after this list). This is much faster than attacking A5/1 directly, and allows eavesdroppers to intercept incoming phone calls from a legitimate (A5/1 supporting) tower.
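
Here is the rough sketch promised above: a toy model of the A5/2 bidding-down attack. Everything is schematic. The break_a5_2 function simply hands back the key, standing in for the real-time cryptanalysis of A5/2, and cipher negotiation is reduced to a single string; the point is only that the key recovered from the weak cipher is the same Kc that protects the ‘strong’ A5/1 session.

```python
import os

class Phone:
    """Victim handset. It accepts whatever cipher the tower selects (attack #1)."""
    def __init__(self, kc: bytes):
        self.kc = kc                       # session key Kc, shared by A5/1 and A5/2

    def start_session(self, cipher: str) -> dict:
        # Toy 'session': in reality this would be call frames encrypted under
        # the chosen A5 variant. Recording which cipher and key were used is
        # all the attack logic below needs.
        return {"cipher": cipher, "kc_used": self.kc}

def break_a5_2(session: dict) -> bytes:
    # Placeholder for the real-time break of A5/2: a few frames of ciphertext
    # are enough to recover Kc. (Not real cryptanalysis, obviously.)
    assert session["cipher"] == "A5/2"
    return session["kc_used"]

kc = os.urandom(8)                         # 64-bit GSM session key
victim = Phone(kc)

# Step 1: the fake tower briefly forces the export-weakened cipher...
weak_session = victim.start_session("A5/2")
recovered_kc = break_a5_2(weak_session)

# Step 2: ...then relays the call onward under A5/1. Because both ciphers use
# the same Kc, the attacker can now decrypt the 'properly' encrypted traffic.
strong_session = victim.start_session("A5/1")
assert recovered_kc == strong_session["kc_used"]
print("recovered Kc:", recovered_kc.hex())
```
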
Alleged ‘Stingray’ tracking device mounted on an SUV (source).

Another unfortunate aspect of the GSM protocol is that you don’t need to attack the crypto to do useful things. For example, if all you want to do is determine which devices are in an area, you simply present yourself as a valid tower — and see which phones connect to you (by sending their IMSI values). This is the approach taken by IMSI-catchers like Stingray.

Now the ‘good news’ here is that attacks (1), (2) and (4) require active involvement by the attacker. This means they have to be after you specifically and — at least in principle — they’re detectable if you know what to look for. (You don’t.) However, the weaknesses in A5/1 are a whole different kettle of fish. They permit decryption of even passively recorded A5/1 GSM calls (in real time, or after the fact), even by an attacker with modest resources.
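
The passive attack on A5/1 rests on two observations: GSM frames contain predictable plaintext, and XORing that plaintext against recorded ciphertext exposes raw keystream, which can then be looked up in precomputed tables mapping keystream to cipher state. The toy below is only meant to show that shape: a dictionary stands in for the multi-terabyte rainbow tables, and a hash-based generator stands in for A5/1 itself.

```python
import hashlib, os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_keystream(state: bytes, n: int) -> bytes:
    """Stand-in for A5/1: expand a 64-bit 'internal state' into keystream."""
    return hashlib.shake_128(state).digest(n)

# 'Rainbow table': a precomputed map from keystream prefix to internal state.
# The real tables cover a large chunk of A5/1's state space; this dict covers
# a thousand toy states purely for illustration.
states = [os.urandom(8) for _ in range(1000)]
table = {toy_keystream(s, 8): s for s in states}

# Passive interception: a victim frame with predictable contents is recorded.
victim_state = states[123]
known_plaintext = b"\x2b" * 23             # GSM frames carry predictable bytes
ciphertext = xor(known_plaintext, toy_keystream(victim_state, 23))

# Attack: XOR out the known plaintext to expose keystream, then look up the
# prefix to recover the cipher state (and with it, the rest of the call).
exposed = xor(ciphertext, known_plaintext)
print("state recovered:", table.get(exposed[:8]) == victim_state)
```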

3G/4G/LTE

A valid response to the points above is to note that GSM is nearly 30 years old. You probably wouldn’t blame today’s Ford execs for the crash performance of a 1982 Ford Escort, and similarly you shouldn’t hold the GSM designers responsible for a 1980s protocol — even if billions of people still rely on it.

Overview of the 3G AKA protocol. Look familiar?

The great news is that modern phones often support the improved ‘3G’ (e.g., UMTS) or LTE standards. These offer a bundle of enhancements that substantially improve security over the original GSM. They can be summed up as follows:

  1. Mutual authentication. The 3G protocols use a new ‘Authentication and Key Agreement‘ (AKA) protocol, which adds mutual authentication to the tower connection. To validate that the phone is speaking to a legitimate tower, the carrier now computes a MAC that the phone can verify before initiating a connection. This prevents many of the uglier protocol attacks that plagued GSM.
  2. Better authentication algorithms. The session keys and authentication tags are still computed using proprietary algorithms — now called f1-f5, f5* — but the algorithms are purportedly much stronger. Since their design is carrier-specific it’s not easy to say exactly how they work. However, this 3GPP recommendation indicates that they might be based on a block cipher like AES.
  3. Better encryption. Call encryption in 3G uses a proprietary block cipher called KASUMI. KASUMI is based on a Mitsubishi proposal called MISTY1, which was heavily customized to make it faster in cellular hardware.

The biggest source of concern for 3G/LTE is that you may not be using it. Most phones are programmed to gracefully ‘fail over’ to GSM when a 3G/4G connection seems unavailable. Active attackers exploit this feature to implement a rollback attack — jamming 3G/4G connections, and thus re-activating all of the GSM attacks described above. A more subtle concern is the weakness of the KASUMI cipher. Unfortunately KASUMI seems much weaker than the original MISTY1 algorithm — so much weaker that in 2010 Dunkelman, Keller and Shamir were able to implement a related-key attack that recovered a full 128-bit call key in just under two hours! Now before you panic, you should know that executing this attack requires a substantial amount of data, all of which must be encrypted under highly unrealistic attack conditions. Still, it’s interesting to note that the same attack fails completely when applied to the original MISTY1 design.
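
Returning to the failover point for a moment: the rollback attack works because of a very mundane piece of handset logic. When the preferred radio technologies look unavailable, the phone quietly walks down its preference list. The function and priority list below are hypothetical, not taken from any real baseband, but they capture the behavior an attacker with a jammer exploits.

```python
# Hypothetical preference order a handset might use (illustrative names only).
PRIORITY = ["LTE", "UMTS", "GSM"]

def select_network(visible):
    """Pick the best available radio technology, silently falling back."""
    for tech in PRIORITY:
        if tech in visible:
            return tech
    return None

# Normal conditions: the phone uses 3G/4G and the stronger AKA protocol.
print(select_network({"LTE", "UMTS", "GSM"}))   # -> LTE

# An attacker jams the 3G/4G bands (or simply out-shouts them with a fake
# GSM tower): the phone 'gracefully' fails over, and every GSM attack above
# becomes available again.
print(select_network({"GSM"}))                  # -> GSM
```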

Top: generation of keys (CK, IK) and authentication tags AUTN, XRES using the functions f1-f5. Bottom: one proposed implementation of those functions using a 128-bit block cipher Ek. This could be AES (source). And yes, it looks complicated to me, too.
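
For a flavor of what the figure describes, here is a rough Python sketch of an AKA-style derivation. The real f1-f5 are operator-specific (the 3GPP example set builds them from a 128-bit block cipher such as AES); this sketch substitutes truncated HMAC-SHA256 for the keyed function Ek, keeps the 3G field widths, and simplifies the phone side, so treat it as an illustration of the data flow rather than a faithful implementation.

```python
import hmac, hashlib, os

def f(k: bytes, label: bytes, *parts: bytes) -> bytes:
    """Keyed stand-in for the operator-specific f1-f5 (HMAC instead of AES)."""
    return hmac.new(k, label + b"".join(parts), hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def aka_vector(k: bytes, rand: bytes, sqn: bytes, amf: bytes):
    mac  = f(k, b"f1", rand, sqn, amf)[:8]   # 64-bit network authentication tag
    xres = f(k, b"f2", rand)[:8]             # expected response from the phone
    ck   = f(k, b"f3", rand)[:16]            # 128-bit cipher key
    ik   = f(k, b"f4", rand)[:16]            # 128-bit integrity key
    ak   = f(k, b"f5", rand)[:6]             # 48-bit anonymity key
    autn = xor(sqn, ak) + amf + mac          # AUTN token sent to the phone
    return mac, xres, ck, ik, autn

K    = os.urandom(16)                 # long-term key shared by USIM and carrier
RAND = os.urandom(16)
SQN  = (42).to_bytes(6, "big")        # sequence number (replay protection)
AMF  = b"\x80\x00"                    # authentication management field

# The carrier computes the authentication vector for this challenge...
mac, xres, ck, ik, autn = aka_vector(K, RAND, SQN, AMF)

# ...and the phone, holding the same K, recomputes MAC (in the real protocol
# it first recovers SQN from AUTN using AK) and checks it before connecting.
# That check is the mutual authentication GSM lacked. Both sides then hold
# the same CK/IK for encryption and integrity protection.
phone_mac, phone_res, phone_ck, phone_ik, _ = aka_vector(K, RAND, SQN, AMF)
assert hmac.compare_digest(phone_mac, mac)   # network is legitimate
assert phone_res == xres                     # phone can authenticate back
print("CK:", ck.hex())
print("IK:", ik.hex())
```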

What if you already have the keys?

The encryption flaws above seem pretty significant, and they really are — if you’re a private eavesdropper or a foreign government. For US national security organizations they’re probably beside the point. It seems unlikely that the NSA would have to ‘break’ any crypto at all. This is because both the GSM and AKA protocols lack an important property known as forward secrecy. What this means is that if I can record an encrypted call, and later obtain the long-term key K for that phone, then I can still reliably decrypt the whole communication — even months or years later. (Protocols such as Diffie-Hellman or ECMQV prevent this.) Worse, for cellular conversations I can do it even if I only have one half (the tower side) of the communication channel. Unfortunately I don’t think you have to be a conspiracy theorist to suppose that the carriers’ key databases are probably stored somewhere in Ft. Meade. Thus to the truly paranoid: stop talking on cellphones.
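
The forward-secrecy point is easy to demonstrate. In the first derivation below, which mirrors the cellular approach, anyone who records RAND and later obtains the long-term key K can recompute the session key at leisure. The second uses an ephemeral Diffie-Hellman exchange, with deliberately tiny and insecure toy parameters to keep the example short; once the ephemeral exponents are deleted, a recorded transcript plus the long-term credentials no longer yields the session key.

```python
import hashlib, hmac, secrets

# --- Cellular-style derivation: the session key depends only on (K, RAND).
K = secrets.token_bytes(16)        # long-term key, sitting in a carrier database
RAND = secrets.token_bytes(16)     # sent over the air, trivial to record
session_key = hmac.new(K, RAND, hashlib.sha256).digest()

# Years later: an eavesdropper holding the recording (RAND) and the key
# database (K) recomputes the very same key and decrypts the stored call.
assert hmac.new(K, RAND, hashlib.sha256).digest() == session_key

# --- Ephemeral Diffie-Hellman (toy group p=23, g=5: wildly insecure, real
# systems use 2048-bit groups or elliptic curves).
p, g = 23, 5
a = secrets.randbelow(p - 2) + 1           # phone's ephemeral secret
b = secrets.randbelow(p - 2) + 1           # network's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)          # the only values that cross the wire
assert pow(B, a, p) == pow(A, b, p)        # both sides agree on the secret
dh_key = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
del a, b                                   # ephemeral secrets are discarded

# A recorded transcript contains only (A, B). Without the deleted exponents,
# recovering dh_key means solving a discrete logarithm, so old traffic stays
# protected even if long-term credentials leak later.
```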

In conclusion

Sometimes conspiracy theories can be fun. Sometimes they’re downright creepy.

For me this discussion veers towards the ‘creepy’. Not so much because I think the NSA really is tapping all our cellphones (I suspect they just read our Facebook). Rather, the creepiness is in seeing just how vulnerable our privacy infrastructure is, even to people who are far less capable than the NSA.

As a result, your data isn’t just available to nation states: it’s also potentially available to that goofball neighbor who bought an IMSI-catcher off the Internet. Or to your business competitor. Or even to that one girl who finally got GnuRadio to compile.

We’re rapidly approaching a technological crossroads, one where we need to decide if we’re going to keep trusting others to protect our data — or if we’re going to take charge of it and treat carriers as the dumb and insecure pipes that they are. I suspect that we’re well on our way towards this world — and sadly, so does the FBI. It’ll be interesting to see how things play out.

Notes:

* The argument is that recording all local calls in real time is a bigger job than just splitting major trunks or re-routing a few specific targets. It would seem to require significant changes to the US phone infrastructure. This is particularly true at the local office, where you’d need to upgrade and provision thousands of devices to record all calls passing through them. Building this capability is possible, but would require a lot of effort — not to mention the complicity of hundreds (or thousands) of collaborators at the local telcos. And that means a whole lot of people keeping a very big secret. People aren’t so good at that.
