ROBOT Attack - Specification vs Implementation
This week saw the release of a new/old attack targeting communications encrypted with TLS - the main encryption standard for almost all end-user communications on the internet. ROBOT stands for “Return Of Bleichenbacher's Oracle Threat” and as the name implies, it is the rediscovery of an old attack that surprisingly still affects modern systems. Using the attack, hackers can potentially eavesdrop on encrypted web surfing and many other encrypted communications.
In this briefing we will take a look at the ROBOT attack, the underlying Bleichenbacher Attack and why conceptual vulnerabilities are hard to fix.
What is the Bleichenbacher Attack?
The Bleichenbacher attack is a practical implementation of a so-called "adaptive chosen-ciphertext attack" against SSL, demonstrated by its namesake Daniel Bleichenbacher in 1998. While the cryptographic details of an adaptive chosen-ciphertext attack are beyond the scope of this briefing, a short summary of the attack's logic is as follows:
In most encryption schemes, messages must be of specific lengths.
Messages that are too short are padded to the correct length.
When a server is programmed to react to an encrypted message (such as a web server responding to a request) with different errors for "invalid key" and "invalid padding", it becomes possible to systematically inject data into the encrypted message and, slowly but surely, decrypt it by observing which injections lead to decryption errors and which lead to padding errors.
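The information leak described above can be sketched in a few lines of Python. This is a toy illustration, not real RSA decryption; the byte layout follows PKCS#1 v1.5, the padding scheme SSL used for RSA key exchange. The point is that each failure mode returns a *different* error:

```python
# Toy sketch of a server-side padding check that leaks information.
# Layout per PKCS#1 v1.5: 0x00 0x02 <at least 8 padding bytes> 0x00 <message>.

def padding_check(plaintext: bytes) -> str:
    """Return a different error per failure mode -- exactly the
    behaviour that hands an attacker an oracle."""
    if len(plaintext) < 11:
        return "invalid key"          # too short to be a valid block
    if plaintext[0] != 0x00 or plaintext[1] != 0x02:
        return "invalid padding"      # PKCS#1 v1.5 header missing
    try:
        separator = plaintext.index(0x00, 2)
    except ValueError:
        return "invalid padding"      # no 0x00 separator found
    if separator < 10:
        return "invalid padding"      # padding shorter than 8 bytes
    return "ok"

# An attacker injecting data into the ciphertext observes which error
# comes back; each answer narrows down the possible plaintexts.
print(padding_check(b"\x00\x02" + b"\xaa" * 8 + b"\x00" + b"secret"))  # ok
print(padding_check(b"\x01\x02" + b"\xaa" * 20))                       # invalid padding
print(padding_check(b"\x00"))                                          # invalid key
```

A server that answered every failure with one identical error message would not give the attacker this foothold, which is exactly the countermeasure later specified for TLS.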
After the attack was demonstrated, the SSL successor TLS was specifically designed to prevent such vulnerabilities.
What is the ROBOT Attack?
As the name implies, the ROBOT attack is the discovery that, after almost 20 years, implementations by major vendors are once again vulnerable to the same attack. Affected vendors include industry heavyweights such as F5, Cisco and Citrix, and popular programming languages such as Java and Erlang. Using the attack, an attacker can eavesdrop on TLS-encrypted communications and later decrypt them by replaying packets to a vulnerable server and performing the required injections. Traffic encrypted with TLS includes most web, mail and instant messaging traffic.
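The replay-and-inject step works because of a mathematical property of RSA, the key-exchange algorithm targeted by both the original attack and ROBOT: multiplying a captured ciphertext by s^e yields a valid ciphertext for the related plaintext m*s, with no private key required. A minimal sketch with textbook toy numbers (far too small for real cryptography):

```python
# RSA malleability, the property behind the adaptive attack.
# Toy parameters for illustration only -- never use keys this small.
p, q, e = 61, 53, 17
n = p * q                          # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (server-side only)

m = 42                             # the victim's plaintext
c = pow(m, e, n)                   # the ciphertext the attacker recorded

s = 7                              # attacker-chosen blinding value
c_prime = (c * pow(s, e, n)) % n   # crafted ciphertext, built without d
m_prime = pow(c_prime, d, n)       # what the server decrypts

assert m_prime == (m * s) % n      # related plaintext: m * s mod n
```

By choosing many values of s and watching how the server's padding check reacts to each crafted ciphertext, the attacker gradually narrows down m itself.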
How could this happen?
While the answer to this question differs from vendor to vendor, the two core elements are as follows.
1) Cryptography is complicated
Even within the field of information security, few people fully understand cryptography at the highest level. Since cryptography is a field where mathematics and IT overlap, it is both vast and very complex.
2) Memories are short
In most fields, 19 years is a reasonably short interval. In terms of information security, however, it is an incredibly long timespan. While information security is a field of its own in 2017, it was considered part of programming and computer science in 1998. Very few of the experts from back then still work in information security today.
Both of these factors taken together allow old vulnerabilities to slowly sneak back into new implementations. As the details of old attacks are forgotten, safeguards meant to prevent them are ignored or mis-implemented, making the attack possible once again.
How can I protect myself?
Users can do very little to protect themselves against the ROBOT attack. The good news is that patches seem to be deployed relatively quickly, and that most implementations of TLS in modern browsers and other software will default to using cipher suites with forward secrecy, which make the ROBOT attack significantly harder or even impossible.
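On the server side, the mitigation amounts to disabling static RSA key exchange. As one illustration, Python's standard ssl module lets a server context be restricted to forward-secret (ECDHE) suites; the exact cipher string below follows common OpenSSL syntax and is an assumption that depends on the local OpenSSL build:

```python
import ssl

# Sketch: restrict a server context to forward-secret (ECDHE) cipher
# suites, removing the static-RSA key exchange that ROBOT targets.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

suites = [c["name"] for c in ctx.get_ciphers()]

# Every remaining TLS 1.2 suite uses ECDHE key exchange; TLS 1.3
# suites (names starting with "TLS_") always provide forward secrecy.
assert all("ECDHE" in name or name.startswith("TLS_") for name in suites)
print(suites[:2])
```

With forward secrecy, a session key cannot be recovered later from recorded traffic, so the record-now-decrypt-later pattern that ROBOT enables no longer works.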
As an additional security measure, it is advisable to make it harder for attackers to eavesdrop on your encrypted data - for example by using a secure VPN when accessing the internet through public hotspots.