A few days ago, the Metasploit team announced that their famous tool is now available to car hackers, and will soon extend to any connected object.
Metasploit is a well-known tool for attacking web apps, and extending it to objects makes those objects as easy to hack as web apps. Indeed, a web app and a connected object have much in common: both can be reached from the internet, both run complex software, both can be targeted by ransomware, and both can be misused.
Then there are the differences, and these differences make objects easier to hack than web apps. The most commonly cited is the lack of supervision of connected objects. This has allowed the creation of very large botnets of connected objects, and the trend will most likely continue and worsen in the next few years, because few people realize that their fridge is part of a botnet.
There is another difference, more subtle, but with great consequences for security: the object's physical availability. When targeting a web app, an attacker may know some of the software that was used to build the app. They may even be aware of vulnerabilities in this software. However, the attacker will know nothing of the product's configuration, or of the security products and countermeasures deployed to protect it.
To design an attack, information must be gathered by probing the web app live, at the risk of being detected by a sophisticated IDS or IPS, or of hitting a honeypot that will record the attack techniques. Metasploit helps by providing tools and a database of known issues, but such attacks still require great skill.
Now, let’s consider a connected object, even a complex one like a car. The attacker can tear the object apart, get access to its memory chips, dump their content, and analyze it. This analysis will take time and resources, but there is no risk of getting caught, and countermeasures will eventually be exposed together with vulnerabilities. Metasploit will make the task even easier if the device includes known vulnerabilities. In the end, the attacker will obtain a viable attack path, all while working in the safety of a lab.
Detection is very difficult in such conditions. An IDS will protect a connected car, for instance, against random remote attacks. But a skilled attacker will only launch a live attack on a connected car after verifying that the IDS doesn’t catch it.
And this is only the beginning. Security research on connected objects is still young, and focuses on the easiest targets. Skills will improve over time, making more sophisticated targets vulnerable. Now, add to this equation an AI that assists with reverse engineering and vulnerability identification. Traditional defenses are toast.
In a few years, if there is a vulnerability in a connected object, someone will find it and maybe exploit it. Encryption will provide a temporary protection. Trusted Execution Environments will add some hurdles. But these are just new countermeasures, not game changers.
That’s what I like about formally proven software. A mathematical proof that a piece of software does not leak information is a good countermeasure against reverse-engineering AIs. With such a proof, finding a vulnerability is no longer about finding a software bug; it’s about finding a flaw in the formal model that could lead to a wrong proof hiding a bug. And that’s orders of magnitude more difficult, even for an AI.
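As a toy illustration of what a machine-checked non-leakage property looks like (the function and theorem below are invented for this sketch, in Lean 4): a redaction function discards its secret input, and the proof certifies that its output is independent of that secret, so no analysis of the output can recover it.

```lean
-- Toy sketch: `redact` never depends on its secret argument.
def redact (_secret : String) : String := "****"

-- Machine-checked non-interference: the output is the same
-- for any two secrets, so the secret cannot leak through it.
theorem redact_leaks_nothing (s t : String) : redact s = redact t :=
  rfl
```

Real verified systems (kernels, compilers, crypto libraries) prove far richer properties, but the principle is the same: the guarantee is checked by a proof assistant, not hunted for by fuzzing.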
Originally published on LinkedIn.