A risk analysis is a great tool when planning the security of a product. It is typically done with a top-down methodology: you first define assets, then identify the threats or risks to these assets, followed by attack objectives, attack strategies, and countermeasures, getting finer and finer.
These methodologies have many advantages. One of the most obvious is their quantitative side: a value can be assigned to a risk, a threat, or a countermeasure, allowing management decisions to be made.
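To make this quantitative side concrete, here is a minimal sketch of such a scoring exercise. The risk names, the 1–5 scales, and the likelihood × impact formula are all invented for illustration; real methodologies use richer models, but the principle of ranking risks by a computed value is the same.

```python
# Hypothetical risk register: names, scales (1 = low .. 5 = high), and
# values are invented for illustration only.
risks = {
    "key extraction": {"likelihood": 2, "impact": 5},
    "firmware dump":  {"likelihood": 4, "impact": 3},
    "DoS on server":  {"likelihood": 3, "impact": 2},
}

# A simple, common (if coarse) convention: score = likelihood x impact.
scores = {name: r["likelihood"] * r["impact"] for name, r in risks.items()}

# Rank the risks so countermeasure budget can be discussed with management.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Even this toy ranking shows why managers like the top-down view: it produces an ordered list that decisions can be hung on.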
Top-down is an owner’s view, not an attacker’s view
However, the top-down approach has its limits, especially when performing a threat analysis and when getting closer to the implementation. It has two big problems that limit how faithfully it models reality.
First, a top-down approach doesn’t faithfully represent the way attackers work, which is often much more opportunistic. An attacker often has only an abstract goal, for instance to break into a system or to harm some company’s reputation. This is because the attacker is in fact an attacker ecosystem, with many actors involved. Some actors actually perform attacks on actual targets, while others identify vulnerabilities and attack paths by investing in security research. The market for 0-day attacks on major software components is an example: the researcher who finds the vulnerability has no idea how the attack will eventually be exploited, and doesn’t care.
Second, and maybe more importantly, a top-down analysis privileges intent over reality, and it cannot represent the blatant bugs that are essential to many vulnerabilities. When a developer assesses the difficulty or cost of an attack, his view is necessarily optimistic: he will not consider that he has made a stupid mistake, or that his algorithm is flawed. In real life, though, many vulnerabilities stem from exactly such situations.
Attackers are opportunistic
Let’s consider an example, using an attack graph, because it better matches the reality of an attacker ecosystem: complete attack paths are built from smaller attacks and vulnerabilities, and the graph shows how these attacks are interconnected. If individual attack edges are weighted with the cost of each attack, then the attacker’s job is to identify the path with the smallest total cost, or at least one of the smallest. This is shown on the left, with two paths that are better than the others (total costs of 8 and 9, respectively); the differences are not that great, with the costliest path at 11.
Now, look on the right. The graph has been corrected by simply taking into consideration one single stupid bug that makes one attack much easier than expected. Suddenly, a new path appears, with a much lower total cost of 5, that was not initially considered because it was assumed to be unlikely. This minor change can lead developers to reconsider some of their hypotheses about threats.
And we have been nice here. In real life, new attack edges and nodes are likely to be added to the attack graph, creating shortcuts through existing attacks and often leading to completely unexpected attack paths.
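The attacker’s search for the cheapest path is exactly a shortest-path problem. The sketch below uses a textbook Dijkstra search over a small hypothetical graph whose node names and edge costs are invented to mirror the figure: three paths costing 8, 9, and 11, then one “stupid bug” that drops a single edge’s cost and opens a new cheapest path at 5.

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra search: return (total cost, node list) of the cheapest path."""
    pq = [(0, start, [start])]  # priority queue ordered by accumulated cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical attack graph: edge weights are estimated attack costs.
graph = {
    "start": {"A": 3, "B": 4, "C": 8},
    "A": {"goal": 5},   # total path cost 8
    "B": {"goal": 5},   # total path cost 9
    "C": {"goal": 3},   # total path cost 11 -- assumed too expensive
}
print(cheapest_path(graph, "start", "goal"))  # (8, ['start', 'A', 'goal'])

# One stupid bug makes the supposedly hard step trivial:
graph["start"]["C"] = 2
print(cheapest_path(graph, "start", "goal"))  # (5, ['start', 'C', 'goal'])
```

The point is not the algorithm, which attackers run informally in their heads, but how a single corrected edge weight reorders every conclusion drawn from the graph.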
Build attacks from low-level vulnerabilities
That’s where a bottom-up analysis can greatly help. The idea is to look at individual components and security measures and to identify, in the most practical way possible, how they could be abused and which vulnerabilities are likely. Such an analysis can be a bit messy, exploring many directions, so it will not yield a nice set of slides to show a manager or customer. However, it is likely to identify a few interesting threats and attacks that cannot be found through a top-down approach.
Later in the development cycle, similar results can be achieved through a security evaluation, especially black-box testing. As the evaluators look at their target, they will take an opportunistic approach, first trying to find a few easy vulnerabilities.
[EDITED] Eric Diehl published a very good article about white-box testing vs. black-box testing, with which I have to agree. It somewhat qualifies my point above about black-box testing: white-box testing is generally more efficient than black-box testing, and if you are in a position to use one, a bounty program is also more efficient than black-box testing. Yet, I still believe that a foray into black-box testing can be useful to get a different insight into a product’s security.
If you are not familiar with threat analysis or modeling, try Adam Shostack’s Threat Modeling: Designing for Security. It really helped me when I started working on threats, and it provides good tools for a bottom-up analysis.