Threat Modeling


How to Waste Time and Reduce Productivity.

Victor’s very excited. He just took apart the latest IoT home camera system and - *gasp* - found a JTAG port. He reflashed the Linux-based firmware with a root reverse shell and decided he’d found a remote vulnerability worthy of a CVSS score of 9.6. Victor sets up a website, comes up with a catchy name for his vulnerability, and even hires artists to work up a logo and theme song.

The fundamental problem, though, is that Victor redefined “threat” to mean “anyone who can get physical access to the device, open it up, connect a hardware debugger, and make sophisticated changes to the firmware.” Sure, he doesn’t see the problem - the tutorials are on the web, the software is open source, and the debugger hardware costs less than $5 drop-shipped straight to your door from Hong Kong.

In some sense, Victor’s even justified - “the insider threat is the biggest threat”, after all. Who can blame him, given all the insiders involved in major corporate compromises and the ready willingness of consumers to click on hostile JavaScript files sent to them by “Amaz0n”? Victor is, at least, internally consistent.

The problem is that this philosophy generates more noise than signal. Sure, we could demand that every embedded device be stripped of JTAG ports, that bootloaders be stored in ROM and require signed images, and … And that’s exactly what we do. While there is certainly deep, meaningful, and impactful analysis coming out of the threat modeling community, there’s far more garbage.

At the end of the day, security holes are just bugs, and threat modeling is going to end up relegated to the same role as antivirus: we don’t want it, but we don’t want to get caught with our pants down because we skipped it. Both are going to be self-inflicted wounds until we figure out better tools, better runtimes, and better incentives.