Should we force automated patching?


Why I don’t patch [1]

In the wake of WannaCry, security experts have been berating businesses and consumers that don’t update and patch their systems regularly. Security fixes were available eight weeks before the emergence of WannaCry - plenty of time for everyone to update, or at least mitigate, right?

But we didn’t, and we got stung. WannaCry exploited a vulnerability in SMB, a core technology that enables networked file sharing in Windows. Security experts were left scratching their heads again: why didn’t we patch, and failing that, why didn’t we apply the simple mitigation of disabling SMBv1, an obsolete version of the protocol that hardly anyone still uses? Some in the security community assume there must have been a high-level decision that patching was somehow not the right call - perhaps based on risks and outcomes that aren’t well understood.
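For reference, the mitigation in question is a one-liner on any reasonably modern Windows system. A sketch, assuming Windows 8/Server 2012 or later (the exact mechanism varies by Windows version):

```powershell
# Disable the SMBv1 server component (run from an elevated PowerShell prompt).
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# On Windows 8.1/10, SMBv1 can also be removed entirely as an optional feature;
# this may require a reboot.
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```

That the fix is this small is exactly what left security experts scratching their heads.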

That’s not necessarily the case - those decisions don’t always happen consciously among senior leadership. Many times, the decisions have been made at more visceral levels.

Vendors have repeatedly let their customers down on the quality and effectiveness of patches and upgrades. Java on Windows clients was one of the worst offenders: a security update would come bundled with hidden bloatware like browser toolbars. But Microsoft and Apple are often no better, insofar as neither has effectively segregated security fixes from other operating system updates (aside from special efforts available to a small number of customers and systems). Even when security fixes aren’t explicitly bundled with non-security updates, the automated update mechanisms often don’t behave as intended, installing patches and forcing reboots in the middle of workloads.

Most people don’t have horror stories about being hacked, but many engineers have dealt with the consequences of broken security patches, and nearly all senior leaders have visibility of this problem at some level.

Security experts might argue that the cost of getting hacked outweighs the risk of disruptive updates, but broken updates appear to be far more common than intrusions. At the end of the day, this results in gut decision-making based on anecdotal evidence. Even when a more formal risk management process is employed, leaders will weigh the issue in terms of likelihood as well as consequence. A security engineer might shout, “What about Home Depot? What about OPM?” - but their managers don’t believe they are exposed to the same risk. The security community’s risk assessments are usually neither succinct nor accurate, and the consequences of an intrusion, for many people, amount to little more than a bit of frustration: a new credit card, or monthly reports from an identity protection monitoring service. The consequences just aren’t personal, but the costs of prevention are.

Trust in software vendors’ ability to support their own systems is so low that people stop caring about hackers and intrusions.

Open source operating systems like Debian and FreeBSD moved in the right direction years ago: security updates are delivered out-of-band, separate from feature updates. That whizbang new framework that came out this month doesn’t affect the workflow of most users. The framework might be necessary to support some new product, but if the user doesn’t own that product, the framework or API update is unnecessary. Pushing that sort of thing through automatic updates amounts to little more than treating users as beta testers.

This might be one of the reasons people were so reluctant to accept Microsoft’s free upgrade to Windows 10: Windows 7 had aged to the point where Microsoft finally stopped pushing disruptive updates, and the workflow-breaking Windows 8/Metro changes aren’t remembered fondly.

Beyond that, a number of technologies have so many vulnerabilities that they seem broken by design. This is one reason I am very happy that Chrome and Edge both have built-in PDF viewers, so users don’t have to install Acrobat. Likewise, now that we have ubiquitous support for WebAssembly, ES6, and WebGL, there’s no reason to keep supporting Flash or Java in the browser.
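The separation Debian draws between security and feature updates is visible in its unattended-upgrades tooling, which can be restricted to the security archive alone, with reboots left under the user’s control. A sketch of the standard configuration shipped with the package:

```
// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
    // Pull packages only from the security archive - not from
    // ordinary point releases or proposed updates.
    "${distro_id}:${distro_codename}-security";
};

// Never reboot behind the user's back.
Unattended-Upgrade::Automatic-Reboot "false";
```

A machine configured this way gets its security fixes automatically while its workflow-bearing software stays put.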

Lastly, there’s the configuration management perspective. Plenty of patch advocates describe deployment life cycles where patches first enter test environments, go through validation, and are then deployed via a rolling push. Not only is this approach difficult to implement, but once implemented it usually suffers the same problems as automated patching. Simply put, most IT groups cannot reliably identify whether a patch breaks a workflow, no matter how much time they are given, because system administrators generally lack the domain expertise of the users they support. There are exceptions, of course, but in my experience most security engineers don’t have the right skill set - a Security+ or CISSP does not qualify someone to determine that the latest Patch Tuesday doesn’t break Vivado.
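The life cycle those advocates describe is easy to state and hard to staff. A minimal sketch of the control loop, where the stage names and the `deploy`/`check_health` functions are hypothetical stand-ins for whatever tooling an organization actually uses:

```python
# Staged patch rollout: a patch advances toward production only while
# every stage's health check passes. All names here are illustrative.

STAGES = ["test", "canary", "production"]

def deploy(patch, stage):
    # Placeholder: push the patch to the hosts in this stage.
    return f"{patch} deployed to {stage}"

def check_health(stage):
    # Placeholder: did this stage's workflows survive the patch?
    # In practice this is the hard part - administrators rarely have
    # the domain expertise to know what "healthy" means for every
    # user workflow they support.
    return True

def roll_out(patch):
    completed = []
    for stage in STAGES:
        deploy(patch, stage)
        if not check_health(stage):
            return completed, f"halted at {stage}"  # stop the push
        completed.append(stage)
    return completed, "fully deployed"
```

The loop itself is trivial; the entire value of the process lives inside `check_health`, which is precisely the step most IT groups cannot do well.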

Vendors must earn back the trust of their users. This might require government regulation - for example, requiring vendors to provide security patches for certain classes of software (such as commercial off-the-shelf and ad-supported proprietary software) via a channel that addresses only security vulnerabilities: a sort of “maintenance” channel (like those of some UNIX systems of yore) that is freely accessible, enabled by default, and guaranteed to remain available for some period of time (say, seven years) after the product is last sold. It might also mean that government organizations become more actively involved in absorbing the costs of finding and testing fixes, while vendors charge prices commensurate with the level of support this problem demands.

SUMMARY: Systems don’t get patched because vendors (and system administrators) have repeatedly broken trust with their customers by shipping and deploying unnecessarily disruptive updates. Not patching is a rational decision, and breaking the cycle of broken trust is a prerequisite for changing that decision.


  [1] I do patch. But I also try to only use technologies where patching does not routinely break workflow.