July 11, 2025 · 4 min read

The Immutable Laws of Patching: An Interview with Dan Richings, Pt. 1


Gone are the days of treating patching as a routine, reactive task. Today, organizations face an always-on threat environment where speed, visibility, and automation are essential to staying secure. As vulnerabilities grow in volume and IT environments become increasingly complex, effective patch management has never been more critical. This raises the question: What are the immutable laws of patching?

There are principles every organization must follow to manage patches effectively. From identifying what to patch to prioritizing which patches to deploy first, these “immutable laws” form the foundation of a modern, proactive patching strategy.

AI and real-time threat intelligence are transforming patch management into a continuous, intelligent process. Whether you're a CISO, IT leader, or security professional, mastering these principles is key to reducing risk and building a more secure infrastructure. 

In part one of our Q&A series with Dan Richings, Senior Vice President of Global Customer Success and Solutions at Adaptiva, we explore what it takes to patch with purpose – and stay protected.

 

Dan, what are the core, unchanging truths or “immutable laws” of effective patching that every organization should follow?

Dan: Firstly, patching is not optional. If you have devices on a network, they are potentially exposed, so patching is essential for any business. 

Secondly, you can only patch what you know about – so an effective scanning agent or system that can detect installed software, identify available patches, and determine what is missing is crucial for businesses operating in today’s threat landscape.
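As a rough illustration of that second law (a minimal sketch, not any vendor's actual implementation), the core of a scanning agent reduces to diffing an installed-software inventory against a vendor advisory feed. All package names, versions, and data structures below are hypothetical:

```python
# Sketch of "you can only patch what you know about": diff an installed-software
# inventory against a vendor advisory feed to find missing patches.
from dataclasses import dataclass

@dataclass(frozen=True)
class Package:
    name: str
    version: str

# Inventory reported by a scanning agent (hypothetical example data).
installed = [Package("openssl", "3.0.11"), Package("chrome", "126.0.6478.55")]

# Latest patched versions published by vendors (hypothetical example data).
advisories = {"openssl": "3.0.14", "chrome": "127.0.6533.72"}

def missing_patches(inventory, latest):
    """Return packages whose installed version trails the latest patched one."""
    # A real agent would use proper version comparison; plain string
    # inequality is enough for this illustration.
    return [p for p in inventory if p.name in latest and p.version != latest[p.name]]

for pkg in missing_patches(installed, advisories):
    print(f"{pkg.name}: installed {pkg.version}, patch available {advisories[pkg.name]}")
```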

Not long ago, patching was largely an afterthought for many organizations. Beyond responding to Microsoft’s Patch Tuesday updates, most lacked a proactive strategy for identifying vulnerabilities and deploying patches. Today, with most vendors operating a Software-as-a-Service (SaaS) model, patching is continuous. Thousands of vendors are releasing updates daily, whether it's new agents or new versions of apps, and there are higher expectations around the security of releases. 

This brings me to my final immutable law, which is that autonomous patching processes are necessary in today’s risk environment. Keeping up with the volume of patches and the speed at which they need to be applied exceeds the capabilities of even the largest IT and security teams, so automation is a must.

 

Why do so many companies still struggle with patching, even when they understand the risks of falling behind?

Dan: Unless you have an established autonomous process that you trust, you have to patch manually. Manual patching forces IT and security teams to compromise and prioritize which patches get deployed, because, as we established earlier, you can’t apply every patch by hand.

Let’s take a closer look at the risk. Suppose you patch manually and compromise by applying only the high-severity patches, leaving the lower-criticality vulnerabilities unaddressed. You then risk exposure through exactly those vulnerabilities, which malicious actors often target because they know they are rarely patched thoroughly. It’s a lose-lose outcome.

No two environments are the same, and each has unique complexities, with many different operating systems, teams, and requirements. Still, some larger companies have no option but to adopt a one-size-fits-all strategy, which doesn’t always fit the environment and leads back to making compromises.

 

How do speed, scale, and reliability factor into a successful patching strategy? Why is it so hard to get all three right?

Dan: Speed comes down to mean time to remediation (MTTR). The window between a vulnerability being discovered and an exploit being used against it is already very small; the quicker you close it, the better, so reducing the length of exposure is critical in today’s threat landscape.
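To make MTTR concrete, here is a back-of-the-envelope calculation with made-up timestamps: the mean gap between when each vulnerability was discovered in the environment and when its patch was verified as deployed.

```python
# Hypothetical MTTR calculation: mean time from discovery to remediation.
from datetime import datetime

events = [
    # (discovered, remediated) - example data only
    (datetime(2025, 7, 1, 9, 0),  datetime(2025, 7, 3, 14, 0)),
    (datetime(2025, 7, 2, 11, 0), datetime(2025, 7, 2, 18, 30)),
    (datetime(2025, 7, 5, 8, 0),  datetime(2025, 7, 9, 8, 0)),
]

gaps_hours = [(fixed - found).total_seconds() / 3600 for found, fixed in events]
mttr_hours = sum(gaps_hours) / len(gaps_hours)
print(f"MTTR: {mttr_hours:.1f} hours")  # -> MTTR: 52.2 hours
```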

Patching at scale requires addressing vulnerabilities across multiple platforms and operating systems, from Windows and Mac to Linux, from a single solution. Large enterprises operate upwards of 100,000 seats, so patches need to be applied quickly and across the board, from critical systems to lower-priority ones.

Critical systems are usually the ones that are protected, sitting in a data center secured with expensive anti-malware software; it’s the less essential systems that malicious actors often target, so scale is crucial to establishing broad coverage.

Patching solutions need to be reliable – if a system can’t be trusted to install patches, it isn’t doing its core job. Patching software must be able to test reliably and provide real-time feedback on whether a patch needs to be rolled back; that capability is crucial for organizations to be confident that vulnerabilities are actually closed.
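A sketch of what that feedback loop might look like: deploy the patch, verify the device is still healthy, and roll back if it isn't. The deploy, health-check, and rollback functions are hypothetical stubs, not a real product's API.

```python
# Deploy -> verify -> roll back control loop (illustrative only).
def deploy(patch: str, device: str) -> None:
    """Stub: hand the patch to the device's agent for installation."""
    print(f"{device}: deploying {patch}")

def health_check(device: str) -> bool:
    """Stub: in practice, verify services, boot state, and key apps."""
    return True  # assume healthy for this illustration

def rollback(patch: str, device: str) -> None:
    """Stub: restore the device to its pre-patch state."""
    print(f"{device}: rolling back {patch}")

def apply_with_safety_net(patch: str, devices: list[str]) -> None:
    for device in devices:
        deploy(patch, device)
        if health_check(device):
            print(f"{device}: {patch} verified")
        else:
            # Unhealthy after patching: restore rather than leave the
            # device broken or half-patched.
            rollback(patch, device)

apply_with_safety_net("example-patch-1.0", ["host-a", "host-b"])
```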

Getting speed, scale, and reliability together without compromising one of them is challenging, but all three are vital elements of an effective patching strategy, and organizations need a solution that delivers them together.


In part two of our conversation, Dan shares why automation is essential (but not a silver bullet), why vulnerability prioritization needs a holistic, threat-driven approach, and why legacy systems are too slow for today’s security needs.

