The whole discussion around patching and vulnerability management is a big problem in general, and it is typically exacerbated by compliance initiatives like PCI DSS. Companies generally want to be secure, but they have different risk procedures that can change the manner in which they do things like patching or how they lock down desktop controls.

Patched Tube, by Morten Liebach

A good friend of mine turned me on to a presentation from the San Diego ToorCon this past weekend that I am curious about. The abstract pushes us into dangerous territory: QSA interpretation (something we have often chatted about here).

In the abstract, the presenter takes the opinion that rushing to patch is undesirable (I potentially agree) and that the language added in PCI DSS 2.0 around risk-based approaches to patching opens up a loophole for organizations to forgo patches. I didn’t see the presentation, so this may be downplayed, but I find this a dangerous view when it comes to PCI DSS. Patching has been a big issue around PCI DSS with the changes in 2.0 and the ASV Program Guide. Most of the questions come down to operational technicalities that QSAs should not be dealing with. That said, the risk-based approach to patches is something welcome to operations and something I fully support.

When facing a vulnerability where a patch is available, organizations have a number of options they can take to comply with PCI DSS and keep their enterprises safe.

  1. Patch the system. Yes, this is the simple one. We now live in a world where IT has to be nimble; look at the adoption of DevOps and Agile methodologies. There is no reason why an organization striving to do one code push per day can’t strive to do one patch per day. There isn’t much difference operationally (though the mechanics can differ), so invest in beefing up your capabilities to react quickly. If you don’t, you are leaving yourself open to a successful attack because you are simply not good at IT.
  2. Remove the offending software. In some cases, like the old zero-day against .art files in IE, you could simply remove the DLL in question, as it was a rarely used piece of code with minimal impact on most users. Easy enough, but the mechanics here are not much different from applying the patch in the first place. The question to ask yourself is: what is the difference in effort between this band-aid and installing the patch?
  3. Remove the system from scope. This one is much harder on a grand scale, but for a singular system it could be a pretty easy fix. The way to remove it from scope is to ensure it is not connected to the CDE and there is no cardholder data on it (encrypted or not).
  4. Apply the patch later, if the risk supports it. This one can happen, but it is not always the best choice. You must understand that this is NOT an option if this is a critical security patch. Your definitions may vary, but I would call anything critical that could lead to a breach of cardholder data.
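To make option 4 concrete, here is a minimal sketch of what a risk-based patch triage rule could look like in Python. The `Vulnerability` fields, the CVSS thresholds, and the day counts are hypothetical policy choices of my own, not anything prescribed by PCI DSS, with one exception: the 30-day window for critical patches mirrors the one-month expectation in PCI DSS 2.0 Requirement 6.1.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str      # e.g. a CVE identifier
    cvss: float    # CVSS base score, 0.0-10.0
    in_cde: bool   # does the affected system connect to the CDE?

def patch_deadline_days(vuln: Vulnerability) -> int:
    """Return a target patch window in days for a vulnerability.

    Thresholds and windows are illustrative policy choices, except the
    30-day ceiling, which reflects PCI DSS Requirement 6.1's one-month
    expectation for critical security patches.
    """
    if vuln.cvss >= 7.0 and vuln.in_cde:
        return 30    # critical and in scope: patch within one month
    if vuln.cvss >= 7.0:
        return 60    # severe but out of the CDE
    if vuln.cvss >= 4.0:
        return 90    # moderate: next scheduled maintenance cycle
    return 180       # low risk: fold into routine maintenance

# Example triage of a hypothetical critical in-scope finding
vuln = Vulnerability("CVE-XXXX-YYYY", cvss=9.8, in_cde=True)
print(patch_deadline_days(vuln))  # 30
```

The point of codifying the rule, even this crudely, is that "apply the patch later" stops being an ad hoc decision and becomes a documented policy you can show a QSA.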

So with the list above, which option do you think is the easiest to assess and will therefore draw the fewest questions from a QSA? Obviously the first one. But we shouldn’t necessarily manage our businesses to speed through audits. The other options may be better for your business, but they carry a higher potential for interpretation problems, so make sure you are aware of the repercussions if you choose another way.

This post originally appeared on