The PCI DSS instructs assessors to sample certain parts of the population when validating compliance. According to the PCI DSS, the sample “must be a representative selection of all of the types and locations of business facilities as well as types of system components, and must be sufficiently large to provide the assessor with assurance that controls are implemented as expected.” That often leads to the next two questions—the answers to which tend to vary among assessors:
- What do you mean by representative selection (or how many is representative)?
- What do you consider sufficiently large to gain assurance?
In the audit world, internal auditors who review IT systems look to statistically valid samples as a method to determine how big the sample size should be. This is not as cut and dried as it sounds. Even though there are formulas to calculate statistically valid samples, they rely on variables such as error rate and confidence level, and altering either can dramatically affect the ultimate size of the sample. These settings can also vary between disciplines, leading to spirited debates among auditors and assessors alike.
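To make the variables concrete, here is a minimal sketch of one common formula for this purpose, Cochran's sample-size formula with an optional finite-population correction. The post does not name a specific formula, so this is an illustration, not the method any particular assessor uses; the function name and parameters are my own, and I treat "error rate" as the expected deviation rate in the population.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def sample_size(confidence=0.95, error_rate=0.5, precision=0.05, population=None):
    """Cochran's formula for an attribute-sampling sample size (illustrative).

    confidence : desired confidence level, e.g. 0.95
    error_rate : expected deviation rate in the population; 0.5 is the most
                 conservative assumption and maximizes the sample size
    precision  : acceptable margin of error
    population : optional population size for the finite-population correction
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed z-score
    n = (z ** 2) * error_rate * (1 - error_rate) / precision ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

print(sample_size())                 # 385 for an unbounded population
print(sample_size(population=1000))  # 278 once the correction is applied
```

Notice how quickly the numbers move: the same 95%/5% settings that demand 385 items from a huge population need only 278 from a population of 1,000, which is exactly why the inputs matter as much as the formula.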
A quick Google search turns up a couple of nuggets: several spreadsheets that can help with the math, and websites that will do the work for us. I’ve also touched on sampling in the past here on the blog.
In the past, the Council trained assessors to use Selective Sampling to determine sample size. This loosely translates to: start with a few and see if they comply. If so, stop; if not, dig deeper. This method caused wide variation among assessors—some electing to use a sample size of zero! Most assessors using this method sampled two to three items in each population.
Selective sampling is no longer included in the PCI DSS, meaning that assessors are free to choose the most appropriate sampling methodology for the environment. In order for ROCs to be complete (and achieve the best possible score during the Q/A review), assessors must describe their sampling methodology. Some assessors choose to use statistically valid samples; some choose to do selective sampling. Both are acceptable—the methodology just needs to be documented.
So what should YOU do? If you are facing a mid-year review of your PCI compliance and need to generate some samples, how should you go about doing this to get a realistic result?
The answer to this depends on your environment and tolerance for risk. Many of our customers will turn this process over to their internal audit team, which will apply the corporate standard method of sampling (typically some kind of statistically valid sample). If you have a large environment and are not confident that you have kept variance inside the population to a minimum, try choosing a statistically valid sample with a higher error rate to boost your sample size. If you have well-defined processes and can say definitively that all machines are built identically to pre-determined corporate standards (hint, this is PROBABLY not you), or limited resources force you to go with smaller sample sizes, try selective sampling or punching in a lower error rate.
If you choose to use statistically valid samples, use a 95% confidence level and default to a 5% error rate (fairly common values). The most important part of sampling is to document what you did to determine the sample size, the characteristics of the population, and all of your findings for each element in the sample. If you are in doubt, consult with your internal audit team. They may already have formulas ready for you to use!
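The "punch in a different error rate" advice is easy to see in numbers. The short sketch below holds 95% confidence and a 5% margin of error fixed and varies only the expected error (deviation) rate in Cochran's formula; it assumes that reading of "error rate," so treat the exact figures as illustrative.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

z = NormalDist().inv_cdf(0.975)  # ~1.96 for 95% confidence
precision = 0.05                 # 5% margin of error, held fixed

# Higher expected error rates demand larger samples; 50% is the worst case.
for p in (0.05, 0.10, 0.50):
    n = math.ceil(z ** 2 * p * (1 - p) / precision ** 2)
    print(f"expected error rate {p:.0%}: sample size {n}")
```

Under these assumptions the required sample climbs from 73 items at a 5% expected error rate to 139 at 10% and 385 at 50%, which is why a shop confident in tightly standardized builds can justify a much smaller sample than one that isn't.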
Look for posts next week from the PCI Europe meeting in Prague!